On March 18, 2026, Sen. Marsha Blackburn released the Trump America AI Act, a nearly 300-page discussion draft seeking to establish a comprehensive regulatory framework for AI. The draft incorporates provisions from previously proposed bills such as the Kids Online Safety Act and the No Fakes Act, and it also seeks to preempt individual state AI regulatory frameworks. As of the date of this alert, the draft has not been formally introduced or referred to a committee.
The proposal appears to be part of an ongoing White House-Congress negotiation over a national AI framework, and outside reporting indicates the administration may release related recommendations of its own. The Bill represents one of the most comprehensive efforts to advance a regulatory framework for AI. This alert focuses on key provisions of the Bill, particularly those concerning potential liabilities for AI developers and deployers.
Liabilities Created
The Bill would establish new, broad liability frameworks for AI developers and deployers. Developers would face negligence, strict liability, and warranty-based claims for AI systems that cause harm, including property damage, physical injury, financial harm, reputational injury, or psychological anguish. Where a product’s design is “manifestly unreasonable,” claimants would not need to prove a reasonable alternative design existed.
Deployers who substantially modify a product or intentionally misuse it would be treated as developers for liability purposes. Joint and several liability would apply where both parties are found to have contributed to an alleged harm. Contractual provisions that waive rights or unreasonably limit liability would not be enforceable.
Beyond traditional product liability causes of action, the Bill would impose a chatbot duty of care (Title I), with violations treated as unfair or deceptive acts under the Federal Trade Commission Act (15 U.S.C. 57a(a)(1)(B)). Violations of these provisions could also constitute federal criminal offenses where AI chatbots solicit minors for sexually explicit conduct or encourage self-harm, with fines of up to $100,000 per offense.
Advanced AI developers that fail to participate in mandatory federal evaluations could face fines of at least $1,000,000 per day. Civil liability would also extend to the unauthorized use of digital replicas, with statutory damages up to $750,000 per “work embodying the applicable unauthorized digital replica.” The Bill excludes AI training on copyrighted works from fair use and establishes infringement liability for AI-generated derivative content.
Potential Preemption
The Bill takes a title-by-title approach, but its overarching savings clause provides that nothing in the Bill shall preempt any “generally applicable law, such as a body of common law or a scheme of sectoral governance,” preserving room for potential concurrent state regulation.
Yet certain provisions would supersede state law that "conflicts with" federal provisions. Moreover, Titles IV and V expressly permit states to enact laws providing greater protections. The No Fakes Act, included in the Bill as Title XII, would preempt state digital-replica causes of action while preserving a carve-out for pre-existing state laws and state statutes regulating sexually explicit or "election-related" replicas. Notably, Title III of the Bill would fully repeal Section 230 of the Communications Act two years after enactment, eliminating the federal immunity framework for certain interactive computer services.
Protections for Children, Creator Rights, Communities, and Viewpoint Neutrality
Title IV of the Bill would impose a duty of care on covered platforms to prevent foreseeable harm to minors. Such harms include eating disorders, suicidal behaviors, and sexual exploitation. The Bill would further require safeguards such as communication limits, data-exposure protections, and opt-out settings for addictive design features. Further, Title V includes language that would prohibit minors from accessing AI companions entirely, require age verification for all chatbot users, and ban patterns that subvert parental safeguards.
Several provisions would introduce protections for “creators.” Title XII would establish a federal property right in each individual’s voice and visual likeness that survives death. This proposed right would be transferable and licensable. The Bill’s language excludes AI training from fair use and denies copyright protection eligibility to unauthorized AI-generated works. Title XIII would create a subpoena mechanism for copyright owners to compel disclosure of copyrighted works used in AI training.
Title XI, titled Consumer Protections for Data Center Infrastructure Costs, would require data center operators to pay for all new power delivery infrastructure upgrades and prohibit passing those costs to households. This provision codifies the administration's commitment to shielding ratepayers from energy costs associated with AI infrastructure expansion. Covered data centers would be limited to those with a power demand of 20 megawatts or more that are not owned, operated, or maintained by certain agencies. Operators would also be required to hire locally and establish skills development programs or else risk losing eligibility for federal incentives.
The Bill would also require annual third-party audits of high-risk AI systems to detect viewpoint discrimination or political-affiliation-based bias. Title XVI directs federal agencies to procure only AI models developed in accordance with “unbiased artificial intelligence principles,” including ideological neutrality and nonpartisanship.
Audit Requirements
The Bill would mandate several audit regimes. Providers of high-risk AI systems would be required to undergo annual independent audits for bias and viewpoint discrimination, with reports submitted to the FTC within 180 days. Providers would also need to deliver annual ethics training to all relevant personnel on the "development, use, and deployment of high-risk artificial intelligence systems." Covered platforms would need to issue annual transparency reports based on third-party audits that assess recommendation systems and minors' experiences.
In addition, the Secretary of Energy would be tasked with establishing an Advanced AI Evaluation Program featuring red-team testing, blind model evaluations, and containment protocols for advanced systems.
Innovation Initiatives
The Bill directs the National Institute of Standards and Technology (NIST) to establish a Center for AI Standards and Innovation, and, within 90 days, develop voluntary best practices, red-teaming capabilities, and synthetic content detection tools. NIST and the Department of Energy must jointly create an AI testbed program for public-private collaboration on system evaluations. The Bill would also establish a National AI Research Resource (NAIRR) to provide researchers, students, and small businesses with potential access to computational resources, datasets, and testbeds, including a free and subsidized tier of access. The Bill also authorizes international cooperation with allied nations on AI standards.
Initial coverage has focused on three pressure points: whether the draft truly preempts state AI laws; whether its liability and audit provisions are compatible with a pro-innovation agenda; and whether Congress can assemble a coalition around a package that mixes child safety, copyright, viewpoint-bias, workforce, and infrastructure provisions. Early expert reaction suggests the framework’s biggest flashpoints will be the practical limits of federal preemption, the breadth of new private and public enforcement tools, and the tension between the draft’s innovation-first rhetoric and its substantial compliance, reporting, and litigation burdens.
Takeaways
Although the proposal is still a discussion draft, it signals a federal approach that combines innovation policy with aggressive product-liability, child-safety, copyright, and platform-accountability provisions. Companies that build, fine-tune, deploy, or integrate AI should assess exposure not only to new audit and transparency duties, but also to potential changes in Section 230, content provenance, digital-replica rights, and AI-related contract-risk allocation.
Organizations should map existing AI governance, product-safety review, child-safety controls, provenance/content-labeling practices, vendor contracts, and litigation exposure against the draft's liability, audit, and transparency concepts. While the Bill's preemption framework is largely permissive of state regulation, the proposed repeal of Section 230 and the creation of new federal causes of action would significantly alter the litigation landscape.