---
Unlike the European Union's centralized "AI Act," the United States has adopted a distributed, market-driven approach to AI governance. There is no single "US AI Law." Instead, enterprises face a patchwork of federal agency enforcement, state-level privacy statutes, and sweeping Executive Orders.
However, calling the US "unregulated" is a dangerous misconception. Through broad interpretations of existing consumer protection laws and new mandates for national security, the US compliance environment is becoming arguably more unpredictable than Europe's.
The Federal Approach: "No New Laws, Just New Enforcement"
The Biden Administration's strategy relies on empowering existing agencies to police AI using their statutory authority.
Executive Order 14110 (Safe, Secure, and Trustworthy AI)
Signed in October 2023, EO 14110 is the cornerstone of federal AI policy. While not a statute, it carries the force of law for federal agencies and government contractors, and it invokes the Defense Production Act (DPA) to impose reporting obligations on private-sector developers of the largest models.
- The Computing Threshold: Companies training "dual-use foundation models" (specifically, those trained using more than 10^26 integer or floating-point operations) must report training plans and safety test results to the Department of Commerce (see the sketch after this list).
- Know Your Customer (KYC): US cloud providers (AWS, Azure, Google Cloud) are required to report foreign entities renting capacity to train large models.
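For rough scoping, total training compute for a dense transformer is often approximated as roughly 6 × parameters × tokens. The sketch below is a back-of-envelope check only; the helper names and the 6ND rule of thumb are our own illustration, and EO 14110 does not prescribe an estimation method.

```python
# Rough estimate of training compute against the EO 14110 reporting
# trigger. Uses the common ~6 * N * D approximation for dense
# transformers; actual compute accounting may differ.

EO_14110_THRESHOLD_OPS = 1e26  # "greater than 10^26 operations" per the EO


def estimated_training_ops(n_params: float, n_tokens: float) -> float:
    """Approximate total training operations for a dense transformer."""
    return 6.0 * n_params * n_tokens


def must_report(n_params: float, n_tokens: float) -> bool:
    """True if the estimated run exceeds the EO 14110 threshold."""
    return estimated_training_ops(n_params, n_tokens) > EO_14110_THRESHOLD_OPS


# Example: a 70B-parameter model trained on 15T tokens
ops = estimated_training_ops(70e9, 15e12)
print(f"{ops:.2e} ops -> report required: {must_report(70e9, 15e12)}")
# ~6.3e24 ops: well under 1e26, so no report under this estimate
```

Under this approximation, only runs well beyond today's typical frontier-scale training budgets cross the line, which is why the threshold is widely read as targeting a handful of labs.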
The FTC: The De Facto AI Regulator
The Federal Trade Commission (FTC) has aggressively asserted jurisdiction over AI under Section 5 of the FTC Act (unfair and deceptive practices).
"Algorithmic Disgorgement": A novel penalty where the FTC forces companies to delete not just ill-gotten data, but the algorithms trained on that data (e.g., FTC v. ...* settlements).- Marketing Claims: Companies claiming their AI is "magical" or "bias-free" without evidence face enforcement for deceptive advertising.
NIST AI Risk Management Framework (AI RMF)
While voluntary, the NIST AI RMF (AI 100-1) is rapidly becoming the standard of care for US liability defense. Courts and regulators increasingly view adherence to NIST standards as evidence of due diligence.
The 4 Core Functions:

| Function | Enterprise Objective |
|---|---|
| GOVERN | Cultivate a culture of risk management; executive accountability. |
| MAP | Contextualize risks; understand downstream impacts. |
| MEASURE | Quantitative and qualitative assessment (benchmarking). |
| MANAGE | Allocate resources to prioritize and mitigate mapped risks. |
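One way to operationalize the four functions is to encode them as fields on an internal risk register. Below is a minimal sketch assuming a hypothetical `AIRiskEntry` record; NIST AI 100-1 does not prescribe any schema, so the field names and statuses are our own illustration.

```python
# Hypothetical risk-register entry organized around the NIST AI RMF
# core functions (GOVERN / MAP / MEASURE / MANAGE). All field names
# are illustrative; NIST AI 100-1 defines outcomes, not a schema.
from dataclasses import dataclass, field


@dataclass
class AIRiskEntry:
    system_name: str
    # GOVERN: a named executive accountable for the system's risk
    accountable_owner: str
    # MAP: deployment context and identified downstream impacts
    context: str
    downstream_impacts: list[str] = field(default_factory=list)
    # MEASURE: quantitative/qualitative benchmarks and latest results
    benchmarks: dict[str, float] = field(default_factory=dict)
    # MANAGE: prioritized mitigations with resourcing decisions
    mitigations: list[str] = field(default_factory=list)


entry = AIRiskEntry(
    system_name="resume-screener-v2",
    accountable_owner="VP, Responsible AI",
    context="Automated triage of inbound job applications",
    downstream_impacts=["disparate impact on protected classes"],
    benchmarks={"demographic_parity_gap": 0.04},
    mitigations=["quarterly bias audit", "human review of rejections"],
)
```

Keeping the register in code (or in a system of record exported from it) makes the "evidence of due diligence" point concrete: each entry documents who governed, what was mapped, how it was measured, and how it was managed.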
State-Level Complexity: The "California Effect"
With Congress yet to pass a comprehensive federal AI bill, states are filling the vacuum.
- California (SB 1047 vetoed, others enacted): While Governor Newsom vetoed the sweeping SB 1047, California has enacted laws on neural data privacy and deepfake transparency (AB 2655).
- Colorado (Consumer Artificial Intelligence Act): The first comprehensive US state law explicitly regulating "high-risk" AI systems, effective 2026. It imposes a duty of care on developers to avoid algorithmic discrimination.
US vs. EU: A Compliance Matrix
Understanding the philosophical difference is key for multinational strategy.
| Feature | EU (AI Act) | US (EO 14110 + Agencies) |
|---|---|---|
| Structure | Single comprehensive regulation | Fragmented agency rules + Executive Orders |
| Enforcement | Centralized (AI Office) + National | Decentralized (FTC, DOJ, EEOC) |
| Primary Focus | Fundamental Rights & Safety | National Security & Consumer Protection |
| Foundation Models | Tiered Compliance (Systemic Risk) | Computation Thresholds (Defense Production Act) |
Enterprise Action Plan: Navigating the Fragmentation
The lack of a single law does not mean a lack of liability. Enterprises should adopt a "NIST-First" strategy to insulate themselves against the unpredictable regulatory environment; a readiness-checklist sketch follows the steps below.
1. Adopt NIST AI RMF: Formally integrate the Map/Measure/Manage framework into product development lifecycles. This is your primary legal shield.
2. Review Agency Guidance: Specifically the EEOC (hiring algorithms) and FTC (marketing claims) joint statements on AI.
3. Monitor State Bills: If you sell nationally, align with Colorado's high-risk standards as a baseline, similar to how companies treat CCPA for privacy.
4. Compute Audits: If you are training foundation models, verify whether you cross the 10^26-operation reporting threshold under EO 14110 (see the estimator sketch above).
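Taken together, the plan can be tracked as a simple readiness checklist. The sketch below is hypothetical: the items and gating logic are illustrative scaffolding, not a legal test or a regulator-issued checklist.

```python
# Hypothetical "NIST-First" readiness checklist mirroring the four
# action-plan steps above. Flags and gating logic are illustrative only.
CHECKLIST = {
    "nist_ai_rmf_integrated": True,       # Step 1: RMF in the product lifecycle
    "agency_guidance_reviewed": True,     # Step 2: EEOC / FTC joint statements
    "colorado_high_risk_aligned": False,  # Step 3: state-law baseline
    "compute_audit_complete": True,       # Step 4: EO 14110 threshold check
}

gaps = [item for item, done in CHECKLIST.items() if not done]
if gaps:
    print("Remediation needed:", ", ".join(gaps))
else:
    print("Baseline compliance posture in place.")
```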
For a direct comparison with Europe's approach, read our guide on The EU AI Act. To understand how to implement these controls, see The Compliance Tech Stack.