At times, it can seem like efforts to regulate and rein in AI are everything everywhere all at once.
India charged its senior technical advisors with creating an AI governance system, which they released in November 2025.
In the United States the states are legislating and enforcing their own AI rules even as the federal government in 2025 moved to prevent state action and loosen the reins.
This leads to a critical question for American engineers and policymakers alike: What can the U.S. actually enforce in a way that reduces real-world harm? My answer: regulate AI use, not the underlying models.
Why model-centric regulation fails
Proposals to license “frontier” training runs, restrict open weights, or require permission before publishing models, such as California’s Transparency in Frontier Artificial Intelligence Act, promise control but deliver theater. Model weights and code are digital artifacts; once released, whether by a lab, a leak, or a foreign competitor, they replicate at near-zero cost. You can’t un-publish weights, geofence research, or prevent distillation into smaller models. Trying to bottle up artifacts yields two bad outcomes: compliant firms drown in paperwork while reckless actors route around the rules offshore, underground, or both.
In the U.S., model-publication licensing also likely collides with speech law. Federal courts have treated software source code as protected expression, so any system that prevents the publication of AI models would be vulnerable to legal challenges.
“Do nothing” is not an option either. Without guardrails, we will keep seeing deepfake scams, automated fraud, and mass-persuasion campaigns until a headline catastrophe triggers a blunt response optimized for optics, not outcomes.
A practical alternative: Regulate use, proportionate to risk
A use-based regime classifies deployments by risk and scales obligations accordingly. Here is a workable template focused on keeping enforcement where systems actually touch people; a short sketch after the list shows one way the tiers could be encoded in practice:
- Baseline: general-purpose consumer interaction (open-ended chat, creative writing, learning assistance, casual productivity). **Regulatory adherence:** clear AI disclosure at point of interaction, published acceptable use policies, technical guardrails preventing escalation into higher-risk tiers, and a mechanism for users to flag problematic outputs.
- Low-risk assistance (drafting, summarization, basic productivity). **Regulatory adherence:** simple disclosure, baseline data hygiene.
- Moderate-risk decision support affecting individuals (hiring triage, benefits screening, loan pre-qualification). **Regulatory adherence:** documented risk assessment, meaningful human oversight, and an “AI bill of materials” consisting of at least the model lineage, key evaluations, and mitigations.
- High-impact uses in safety-critical contexts (clinical decision support, critical-infrastructure operations). **Regulatory adherence:** rigorous pre-deployment testing tied to the specific use, continuous monitoring, incident reporting, and, when warranted, authorization linked to validated performance.
- Hazardous dual-use functions (e.g., tools to fabricate biometric voiceprints to defeat authentication). **Regulatory adherence:** confine to licensed facilities and verified operators; prohibit capabilities whose primary purpose is unlawful.
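To make the tiers concrete, here is a minimal sketch of how a marketplace or cloud platform might encode them as machine-checkable policy before listing a deployment. The tier names and obligation labels are mine, paraphrasing the list above; a real scheme would point to actual statutory or contractual requirements.

```python
# A minimal sketch (hypothetical names) of the risk tiers and the obligations
# that accumulate with them. Obligations only ever ratchet up with risk.
from enum import IntEnum


class RiskTier(IntEnum):
    BASELINE = 0   # general-purpose consumer interaction
    LOW = 1        # drafting, summarization, basic productivity
    MODERATE = 2   # decision support affecting individuals
    HIGH = 3       # safety-critical contexts
    HAZARDOUS = 4  # dual-use functions


# Obligations newly added at each tier; a deployment owes everything at or below its tier.
TIER_OBLIGATIONS = {
    RiskTier.BASELINE: {"ai_disclosure", "acceptable_use_policy",
                        "tier_escalation_guardrails", "user_flagging"},
    RiskTier.LOW: {"baseline_data_hygiene"},
    RiskTier.MODERATE: {"documented_risk_assessment", "human_oversight",
                        "ai_bill_of_materials"},
    RiskTier.HIGH: {"use_specific_pre_deployment_testing", "continuous_monitoring",
                    "incident_reporting"},
    RiskTier.HAZARDOUS: {"licensed_facility", "verified_operators"},
}


def required_obligations(tier: RiskTier) -> set[str]:
    """Duties accumulate: higher tiers inherit everything below them."""
    duties: set[str] = set()
    for t in RiskTier:
        if t <= tier:
            duties |= TIER_OBLIGATIONS[t]
    return duties


if __name__ == "__main__":
    # Example: a loan pre-qualification assistant lands in the moderate tier.
    print(sorted(required_obligations(RiskTier.MODERATE)))
```

The cumulative lookup is the point: because obligations only increase with tier, a platform or auditor can check a deployment’s paperwork against a single, predictable list.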
Close the loop at real-world chokepoints
AI-enabled systems become real when they’re connected to users, money, infrastructure, and institutions, and that’s where regulators should focus enforcement: at the points of distribution (app stores and enterprise marketplaces), capability access (cloud and AI platforms), monetization (payment systems and ad networks), and risk transfer (insurers and contract counterparties).
For high-risk uses, we need to require identity binding for operators, capability gating aligned to the risk tier, and tamper-evident logging for audits and post-incident review, paired with privacy protections. We need to demand evidence for deployer claims, maintain incident-response plans, report material faults, and provide human fallback. When AI use leads to damage, firms should have to show their work and face liability for harms.
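To illustrate the tamper-evident logging piece, here is a minimal sketch, with hypothetical field names, of a hash-chained audit log: each record commits to the previous record’s hash, so any after-the-fact edit breaks the chain and is detectable during a post-incident review. A production system would add digital signatures, stronger operator identity binding, and privacy-preserving redaction.

```python
# A minimal sketch of tamper-evident audit logging for high-risk AI operations.
import hashlib
import json
import time


def append_record(log: list[dict], operator_id: str, action: str, tier: str) -> dict:
    """Append an audit record whose hash covers the previous record's hash."""
    prev_hash = log[-1]["hash"] if log else "genesis"
    body = {
        "timestamp": time.time(),
        "operator_id": operator_id,   # identity binding for the operator
        "action": action,             # which capability was exercised
        "tier": tier,                 # risk tier the action was gated under
        "prev_hash": prev_hash,
    }
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append(body)
    return body


def verify_chain(log: list[dict]) -> bool:
    """Recompute every hash; any tampering with an earlier record shows up here."""
    prev_hash = "genesis"
    for record in log:
        body = {k: v for k, v in record.items() if k != "hash"}
        if body["prev_hash"] != prev_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != record["hash"]:
            return False
        prev_hash = record["hash"]
    return True


if __name__ == "__main__":
    log: list[dict] = []
    append_record(log, operator_id="op-123", action="clinical_summary", tier="high")
    append_record(log, operator_id="op-123", action="dosage_suggestion", tier="high")
    print(verify_chain(log))           # True
    log[0]["action"] = "edited later"  # simulate tampering
    print(verify_chain(log))           # False
```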
This approach creates market dynamics that accelerate compliance. If crucial business operations such as procurement, access to cloud services, and insurance depend on proving that you’re following the rules, AI model developers will build to specifications buyers can check. That raises the safety floor for all industry players, startups included, without handing an advantage to a few large, licensed incumbents.
The EU approach: How this aligns, where it differs
This framework aligns with the EU AI Act in two important ways. First, it centers risk at the point of impact: the Act’s “high-risk” categories include employment, education, access to essential services, and critical infrastructure, with lifecycle obligations and complaint rights. Second, it recognizes that broadly capable general-purpose AI (GPAI) systems deserve special treatment, without pretending publication control is a safety strategy. My proposal for the U.S. differs in three key ways:
First, the U.S. must design for constitutional durability. Courts have treated source code as protected speech, and a regime that requires permission to publish weights or train a class of models starts to resemble prior restraint. A use-based regime of rules governing what AI operators can do in sensitive settings, and under what conditions, fits more naturally within U.S. First Amendment doctrine than speaker-based licensing schemes.
Second, the EU can rely on platforms adapting to the precautionary rules it writes for its unified single market. The U.S. should accept that models will exist globally, both open and closed, and focus on where AI becomes actionable: app stores, enterprise platforms, cloud providers, enterprise identity layers, payment rails, insurers, and regulated-sector gatekeepers (hospitals, utilities, banks). Those are enforceable points where identity, logging, capability gating, and post-incident accountability can be required without pretending we can “contain” software. Those chokepoints also fall under many specialized U.S. agencies, no one of which may be able to write rules broad enough to cover the whole AI ecosystem. The U.S. should therefore regulate AI service chokepoints more explicitly than Europe does, to accommodate the different shape of its government and public administration.
Third, the U.S. should add an explicit “dual-use hazard” tier. The EU AI Act is primarily a fundamental-rights and product-safety regime. The U.S. also has a national-security reality: certain capabilities are dangerous because they scale harm (biosecurity, cyber offense, mass fraud). A coherent U.S. framework should name that category and regulate it directly, rather than trying to fit it into generic “frontier model” licensing.
China’s approach: What to reuse, what to avoid
China has built a layered regime for public-facing AI. The “deep synthesis” rules (effective January 10, 2023) require conspicuous labeling of synthetic media and place duties on providers and platforms. The Interim Measures for Generative AI (effective August 15, 2023) add registration and governance obligations for services offered to the public. Enforcement leverages platform control and algorithm filing systems.
The United States should not copy China’s state-directed control of AI viewpoints or information management; it is incompatible with U.S. values and would not survive U.S. constitutional scrutiny. The licensing of model publication is brittle in practice and, in the United States, likely an unconstitutional form of censorship.
But we can borrow two practical ideas from China. First, we should ensure trustworthy provenance and traceability for synthetic media, through mandatory labeling and forensic provenance tools. These give legitimate creators and platforms a reliable way to prove origin and integrity. When authenticity can be checked quickly at scale, attackers lose the advantage of cheap copies and deepfakes, and defenders regain time to detect, triage, and respond. Second, we should require operators of public-facing, high-risk services to file their methods and risk controls with regulators, as we do for other safety-critical projects. This should include due-process and transparency safeguards appropriate to liberal democracies, along with clear responsibility for safety measures, data protection, and incident handling, especially for systems designed to manipulate emotions or build dependency, which already include gaming, role-playing, and related applications.
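As a concrete illustration of labeling and verification, here is a minimal sketch using only Python’s standard library: the generator binds a provenance manifest to the exact media bytes, and a platform holding the verification key can check both at upload time. The field names and shared-key design are simplifications of my own; real deployments would use public-key signatures and standardized provenance credentials rather than a shared secret.

```python
# A minimal sketch of provenance labeling for synthetic media.
import hashlib
import hmac
import json


def label_media(media_bytes: bytes, manifest: dict, key: bytes) -> dict:
    """Return a provenance label binding the manifest to the exact media bytes."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    payload = json.dumps({"sha256": digest, **manifest}, sort_keys=True).encode()
    return {
        "manifest": manifest,
        "sha256": digest,
        "mac": hmac.new(key, payload, hashlib.sha256).hexdigest(),
    }


def verify_label(media_bytes: bytes, label: dict, key: bytes) -> bool:
    """Check that the media and manifest match the label and have not been altered."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    if digest != label["sha256"]:
        return False
    payload = json.dumps({"sha256": digest, **label["manifest"]}, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, label["mac"])


if __name__ == "__main__":
    key = b"demo-signing-key"  # illustrative only; real keys live in an HSM
    video = b"...synthetic video bytes..."
    label = label_media(video, {"generator": "example-model-v1", "synthetic": True}, key)
    print(verify_label(video, label, key))                # True
    print(verify_label(video + b"tampered", label, key))  # False
```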
A pragmatic approach
We cannot meaningfully regulate the development of AI in a world where artifacts copy in near real-time and research flows fluidly across borders. But we can keep unvetted systems out of hospitals, payment systems, and critical infrastructure by regulating uses, not models; enforcing at chokepoints; and applying obligations that scale with risk.
Done right, this approach harmonizes with the EU’s outcome-oriented framework, channels U.S. federal and state innovation into a coherent baseline, and reuses China’s useful distribution-level controls while rejecting speech-restrictive licensing. We can write rules that protect people while still promoting robust AI innovation.