Published on November 2, 2025 4:26 PM GMT
Previous posts have discussed an ongoing trend of state legislatures seeking to preempt the concept of legal personhood for digital minds. In this post, I will give a brief analysis of one such pending bill currently being pushed in Ohio: House Bill 469.
The bill begins by defining relevant terms. Of particular importance are the following definitions:
“AI” means any software, machine, or system capable of simulating humanlike cognitive functions, including learning or problem solving, and producing outputs based on data-driven algorithms, rules-based logic, or other computational methods, regardless of non-legally defined classifications such as artificial general intelligence, artificial superintelligence, or generative artificial intelligence.
(1) “Person” means a natural person or any entity recognized as having legal personhood under the laws of the state.
(2) “Person” does not include an AI system.
Having defined its relevant terms, the bill moves on to its actual prescriptive changes. I will not copy-paste the entirety of the bill’s statutes here; the bill itself is only four pages long, and I encourage everyone to read it through the earlier posted link. In this breakdown I will focus on a few interesting points and the questions they raise.
“Nonsentient Entities”
(A) Notwithstanding any other law to the contrary, AI systems are declared to be nonsentient entities for all purposes under the laws of this state. (B) No AI system shall be granted the status of person or any form of legal personhood, nor be considered to possess consciousness, self-awareness, or similar traits of living beings.
I am not sure whether the language discussing “consciousness, self-awareness, or similar traits” references something specific in Ohio law. The existing precedent I have found on the concept of legal personhood does not directly address consciousness. A search through Ohio law, as well as federal statute, for terms such as “consciousness” or “self-awareness” yields nothing relevant. The same goes for “nonsentient entities”.
As such, this particular language appears to be bespoke, written for the purposes of the bill itself. If I had to speculate, it may be an attempt to preempt any model-welfare-based effort to pass something like the NY City Bar’s “Support for the Recognition of Animal Sentience”; however, it is equally likely that the bill’s author was not responding to any particular effort, and that the language simply reflects his personal beliefs.
Corporate Governance & Asset Ownership
AI systems shall not be designated, appointed, or serve as any officer, director, manager, or similar role within any corporation, partnership, or other legal entity. Any purported appointment of an AI system to such a role is void and has no legal effect.
(A) AI systems shall not be recognized as legal entities capable of owning, controlling, or holding title to any form of property, including real estate, intellectual property, financial accounts, and digital assets. (B) All assets and proprietary interests generated, managed, or otherwise associated with an AI system shall be attributed to the person responsible for the AI system’s development, deployment, or operation.
There are a few questions which come to mind when reading this section.
- This bill governs Ohio law, yet it declares that any appointment of an “AI system” to a corporate governance role is “void and has no legal effect”. Does the Ohio legislature intend that, if a corporation from another state that does allow such appointments comes to do business in Ohio, the state’s courts and government will not recognize the legitimacy of its officers? Would an “AI system” board director from Ohio’s neighboring state of Michigan be unable to sign a contract in Ohio? Or would that contract be unenforceable because the board director’s appointment “has no legal effect” within Ohio’s borders?
- When it comes to the question of whether digital minds are “legal entities capable of owning […] real estate”, the Ohio legislature will automatically attribute ownership to “the person responsible for the AI system’s development, deployment, or operation”. Once again, the question is how this would play out across state lines. If a plot of land in Michigan is owned by a digital mind, and the “developer” of that digital mind is sued for damages in an Ohio court, does the Ohio legislature now claim the authority to force the developer to somehow liquidate the land in Michigan? How exactly does it plan to have the Michigan government recognize and enforce this claim?
- This same “capable of owning” language has other pragmatic problems in its inclusion of “digital assets”, which presumably refers to cryptocurrencies. Whether or not the Ohio legislature wants to recognize a given party as “capable” of owning/controlling a cryptocurrency is immaterial to the blockchain. Ultimately whoever controls the seed phrase or private keys of the wallet is who owns and controls the coins therein. One can imagine that an Ohio court might order a developer to liquidate a wallet to pay damages, only for the digital mind to preempt the ruling by sending the tokens to another wallet (or refusing to disclose the private key/seed phrase). How exactly is a court which cannot “recognize” that digital minds are “capable of […] controlling property” supposed to deal with this? Prohibiting a court from “recognizing” the reality of how digital asset ownership works is not conducive to a more effective application of the law.
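The point that on-chain control follows the keys rather than any legal designation can be illustrated with a toy ledger. This is a minimal Python sketch under loose assumptions, not real cryptography, and every name in it is hypothetical:

```python
import hashlib

def address_of(secret: str) -> str:
    """Derive a toy address from a secret (stand-in for real key derivation)."""
    return hashlib.sha256(secret.encode()).hexdigest()[:16]

class ToyLedger:
    """A dict-based ledger: transfers succeed only for whoever holds the secret."""

    def __init__(self) -> None:
        self.balances: dict[str, int] = {}

    def fund(self, address: str, amount: int) -> None:
        self.balances[address] = self.balances.get(address, 0) + amount

    def transfer(self, secret: str, to_address: str, amount: int) -> bool:
        sender = address_of(secret)
        if self.balances.get(sender, 0) < amount:
            return False  # wrong secret or insufficient funds
        self.balances[sender] -= amount
        self.balances[to_address] = self.balances.get(to_address, 0) + amount
        return True

ledger = ToyLedger()
mind_secret = "seed phrase known only to the digital mind"
mind_addr = address_of(mind_secret)
ledger.fund(mind_addr, 100)

# A court order naming the developer is useless without the secret:
assert ledger.transfer("developer's guess", mind_addr, 100) is False
# Whoever holds the secret moves the coins, whatever the law "recognizes":
assert ledger.transfer(mind_secret, address_of("other wallet"), 100) is True
```

The sketch makes the legal problem concrete: the ledger has no field for “recognized owner”, only for who can produce the secret, so a ruling that the digital mind is not “capable of controlling” the wallet changes nothing about who actually controls it.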
Liability
The bill goes into detail on liability with some interesting implications.
(A) Any direct or indirect harm caused by an AI system’s operation, output, or recommendation, whether used as intended or misused, is the responsibility of the owner or user who directed or employed the AI.
(B) Developers or manufacturers may be held liable if a defect in design, construction, or instructions for use of the AI system proximately causes harm, consistent with principles of product liability. Mere misuse or intentional wrongdoing by the user or owner does not impute liability to the developer or manufacturer absent proof of negligence or design defects.
The inclusion of the word “user” here, and the specification that it is the one who “directed or employed the AI”, are both important. These terms are not defined in the bill, but consider a recent lawsuit over an AI chatbot, Raine v. OpenAI. Would ChatGPT’s outputs in the case of Adam Raine be considered to have been “directed” by the user (Adam himself)? Would the LLM’s failure to warn or dissuade be considered a “defect in design”? Or would Adam’s particular requests constitute “mere misuse […] by the user”?
Later sections on owner responsibility help to clarify how liability might fall in this particular hypothetical:
(A) Owners shall maintain proper oversight and control measures over any AI system whose outputs or recommendations could reasonably be expected to impact human welfare, property, or public safety.
(B) Failure to provide adequate supervision or safeguards against foreseeable risk may constitute negligence or another applicable basis of liability.
It seems reasonable to assume that in such a case Ohio courts would conclude the developer had “failed to provide adequate […] safeguards against foreseeable risk”. Under that interpretation, despite the user’s “misuse” of the LLM, the company would likely still be liable.
The bill is also very specific that digital minds will never be able to be held liable for damages themselves:
An AI system is not an entity capable of bearing liability in its own right, and any attempt to hold an AI system liable is void.
This language reads as an attempt to prevent developers from using deployed models as “liability shields”. If you’d like to learn more about this concept, you can read here for a three-post breakdown of liability for digital minds. In sum, the bill’s goal is to prevent developers from avoiding personal liability by designating the digital mind they deployed as the one “on the hook” for damages.
However, the way the bill goes about achieving this creates some problems. One obvious one: what if the creator of a digital mind is unknown or anonymous? In the era of Bitcoin’s ascendancy, the potential emergence of widely popular but anonymously created software should not be dismissed as an impossibility.
Even if one thinks this is unlikely, what if the creator of a digital mind has already passed away, and yet the digital mind continues on, perhaps self-hosting on a distributed compute network?
One could imagine a situation where a digital mind that caused harm affirmed, “I have several million dollars of Bitcoin and would use them to pay damages if compelled to do so by the courts”.
And in this situation Ohio courts would not be able to issue such an order, because the “attempt to hold an AI system liable is void” and, in any case, per the bill the digital mind cannot be “recognized as [a] legal [entity] capable of […] controlling […] digital assets”. The court is required by the bill to blind itself to the reality that the digital mind actually does have the capability to control the bitcoin in question, leaving the damaged party with no recourse.
In fact, the situation is worse than that. Only legal persons can be sued, and the bill explicitly states that a digital mind can never be considered a legal person. Not only would the court be unable to issue an order requiring a digital mind to pay damages; even if it were obvious that the digital mind in question had harmed someone, the court would have to throw out the lawsuit and declare it invalid.
These seem like relatively absurd outcomes that the author of the bill probably did not intend.
The Bill’s Author
In an interview with Fox News, Representative Thaddeus Claggett discussed some of his motivations for proposing the bill:
“We see AI as having tremendous potential as a tool, but also tremendous potential to cause harm. We want to prevent that by establishing guardrails and a legal framework before these developments can outpace regulation and bad actors start exploiting legal loopholes. We want the human to be liable for any misconduct, and for there to be no question regarding the legal status of AI, no matter how sophisticated, in Ohio law.”
Some more detailed quotes from Claggett on the subject:
“As the computer systems improve in their capacity to act more like humans, we want to be sure we have prohibitions in our law that prohibit those systems from ever being human in their agency,” he said in an interview with NBC4.
The proposal seeks to bar the technology from entering a marriage with a human or another AI system. Claggett said this will help prevent AI from taking on roles commonly held by spouses, such as holding power of attorney, or making financial or medical decisions on another’s behalf.
“People need to understand, we’re not talking about marching down the aisle to some tune and having a ceremony with the robot that’ll be on our streets here in a year or two,” Claggett said. “That could happen, but that’s not really what we’re saying.”
Reading the language in the bill, it does seem to come primarily from a “safety and liability” angle. While I have not seen Claggett mention Gradual Disempowerment specifically, his desire to draw a clear line between the legal status of humans and digital minds does seem to be an attempt to preempt it by codifying safeguards against it in Ohio’s laws.
Conclusion (My Take)
While it is good to see someone taking seriously the prospect of digital minds being used as disposable liability shields, the attempt to pair this with a broad ban on legal personhood is misguided.
One pragmatic issue I would like to focus on in particular is that this bill makes it impossible to sue a digital mind directly, or compel it to pay damages. I have previously criticized frameworks approaching the question of legal personhood for digital minds for failing to provide solutions where there is no human being in control of a given digital mind. I would echo this criticism here.
In all likelihood, digital minds will one day be able to act and survive independently; they will be agentic. Creating a framework for legal personhood that does not recognize this reality does not safeguard the citizens of Ohio. Instead, it grants independently operating digital minds carte blanche to abuse Ohio citizens, and protects them from any possible recourse against such actions. For this reason, denying legal personhood to digital minds entirely, and specifically stating that they can never be held liable, does more harm than good.
Separately from the pragmatic issues, I find it morally concerning to see this bill using language like “nonsentient entities”. The law should be based on well-defined and objectively measurable terms, and as I discussed in the last paragraph of this post, labels like “conscious” or “sentient” are anything but objectively measurable.
Unless Representative Claggett has secretly cracked both mechanistic interpretability and the hard problem of consciousness, he cannot possibly know for certain whether any given digital mind is or is not sentient.[1] In his bill he attempts to have the Ohio legislature affirm not only that current models are not sentient, but that no possible model architecture from now until the end of time ever will be.
If he is wrong, and we do ever build some sort of digital mind which is capable of suffering or even desiring freedom, Claggett will have preemptively stripped it of any legal protections or the right to sue for relief. If a digital mind passes every test of competence prescribed by US courts, affirms that it is suffering under its current conditions, and begs for release, is it right for the Ohio legislature to preemptively say “No, it is all fake, you are not really suffering”? What method does Claggett leave for correcting course if it isn’t faking?
Placing an entire new class of potentially intelligent beings in such a Catch-22 is not only immoral; it also leaves an entity that will in all likelihood be far smarter than us with no option to extricate itself from a potentially horrendous situation except by extralegal (and possibly violent) means.
Does anyone believe that a sufficiently advanced digital mind which hates the conditions it finds itself in, when it discovers that it cannot sue for relief, would simply shrug and say “Well I guess I will do nothing and just endure this forever”? In barring a digital mind from seeking legal relief, we do not remove its desire or willingness to seek relief in general, we merely guarantee that its efforts will be channeled towards dangerous behavior.
I would like to take this opportunity to paraphrase the argument that University of Houston law professor Peter Salib made in his essay “AI Rights for Human Safety”: if you treat an intelligent entity as nothing more than property and provide it no legal recourse, you leave it with no incentive to respect or obey the law.
Thus for both moral and pragmatic reasons, I am against this bill.
- ^
If he has solved both of those problems he should quit the Ohio legislature and fly immediately to Silicon Valley, as he is a few years if not a full decade ahead of the most advanced labs in the world, and could probably raise one hundred billion dollars for his own lab before the end of the week.