Providing a generative model can expose providers—e.g., developers, researchers, and company representatives—to legal risk; targeted actions can mitigate it. Credit: arXiv (2026). DOI: 10.48550/arxiv.2601.03788
A short, seemingly harmless command is all it takes to use Elon Musk’s chatbot Grok to turn public photos into revealing images—without the consent of the people depicted. For weeks, users have been flooding the platform X with such deepfakes, some of which show minors.
Who is liable when AI is misused to create child sexual abuse material (CSAM)? A research team from the University of Passau, led by computer scientist Professor Steffen Herbold, holder of the Chair of AI Engineering, has investigated this question. "We wanted to know what measures developers and operators need to take to minimize the risk of prosecution and act responsibly," explains Professor Herbold.
The key finding: users who employ AI to create such images are themselves the main perpetrators. However, those responsible for the AI can also be held accountable, for example if they intentionally aid and abet the offense.
"Anyone who makes AI publicly available must be aware that it can also be misused," says lead author Anamaria Mojica Hanke, research assistant at the Chair of AI Engineering. "If the operator knowingly allows users to create CSAM, for example, and no appropriate countermeasures are taken, this may be relevant under criminal law."
When are developers liable?—Intent is crucial
According to the study, intent is the key factor: both users and those responsible for the AI must act knowingly and deliberately with regard to the illegal content and its distribution. The circumstances of each individual case are decisive. The risk is particularly high when an AI can generate realistic nude images.
"If an AI model explicitly allows the creation of revealing content, this can be considered evidence of aiding and abetting," says Professor Brian Valerius, holder of the Chair of Artificial Intelligence in Criminal Law. An extreme case would be specialized models from the darknet that are specifically trained to create depictions of abuse.
"But even if a provider prohibits use for illegal purposes in its general terms and conditions, this civil law provision does not exempt them from criminal liability," adds legal scholar Svenja Wölfel, a research assistant at Professor Valerius’s chair. "On the contrary, the prohibition may even show that the developer was aware of the risk and thus indicate the necessary intent."
Even foreign hosting offers no protection
In the study, the researchers also examined how the publication context of an AI influences legal responsibility. Whether the AI runs on German servers is not decisive: German authorities can still investigate if a German citizen uses the AI or if a German developer was involved. Even purely foreign cases fall under German law when international crimes such as child pornography are involved.
What does this mean in terms of the initial example? For Professor Herbold, it is at least questionable whether a line has been crossed in the current Grok case: "The protective mechanisms must be effective and state of the art. Given how easy it is to circumvent these mechanisms in Grok at present, it is questionable whether allowing only paying users to access the model is a sufficient response."
Lead author Mojica Hanke sums up the study as follows: "There is no guarantee of immunity from prosecution. Anyone who develops AI must implement clear protective mechanisms—both technical and legal."
In addition to the Passau researchers, Thomas Goger, Chief Public Prosecutor at the Bavarian Cybercrime Unit, was also involved in the study. The paper, entitled "Criminal Liability of Generative Artificial Intelligence Providers for User-Generated Child Sexual Abuse Material," was recently published as a preprint on arXiv—i.e., in a preliminary version so that the scientific community can discuss it.
The team is particularly pleased that the study has already undergone peer review and has been accepted for the International Conference on AI Engineering, which will take place in Rio de Janeiro in April 2026.
More information: Anamaria Mojica-Hanke et al., Criminal Liability of Generative Artificial Intelligence Providers for User-Generated Child Sexual Abuse Material, arXiv (2026). DOI: 10.48550/arxiv.2601.03788
Citation: New legal framework clarifies liability for AI-generated child abuse images (2026, January 14) retrieved 14 January 2026 from https://techxplore.com/news/2026-01-legal-framework-liability-ai-generated.html