Philosopher: AI is not a tool, but an instrument of power
- Predictive Power and Structural Risks
- AI as an Instrument of Power and Ideology
- Humans as a Verification Step Are Not Effective Protection
- AI Training by Meta and Co.
- Children Also Affected by Data Training
At the AI Week of the Baden-Württemberg Data Protection Commissioner, philosopher and ethicist Rainer Mühlhoff, Professor of “Ethics and Critical Theories of Artificial Intelligence,” spoke about the societal, political, and ideological dimensions of current AI developments. Mühlhoff painted a comprehensively critical picture that questions the apparent neutrality of AI and presents the technology as a complex socioeconomic instrument of power.
Ever since antiquity, according to Mühlhoff, there has been a fascination with “mechanizing the human.” However, while the technological imagination keeps expanding, “the futuristic visions of AI [...] often go hand in hand with an astonishing social conservatism.” Visualizations of AI – for example, as a “thinking brain of light” – also reveal a problematic anthropomorphization and mythologization of the technology, according to Mühlhoff.
He argued that AI should no longer be understood as “machine intelligence” in the sense of an autonomous system, but as “human-assisted AI,” which relies substantially on human participation – through data trails, click work, and everyday interactions. “Human collaboration, human cognitive performance is embedded in all AI systems,” said Mühlhoff. The systems create a new form of digital exploitation, characterized not only by globally unevenly distributed click work but also by the enormous water and energy consumption of model development.
Predictive Power and Structural Risks
Mühlhoff named predictive power as the most current form of data power – that is, the ability to “predict unknown information about any individuals.” The spectrum ranges from personalized advertising to the diagnosis of mental illnesses from speech patterns. This goes far beyond classic data protection concepts: the predictions concern not only those whose data was used to develop the systems but also third parties, about whom statements can be made on the basis of the models even though they never gave consent. This aspect is currently barely covered by applicable data protection law.
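A minimal sketch in Python can make this structural point concrete (purely illustrative: synthetic data and hypothetical features, not any real system): a model is fitted on data from people who volunteered it and is then applied to third parties who never contributed anything.

```python
# Illustrative sketch only: synthetic data, hypothetical features.
# A model is fit on data from people who disclosed a sensitive attribute,
# then applied to third parties who never gave any data or consent.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Volunteers": observable behavioural features plus a sensitive label
# they disclosed, e.g. in a study or marketing campaign.
X_volunteers = rng.normal(size=(500, 4))
y_sensitive = (X_volunteers[:, 0] + 0.5 * X_volunteers[:, 2] > 0).astype(int)

model = LogisticRegression().fit(X_volunteers, y_sensitive)

# "Third parties": the same behavioural features are observable for them
# too, so the model predicts a sensitive attribute they never shared.
X_third_parties = rng.normal(size=(3, 4))
print(model.predict_proba(X_third_parties)[:, 1])
```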
Using several examples, Mühlhoff showed how models can emerge from marketing campaigns or medical research and are then used for completely different purposes – for example, to evaluate job applicants. This gives rise to a dangerous form of secondary use that can no longer be contained by classic principles such as anonymization.
In joint research projects, Mühlhoff and the jurist Hanna Ruschemeier propose extending the purpose limitation principle of data protection to trained models as well: if an AI system is created in a medical context, it should not be able to “migrate” indefinitely into other contexts. The researchers therefore call for a register of models that documents their original purpose and makes compliance with that purpose limitation verifiable.
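What such a register might capture can be gestured at with a small, hypothetical sketch; the field names and the purpose check below are assumptions made for illustration, not Mühlhoff and Ruschemeier's actual design.

```python
# Hypothetical sketch of a purpose-bound model register; all field names
# and the check are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class RegisteredModel:
    model_id: str
    controller: str
    training_context: str                      # e.g. "medical research"
    permitted_purposes: set[str] = field(default_factory=set)

    def check_use(self, intended_purpose: str) -> bool:
        """Return True only if the intended use matches the registered purpose."""
        return intended_purpose in self.permitted_purposes

register = {
    "risk-model-01": RegisteredModel(
        model_id="risk-model-01",
        controller="University Hospital X",
        training_context="medical research",
        permitted_purposes={"clinical decision support"},
    )
}

# Secondary use in a completely different context would be flagged:
print(register["risk-model-01"].check_use("job applicant screening"))  # False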
AI as an Instrument of Power and Ideology
AI systems contribute significantly to the exercise of political power and propaganda, Mühlhoff explained. Generative models enable new forms of symbolic violence and disinformation. “Fake news” in the classic sense is being replaced by “hyperreal images” – aesthetically elevated representations that do not deceive but deliberately show that they are artificially generated. According to Mühlhoff, these hyperreal images function as demonstrations of power, signaling superiority and authority, among other things.
Furthermore, Mühlhoff traced the ideological roots of the AI elites, which he locates in transhumanism and longtermism. In these worldviews, technological progress is seen as an evolutionary force that should “elevate humans beyond themselves.” According to Mühlhoff, eugenic, elitist, and anti-democratic tendencies are evident here. He is particularly critical of entrepreneurs like Elon Musk and Peter Thiel, whose thinking oscillates between promises of technological salvation and political authoritarianism. “In the entanglement of promises of technological salvation and the desire for political power lies a certain fascistoid potential,” said Mühlhoff.
Humans as a Verification Step Are Not Effective Protection
He does not believe that concepts like “Human in the Loop” offer effective protection; in practice, the people involved simply lack the time for such review. “AI is a new form of enclosure in exploitation apparatuses,” said Mühlhoff. It leads less to the replacement of human work than to its enclosure – which, in turn, enables more control and greater interchangeability.
AI is often referred to as a “mere tool,” which, according to Mühlhoff, is not the case. Technology itself influences the purposes of its use and is not neutral. Transparency is important, but not a guarantee, because in complex societies, not everyone can understand the internal mechanisms. What is crucial, however, is verifiability by independent actors and institutions.
Especially on topics like web scraping and data extraction, Mühlhoff advocated a power-sensitive perspective: the question is not only whether the processing is proportionate, but who is carrying it out and which interests are at play. As long as the production and use of large AI models are, in practice, accessible only to well-capitalized corporations, little remains of any “orientation toward the common good.” According to Mühlhoff, data protection can protect against abuse of power, inequality, and digital authoritarianism, and thus secure the foundations of an open, democratic society.
AI Training by Meta and Co.
Large systems like ChatGPT learn from vast amounts of data and in doing so also store personal information. As Junior Professor Paulina Jo Pesch of FAU Erlangen-Nuremberg explained in another presentation, this is highly problematic from a data protection perspective because models “store more than you want and then allow the extraction of information from the training data.”
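A minimal sketch of such an extraction test could look roughly as follows (assumptions: a small public model and a passage suspected to be in its training corpus; this does not reproduce any of the cases Pesch discussed).

```python
# Illustrative memorization probe: prompt a model with the prefix of a
# passage likely present in its training data and check whether it
# completes the passage verbatim rather than producing something new.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small public model, chosen for illustration only
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Prefix of a well-known text that plausibly occurs in the training corpus.
prefix = "We the People of the United States, in Order to form"
inputs = tok(prefix, return_tensors="pt")

# Greedy decoding: a verbatim continuation suggests memorization.
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tok.decode(output[0], skip_special_tokens=True))
```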
It becomes particularly critical when so-called hallucinations lead to false or defamatory statements about real people. “This is a rather striking case of incorrect data coming from a large language model,” said Pesch, referring to a well-known example from 2024 in which ChatGPT falsely portrayed a journalist as a criminal. At the same time, Big Tech wants to use ever more data to train its AI models. “Meta wants personal information to be used for training,” Pesch emphasized.
Children Also Affected by Data Training
According to Pesch, particularly vulnerable groups such as children and adolescents can also be affected by data training – without effective technical protection mechanisms or easily usable opt-out options. Although Meta has claimed not to use content from minors for training, this assurance offers little protection: young people often create accounts on Instagram or Facebook with a false age, and Meta does not check this effectively, so their content is treated as adult data and can potentially flow into AI training. Many people do not exercise their right to object to the use of their data for Meta’s AI training, according to Pesch, because Meta has made the process deliberately complicated and opaque.
The consumer advice center NRW had taken legal action against Meta’s practice, but was unsuccessful before the Higher Regional Court of Cologne. According to Pesch, the court ignored central technical facts. She finds it particularly problematic that “the court apparently did not know what was actually being trained and that it is not only used for translations, transcriptions in WhatsApp and Instagram and Facebook, but that such an AI model is being created for general purposes.”
In her blog post on CR-online “AI hot mess – Meta at German courts and the troubling state of EU regulation,” Pesch criticizes the proceedings in Cologne as an example of a fundamental enforcement problem under the pressure of the AI hype. There is a risk of insufficient enforcement and erosion of legal protection mechanisms. A court that takes its role seriously would not allow Meta’s Llama language model to be trained with personal data, at least not at the current time.
(mack)
This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.