The dynamic development of generative artificial intelligence (AI), particularly exemplified by the large language models (LLMs) behind ChatGPT, Gemini, or Claude, presents states and administrations worldwide with a crucial strategic question: How can such instruments for text generation, knowledge discovery, and process support be used effectively without sacrificing digital sovereignty?
Powerful modern LLMs require vast amounts of data, expensive hardware, and significant energy – resources currently primarily controlled by a few, mostly non-European tech giants. Therefore, according to experts, it is crucial for the state to secure its ability to act, transparency, and control over this key technology.
The Competence Center Public IT (Öfit) at the Fraunhofer Institute FOKUS has examined the LLM-based systems of the federal administration in a recently published study funded by the Federal Ministry of the Interior to assess their level of independence. Digital sovereignty, in this context, means that Germany, together with Europe, can independently design and securely operate central digital services, data, and computing infrastructures according to its own rules.
The analysis of LLM projects was conducted along three strategic goals derived from the federal government’s digital policy: interchangeability, meaning the de facto availability of alternative solutions and the replaceability of system components; design capability, which includes in-house technical and organizational expertise for evaluating, operating, and further developing systems; and influence over providers, secured through market and negotiating power, for instance in procurement.
In-house Developments Reduce Dependency
The good news from the study is that, in contrast to previously identified "pain points" in office software or database products, no critical singular dependency on a single major corporation was found in the area of LLMs. The federal administration has succeeded in developing in-house solutions for many typical use cases of LLM-based systems. For the majority of everyday tasks, there is therefore no imperative need to rely on the products of large, often non-European corporations, which in itself reduces the risk of entering into new dependencies on third parties.
According to the researchers, the risks to the state’s ability to act are manageable from today’s perspective, as the solutions developed so far serve exclusively to support administrative staff. An outage would not immediately jeopardize the state’s operational capacity. Technically, sovereignty is supported by the fact that the LLMs mostly run on the administration’s own hardware and could be replaced with little to moderate effort if needed.
Open Source as a European Opportunity
At the level of the language models themselves, the federal administration predominantly relies on non-European open-source models operated within internal administrative infrastructure. While this strengthens interchangeability, since the LLMs can be hosted on the administration’s own infrastructure and replaced as needed, a strategic gap remains: given the evolving understanding of open source in the AI context, the authors strongly recommend considering whether the development of an in-house, openly available European LLM should be pursued. The goal must be lasting independence from market-dominant LLM providers and models anchored in an independent European basis of values and norms.
Relevant LLM projects in government agencies also face hurdles that hinder further growth and reusability. According to the study, these include legal AI regulations perceived as too complex, which delay development and require extensive legal expertise within the agencies. These uncertainties, combined with a sometimes perceived lack of legal expertise, limit the publication of developments as open source, the study states. Furthermore, surveyed project managers repeatedly expressed a desire for an AI-specific cloud infrastructure staffed with appropriately trained personnel to simplify operations.
The study contains various recommendations for action to secure digital sovereignty sustainably. These include expanding shared LLM infrastructures across departmental boundaries and strengthening open-source approaches. In addition, uniform legal frameworks should be established, for example through a mandatory "sovereignty check" for critical LLM projects. Procurement should be consolidated across federal levels to enforce digital sovereignty criteria and strengthen negotiating power vis-à-vis large providers. Federal Digital Minister Karsten Wildberger (CDU) views the results as confirmation "that we are already on the right track to creating a solid foundation for independent AI solutions in the federal administration."
(nen)
This article was originally published in German. It was translated with technical assistance and editorially reviewed before publication.