How can we provide ethical insights for Large Language Models?
6 min readJust now
–
Press enter or click to view image in full size
Image by author, generated using OpenAI DALL·E tool
In just a few years, LLMs have become the center of our lives. We can often see people interacting with them. Whenever we are curious about something from a simple question to an academic search, we generally ask these models first. They evolve so quickly that it can feel hard to keep up, and it brings with it some questions that will occupy our minds:
Are they safe? Are they trustworthy?
We aren’t the only ones concerned about this. In fact, the European Union published the AI Act in 2024 [1]. It classifies AI systems into three categories: **Prohibited AI, High-Risk AI and General-…
How can we provide ethical insights for Large Language Models?
6 min readJust now
–
Press enter or click to view image in full size
Image by author, generated using OpenAI DALL·E tool
In just a few years, LLMs have become the center of our lives. We can often see people interacting with them. Whenever we are curious about something from a simple question to an academic search, we generally ask these models first. They evolve so quickly that it can feel hard to keep up, and it brings with it some questions that will occupy our minds:
Are they safe? Are they trustworthy?
We aren’t the only ones concerned about this. In fact, the European Union published the AI Act in 2024 [1]. It classifies AI systems into three categories: Prohibited AI, High-Risk AI and General-Purpose AI (GPAI).
LLMs are in General-Purpose AI (GPAI). GPAI** **can be used in many fields. However, they must follow ethical principles. The European Union had already published the Ethical Principles for Trustworthy Artificial Intelligence in 2019 [2]. They outline seven ethical principles that artificial intelligence models should follow. That document remains the foundation of trustworthy AI today.
In this article, I discuss seven key areas that responsible LLMs must follow.
1. Human Agency and Oversight
Can LLMs affect our own decisions?
LLMs must be developed to support human decisions. We should be able to question and refuse decisions when we interact with them. While they can support fundamental rights, they can also violate them. In this regard, LLMs must remain under meaningful control of humans with proper human oversight and agency in the system’s decision, design and command cycle.
Press enter or click to view image in full size
Image by author, generated using OpenAI DALL·E tool
2. Technical Robustness and Safety
Can we trust LLMs for all the answers they provide?
LLMs can only be trusted if they are built with security. Security is one of the most important thing for LLMs, so they must have general safety mechanisms. These mechanisms provide protection for LLMs from cyberattacks. This includes security measures such as data poisoning, model leaks, or system manipulation. These examples can also negatively affect the reliability and accuracy of models. Additionally, LLMs must have a fallback plan to protect themselves from these examples.
Image by author, generated using OpenAI DALL·E tool
3. Privacy and Data Governance
What happens to our personal data when we share it with LLMs?
We can choose what personal data we share with LLMs, but we cannot know what happens after we share it. When we consider the development of ethical LLMs, we should first think about privacy and data protection. LLMs must maintain the privacy of their interactions with users. Users also should be informed about who can access their information and for what purposes. To develop fair LLMs, we have to make sure that high-quality and unbiased datasets should be used. These datasets also should be checked by using a testing and documenting process.
Image by author, generated using OpenAI DALL·E tool
4. Transparency
How can we build trust between LLMs and users?
Transparency means everyone should be able to understand the logic behind the LLMs. Why did it give that response? How did the LLMs produce that response? Transparency is the most important step in building trust between LLMs and users. When we do that, we must be careful with the traceability of models by ensuring clear and well-documented datasets, model decisions, and design processes. Systems should be explainable, which means they must provide clear outputs that ensure humans can understand and audit them. Users must be clearly informed when interacting** **with LLMs. Furthermore, we must be aware of their capabilities and limitations when we interact with LLMs.
Image by author, generated using OpenAI DALL·E tool
5. Diversity, Non-discrimination and Fairness
How can we make sure that LLMs treat everyone equally?
LLMs can make sure fairness if they are not developed with biased datasets. Biased datasets can include against people from different ages, genders, abilities and backgrounds. Therefore, LLMs must avoid biased datasets. If biases exist in the dataset, they must be identified and eliminated. LLMs should be developed based on accessibility and universal design principles, which means they should be usable by all people and everyone has the right to access LLMs equally. On the other hand, stakeholder participation is essential. Developers, users and all people interacting with LLMs must share responsibility during the development process. Their participation is essential to ensure that LLMs are built with a healthy process and ethical responsibility.
Image by author, generated using OpenAI DALL·E tool
6. Societal and Environmental Well-being
Can LLMs be used to build a better society for future generations?
LLMs have great potential for environmental and social impact and there has been a noticeable increase in articles and projects for the benefit of the environment and society. They are also paying attention to sustainability goals and the development process of models. For example, the training processes of LLMs can be very challenging for the environment. However, sustainable and environmentally friendly AI includes using fewer resources and less energy consumption. At the same time, we need to look at how LLMs affect society and democracy. It seems clear that responsible LLMs should not only serve current generations but should also provide benefits for future generations and the planet as a whole.
Image by author, generated using OpenAI DALL·E tool
7. Accountability
Who is responsible when an artificial intelligence system causes harm, and what precautions are taken?
One of the most fundamental questions we face is this: Can AI truly cause serious harm? And if it does, who should be held responsible?Accountability means that responsibility should be shared by developers, organizations, and regulators. Additionally, ensuring auditability can prevent potential harms and provide a mechanism for keeping AI systems under meaningful human control by minimizing and reporting negative impacts. On the other hand, knowing that redress is available when unexpected situations occur is essential to maintaining trust.
Image by author, generated using OpenAI DALL·E tool
Conclusion
LLMs will continue to grow and be an integral part of our lives. However, it is important that LLMs are always developed under human control and are human-centered. In this context, a big responsibility falls on organizations, developers and regulators, and it is significant for our lives that developments take place within ethical frameworks and even place ethics at the center. In this blog, we have underlined both the precautions that can be taken and where the responsibilities lie with strong references.
It is a great opportunity to witness such a huge technical evolution but it must also be used wisely for our future generations, for all people and for our planet.
References
[1] European Commission, Artificial Intelligence Act: High-Level Summary (2024), https://artificialintelligenceact.eu/high-level-summary/
[2] European Commission, Ethics Guidelines for Trustworthy AI (2019), European Commission, https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
[3] European Commission, Ethics Guidelines for Trustworthy AI (Futurium — AI Alliance Consultation) (2019), European Commission, https://ec.europa.eu/futurium/en/ai-alliance-consultation/guidelines/1.html