The EU classified over **50 types of AI applications as "high-risk."** If you use AI for hiring, customer service, or risk assessment, you're probably on that list. Even if you're a US company that's never set foot in Europe. Here's what you need to know, and what to do about it.
The AI Act applies to you if:
- You provide AI systems used in the EU (regardless of where you’re based)
- You import or distribute AI systems in the EU
- You use AI systems in your business operations in the EU
In other words: if your AI touches the EU market in any way, you’re covered. Anytime an EU user consumes your product, you’re accountable.
The most common misconception I see is founders thinking "we're just a small startup, this doesn't apply to us." Again, size doesn't matter; *use case* does.
The law stipulates a few risk categories into which the AI systems you deploy can fall.
The **highest level of risk** is considered unacceptable, and these uses are simply prohibited. Prohibited AI systems include the recognition of learners' emotions in educational institutions, manipulation that influences human behavior in a subliminal way, social scoring, and real-time biometric surveillance.
The second level, **"high-risk,"** is subject to the most detailed supervision. This concerns uses that may have a negative impact on people's fundamental rights. The list of use cases considered high-risk appears in Annex III of the EU AI Act and includes:
- Critical infrastructure, such as AI used in transportation or energy (e.g., predictive maintenance for trains)
- Biometrics, such as remote biometric identification systems
- Education and training, such as systems that determine access to education or assess student performance
- Essential public services
- Law enforcement, the administration of justice, and democratic processes
- Employment, such as automated recruitment and resume screening AI (yes, even basic filters)
That last one comes up a lot: people say "We just use AI for resume screening, that's not high-risk, everybody does it!" Wrong. Resume screening is explicitly listed. Even basic keyword filters count.
Then there are uses assessed as **limited risk**, which are subject only to a transparency requirement. Examples include chatbots or AI personal tutors for trainees in an LMS or digital learning environment. In such cases, users must be made aware that they are interacting with a machine.
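In practice, that transparency requirement can be as simple as disclosing the machine up front. Here's a minimal sketch in Python, assuming a generic chat backend; the disclosure wording and function names are illustrative, not prescribed by the Act:

```python
# Illustrative only: the Act requires users to know they are talking
# to a machine; it does not prescribe this wording or mechanism.
AI_DISCLOSURE = (
    "You are chatting with an AI assistant, not a human. "
    "Responses are generated automatically."
)

def chat_reply(user_input: str, first_turn: bool) -> str:
    """Prefix the first reply of a session with an explicit AI disclosure."""
    reply = f"Here's what I found about: {user_input}"  # stand-in for a real model call
    if first_turn:
        return f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(chat_reply("enrollment deadlines", first_turn=True))
```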
The last category covers systems that pose minimal risk; these are not regulated by the text. They include spam filters, video games, and photo apps.
Most US companies assume they fall into the minimal-risk category. But if you use AI for hiring, lending, or customer risk assessment, you're likely in the high-risk, or even prohibited, categories.
Before high-risk AI systems can be marketed in the EU, providers will need to obtain a CE marking certifying that products sold in the EEA have been assessed to meet high standards of safety, health, and environmental protection. They will also need to register in the EU database for high-risk AI systems.
They will also have to implement a risk management system and ensure data governance to mitigate bias in the data they use.
Comprehensive technical documentation will be required, as well as a quality management system to ensure compliance at all stages of the product lifecycle, including post-market surveillance.
Companies will have to draft an EU declaration of conformity, guarantee the traceability and transparency of AI systems, and ensure human oversight of decisions made by AI.
Finally, they will have to ensure the robustness, accuracy, and cybersecurity of the high-risk AI systems they provide.
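To make traceability and human oversight concrete, one common pattern is an append-only decision log that records every automated decision alongside any human review or override. Here's a minimal sketch in Python; the field names are assumptions for illustration, not fields mandated by the Act:

```python
import json
import datetime
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class AIDecisionRecord:
    system_name: str        # which AI system produced the decision
    model_version: str      # exact version, for post-market traceability
    input_summary: str      # what the system was asked to decide on
    output: str             # the automated decision or score
    timestamp: str
    human_reviewer: Optional[str] = None  # who reviewed or overrode, if anyone
    human_override: Optional[str] = None  # the final human decision, if different

def log_decision(record: AIDecisionRecord, path: str = "decisions.jsonl") -> None:
    """Append one decision as a JSON line so every output stays auditable."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

# Example: a resume-screening decision that a recruiter later overrode.
log_decision(AIDecisionRecord(
    system_name="resume-screener",
    model_version="2.3.1",
    input_summary="candidate_id=4821, role=backend engineer",
    output="rejected",
    timestamp=datetime.datetime.now(datetime.timezone.utc).isoformat(),
    human_reviewer="recruiter@example.com",
    human_override="advanced to interview",
))
```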
Quick self-assessment: red flags to check
Ask yourself these questions:
□ Does your AI make automated decisions about people? If so, it's potentially high-risk
□ Does it operate in critical infrastructure (transportation, energy, healthcare)? If so, it's potentially high-risk
□ Does it use biometric identification? If so, it's likely high-risk or prohibited
□ Can a human easily override or intervene? If not, you have a compliance problem
□ Do you have documentation on accuracy and robustness? If not, you'll need it
□ Can you explain how decisions are made? If not, you have a transparency issue
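If it helps to operationalize this, here's the same checklist as a rough triage script in Python. The flags mirror the questions above; it's a screening aid under simplified assumptions, not a legal determination:

```python
# Simplified screening aid; Annex III is the authoritative list.
CHECKLIST = [
    ("Makes automated decisions about people?", "potentially high-risk"),
    ("Operates in critical infrastructure?", "potentially high-risk"),
    ("Uses biometric identification?", "likely high-risk or prohibited"),
    ("No easy human override or intervention?", "gap: human oversight"),
    ("No accuracy/robustness documentation?", "gap: documentation"),
    ("Cannot explain how decisions are made?", "gap: transparency"),
]

def triage(answers: list[bool]) -> list[str]:
    """Return the red flags raised by 'yes' answers, in checklist order."""
    return [flag for (_, flag), yes in zip(CHECKLIST, answers) if yes]

# Example: resume screening with human review but thin documentation.
print(triage([True, False, False, False, True, False]))
# ['potentially high-risk', 'gap: documentation']
```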
What should you do now?
If your AI system might be high-risk:
- Check Annex III against your use case (https://artificialintelligenceact.eu/fr/annex/3/)
- Document your AI system's intended purpose and limitations (a minimal sketch follows at the end of this section)
- Start building your compliance roadmap
- Be aware of the timeline:
August 1, 2024: AI Act entered into force
February 2, 2025: Prohibited AI practices banned
August 2, 2025: General-purpose AI rules apply
August 2, 2026: The Act fully applies to high-risk AI systems (Annex III)
The clock is ticking. If you’re high-risk, start now.
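For the documentation item above, a minimal starting point might be a structured record of intended purpose and known limitations. The fields below are my assumptions for illustration; the Act's actual technical documentation requirements (Annex IV) go considerably further:

```python
from dataclasses import dataclass, field

@dataclass
class IntendedPurposeDoc:
    system_name: str
    intended_purpose: str                # what the system is designed to do
    intended_users: str                  # who is meant to operate it
    out_of_scope_uses: list[str] = field(default_factory=list)  # explicit non-uses
    known_limitations: list[str] = field(default_factory=list)  # accuracy, bias, edge cases

doc = IntendedPurposeDoc(
    system_name="resume-screener",
    intended_purpose="Rank inbound applications for recruiter review",
    intended_users="Internal recruiting team",
    out_of_scope_uses=["Fully automated rejection without human review"],
    known_limitations=["Lower accuracy on non-English resumes"],
)
print(doc)
```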
Final note: Consider the regulatory sandbox option
August 2, 2026 is also the deadline for each Member State to have at least one AI regulatory sandbox operational. You should consider taking advantage of this opportunity: a regulatory sandbox is a controlled environment where companies can test their products or services under real-world conditions, under the supervision of a regulator. The goal is to enable companies to develop their projects while complying with the rules.
Through these mechanisms, companies can:
- Test AI systems in a controlled environment with regulatory support.
- Gain legal certainty, thereby facilitating compliance with standards.
- Access the market without risking administrative penalties for minor infringements, as long as they operate in good faith within the testing framework.
Key Articles of the Act:
- Article 5: Prohibited AI practices
- Article 14: Human oversight requirements
- Article 15: Accuracy, robustness and cybersecurity
- Article 50: Transparency obligations
- Article 57: Regulatory sandboxes
- Article 71: EU database for high-risk AI systems
- Chapter III: High-risk AI systems
- Annex III: List of high-risk AI use cases
About the Author
Lea Leu is a tech and privacy lawyer specializing in AI governance, IP licensing, and data protection. Licensed in New York, she advises on navigating GDPR, CCPA/CPRA, and the EU AI Act.
📧 Email: lleumassart@gmail.com 🔗 LinkedIn: Connect with Lea
Feel free to reach out.
Disclaimer: This article is for informational purposes only and does not constitute legal advice. Consult with qualified legal counsel for your specific situation.