I have evaluated 10 Berlin AI startups in the last 3 months. All 10 got their risk classification wrong.
Here is the pattern I see every time:
Founder: "We are a chatbot. That is minimal risk, right?"
Me: "What data do you process?"
Founder: "User messages, behavior, location history..."
Me: "Do you make recommendations based on their profile?"
Founder: "...yes."
Me: "You are high-risk."
This happens 8 times out of 10. And the EU AI Act's high-risk obligations start applying in August 2026.
The three mistakes everyone makes
Mistake 1: Confusing the tool with the risk
Founders think:
- We are a chatbot = minimal risk
- We analyze documents = minimal risk
- We are B2B = minimal risk
But risk is not about the tool. It is about what you do with it.
Examples:
- Chatbot answering FAQs = limited risk
- Chatbot profiling users and recommending products = high risk
- Document analyzer for research = minimal risk
- Document analyzer for hiring decisions = high risk
Same tool. Different risk. You might not realize you are high-risk until August 2026. By then, you need roughly a year of documentation and process work behind you.
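If it helps to see that logic spelled out, here is a minimal sketch in Python. The function and the use-case labels are mine, invented for illustration; they are not terms from the Act. The point it encodes: classification hangs on the use case, never the tool.

```python
# Illustrative sketch only, not legal advice. Use-case labels are invented.
RISK_BY_USE_CASE = {
    "answer_faqs": "limited",          # chatbot answering FAQs
    "profile_and_recommend": "high",   # same chatbot, now profiling users
    "summarize_research": "minimal",   # document analyzer for research
    "screen_job_applicants": "high",   # same analyzer, now a hiring decision
}

def classify(tool: str, use_case: str) -> str:
    # 'tool' is deliberately ignored: only what you do with it matters.
    return RISK_BY_USE_CASE.get(use_case, "unknown: read the regulation")

print(classify("chatbot", "answer_faqs"))            # limited
print(classify("chatbot", "profile_and_recommend"))  # high
```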
Mistake 2: Not reading the regulation
The EU AI Act lists the high-risk categories in Annex III. I show founders the list. They say "none apply to us."
Then I ask one question: "Do you process personal data in a way that affects someone's work, economic situation, or eligibility for services?"
Answer: Yes. 7 out of 10 times.
High-risk examples from the law:
- Employment: AI that decides job suitability, rates performance, assigns tasks
- Money: AI that assesses credit, prices insurance, judges benefit eligibility
- Education: AI that evaluates learning, determines admission, monitors behavior
- Immigration: AI that assesses visa applications
- Law enforcement: AI that predicts criminal risk
The pattern: if your AI makes a decision about a person, and that decision affects their life materially, you are probably high-risk.
Mistake 3: Waiting until August 2026
This is the biggest mistake.
High-risk AI obligations start in August 2026. But compliance is not a switch you flip on deadline day: the required documentation covers risk management, data governance, and testing processes that need to have been running for months. If you launch a high-risk AI today and do not evaluate it until August 2026, you are non-compliant from day one, with roughly 12 months of groundwork missing.
Smart move: evaluate now. If you are high-risk, start documenting now. By August 2026, you are ready.
Dumb move: launch today as "minimal risk," realize in August 2026 that you are high-risk, then try to produce 12 months of documentation in an emergency.
How to classify yourself correctly
Answer these three questions:
- Use case: Does your AI make decisions about people? (hiring, credit, education, law enforcement, services, immigration)
- Data: Do you process personal data, especially sensitive stuff? (behavior, economic situation, health, location)
- Impact: Does your decision materially affect someone's life? (job offer, loan denial, service access)
If YES to all three: You are high-risk.
If YES to one or two: You may be high-risk or limited-risk. Get legal advice.
If NO: Likely minimal-risk. But read the regulation to be sure.
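If you want the three questions as a quick self-check, here is a toy Python version. The parameter names and verdict strings are mine and mirror the list above; treat it as a back-of-the-envelope filter, not legal advice.

```python
# Toy self-check mirroring the three questions above. Not legal advice.

def classify_risk(decides_about_people: bool,
                  processes_personal_data: bool,
                  material_impact: bool) -> str:
    """Rough EU AI Act self-classification from three yes/no answers."""
    yes_count = sum([decides_about_people, processes_personal_data, material_impact])
    if yes_count == 3:
        return "high-risk: start documenting now"
    if yes_count >= 1:
        return "probably high- or limited-risk: get legal advice"
    return "likely minimal-risk: read the regulation to be sure"

# The chatbot from the opening dialogue:
print(classify_risk(decides_about_people=True,     # recommends products per profile
                    processes_personal_data=True,  # messages, behavior, location
                    material_impact=True))         # shapes what users are offered
```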
Need to classify your AI system?
Do not wait until August 2026. Get ahead of compliance now.
Book a consultation