Why Most Startups Get EU AI Act Classification Wrong

August 2026 enforcement is coming. Here is what founders are missing.

I have evaluated 10 Berlin AI startups in the last 3 months. All of them classified themselves wrong.

Here is the pattern I see every time:

Founder: "We are a chatbot. That is minimal risk, right?"

Me: "What data do you process?"

Founder: "User messages, behavior, location history..."

Me: "Do you make recommendations based on their profile?"

Founder: "...yes."

Me: "You are high-risk."

This happens 8 times out of 10. And enforcement of the EU AI Act's high-risk obligations starts in August 2026.

The three mistakes everyone makes

Mistake 1: Confusing the tool with the risk

Founders think the tool determines the risk: "We are a chatbot, so we are minimal risk."

But risk is not about the tool. It is about what you do with it.

Examples:

A chatbot that answers product FAQs: minimal risk. The same chatbot making recommendations based on users' messages, behavior, and location history: high-risk.

Same tool. Different risk. If you do not realize you are high-risk until August 2026, you will suddenly owe 12 months of documentation.

Mistake 2: Not reading the regulation

The EU AI Act lists high-risk categories. I show founders the list. They say "none apply to us."

Then I ask one question: "Do you process people's personal data in a way that affects their work, economic situation, or eligibility for services?"

Answer: Yes. 7 out of 10 times.

High-risk examples from the law (Annex III):

  - Employment: recruiting, screening applications, evaluating candidates
  - Credit: scoring people or assessing creditworthiness
  - Education: deciding admissions or assessing students
  - Essential services: deciding eligibility for public benefits
  - Law enforcement, migration, asylum, and border control

The pattern: if your AI makes a decision about a person, and that decision affects their life materially, you are probably high-risk.

Mistake 3: Waiting until August 2026

This is the biggest mistake.

High-risk AI obligations start in August 2026, but they assume you have documentation reaching back months. If you launch a high-risk AI today and do not evaluate it until August 2026, you are suddenly non-compliant, with 12 months of missing documentation.

Smart move: evaluate now. If you are high-risk, start documenting now. By August 2026, you are ready.

Dumb move: launch today as "minimal risk," realize in August you are high-risk, then scramble to reconstruct 12 months of documentation after the fact.

How to classify yourself correctly

Answer these three questions:

  1. Use case: Does your AI make decisions about people? (hiring, credit, education, law enforcement, services, immigration)
  2. Data: Do you process personal data, especially sensitive stuff? (behavior, economic situation, health, location)
  3. Impact: Does your decision materially affect someone's life? (job offer, loan denial, service access)

If YES to all three: You are high-risk.

If YES to one or two: Probably high-risk or limited-risk. Get legal advice.

If NO: Likely minimal-risk. But read the regulation to be sure.
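The three questions above can be sketched as a simple self-check. This is a hypothetical triage helper, not legal advice: the class name, fields, and category strings are my own simplifications, not terms from the regulation.

```python
from dataclasses import dataclass


@dataclass
class SelfCheck:
    """Hypothetical EU AI Act triage helper -- a simplification, not legal advice."""

    decides_about_people: bool    # hiring, credit, education, law enforcement, services, immigration
    processes_personal_data: bool  # behavior, economic situation, health, location
    material_impact: bool          # job offer, loan denial, service access

    def classify(self) -> str:
        # Count the YES answers to the three questions.
        yes = sum([self.decides_about_people,
                   self.processes_personal_data,
                   self.material_impact])
        if yes == 3:
            return "high-risk"
        if yes >= 1:
            return "probably high-risk or limited-risk -- get legal advice"
        return "likely minimal-risk -- read the regulation to be sure"


# The chatbot from the conversation above: it profiles users and
# makes recommendations that affect their access to services.
chatbot = SelfCheck(decides_about_people=True,
                    processes_personal_data=True,
                    material_impact=True)
print(chatbot.classify())  # prints "high-risk"
```

Note the asymmetry: a single YES is enough to push you out of "minimal risk" and into territory where you need legal advice.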

Need to classify your AI system?

Do not wait until August 2026. Get ahead of compliance now.

Book a consultation