BlackCoffeeAI
Insight · compliance

What the EU AI Act actually means for Dutch SMEs

22 April 2026 · 7 min read

The short version

If you run a Dutch SME and you’re using AI to draft messages, classify leads, generate product copy, capture receipts, or schedule appointments, the EU AI Act applies to you in exactly two ways: an obligation to provide AI literacy training to your staff (Article 4), and the requirement to disclose to end users that they’re interacting with an AI when that fact isn’t otherwise obvious (Article 50).

That’s it. Your workflows aren’t “high-risk” systems under Article 6, and they aren’t prohibited practices under Article 5. You don’t need a CE mark, a conformity assessment, or a notified body. The compliance burden for the workflows we build at BlackCoffee is roughly two paragraphs in your privacy policy plus a half-day literacy session.

What “literacy training” actually has to cover

Article 4 doesn’t prescribe a curriculum. It says staff who deploy or use AI systems must have a sufficient level of AI literacy “considering their training and experience and the context in which the AI system will be used.” For a small clinic running an AI scheduler that prompts patients about no-shows, sufficient means:

  1. What the system does and doesn’t do. “It drafts reminder messages and predicts no-show risk based on past patterns. It does not diagnose, treat, or store sensitive health data.”
  2. Where the human-in-the-loop is. “Every reminder we send was either approved by a receptionist or follows a template you approved last quarter.”
  3. What to do when it gets it wrong. “Flag the message in the dashboard. We retrain weekly on flagged cases.”
  4. What rights end users have. “Patients can opt out of AI-assisted reminders by replying STOP.”

That’s the whole training. Two hours, half slide deck, half walk-through of the actual system. We deliver this as part of every pilot, not because regulators are asking, but because clients who understand the system are clients who use it well.

What “disclosure to end users” actually has to look like

If a salon customer sends a WhatsApp message and gets an instant AI-drafted reply, they need to know that. The law doesn’t say where or how — just that disclosure has to happen “in a clear and distinguishable manner, at the latest at the time of the first interaction or exposure.”

In practice this is one line in the auto-reply: “This message was drafted by our AI assistant — a real person reviews every reply before send.” Or, for chatbots that don’t pause for human review, “You’re chatting with our AI assistant. Type HUMAN at any time to reach a person.”

We bake this disclosure into every customer-facing template by default. If you’ve signed off on the reply pattern, the disclosure is in it.
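“Baked in by default” means the disclosure lives in the template layer, not in each individual bot. A sketch of that idea, using the disclosure lines quoted above (the function and flags are illustrative, not our production code):

```python
DISCLOSURE_REVIEWED = (
    "This message was drafted by our AI assistant — "
    "a real person reviews every reply before send."
)
DISCLOSURE_CHATBOT = (
    "You’re chatting with our AI assistant. "
    "Type HUMAN at any time to reach a person."
)


def render_reply(body: str, ai_drafted: bool, human_reviewed: bool) -> str:
    """Append the Article 50 disclosure to any AI-drafted customer message."""
    if not ai_drafted:
        # Purely human-written messages need no disclosure.
        return body
    if human_reviewed:
        return f"{body}\n\n{DISCLOSURE_REVIEWED}"
    # Fully automated chatbots get the escalation variant instead.
    return f"{body}\n\n{DISCLOSURE_CHATBOT}"
```

Because every outgoing message passes through one renderer, signing off on a reply pattern automatically signs off on the disclosure too.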

What you should ignore in the headlines

The AI Act headlines focus on three areas that don’t apply to most Dutch SMEs:

  • Foundation models / GPAI rules. These bind the providers (OpenAI, Anthropic, Mistral). You’re a deployer; the upstream provider’s compliance obligations flow down through your contract with us.
  • Article 6 high-risk systems. Critical infrastructure, education, employment-eligibility decisions, biometrics, law enforcement. A WhatsApp auto-reply is not these.
  • Conformity assessments. Required for high-risk systems. Not required for the productized pilots we run.

If you’re worried about whether your specific workflow is high-risk, the European Commission’s AI Office maintains a classification guide that’s clearer than most legal commentary. Of the seven pilots we offer, none touches high-risk territory.

What we put in our pilot DPAs

Every BlackCoffee pilot starts with a Data Processing Agreement signed before any client data moves. The DPA covers the GDPR side; the AI Act side is covered by:

  1. A line item under “scope of processing” naming the AI capability.
  2. Confirmation that no personal data leaves EU jurisdiction.
  3. The literacy training schedule.
  4. The disclosure language we’ll add to customer-facing templates.
  5. A right-to-explanation clause: customers can ask why a particular reply or recommendation was generated, and we have to be able to answer.
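Point 5 is only answerable if each outgoing message carries enough provenance to reconstruct why it was generated. One possible shape for that log, with all field names being assumptions rather than a prescribed schema:

```python
import time


def log_generation(message_id: str, template_id: str, model: str,
                   trigger: str, log: list[dict]) -> dict:
    """Record the provenance needed to answer a right-to-explanation request."""
    record = {
        "message_id": message_id,
        "template_id": template_id,  # which client-approved template was used
        "model": model,              # upstream provider/model version
        "trigger": trigger,          # non-sensitive summary of what prompted it
        "timestamp": time.time(),
    }
    log.append(record)
    return record


def explain(message_id: str, log: list[dict]) -> str:
    """Answer: why was this particular reply generated?"""
    for record in log:
        if record["message_id"] == message_id:
            return (f"Reply {message_id} used template {record['template_id']} "
                    f"via {record['model']}, triggered by: {record['trigger']}.")
    return f"No generation record found for {message_id}."
```

Keeping the trigger field to a non-sensitive summary matters here: the explanation log itself must not become a second store of personal data.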

The whole document is four pages. We’ve never had a Dutch SME push back on it, because it’s mostly things they’d want anyway.

The real risk

The AI Act is a relatively gentle piece of legislation for SMEs who deploy productized AI sensibly. The real risk is the Dutch competitor down the road who uses AI to send quotes faster, reply to leads faster, draft documents faster — and who gets to your customers first.

That’s the only race that matters. The literacy session takes a morning. Get on with it.


If you want a copy of our standard DPA + literacy curriculum as a starting point for your own deployment, send us a message — we’ll send it back the same day.
