
The EU AI Act becomes enforceable on 2 August 2026. Here's what SMBs actually need to know.

11 May 2026·6 min read·Keloa
eu · ai-act · gdpr · compliance

A founder in Utrecht asked us last week whether her AI support setup was going to be "AI Act compliant" by August. She'd been emailed by three different consultancies offering AI Act readiness packages, all of them quoting four-figure project fees and 90-day timelines. We told her what we tell everyone: read the actual deadlines before paying anyone, because the consultancy market is busy turning regulatory anxiety into invoices.

This post is the version of that conversation we wish we could just hand to people. Here's what's actually happening on 2 August 2026, what it means for a European SMB running AI customer support, and which questions are worth asking your vendor.

What happens on 2 August 2026

The AI Act has been on a staggered timeline since it was finalised. The two dates that matter for customer support tooling:

  • 2 August 2025. Obligations for providers of general-purpose AI models (GPAI) entered into application. Model providers, the companies actually training the frontier models, had to start producing technical documentation, copyright compliance statements, and transparency summaries.
  • 2 August 2026. The European Commission's AI Office gets full enforcement powers. From this date, the Commission can request information, order mitigations, mandate withdrawal of non-compliant models, and impose fines of up to 3% of total worldwide annual turnover or €15 million, whichever is higher.

The August 2026 date is the one the consultancies are leaning on. It is significant. It is also less significant for SMBs deploying AI customer support than the marketing makes it sound.

Here's why. The obligations the Commission can now enforce are, almost entirely, obligations on the upstream model providers. If you're an SMB using an AI customer support tool, you are a deployer, not a provider. Your direct obligations are much narrower, and most of them are obligations you already have under GDPR and the existing consumer-protection regime.

What you actually have to do as a deployer

The AI Act's deployer obligations for general-purpose AI applications (which is what almost all customer support AI is) fall into a small set of practical asks. Translated out of regulatory language:

  • Tell customers when they're talking to AI. Article 50 transparency. If a person is interacting with an AI system, they have to know. In practice this is a single sentence in your chat widget's opening message, or a clear bot avatar, or a "hello, I'm Keloa's AI assistant" line. Most tools handle this by default. Check that yours does.
  • Don't let the AI make decisions that legally affect customers without human review. This is the line between "AI suggested a refund" and "AI executed a refund." For most SMB support setups, you already work this way because you don't trust the bot to issue refunds without a human nod. Keep doing that.
  • Keep records of the AI's decisions, especially in higher-risk contexts. Logs of conversations, what the AI was given to work from, what it responded. Standard logging.
  • Don't use AI in the explicitly prohibited categories. Social scoring, predictive policing, real-time biometric ID in public, etc. Customer support is not on this list. You're fine.

None of this requires a 90-day project. It requires a one-page internal note documenting how you handle the four bullets above, attached to your existing GDPR records of processing. If your vendor cannot tell you, in writing, how their tool helps you meet each of those four, that's the conversation to have, not the consultant.

The bit nobody talks about: sub-processor sprawl

The thing that actually trips up European SMBs in 2026 isn't the AI Act itself. It's the gap between the AI Act, GDPR, and how AI customer support tools are usually built.

The typical AI support stack has, somewhere in it, a frontier language model, a retrieval system, a vector index, an email infrastructure provider, a cloud host, a monitoring service, an analytics pipeline, and a payment processor. Many of those are in the United States. Several of them rely on each other in chains that aren't always visible from the vendor's website.

Under GDPR Article 28, every one of those that processes EU personal data is a sub-processor. You're required to know who they are, where they sit, and on what legal basis any data leaves the EU. After Schrems II, the bar for non-EU transfers is high enough that most European compliance officers prefer to avoid them entirely when there's an EU-resident alternative.

This is where the AI Act adds a new wrinkle. Once enforcement is live in August, the AI Office can investigate where models are trained and deployed, and where inference actually runs. Vendors that have been vague about this are about to find that being vague is no longer free.

The practical question for you, as an SMB, is this: where does my data actually go when a customer sends a chat message? We're upfront about ours because the answer is short. Your data is processed in our EU cloud infrastructure (Amsterdam primary, Dublin for backups). Inference runs against frontier language models with EU-resident endpoints. Our sub-processors page lists every entity that touches the data, with purpose and location. You can read it in a couple of minutes.
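A sub-processor register doesn't need to be anything more than structured data you can query. A sketch of what that looks like, with every entity name and location invented for illustration (this is not our actual list), flagging entries that sit outside the EU/EEA and therefore need a transfer mechanism:

```python
# Hypothetical sub-processor register; every entry here is invented for illustration.
SUB_PROCESSORS = [
    {"entity": "ExampleCloud B.V.",  "purpose": "hosting",             "location": "NL"},
    {"entity": "ExampleModels Ltd",  "purpose": "LLM inference",       "location": "IE"},
    {"entity": "ExampleMail Inc.",   "purpose": "transactional email", "location": "US"},
]

# EU member states plus the EEA three.
EEA = {"AT", "BE", "BG", "HR", "CY", "CZ", "DK", "EE", "FI", "FR", "DE", "GR",
       "HU", "IE", "IS", "IT", "LV", "LI", "LT", "LU", "MT", "NL", "NO", "PL",
       "PT", "RO", "SK", "SI", "ES", "SE"}

def non_eu_transfers(register):
    """Entries that need a transfer basis (SCCs, DPF certification) under GDPR."""
    return [p for p in register if p["location"] not in EEA]
```

Run `non_eu_transfers` over your vendor's real list and the Schrems II conversation becomes concrete: every entry it returns is one you'll have to defend.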

The vendors who can't show you a list like this, or who have one but most of the entries are American, are not necessarily breaking the law. The Data Privacy Framework provides a legal basis for transfers to certified US recipients. But you'll have to defend that posture to your own customers, your own auditors, and increasingly, your own buyers' procurement teams. EU residency is becoming the simpler answer to a question that's getting asked more often.

What "EU-hosted" actually means (and doesn't)

Worth flagging because the term has been getting abused. "EU-hosted" has at least three meanings in vendor marketing:

  • Marketing site is in the EU. Meaningless. The site is just a brochure.
  • A specific tenant is in the EU, but inference happens in the US. Common with American vendors who added an EU region under pressure but still route AI model calls overseas. Read the small print. If the model inference isn't EU-resident, neither is the AI part of your service.
  • Everything is in the EU, including model inference, retrieval, queues, and operational tooling. This is the version that gives you the simple answer to a Schrems II question.
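If you'd rather verify a vendor's claim than trust the label, the three meanings above reduce to one question asked twice: which region holds the tenant, and which region serves the model endpoint? A hedged sketch (region names and config shape hypothetical) that tells the three tiers apart:

```python
# Hypothetical vendor config. "EU-hosted" only means something if every
# component, including model inference, resolves to an EU region.
VENDOR_CONFIG = {
    "marketing_site":   "us-east-1",     # meaning 1: irrelevant, it's a brochure
    "tenant_region":    "eu-west-1",     # meaning 2: your data at rest
    "inference_region": "eu-central-1",  # meaning 3: the part Schrems II cares about
}

EU_REGIONS = {"eu-west-1", "eu-central-1"}  # illustrative region names

def residency_tier(cfg: dict) -> str:
    if cfg["tenant_region"] not in EU_REGIONS:
        return "not EU-hosted"
    if cfg["inference_region"] not in EU_REGIONS:
        return "EU tenant, non-EU inference"
    return "fully EU-resident"
```

The middle tier is the one to watch for: an EU tenant with US inference gives you the marketing claim without the compliance answer.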

The default for most American AI vendors is to route inference wherever capacity is cheapest, which is usually not the EU. We chose the third option from day one because it was simpler to explain and easier to audit. It cost us a bit of latency and a bit of margin, both of which we think are reasonable prices for the conversation we get to have with European buyers.

What this means for you

If a consultancy is offering you an AI Act readiness package, ask them which deployer obligation specifically you currently fail. Most SMBs running a well-configured AI support tool will struggle to name one. The work to be done is mostly documentation: a paragraph on how you disclose AI, a paragraph on what your AI is and isn't allowed to do without human review, a record of what your vendor processes and where. Attach that to your existing GDPR ROPA. You're done in an afternoon.

What's worth spending time on, before August and after, is the sub-processor question. Get your vendor's list. Read it. If half the entries are American, decide whether you can defend that. If you can't, that's the call to make, not the readiness package.

If you want to see how we handle this end-to-end, our security page covers data residency, encryption, sub-processors, and the DPA we offer every customer. Or book a demo and we'll walk through your specific compliance shape. We're not consultants and we won't bill you for the conversation.

Want to see how this works in our product?

Free Starter plan, 50 AI replies, no credit card. Set up in ten minutes.