GDPR and AI Chatbots: A 2026 Compliance Guide for Italian SMBs
What an Italian SMB actually needs to do — legally and practically — to deploy an AI chatbot that complies with GDPR, the EU AI Act, and Garante guidance in 2026.
Most Italian SMBs deploying an AI chatbot in 2026 are touching three regulatory regimes at once: the GDPR, the EU AI Act (in force since August 2024, with article-by-article application running through 2026 and 2027), and the Italian Garante's evolving guidance on AI and personal data. The good news: a customer-facing chatbot that answers questions about your products is generally a low-risk system. The bad news: "low-risk" still requires a real Data Processing Agreement, a transparent privacy notice, and a defensible position on training data, retention, and international transfers.
This guide walks through what actually applies to a 5-50 person Italian business running a chatbot on its website or WhatsApp Business, with the legal references and the practical actions side by side.
What changed between 2024 and 2026
Three things matter for an SMB in 2026 that did not matter in 2023.
The EU AI Act is law. Regulation (EU) 2024/1689 entered into force on 1 August 2024. Prohibitions on unacceptable-risk AI (e.g. social scoring, untargeted scraping of facial images) applied from 2 February 2025; obligations on general-purpose AI models applied from 2 August 2025; the bulk of the obligations on high-risk and limited-risk systems apply from 2 August 2026 (see the AI Act implementation timeline).
The Garante has actively enforced. Italy's data-protection authority blocked ChatGPT in March 2023, fined OpenAI €15 million in December 2024 for unlawful processing during training, and has issued repeated guidance on chatbot transparency, training-data minimisation, and the role of legitimate interest (Garante decision against OpenAI, December 2024).
Schrems II is settled but still load-bearing. The EU-US Data Privacy Framework (in force since July 2023) gives a recognised path for transfers to certified US providers, but you still owe a documented transfer assessment and an SCC-aligned DPA for any vendor outside the EEA.
What a chatbot processes (the part most SMBs miss)
The first compliance question is not "is my chatbot safe?" — it is "what personal data does it actually process?" In practice, a typical Italian SMB chatbot processes three categories.
Conversation content. The user's questions and the bot's answers. These almost always contain personal data once a real customer starts using it: an email address asking about an order, a name introduced in a complaint, a phone number left for a callback. Treat conversation logs as personal data by default, even if your "intended" use was anonymous browsing.
Identifiers and metadata. IP address, browser fingerprint, session ID, timestamps, page URLs. Under GDPR these are personal data when they identify a natural person, directly or indirectly.
Training and retrieval inputs. The documents you load into the bot's knowledge base. If those documents contain personal data — staff names in a contact page, customer testimonials with photos, internal policies that reference identified employees — that processing needs its own legal basis. This is the part that most "we just uploaded the website" deployments overlook.
- Data controller (titolare del trattamento)
The party that determines the purposes and means of processing personal data. When you deploy a chatbot on your own site, you are the controller for the conversations that happen there. Your chatbot vendor is typically a processor (responsabile) acting on your instructions — but only if you have a written DPA in place.
The data flow you need to document
A standard hosted-chatbot data flow looks like this. The end user types a question on your website. The widget posts the message to the chatbot vendor's API. The vendor retrieves matching passages from the knowledge base you loaded, sends the question plus those passages to an LLM provider (OpenAI, Anthropic, Mistral, etc.), receives a generated answer, logs the exchange, and returns the answer to the widget. Each arrow is a processing operation that needs a basis, a purpose, and a retention rule.
The red border on the LLM provider matters: if the LLM call leaves the EEA, you have an international transfer to document. EU-only LLM routes (Mistral, Anthropic via AWS Frankfurt, OpenAI Azure EU) eliminate the transfer assessment entirely. US-hosted LLMs are still legal under the EU-US Data Privacy Framework but require explicit DPF certification on the vendor side and a transfer-impact assessment on yours.
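The flow above can be made concrete in code. This is a sketch, not any vendor's real API: the endpoint, the payload shape, and the field names are illustrative assumptions. The point is that every field you send is a processing operation that needs a documented purpose, basis, and retention rule.

```typescript
// Sketch of the hosted-chatbot data flow described above.
// `api.example-vendor.eu` and the payload shape are hypothetical.
type ChatTurn = {
  sessionId: string; // identifier: personal data under GDPR
  message: string;   // conversation content: treat as personal data by default
  pageUrl: string;   // metadata
  timestamp: string; // metadata
};

// Building the request makes visible exactly what leaves your site.
function buildChatRequest(
  sessionId: string,
  message: string,
  pageUrl: string,
): ChatTurn {
  return { sessionId, message, pageUrl, timestamp: new Date().toISOString() };
}

// The widget posts the turn to the vendor; the vendor forwards it to the
// LLM provider. If that provider sits outside the EEA, this call is the
// international transfer you must assess.
async function sendToVendor(turn: ChatTurn): Promise<string> {
  const res = await fetch("https://api.example-vendor.eu/v1/chat", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(turn),
  });
  const data = await res.json();
  return data.answer;
}
```

Mapping the flow to a typed payload like this doubles as documentation: the record of processing (Art. 30) can list exactly these fields.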
Step 1 — Pick your legal basis (and write it down)
Article 6 of the GDPR offers six bases. For a customer-facing chatbot, two are realistic and one is a trap.
Legitimate interest (Art. 6(1)(f)) is the default for a chatbot that helps an existing or prospective customer get answers about your products. It works because (a) the user initiated the contact, (b) the processing is necessary to provide the response, and (c) a reasonable user would expect the interaction. The catch: legitimate interest requires a written balancing test (LIA) on file. Three paragraphs in a Notion page is enough — the act of writing it is the compliance act.
Contract performance (Art. 6(1)(b)) applies when the user is already a customer using the chatbot for support on a contracted product. Cleaner than legitimate interest where it fits.
Consent (Art. 6(1)(a)) is the trap. SMBs reach for it because it feels safest, but consent for a chatbot creates two problems: it must be freely given (you cannot block access to the page until they accept), and it can be withdrawn, which then forces deletion of the entire conversation. Use consent only for genuinely optional features (e.g. logging conversations to train a future bot).
- Legitimate Interest Assessment (LIA)
A short written test you perform before relying on Art. 6(1)(f). It documents three things: (1) the legitimate interest you are pursuing, (2) why processing is necessary to achieve it, and (3) why the data subject's rights and interests do not override yours. It is not filed with anyone — but it must exist in writing if the Garante asks.
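To make the "write it down" step concrete, the three LIA elements can be captured as a structured record. This is an illustrative sketch for a product-questions chatbot, not legal advice; the wording and field names are examples, not a prescribed form.

```typescript
// The three LIA elements as a structured record. Keep the prose version
// (Notion page, document) as the actual compliance artefact; this just
// shows what "complete" looks like.
type LegitimateInterestAssessment = {
  interest: string;   // (1) the legitimate interest pursued
  necessity: string;  // (2) why the processing is necessary for it
  balancing: string;  // (3) why users' rights do not override it
  assessedOn: string; // date of the assessment
};

// Example wording for a customer-facing product chatbot (illustrative).
const chatbotLIA: LegitimateInterestAssessment = {
  interest: "Answering product questions from visitors who open the chat widget.",
  necessity: "The bot cannot answer without processing the question text the user types.",
  balancing:
    "Users initiate the contact and expect a reply; conversation logs are " +
    "deleted after 90 days; the privacy notice explains the right to object.",
  assessedOn: "2026-01-15",
};
```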
Step 2 — Get a real Data Processing Agreement
Article 28 GDPR requires a written contract between you (controller) and the chatbot vendor (processor). A real DPA covers:
- Sub-processor list and notification. Who else touches the data — the LLM provider, hosting infrastructure, analytics. The vendor must publish the list and notify you of changes.
- Data residency. Where is data stored at rest? Where is it processed? "Frankfurt" is a meaningful answer; "the cloud" is not.
- International transfers. If any sub-processor is outside the EEA, the DPA must reference Standard Contractual Clauses (Module 3 for processor-to-processor) or DPF certification.
- Security measures. Encryption at rest and in transit, access controls, audit logs.
- Breach notification. The vendor must notify you within a defined window (24-72 hours is standard) so you can hit the GDPR's 72-hour clock.
- Deletion at end of contract. Hard delete, not just "deactivation."
If your vendor only offers a Terms of Service and no separate DPA, that is a red flag. Reputable EU-targeted vendors (ChatAziendale, Customerly, Userlike, Crisp) publish their DPAs openly (see the ChatAziendale DPA).
Step 3 — Update your privacy notice
The Garante has been explicit since 2023: AI processing must be visible in the public privacy notice, with enough detail that a user understands what is happening. Minimum content for a chatbot:
- That the site uses an AI chatbot, who the vendor is, and what data the chatbot processes.
- The purposes and legal bases (e.g. "answering product questions, on the basis of legitimate interest").
- Retention period for conversation logs (90 days is a defensible default for support purposes; 30 days is safer if you do not need them for product improvement).
- Whether and how conversations are used for model training. If the answer is "not at all," say so explicitly — that is a key Garante concern.
- The sub-processors involved, including any non-EEA LLM provider, with the transfer mechanism in plain language.
- The user's rights and how to exercise them, including the right to object to legitimate-interest processing.
"The notice provided to users must be complete, immediately accessible, and contain every element needed to make users aware of the characteristics of the processing being carried out." (Garante guidance, translated from the Italian.)
Step 4 — Address the EU AI Act transparency duty
For most SMB chatbots, the AI Act applies in one specific place: Article 50, the transparency obligation. From 2 August 2026, providers and deployers of AI systems that interact with natural persons must inform those persons that they are interacting with an AI system, unless it is obvious from context.
In practice this means a single-line disclosure in or near the chat widget — "Stai parlando con un assistente AI" / "You are chatting with an AI assistant" — and the same in your privacy notice (AI Act, Article 50).
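A minimal way to render that disclosure line in a widget might look like this. The container id (`#chat-widget-header`) and class names are assumptions to adapt to your own widget's markup, not any vendor's API.

```typescript
// Renders the Article 50 disclosure line in Italian or English.
function renderAiDisclosure(lang: "it" | "en" = "it"): string {
  const text =
    lang === "it"
      ? "Stai parlando con un assistente AI."
      : "You are chatting with an AI assistant.";
  return `<p class="ai-disclosure" role="note">${text}</p>`;
}

// Inject it at the top of the widget. The guard keeps the snippet safe
// when evaluated outside a browser (e.g. server-side rendering).
const doc = (globalThis as any).document;
if (doc) {
  doc
    .querySelector("#chat-widget-header")
    ?.insertAdjacentHTML("afterbegin", renderAiDisclosure("it"));
}
```

Keeping the disclosure in the widget itself, rather than only in the privacy notice, satisfies the "unless obvious from context" test without relying on the user ever opening the notice.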
A typical SMB chatbot is not a high-risk system under Annex III, so the heavyweight obligations (risk management, conformity assessment, CE marking, post-market monitoring) do not apply. Two cases where they could:
- Recruitment chatbots that screen candidates fall under Annex III, point 4 — high-risk. If your chatbot triages job applicants, this is a different conversation.
- Chatbots that handle consumer credit or insurance underwriting — also high-risk under Annex III.
If your chatbot just answers product questions, books appointments, or routes support enquiries, you are in the limited-risk tier and Article 50 is the main obligation.
Step 5 — Set retention and delete on request
GDPR data minimisation (Art. 5(1)(c)) and storage limitation (Art. 5(1)(e)) mean conversation logs cannot be kept indefinitely just because storage is cheap. Defensible defaults for an SMB:
- Conversation logs: 90 days for support-quality purposes, then automated deletion. 30 days if you do not analyse them.
- Analytics aggregates: indefinite, because they are anonymised.
- Lead data captured by the bot (email, phone): falls under your normal CRM retention, typically 24 months from last contact.
Build the deletion right into the platform. Most reputable vendors expose a per-conversation deletion API and a per-user "forget me" flow keyed by email or contact identifier — verify yours does, because under Art. 17 a user can ask you to erase their data and you must act without undue delay, at most within one month (Art. 12(3)).
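Sketched in code, an erasure flow has two parts: a deadline you track and a vendor call you execute. The endpoint and per-user DELETE route below are hypothetical — verify your vendor's real deletion API — and the 30-day window is used as a conservative proxy for the GDPR's one-month deadline.

```typescript
// Art. 17 erasure flow, sketched. Vendor endpoint is hypothetical.
type ErasureRequest = { email: string; receivedAt: Date };

// Track the response deadline: 30 days as a conservative reading of
// the one-month limit in Art. 12(3).
function erasureDeadline(req: ErasureRequest): Date {
  const deadline = new Date(req.receivedAt);
  deadline.setDate(deadline.getDate() + 30);
  return deadline;
}

// Hypothetical per-user "forget me" call keyed by contact identifier.
async function deleteConversations(
  req: ErasureRequest,
  apiKey: string,
): Promise<boolean> {
  const res = await fetch(
    `https://api.example-vendor.eu/v1/users/${encodeURIComponent(req.email)}/conversations`,
    { method: "DELETE", headers: { Authorization: `Bearer ${apiKey}` } },
  );
  // A 2xx response should mean hard delete, not deactivation; confirm
  // that reading against the vendor's DPA.
  return res.ok;
}
```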
Step 6 — Document training-data treatment
This is where the December 2024 Garante decision against OpenAI bites. The authority found that processing personal data scraped from the public web to train a model was unlawful absent a clear legal basis. For a typical SMB this matters in two flavours:
You as a customer of an LLM provider. Verify (in writing) that conversations between your users and your chatbot are not used to train the provider's foundation models. OpenAI's API by default does not train on API inputs; the same applies to Anthropic's API and Mistral's enterprise tier. Free or "ChatGPT Team" tiers may have different defaults — read the page.
You as a builder of a knowledge base. If your knowledge base contains personal data (employee names, customer testimonials, identified examples), document the legal basis for ingesting that data into the RAG system and for surfacing it in answers. Legitimate interest usually works for staff names on a public contact page; testimonials need consent if the testimonial-giver is identifiable.
A defensible compliance baseline
Below is a realistic baseline for a 5-50 person Italian SMB with a customer-facing chatbot. It is not legal advice, but it is a structure that will hold up to a routine Garante enquiry:
- A signed Art. 28 DPA with the chatbot vendor, with a published sub-processor list.
- A written LIA on file for conversation processing under legitimate interest.
- An updated privacy notice naming the chatbot, the vendor, the purposes and legal bases, retention periods, and sub-processors.
- An AI disclosure in or near the chat widget (AI Act, Article 50).
- A documented retention rule with automated deletion (e.g. 90 days for conversation logs).
- A transfer-impact assessment for any LLM call that leaves the EEA — or EU-only routing, which avoids the need for one.
Frequently asked questions
Do I need a Data Protection Officer because I run a chatbot?
Not in itself. Article 37 requires a DPO only if your core activity involves large-scale, regular, systematic monitoring of data subjects, or large-scale processing of special-category data. A typical SMB customer-service chatbot does not cross that line. If you already have a DPO for other reasons, loop them in.
Does the EU AI Act apply to a small chatbot?
Yes, but only the limited-risk transparency obligations under Article 50. The heavy obligations on high-risk systems (Annex III) do not apply unless your chatbot is used in employment screening, credit, insurance, education access, or other listed use cases.
Is it legal to use a US-hosted LLM provider in 2026?
Yes, if the provider is certified under the EU-US Data Privacy Framework (most major providers are) and you have a DPA in place that references either DPF or Standard Contractual Clauses. You also need a transfer-impact assessment on file. EU-only routing (Mistral, OpenAI Azure EU, Anthropic via AWS Frankfurt) avoids the transfer entirely and is the safest baseline.
Do I need explicit consent before showing a chatbot widget?
Generally no — under legitimate interest you can show the widget. You do need consent for any non-essential cookies or trackers the widget sets, and the AI disclosure required by AI Act Article 50 is independent of consent.
What happens if I just deploy without doing any of this?
The Garante has not historically pursued individual SMBs for run-of-the-mill chatbot deployments — its focus has been the LLM providers and high-risk deployments. But you remain liable as the controller for any breach, and the moment a complaint reaches the Garante (an angry user, a competitor, a former employee) the absence of a DPA, LIA, and privacy notice update turns a minor incident into an open file. The cost of compliance is two days of work; the cost of an open Garante file is months.
Final take
The 2026 Italian compliance picture for SMB chatbots is less scary than the headlines suggest, but it is not optional. The work is mostly paperwork done once: a DPA, an LIA, a privacy-notice update, an AI disclosure on the widget, a documented retention policy, and a transfer-impact assessment if your LLM call leaves the EEA. The platforms that make this easiest are EU-hosted, publish their DPA, and route LLM calls through EU regions by default. The platforms that make it hardest hide the sub-processor list and route everything through US-hosted infrastructure with no DPF certification.
If you want a one-page review of your own setup against the baseline above, write to hello@chataziendale.it and we will send back a checklist with the gaps marked. We do this even for customers of competing platforms — the goal is fewer surprised SMBs, not more sales.