18 AI Privacy Violations: Real Examples (2026)
AI privacy violations occur when artificial intelligence systems collect, process, or share personal data without proper consent, legal basis, or transparency. Common violations include unauthorized facial recognition, biometric data harvesting, AI training on scraped personal data, covert employee surveillance, and discriminatory automated decision-making.
Regulations like the GDPR and the EU AI Act have spurred governments into action, and many companies have already paid steep penalties for infractions. This guide covers 18 real cases, the regulations that apply, and what businesses must do to stay compliant.
AI Privacy Violations: Key Statistics (2026)
By the numbers
- $1,000–$5,000 — Per-person BIPA penalty in Illinois; with class actions covering millions of users, single cases have settled for $35M–$650M
- $2,500 — Per-violation exposure under California's CIPA, applied per website visit per user, making pixel-based tracking lawsuits capable of reaching nine figures
- 75% — Share of organizations reporting that managing AI-related privacy risk is now a top compliance priority
- 89% — Share of B2B buyers who now use generative AI tools in their purchasing process, meaning AI data handling practices are increasingly a procurement and vendor risk question, not just a legal one (Forrester, 2025)
- EUR 4 billion+ — Total GDPR fines issued since enforcement began in 2018, with AI and data processing cases driving a growing share of penalties
- 2025–2026 — The EU AI Act's phased enforcement begins; prohibited AI practices (real-time biometric surveillance, social scoring) have been banned since February 2025, with high-risk AI obligations coming into force through 2026
What Are AI Privacy Violations?
An AI privacy violation happens when an AI system, including machine learning models, facial recognition tools, recommendation engines, or generative AI, processes personal data in a way that breaks privacy law or reasonable expectations of user consent.
Privacy violations in AI are especially dangerous because they often operate at scale. A single misconfigured model can process millions of people's data without their knowledge. That's why regulators worldwide have moved quickly to apply existing laws, such as the GDPR, CCPA, BIPA, and CIPA, to AI-driven data practices, and why new AI-specific regulation (like the EU AI Act) now adds a second layer of compliance obligations.
The privacy risks of AI fall into several broad categories:
- Unauthorized data collection — scraping personal data to train AI models without consent
- Biometric data harvesting — facial recognition, voiceprints, and fingerprints captured without disclosure
- Covert surveillance — AI monitoring employees or users without their knowledge
- Automated decision-making — AI making consequential decisions (hiring, lending, healthcare) without human review
- Data leakage in generative AI — LLMs memorizing and reproducing personal data from training sets
- Third-party tracking — AI-powered ad targeting using data collected beyond the scope of original consent
18 Real AI Privacy Violations (and What They Cost)
1. Clearview AI: Facial Recognition Without Consent (Ongoing, $75M+ in fines)
Clearview AI scraped billions of facial images from public websites — Facebook, LinkedIn, Instagram — without user consent to build a facial recognition database sold to law enforcement.
Why it's a violation: The company never obtained consent from the individuals photographed. Under GDPR, collecting biometric data requires explicit consent. The Illinois Biometric Information Privacy Act (BIPA) requires written consent before capturing a person's faceprint.
What happened: Clearview was fined by regulators in the UK (£7.5M), Italy (€20M), France (€20M), and Greece (€20M), and was ordered by Australia's privacy regulator to stop collection and delete existing data. In the US, a class action BIPA lawsuit resulted in a $52M settlement. The company continues to face enforcement actions across multiple jurisdictions.
Lesson for businesses: If your AI product uses facial recognition or any biometric data, you need explicit opt-in consent, a written retention policy, and a deletion process in place before you collect a single image.
2. Google / DeepMind — NHS Patient Data for AI Training (UK, 2017 — Regulatory Precedent Still Active)
Google's DeepMind received 1.6 million NHS patient records to train an AI tool for detecting kidney disease. The data was shared without proper patient consent or knowledge.
Why it's a violation: UK data protection law requires patients to be informed when their medical data is used beyond the original purpose of their care. The NHS trust that handed over the data violated this principle.
What happened: The UK's Information Commissioner's Office (ICO) found the NHS trust in breach of the Data Protection Act. The ICO issued no fine, requiring the trust to commit to corrective changes instead, but the case set a lasting precedent: healthcare AI training data requires active consent processes, not just anonymization.
Lesson: AI training data is not exempt from privacy law. The fact that data is "used for a good cause" doesn't eliminate consent requirements.
3. Amazon Alexa — Children's Voice Data Retained Without Deletion ($25M fine, 2023)
The FTC found that Amazon retained children's voice recordings through Alexa even after parents requested deletion. Amazon also allegedly used children's location data in ways that violated COPPA (Children's Online Privacy Protection Act).
Why it's a violation: COPPA prohibits collecting or retaining personal data from children under 13 without verifiable parental consent and requires deletion upon request. Amazon's AI-driven voice assistant stored data it was legally required to delete.
What happened: Amazon paid $25M in FTC penalties, with a separate $5.8M related to Ring camera privacy violations.
Lesson: Data deletion is an enforceable right, not a courtesy. If your AI system interacts with children or retains user data long-term, build automated deletion workflows from day one.
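What an "automated deletion workflow" means in practice: a scheduled job that purges records past their retention window or flagged for deletion, rather than relying on manual handling of requests. Here is a minimal sketch in TypeScript, with an assumed VoiceRecording shape and retention policy (not Amazon's actual system):

```typescript
// Minimal sketch of an automated deletion check. The VoiceRecording shape,
// RETENTION_DAYS value, and the surrounding job are hypothetical placeholders.
interface VoiceRecording {
  id: string;
  createdAt: Date;
  deletionRequested: boolean; // set the moment a parent or user requests deletion
}

const RETENTION_DAYS = 90; // assumed retention policy for illustration only

function shouldDelete(rec: VoiceRecording, now: Date = new Date()): boolean {
  const ageDays = (now.getTime() - rec.createdAt.getTime()) / 86_400_000;
  // Honor explicit deletion requests immediately; otherwise enforce retention.
  return rec.deletionRequested || ageDays > RETENTION_DAYS;
}
```

Run a job like this on a schedule and log each purge: the timestamped log is what proves to a regulator that deletion actually happened.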
4. Meta — Facial Recognition Feature Without Consent ($650M BIPA Settlement, 2021)
Meta's "Tag Suggestions" feature used AI to automatically identify faces in uploaded photos and suggest tagging — without obtaining consent from Illinois residents as required by BIPA.
Why it's a violation: BIPA requires written consent before any entity collects a person's facial geometry. Meta collected facial recognition data for millions of Illinois users without this consent.
What happened: Meta settled a class action for $650 million, which was at the time the largest BIPA settlement in history. Meta subsequently shut down the facial recognition feature globally.
Lesson: BIPA is the most aggressively litigated biometric privacy law in the US. Any AI feature that involves facial recognition, voiceprints, or fingerprints touching Illinois residents triggers BIPA compliance obligations.
5. TikTok — Biometric Data Collection Without Disclosure ($92M Settlement, 2021)
TikTok's AI systems collected faceprints and voiceprints from users without the explicit disclosure required by BIPA. The app used this data to personalize content recommendations and target advertising.
Why it's a violation: Illinois BIPA requires companies to inform users in writing about biometric data collection, the purpose, and the retention period — and to obtain written consent.
What happened: TikTok settled a class action lawsuit for $92 million. The settlement covered approximately 89 million US users.
Lesson: If your AI recommendation engine or personalization feature infers identity or behavioral attributes from biometric signals, you need a clear disclosure and consent mechanism — not buried in a privacy policy.
6. Snapchat / Snap — Biometric Data Without Consent ($35M BIPA Settlement, 2022)
Snap's augmented reality filters used facial geometry mapping to apply lenses and effects. The company failed to obtain BIPA-compliant consent from Illinois users before collecting facial geometry data.
What happened: $35M class action settlement. The case illustrates that BIPA applies to entertainment and creative AI features, not just security or authentication use cases.
7. Samsung Employees — ChatGPT Data Leak (2023)
Samsung engineers accidentally leaked proprietary source code and internal meeting notes by pasting them into ChatGPT prompts. The data was sent to OpenAI's servers and potentially incorporated into model training.
Why it's a violation: Samsung had not established an approved AI usage policy. Employees unknowingly shared trade secrets and internal business data with a third-party AI system not covered by Samsung's data processing agreements.
What happened: Samsung banned the use of ChatGPT and generative AI tools on company devices following the incident.
Lesson for businesses: If you don't have an AI usage policy that specifies what data employees can and cannot input into AI tools, this is your exposure. Confidential business data and customer PII entered into commercial AI systems may be retained, processed, or used for training.
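A written policy is stronger when it is backed by a technical gate. The sketch below is purely illustrative: a naive pattern screen that blocks obviously sensitive strings before a prompt is forwarded to an external AI API. Real data loss prevention tooling goes far beyond regex matching, and every pattern here is an assumption.

```typescript
// Illustrative pre-submission gate for prompts bound for an external AI tool.
// The patterns and the isPromptAllowed() helper are sketches, not a real product.
const BLOCKED_PATTERNS: RegExp[] = [
  /-----BEGIN [A-Z ]*PRIVATE KEY-----/, // key material pasted into a prompt
  /\b\d{3}-\d{2}-\d{4}\b/,              // US SSN-shaped numbers
  /\b[\w.%+-]+@[\w.-]+\.[A-Za-z]{2,}\b/, // email addresses (possible customer PII)
];

function isPromptAllowed(prompt: string): boolean {
  return !BLOCKED_PATTERNS.some((pattern) => pattern.test(prompt));
}

// Usage: refuse to forward flagged prompts and surface the block to the user.
const userPrompt = 'Debug this: -----BEGIN RSA PRIVATE KEY----- ...';
if (!isPromptAllowed(userPrompt)) {
  console.warn('Prompt blocked: possible confidential data detected.');
}
```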
8. Dinerstein v. Google — Patient Data Used to Train AI Without Consent (Dismissed 2023)
A University of Chicago Medicine patient sued Google after his medical records were used to train medical AI models. The lawsuit alleged the data wasn't properly anonymized and contained re-identifiable information.
Why it matters: Re-identification risk is a growing AI privacy problem. "Anonymized" data that can be combined with other datasets to re-identify individuals is not legally anonymous under GDPR or CCPA.
What happened: The case was dismissed on procedural grounds, but it set an important precedent about standing and harm in AI data cases. The legal exposure remains significant for healthcare AI companies.
9. Spotify, Apple, and Music Platform AI Recommendations — GDPR Enforcement (EU, Ongoing)
Multiple European music and media platforms have faced GDPR investigations related to how their AI recommendation engines process listening behavior, inferred political and religious preferences, and create user profiles beyond the scope of disclosed data processing.
Why it's a violation: Under GDPR Article 5, data must be processed for specified, explicit, and legitimate purposes — and not further processed in a way incompatible with those purposes. Using behavioral data to infer sensitive characteristics (politics, religion, health) goes beyond the original consent most users provided when signing up.
Lesson: AI-powered personalization engines frequently process data in ways the original privacy policy doesn't cover. If your AI system infers attributes beyond what users consented to share, you likely have a GDPR compliance gap.
10. Uber Eats / Delivery App Algorithmic Discrimination (EU, 2024)
Uber and other gig economy platforms faced scrutiny in the EU after their AI-driven systems were found to make automated decisions about worker pay, route assignment, and deactivation without meaningful human oversight.
Why it's a violation: GDPR Article 22 gives individuals the right not to be subject to purely automated decision-making that produces significant legal effects. Algorithmic termination of a gig worker's account is a "significant effect" — requiring human review.
What happened: The Dutch Data Protection Authority (AP) fined Uber €10M for GDPR violations related to driver data retention and transparency. The EU AI Act further classifies high-risk AI applications (employment, credit, essential services) as requiring mandatory human oversight and conformity assessments.
11. Telehealth Platforms — Pixel Tracking AI Without Consent (CIPA / HIPAA, 2023–2025)
Multiple US telehealth companies, including BetterHelp and GoodRx, were found using pixel-based AI tracking tools (Meta Pixel, Google Analytics enhanced conversion tracking) that transmitted patient session data to ad platforms without valid HIPAA authorization or CCPA disclosure.
Why it's a violation: The California Invasion of Privacy Act (CIPA) §631 prohibits the unauthorized interception of electronic communications, which courts have extended to cover web pixels that relay session content to third parties without user consent. HIPAA separately prohibits sharing protected health information with third parties without an appropriate Business Associate Agreement.
What happened: BetterHelp settled FTC charges for $7.8M. GoodRx settled for $1.5M. Dozens of additional CIPA class action lawsuits are active against healthcare and wellness companies in 2024–2026.
Why this matters for your business: CIPA lawsuits are now one of the fastest-growing areas of privacy litigation in the US. A website using standard analytics and advertising pixels, without proper consent management, can trigger CIPA exposure regardless of whether you're in healthcare or not. Enzuzo's consent management platform is built to block tracking pixels until consent is given, which is the primary legal defense against CIPA claims.
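The mechanics behind that defense are simple: a pixel transmits session data the moment its script loads, so the script must not load until consent exists. Here is a simplified sketch of consent-gated loading; the cookie name, pixel ID, and abbreviated bootstrap are assumptions, and a CMP implements this generically across vendors:

```typescript
// Simplified consent-gated pixel loading. The cookie name and pixel ID are
// hypothetical; the fbq bootstrap is abbreviated from Meta's standard snippet.
function hasMarketingConsent(): boolean {
  return document.cookie.includes('marketing_consent=true');
}

function loadMetaPixel(pixelId: string): void {
  const script = document.createElement('script');
  script.async = true;
  script.src = 'https://connect.facebook.net/en_US/fbevents.js';
  document.head.appendChild(script);
  // ...followed by fbq('init', pixelId) and fbq('track', 'PageView')
}

// The pixel never loads, and never relays session content, without consent.
if (hasMarketingConsent()) {
  loadMetaPixel('HYPOTHETICAL_PIXEL_ID');
}
```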
12. OpenAI — GDPR Complaints Across Europe (Ongoing, 2023–2026)
OpenAI's ChatGPT has been subject to investigations and temporary bans by data protection authorities in Italy, Spain, Poland, and France.
Why it's a violation: The GDPR concerns relate to several issues: (1) OpenAI's training data included personal data scraped from the web without legal basis; (2) ChatGPT can "hallucinate" false information about real individuals, violating accuracy rights under GDPR Article 5(1)(d); (3) OpenAI's mechanisms for exercising data subject rights (access, deletion, correction) were deemed insufficient.
What happened: Italy's Garante temporarily banned ChatGPT in March 2023 before lifting the ban after OpenAI made changes. Investigations remain ongoing across the EU. The Hamburg DPA prohibited OpenAI from processing German users' data for training purposes in 2024.
Lesson: Generative AI companies face the most acute GDPR pressure in 2025–2026. If you're building on top of LLMs, your privacy policy and data processing agreements need to reflect the actual AI data flows.
13. IBM — Facial Recognition Training Data Scraped from Flickr (Investigated 2019, Precedent Ongoing)
IBM's AI research division used approximately 1 million publicly posted Flickr photos to train facial recognition models — without the photographers' or subjects' knowledge or consent.
Why it's a violation: Even when photos are publicly posted, individuals retain privacy rights under GDPR and CCPA regarding how their biometric data is processed. "Public" does not mean "available for AI training."
What happened: IBM faced significant criticism and Congressional inquiry. The case helped establish the principle that training AI on publicly scraped biometric data requires a legal basis beyond mere availability.
14. Teleperformance — Employee AI Surveillance Without Adequate Disclosure (GDPR, 2022–2023)
Business process outsourcing company Teleperformance deployed AI-powered monitoring on remote workers, including facial recognition to verify worker identity and track facial expressions during work hours.
Why it's a violation: Under GDPR, employee monitoring using biometric AI requires explicit consent or a compelling legitimate interest. Workers were not adequately informed about the scope of monitoring, particularly the facial expression analysis component.
What happened: The Colombian government launched an investigation; workers' unions filed complaints with EU regulators. The case highlighted that AI workplace surveillance must meet a high transparency bar, particularly when it involves biometric data.
Lesson: Employee privacy is a live issue in 2026. If your company deploys AI-based attendance, productivity monitoring, or identity verification, you need clear disclosure, a DPIA (Data Protection Impact Assessment), and a valid legal basis beyond mere employer interest.
15. Grok / X (Twitter) — AI Training on User Posts Without Opt-Out Disclosure (2024)
X (formerly Twitter) quietly updated its privacy settings to enable Grok, its AI model, to be trained on users' posts — without a clear opt-out process. Users were enrolled by default; the opt-out option was buried in settings.
Why it's a violation: Under GDPR, consent for new processing purposes must be freely given, specific, informed, and unambiguous. An opt-out buried in settings does not meet this standard. Several EU data protection authorities sent formal inquiries, and the Irish DPC (the lead EU regulator for X) launched an inquiry.
What happened: X added a more visible opt-out option following regulatory pressure. The incident reinforced that "consent by default" — where users are opted in unless they take action — does not meet GDPR's consent standard for new data uses.
16. Grammarly / Superhuman — AI Impersonation of Real Writers Without Consent (US, March 2026)
Grammarly's parent company Superhuman launched an "Expert Review" feature that sold users editing advice attributed to writers and journalists, including New York Times journalist Julia Angwin, without those individuals' knowledge or consent.
Why it's a violation: Angwin's lawsuit alleges that her name and professional reputation were used commercially without her consent, and that the AI-generated suggestions attributed to her were often low quality and did not reflect her actual editing style. The case also raises broader questions about AI systems trained on a person's public work being used to simulate that person's professional judgment for commercial gain.
Why this is broader than one lawsuit: Grammarly's browser extension was also flagged in a separate January 2026 Incogni privacy risk study as among the most privacy-concerning extensions, collecting website content, personal communications, and user activity data. Together, the cases illustrate two distinct AI privacy risks: unauthorized commercial use of personal identity, and broad passive data collection through AI productivity tools.
Lesson for businesses: Two distinct lessons here. First, if your AI product attributes outputs to real people (advisors, experts, reviewers), those individuals must explicitly consent to that association. Second, employees routinely install AI tools that have broad access to everything they type, including confidential business data. An AI usage policy that specifies which tools are approved is now a baseline enterprise risk control.
17. OpenAI / Scarlett Johansson — AI Voice Cloning Without Consent (US, May 2024)
When OpenAI launched GPT-4o in May 2024, one of its five AI voices, named "Sky", was widely noted as sounding "eerily similar" to Scarlett Johansson's voice. This was particularly striking given that Johansson had voiced an AI assistant in the 2013 film Her, and that OpenAI CEO Sam Altman had publicly called the film his favorite movie about AI, even tweeting the single word "her" during the GPT-4o demo.
Why it's a violation: Johansson had been approached twice by OpenAI to voice the system, and she declined both times. Despite this, OpenAI released a voice that her closest friends could not distinguish from her own. Under California law, the unauthorized commercial use of a person's voice can violate the right of publicity.
What happened: Johansson hired legal counsel and sent two formal letters to Altman demanding disclosure of how "Sky" was created. OpenAI pulled the Sky voice, stating it "belongs to a different professional actress using her own natural speaking voice," and said it was "sorry" it "didn't communicate better." A congressional subcommittee invited Johansson to testify on the broader implications for AI regulation and creative rights.
Lesson for businesses: The case exposed a significant legal gap: there is no single federal right of publicity law in the US, and state protections vary widely. But it established a clear ethical and reputational principle: AI systems that commercially exploit a real person's voice, identity, or persona without consent carry substantial legal risk. As AI voice cloning becomes cheaper and more accessible, this case is the benchmark regulators and courts will reference.
18. LinkedIn — AI Behavioral Profiling Without Consent (GDPR, €310M Fine, 2024)
LinkedIn's AI-powered recommendation system was found to have tracked not just explicit user activity (likes, posts, connections) but also passive behavioral signals: how long a user lingered on a post, how quickly they scrolled, and navigation patterns. These signals were used to infer personal characteristics (interest in changing jobs, likelihood of burnout, political leanings) and fed into predictive advertising algorithms and internal content ranking systems.
Why it's a violation: The Irish Data Protection Commission (DPC), which serves as LinkedIn's lead EU regulator, determined that this behavioral profiling was conducted without valid user consent and violated GDPR's core principles of transparency, fairness, and purpose limitation.
What happened: LinkedIn received a €310M GDPR fine in 2024 for processing behavioral data for advertising without a valid legal basis, one of the largest GDPR penalties issued to a B2B platform. The decision was widely read as a signal that regulators are scrutinizing AI-driven behavioral profiling in professional tools, not just consumer social media.
Lesson for businesses: If your platform uses AI to infer user characteristics beyond what users explicitly shared — job seeking intent, emotional state, political views, purchase propensity — you almost certainly need a fresh consent basis. "Using your data to improve your experience" does not cover AI-driven psychological profiling for advertising. This applies to any SaaS product with an ML-driven personalization or recommendation layer.
Book a call with an Enzuzo consent management expert to help guard your AI business against consent violations
Which Privacy Laws Apply to AI?
AI privacy violations are prosecuted under existing privacy and data protection frameworks — and increasingly under new AI-specific regulations.
GDPR (EU / UK)
The General Data Protection Regulation applies to any organization processing data of EU or UK residents. Key provisions for AI:
- Lawful basis (Art. 6): Every data processing activity needs a legal basis; "legitimate interest" is difficult to justify for high-risk AI processing
- Special category data (Art. 9): Biometric data, health data, and inferred sensitive attributes require explicit consent
- Automated decision-making (Art. 22): The right not to be subject to purely automated decisions with significant effects
- Data minimization (Art. 5): AI systems must not collect more data than necessary for the stated purpose
- Right to erasure (Art. 17): Individuals can request deletion of their data from AI training sets
CCPA / CPRA (California)
California's Consumer Privacy Act and its 2023 extension, the CPRA, cover businesses processing California residents' data. Key AI obligations:
- Disclosure of automated decision-making in privacy notices
- The right to opt out of sale or sharing of personal information with AI/ad platforms
- Rules on "sensitive personal information" that restrict inference-based profiling
- Draft CPRA regulations on automated decision-making technology (ADMT) are under active development by the California Privacy Protection Agency (CPPA) as of 2026
BIPA (Illinois)
Illinois' Biometric Information Privacy Act is the most actively litigated US biometric law. Requirements:
- Written consent before collecting biometric data (faceprints, voiceprints, fingerprints, retina scans)
- Written retention and destruction schedule
- Prohibition on selling or profiting from biometric data
- Private right of action: $1,000 per negligent violation, $5,000 per intentional violation — multiplied per affected individual
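The per-person multiplier is what turns BIPA into nine-figure exposure. A back-of-the-envelope calculation for a hypothetical class of one million users:

```typescript
// Back-of-the-envelope BIPA exposure math for a hypothetical class.
const affectedUsers = 1_000_000;
const negligentPenalty = 1_000;   // USD per negligent violation
const intentionalPenalty = 5_000; // USD per intentional violation

console.log(affectedUsers * negligentPenalty);   // 1000000000 -> $1B at the negligent tier
console.log(affectedUsers * intentionalPenalty); // 5000000000 -> $5B at the intentional tier
```

Even settlements at a small fraction of statutory exposure, as in the Meta, TikTok, and Snap cases above, still reach eight and nine figures.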
CIPA (California)
California Invasion of Privacy Act §631 is increasingly used to challenge pixel-based tracking and AI-assisted surveillance on websites:
- Prohibits unauthorized interception of electronic communications
- Courts have extended this to web pixels that relay session data to third parties
- Class action exposure: $2,500 per violation (each website visit by each user)
EU AI Act (2025–2026 Phased Implementation)
The EU AI Act introduces a risk-tiered framework for AI systems:
- Prohibited AI: Social scoring, real-time biometric surveillance in public spaces, subliminal manipulation
- High-risk AI: HR/hiring, credit scoring, healthcare, critical infrastructure — requires conformity assessments, human oversight, and transparency
- Limited risk: Chatbots and generative AI — requires disclosure that the user is interacting with AI
How Do AI Privacy Violations Affect Consent Management?
Most AI privacy violations trace back to a consent problem: data was collected, processed, or shared without the user's knowledge or approval.
A Consent Management Platform (CMP) like Enzuzo addresses the most common consent-related AI privacy risks:
- Pixel blocking until consent is given: prevents tracking pixels (Meta Pixel, Google Ads, analytics tools) from firing before a user consents, which is the core CIPA and CCPA defense
- Google Consent Mode v2 compliance: sends consent status signals to Google before any tag fires, preserving ad measurement accuracy while remaining compliant (see the sketch after this list)
- Consent record-keeping: maintains timestamped logs of each user's consent decision, which is required for GDPR audit defense
- Geo-targeted consent banners: shows different consent experiences to EU visitors (GDPR opt-in) vs. US visitors (CCPA opt-out), using jurisdiction-appropriate language
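For reference, here is what the Consent Mode v2 pattern looks like on the page: storage defaults are set to denied before any Google tag fires, then updated once the visitor opts in. A minimal sketch; the onConsentGranted hook is an assumed CMP integration point, and a CMP wires this up for you:

```typescript
// Google Consent Mode v2: deny by default, update after an explicit opt-in.
// Assumes gtag.js is already on the page; minimal typing for the sketch.
declare function gtag(...args: unknown[]): void;

gtag('consent', 'default', {
  ad_storage: 'denied',
  ad_user_data: 'denied',
  ad_personalization: 'denied',
  analytics_storage: 'denied',
});

// Called by the CMP once the visitor opts in (assumed integration point).
function onConsentGranted(): void {
  gtag('consent', 'update', {
    ad_storage: 'granted',
    ad_user_data: 'granted',
    ad_personalization: 'granted',
    analytics_storage: 'granted',
  });
}
```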
If your website uses any AI-powered tools that collect user data, your cookie consent banner is the first line of legal defense.
Frequently Asked Questions About AI Privacy Violations
What is an example of AI violating privacy?
Clearview AI scraped billions of facial photos from public websites to build a biometric database without the individuals' consent, resulting in $75M+ in fines and settlements across the UK, EU, and US. Meta's Tag Suggestions feature similarly collected facial geometry from Illinois users without BIPA-required consent, resulting in a $650M settlement.
How does AI invade privacy?
AI invades privacy primarily through unauthorized data collection (scraping personal data for training), covert biometric surveillance (facial recognition without consent), behavioral profiling beyond disclosed purposes, automated decisions with significant effects without human review, and data leakage in generative AI systems that memorize training data.
Can AI systems monitor users without consent?
No. Monitoring users with AI, whether through website pixels, facial recognition, voice analysis, or behavioral tracking, requires either explicit consent (GDPR, BIPA) or clear disclosure with an opt-out right (CCPA, CIPA). Courts have consistently held that "publicly available" data does not eliminate consent obligations when used for AI training or surveillance.
What are the privacy risks of AI?
Key AI privacy risks include: biometric data captured without consent (BIPA, GDPR), pixel-based tracking transmitted to third-party AI platforms without disclosure (CIPA, CCPA), automated decisions on hiring or credit without human oversight (GDPR Art. 22), AI training on scraped personal data without legal basis (GDPR), and employees inadvertently leaking sensitive data into commercial AI tools like ChatGPT.
What is the biggest AI privacy violation?
The largest single AI privacy settlement to date is Meta's $650M BIPA settlement (2021) for collecting facial geometry from Illinois users without consent. Meta has also faced the largest cumulative GDPR fines (over €2.5 billion across all violations, not all AI-specific). The EU AI Act enforcement, which began in 2025, is expected to produce significantly larger penalties for prohibited AI practices.
Which privacy laws apply to AI?
In the US: BIPA (Illinois biometrics), CIPA (California surveillance/pixels), CCPA/CPRA (California consumer data rights), COPPA (children's data), HIPAA (healthcare AI). In the EU/UK: GDPR, UK GDPR, and the EU AI Act. New state-level AI transparency laws passed in Colorado, Connecticut, and Texas in 2024–2025 also apply to automated decision-making systems.
How can a business avoid AI privacy violations?
The key steps: (1) audit all AI tools in use, including third-party SaaS that uses AI, and document what data each one processes; (2) implement a Consent Management Platform to control pixel and tracking tool activation based on user consent; (3) ensure your privacy policy accurately describes AI-driven data processing; (4) conduct Data Protection Impact Assessments (DPIAs) for high-risk AI use cases; (5) establish an employee AI usage policy that specifies what data can be input into tools like ChatGPT.
What is a DPIA and when is it required for AI?
A Data Protection Impact Assessment (DPIA) is a structured process for identifying and minimizing privacy risks in new data processing activities. Under GDPR, a DPIA is mandatory before deploying AI systems that process special category data (biometrics, health), involve large-scale behavioral profiling, or enable systematic monitoring of individuals. The EU AI Act similarly requires conformity assessments for high-risk AI systems.
The Bottom Line: AI and Privacy in 2026
AI privacy violations are accelerating because AI systems are deployed faster than privacy policies and consent processes are updated to reflect them. Regulators are not waiting.
The companies most at risk are those whose websites, apps, or internal tools:
- Use AI-powered analytics, advertising, or personalization tools with pixels that fire before consent
- Process any biometric data (facial recognition, voice, fingerprints) without explicit consent workflows
- Allow employees to use commercial AI tools (ChatGPT, Gemini, Copilot) without a data handling policy
- Make consequential automated decisions (credit, hiring, pricing) without a human review mechanism
The fastest, most cost-effective first step for most businesses is implementing a Consent Management Platform: controlling what data your tracking pixels send, and when they send it, addresses the most common AI privacy violation vector before a lawsuit or regulator forces the issue.
See how Enzuzo's CMP blocks tracking and keeps your consent records audit-ready →
Osman Husain
Osman is the content lead at Enzuzo. He has a background in data privacy management via a two-year role at ExpressVPN and extensive freelance work with cybersecurity and blockchain companies. Osman also holds an MBA from the Toronto Metropolitan University.