Archive for author: makoadmin

Roku accused of selling children’s data to advertisers and brokers

The state of Florida has accused Roku, which powers many smart TVs and streaming devices, of selling children’s data to third parties without their consent. According to the Florida Attorney General James Uthmeier, Roku collected viewing habits, voice recordings, and precise geolocation from kids without approval from parents.

Roku, which reaches around 145 million people across half of US households, allegedly gathered children’s data despite clear signals that the viewers were minors, the AG said.

After collecting the data, Roku made it available to advertisers and sold it to data brokers, including Kochava, according to the Florida government. Kochava is already facing its own lawsuit from the Federal Trade Commission, which claims the company sells highly sensitive consumer information.

Uthmeier’s office said in a news release:

“The State contends that Roku’s practices violated Florida’s privacy and consumer-protection laws by failing to obtain parental consent before selling or processing children’s data and by misrepresenting the effectiveness of its privacy controls and opt-out tools.”

In the complaint filed in court, the AG’s office accused Roku of turning a blind eye to the collection of minors’ data.

“Roku knows that some of its users are children but has consciously decided not to implement industry-standard user profiles to identify which of its users are children.”

The lawsuit claims Roku ignored obvious indicators, such as when users installed its Kids Screensaver or Kids Theme Pack products.

Uthmeier’s office also said that although Roku sells deidentified data to brokers (that is, data that has identifying information removed), it’s still possible for brokers like Kochava to reidentify users. Brokers often have troves of information of their own, such as device IDs linked to potentially identifying information, which can allow them to match records to specific people.

Florida has filed the lawsuit under the Florida Digital Bill of Rights (FDBR), which came into effect on July 1, 2024. The law protects Florida residents’ privacy, including children’s data rights, and gives parents the ability to opt out of data processing for their kids.

The penalty for violating the FDBR is up to $50,000 per violation, but that triples for violations where the consumer involved is a known child. That includes cases of “willful disregard of a child’s age.”

This isn’t the only case that Roku must navigate in court. In April, Michigan Attorney General Dana Nessel also sued Roku for similar violations, accusing it of violating laws including the Children’s Online Privacy Protection Act (COPPA), along with federal and state privacy laws. Roku is fighting the suit.

Smart TV advertising is big business in the US. So much, in fact, that Roku appears to sell its devices at a loss to power its platform revenues, which include not just subscriptions, but advertising. In fiscal 2024, it lost $80.3 million on device sales, up from $43.9 million in device-based losses the prior year. Yet it made $1.9 billion profit from its platform business, up from $1.567 billion in 2023.

According to reports, Roku’s Automatic Content Recognition (ARC) technology captures thousands of images each hour from smart TVs. These can be used to help track viewing activity.

In January, Roku launched its Data Cloud, a service that allows its partners to use the company’s proprietary TV data. It was the latest step in a multi-year strategy to build out its data offering. In 2022, it launched a ‘clean room’ product that allowed other companies to combine their data with Roku’s own, conducting queries about viewer behavior while preserving privacy (this is how companies access its Data Cloud). Then, in 2024, it launched Roku Exchange—an advertising hub for partners.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Mango discloses data breach at third-party provider

Mango has reported a data breach at one of its external marketing service providers. The Spanish fashion retailer says that only personal contact information has been exposed—no financial data.

The breach took place at the service provider and did not affect Mango’s own systems. According to the breach notification, the stolen information was limited to:

  • First name (not your last name)
  • Country
  • Postal code
  • Email address
  • Telephone number

“Under no circumstances has your banking information, credit cards, ID/passport, or login credentials or passwords been compromised.”

Because Mango operates in more than 100 countries, affected individuals could be located across multiple regions where Mango markets to customers through its external partner. As Mango has not named the third-party provider or disclosed how many customers were affected, we cannot precisely identify where these customers are located.

Mango has not released any details about the attackers behind the breach. Although the stolen data itself does not pose an immediate risk, cybercriminals often follow breaches like this with phishing campaigns, exploiting the limited personal information they obtained.

We’ll update this story if Mango releases more information about the breach or the customers impacted.

Protecting yourself after a data breach

Affected customers say they have received a data breach notification of which we have seen screenshots in Spanish and English.

If you think you have been the victim of a data breach, here are steps you can take to protect yourself:

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the company’s website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but we highly recommend not storing that information on websites.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

Check your digital footprint

Malwarebytes has a free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.

TikTok scam sells you access to your own fake money

This scam starts in your TikTok DMs. A brand-new account drops a melodramatic message—terminal illness, last goodbye, “I left you some assets.” At the bottom: a ready-made username and password for a crypto site you’ve never used. It’s designed to feel urgent and personal so you tap before you think. The whole funnel is built for phones: big tap targets, short copy, sticky chat bubbles—perfect for someone arriving straight from TikTok.

The scam message

Thanks to our community for spotting this one. This exact scam was shared on our Malwarebytes subreddit by user Ok-Internal-2110, who posted a warning for TikTok users after encountering it firsthand.

I walked through the flow so you don’t have to.

What the site shows vs. what actually exists

The illusion:
The moment you log in with the credentials from that TikTok DM, a glossy, mobile-friendly dashboard flashes a huge balance. There’s motion (numbers “update”), a believable “history,” and a big Withdraw button right where your thumb expects it. On a small screen, it looks like a real account with real money.

A convincing dashboard
Fake "history"

The trap:
When you try to send that balance to your own wallet, the site asks for a withdrawal key belonging to the original account holder—the one from the DM. You don’t have that key, and support won’t give it to you. External withdrawals are a dead end by design.

The site asks for a withdrawal key

The detour they push you to take:
Support suggests using Internal Transfer instead. Conveniently, they also offer to help you create a new user “in seconds,” and this new account will have its own key (because you created it). That makes it feel like you’re finally doing something legitimate: “I’ll just transfer the funds to my new account and then withdraw.”

You need to be a "VIP Member"

The paywall you only meet once you’re invested:
Internal transfers only work on “VIP” accounts. To upgrade to VIP, you must pay for a membership. Many victims pay here, assuming it’s a one-time hurdle before they can finally withdraw.

Paywall

Why nothing real ever leaves the site:
After you upgrade and attempt the internal transfer, the site can:

  • demand another fee (a “limit lift,” “tax,” or “security key”),
  • fail silently and push you to support, or
  • “complete” the transfer inside the fake ledger while still blocking any external withdrawal.

Victims end up paying for the privilege of moving fake numbers between fake accounts—then paying again to “unlock” a withdrawal that never happens.

The scam in a nutshell

This scam is built for volume. DMs and comments via a huge platform like TikTok seed the same gift-inheritance story to thousands of people at once.

Two things do the heavy lifting:

  • Shock value: That huge, unexpected number on the dashboard delivers a little jolt of surprise mixed with excitement, which lowers skepticism and pushes you into fast, emotional decision-making.
  • Foot-in-the-door: Small steps (log in > try withdraw > hit a roadblock > “just upgrade to VIP”) nudge you toward paying a fee that now feels reasonable.

With borrowed authenticity from a big on-screen balance, the scammers sell you VIP access to move that fake balance around internally while keeping you forever one step away from a real, on-chain withdrawal.

Why do people keep paying up?

  • The balance looks real, so every new hurdle feels like bureaucracy, not fraud.
  • Paying once creates sunk cost: “I’ve already invested—one more step and I’m done.”
  • Internal movements inside their dashboard mimic progress, even though no on-chain transfer ever occurs.
  • A mobile flow encourages momentum—it’s always “one more tap” to finish.

Any system that makes you pay to receive money that allegedly already belongs to you is likely to be a scam.

The part most people miss is that you’re also handing over personal data. Even if you never send crypto, the site and the chat funnel collect a surprising amount of information, including your name, email, and phone number.

That data is valuable on its own and makes follow-up scams easier. Phishing that references the earlier “account,” extortion threats, fake “refund” offers that ask for remote access, SIM-swap attempts tied to your number, or simple resale of your details to other crews—and sadly, getting hooked once increases the odds you’ll be targeted again.

How to recognize this family of scams

  • You’re asked to log into a site with credentials someone else gave you.
  • A big balance appears instantly, but external withdrawals require a mystery key or never complete.
  • You’re told internal transfers are possible only after buying VIP or a membership.
  • The support bubble is quick to reply about upgrades and silent about on-chain withdrawals.
  • Any “proof” of funds exists only inside their dashboard—no public ledger, no small test withdrawal.

How to stay safe

There are safer ways to test claims (without losing money):

  1. Never pay to “unlock” money. If funds are yours, you don’t buy permission to move them.
  2. Ask for on-chain proof. Real balances live on a public ledger. If they can’t show it, it doesn’t exist.
  3. Attempt a tiny withdrawal first to a wallet you control—on legitimate platforms, that’s routine after verifying your identity (know you customer, or KYC) and enabling two-factor authentication (2FA).
  4. Search the flow, not just the brand. Scam kits change names and domains, but the “VIP to withdraw” mechanic stays the same.

What to do if you already engaged:

  • Stop sending funds. The next fee is not the last fee.
  • Lock down accounts: change passwords, enable 2FA, reset app passwords, and review recovery phone/email.
  • Reduce future targeting: consider a new email/number for financial accounts and remove your number from public profiles.
  • Document everything (screenshots, timestamps, any wallet addresses or TXIDs if you paid).
  • Report the TikTok account and the website, and file with your local cybercrime or consumer-protection channel.
  • Tell someone close to you. Shame keeps people quiet; silence helps the scammers.

If a platform says there’s a pile of crypto waiting for you but you must buy VIP to touch it, you’re not withdrawing funds; you’re buying a story. TikTok brings you in on your phone; the mobile UI keeps you tapping. Close the tab, report the DM, and remember: dashboards can be faked, public ledgers can’t.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Scammers are still sending us their fake Robinhood security alerts

A short while ago, our friends at Malwaretips wrote about a text scam impersonating Robinhood, a popular US-based investment app that lets people trade stocks and cryptocurrencies. The scam warns users about supposed “suspicious activity” on their accounts.

As if to demonstrate that this phishing campaign is still very much alive, one of our employees received one of those texts.

screenshot scam text message

“Alert!

Robinhood Securities Risk Warning:

Our automated security check system has detected anomalies in your account, indicating a potential theft. A dedicated security check link is required for review. Please click the link below to log in to your account and complete the security check.

Immediate Action: https://www-robinhood.cweegpsnko[.]net/Verify

(If the link isn’t clickable, reply Y and reopen this message to click the link, or copy it into your browser.)

Robinhood Securities Official Security Team”

As usual, we see some red flags:

  • Foreign number: The country code +243 belongs to the Democratic Republic of the Congo, not the US, where the real Robinhood is based.
  • Urgency: The phrase “Immediate Action” is designed to pressure you.
  • Fake domain: The URL that tries to look like the legitimate robinhood.com website.
  • Reply: The instructions to reply “Y” if a link isn’t clickable are a common phishing tactic.

But if the target follows the instructions to visit the link, they would find a reasonably convincing copy of Robinhood’s login page. It wouldn’t be automatically localized like the real one, but nobody in the US would know the difference. Logging in there hands the scammers your Robinhood login credentials and allows them to clean out your account.

According to Malwaretips, some of the fake websites even redirected you to the legitimate site after showing the “verification complete” message.

They also warned that some scammers will try to harvest additional personal data from the account, including:

  • Tax documents
  • Full name
  • Social Security Number (if on file)
  • Bank account information

How to stay safe

What to do if you receive texts like these

The best tip to stay safe is to make sure you’re aware of the latest scam tactics. Since you’re reading our blog, you’re off to a good start.

  • Never reply to or follow links in unsolicited tax refund texts, calls, or emails, even if they look urgent.
  • Never share your Social Security number or banking details with anyone claiming to process your tax refund.
  • Go direct. If in doubt, contact the company through official channels.
  • Use an up-to-date real-time anti-malware solution, preferably with a web protection component.

Pro tip: Did you know that you can submit suspicious messages like these to Malwarebytes Scam Guard, which instantly flags known scams?

What to do if you clicked the phishing link

Indicators of compromise (IOCs)

www-robinhood.cweegpsnko[.]net

www-robinhood.fflroyalty[.]com

robinhood-securelogin[.]com

robinhood-verification[.]net


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Satellites leak voice calls, text messages and more

Scientists from several US universities intercepted unencrypted broadcast through geostationary satellites using only off-the-shelf equipment on a university rooftop.

Geostationary satellites move at the same speed as the Earth’s rotation so it seems as though they are always above the same exact location. To maintain this position, they orbit at an altitude of roughly 22,000 miles (36,000 kilometers).

This makes them ideal for relaying phone calls, text messages, and internet data. Since these satellites can cover vast areas—including remote and hard-to-reach areas—they provide reliable connectivity for everything from rural cell towers to airplanes and ships, even where cables don’t reach.

That same stability makes them convenient for people who want to eavesdrop, because you only need to point your equipment once. The researchers who did this described their findings in a paper called “Don’t Look Up: There Are Sensitive Internal Links in the Clear on GEO Satellites.”

The team scanned the IP traffic on 39 GEO satellites across 25 distinct longitudes with 411 transponders using consumer-grade equipment. About half of the signals they captured contained clear text IP traffic.

This means there was no encryption at either the link layer or the network layer. This allowed the team to observe internal communications from organizations that rely on these satellites to connect remote critical infrastructure and field operations.

Among the intercepted data were private voice calls, text messages, and call metadata sent through cellular backhaul—the data that travels between cell towers and the central network.

Commercial and retail organizations transmitted inventory records, internal communications, and business data over these satellite links. Banks leaked ATM-related transactions and network management commands. Entertainment and aviation communications were also intercepted, including in-flight entertainment audio and aircraft data.

The researchers also captured industrial control signals for utility infrastructure, including job scheduling and grid monitoring commands. Military (from the US and Mexico) communications were exposed, revealing asset tracking information and operational details such as surveillance data for vessel movements.

The research reveals a pervasive lack of standardized encryption protocols, leaving much of this traffic vulnerable to interception by any technically capable individual with suitable equipment. They concluded that despite the sensitive nature of the data, satellite communication security is often neglected, creating substantial opportunities for eavesdropping, espionage, and potential misuse.

The researchers stated:

“There is a clear mismatch between how satellite customers expect data to be secured and how it is secured in practice; the severity of the vulnerabilities we discovered has certainly revised our own threat models for communications.”

After the scientists reported their findings, T-Mobile took steps to address the issue, but other unnamed providers have yet to patch the vulnerabilities.

This study highlights the importance of making sure your communications are encrypted before they leave your devices. Do not rely solely on providers to keep your data safe. Use secure communication apps like Signal or WhatsApp, choose voice-over-internet (VoIP) providers that encrypt calls and messages, and protect your internet data with a VPN that creates a secure, encrypted tunnel.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

AI-driven scams are preying on Gen Z’s digital lives​

Gone are the days when extortion was only the plot line of crime dramas—today, these threatening tactics target anyone with a smartphone. As AI makes fake voices and videos sound and look real, high-pressure plays like sextortion, deepfakes, and virtual kidnapping feel more believable than ever before, tricking even the most digitally savvy users. Gen Z and Millennials are most at risk, accounting for two in three victims of extortion scams. These scammers prey on what’s personal, wreaking havoc on their victims’ privacy, reputations, and peace of mind.

Extortion Image 1

Our latest research shows that one in three mobile users has been targeted by an extortion scam, and nearly one in five has fallen victim. Gen Z is hit hardest: more than half (58%) have been targets, and over 1 in 4 (28%) have been a victim. Sextortion—threatening to leak nude photos or videos or expose pornographic search history— is particularly notable, with one in six mobile users reporting they’ve been a target. Among Gen Z, that number jumps to 38%.

Five things to know about mobile extortion scams

1. Who’s most at risk: Gen Z and Millennials with a risk tolerant profile

Compared to victims and targets of other types of mobile scams, extortion victims tend to be younger, male, and mobile-first. Their profile:

  • Young: 69% of victims and 64% of targets are Gen Z or Millennial (vs. 52%/40% of victims and targets of other types of scams, respectively)
  • Male: 65% of victims and 60% of targets are male (vs. 48%/45%)
  • Parents: 45% of victims and 41% of targets are parents (vs. 36%/26%)
  • Minorities: 53% of victims are non-white (vs. 39%)
  • Mobile-first: 52% of victims and 46% of targets agree “I’m more likely to click a link on my phone than on my laptop” (vs. 42%/36%)

However, this simply shows how targets and victims skew. Behaviors typically play a bigger role in overall risk.

2. What the damage looks like: emotional and deeply personal

Extortion criminals use personal, high-stakes threats in their scams. Victims and targets of extortion scams in our survey report experiences ranging from scammers threatening to expose nude photos and videos to claims that a family member was in an accident.

These personalized, high-pressure threats make extortion victims especially vulnerable, and while victims of all mobile scams suffer serious emotional, financial, and functional fallout at the hands of their scammers, extortion victims experience outsized impact:

  • Nearly 9 in 10 extortion victims reported emotional harm because of the scam they experienced
  • 35% experienced blackmail or harassment
  • 21% experienced damage to their reputation
  • 19% faced consequences at work or school

Even when targets don’t fall victim, the threats alone can cause emotional harm:

“I didn’t lose anything, I was just scared because they wanted to inform all my friends, family, and employers how perverted I was because I supposedly watched porn.”   

—Gen Z survey respondent, DACH region

Extortion Image 2

3. Why it’s getting worse: AI is raising the stakes

AI is increasingly good at making fake feel real, giving criminals even more of an advantage when manipulating and extorting victims. One in five mobile users has been the target of a deepfake scam and nearly as many have encountered a virtual kidnapping scam (a decades-old tactic that now often uses AI voice cloning). Two in five (43%) Gen Z users have been a target of one of these.

 Who AI scams hit: Victims and targets skew Gen Z and iPhone users with a deep digital footprint. This could leave their personal information, images, or even voice more accessible to cybercriminals who want to use it as part of a scam.

  • Gen Z: 45% (vs. 31% for extortion victims and targets overall)
  • iPhone users: 62% (vs. 51% overall)
  • Data sharers*: 81% (vs. 71% overall)

*Agree with the statement: “I understand that sharing personal information with apps, on social media, or on messaging services can be risky, but I am okay with that risk”

Extortion Image 3

So why might exposure be higher for Gen Z? Digital natives are most entrenched in mobile-first behavior and most active in low-oversight casual commerce (DMing for deals, using buy/sell/trade groups, clicking on ads to purchase or download, sending money for a future service). They also show up more on alternative platforms like Discord, Tumblr, Twitch, and Mastodon, where identity checks are lighter and parasocial trust runs high, creating a sweet spot for scammers. 

“The scammer makes you believe it is a legit conversation. They/He/She talk to you like they know you. Trying to convince you they are supporting/helping you in some way to fix something. When they are just fishing for more information!”

— Gen Z US Survey Respondent   

For victims of AI-driven scams, the fallout is even more extreme: 32% suffered reputation damage (vs. 21% for extortion victims overall), 29% suffered work/school consequences (vs. 11%), 24% had their personal information stolen (vs. 14%), and 21% had financial accounts opened in their name (vs. 13%), underscoring the threat of these evolving scams. 

4. Where the risk lives: constant, cross-channel exposure   

Scammers know the more they approach a target, the more likely they are to create a victim. 78% of extortion victims and 63% of targets experience scam attempts daily (vs. 44%/36% in other scam groups), driving alert fatigue and making it more likely that a scammer will slip through the cracks.

Extortion victims and targets also over-index on using informal buying and selling channels—spaces like social media where identity is fuzzy, protections are lacking, and decisions are quick. Being in more casual spaces more frequently increases the odds of a scam landing for anyone.

5. How mindset shapes risk: overconfident and under-protected

Seven in ten extortion victims say they’re confident they can spot a scam, more than half believe they could recoup any financial losses, and most trust their phone’s safety features. At the same time, many victims and targets simply don’t worry about mobile scams at all, resulting in a lack of protective measures. Adoption of security basics (security software, strong/unique passwords, multi-factor authentication, timely system updates, permission hygiene, data backups) remains low, even after painful firsthand experience.

How to cut the risk

Most of us use our phones to shop, find deals, and pay—and we deserve to be able to do that safely. Adopting preventative security measures (such as using mobile security software), practicing good mobile hygiene (such as checking app permissions), and remembering STOP, our simple scam response framework, can keep scammers at bay: 

S—Slow down: Don’t let urgency or pressure push you into action. Take a breath before responding. Legitimate businesses like your bank or credit card don’t push immediate action.

T—Test them: If you answered the phone and are feeling panicked about the situation, likely involving a family member or friend, ask a question only the real person would know—something that can’t be found online.  

O—Opt out: If it feels off, hang up or end the conversation. You can always say the connection dropped.

P—Prove it: Confirm the person is who they say they are by reaching out yourself through a trusted number, website, or method you’ve used before.

The criminals behind extortion scams pour time and money into targeting their victims, constantly evolving their tactics to make the scams more believable and hard-hitting. If you’ve been the victim of an extortion scam, sharing your story can help others spot the signs before it’s too late, reduce the stigma of being a victim, and put the shame where it belongs: on the criminals.

As Malwarebytes Global Head of Scam and AI Research Shahak Shalev puts it:

“If we can remove the stigma and silence around scams, I think we can help everyone take a step back and pause before acting on one of these threats”


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Pixel-stealing “Pixnapping” attack targets Android devices

Researchers at US universities have demonstrated how a malicious Android app can trick the system into leaking pixel data. That may sound harmless, but imagine if a malicious app on your Android device could glimpse tiny bits of information on your screen—even the parts you thought were secure, like your two-factor authentication (2FA) codes.

That’s the chilling idea behind “Pixnapping” attacks described in the research paper coming from University of California (Berkeley and San Diego), University of Washington, and Carnegie Mellon University.

A pixel is one of the tiny colored dots that make up what you see on your device’s display. The researchers built a pixel-stealing framework that bypasses all browser protections and can even lift secrets from non-browser apps such as Google Maps, Signal, and Venmo—as well as websites like Gmail. It can even steal 2FA codes from Google Authenticator.

Pixnapping is a classic side-channel attack—stealing secrets not by breaking into software, but by observing physical clues that devices give off during normal use. Pixel-stealing ideas date back to 2013, but this research shows new tricks for extracting sensitive data by measuring how specific pixels behave.

The researchers tested their framework on modern Google Pixel phones (6, 7, 8, 9) and a Samsung Galaxy S25 and succeeded in stealing secrets from both browsers and non-browser apps. They disclosed the findings to Google and Samsung in early 2025. As of October 2025, Google has patched part of the vulnerability, but some workarounds remain and both companies are still working on a full fix. Other Android devices may also be vulnerable.

The technical knowledge required to perform such an attack is enormous. This isn’t “script kiddie” territory: Attackers would need deep knowledge of Android internals and graphics hardware. But once developed, a Pixnapping app could be disguised as something harmless and distributed like any other piece of Android malware.

To perform an attack, someone would have to convince or trick the target into installing the malicious app on their device.

This app abuses Android Intents—a fundamental part of how apps communicate and interact with each other on Android devices. You can think of an intent like a message, or request, that one app sends either to another app or to the Android operating system itself, asking for something to happen.

The malicious app’s programming will stack nearly transparent windows over the app it wants to spy on and watch for subtle timing signals that depend on pixel color.

It doesn’t take long—the paper shows it can steal temporary 2FA codes from Google Authenticator in under 30 seconds. Once stolen, the data is sent to a command-and-control (C2) server controlled by the attacker.

How to stay safe

From the steps it takes to perform such an attack we can list some steps that can keep your 2FA codes and other secrets safe.

  1. Update regularly: Make sure your device and apps have the latest security updates. Google and Samsung are rolling out fixes; don’t ignore those update prompts. The underlying vulnerability is tracked as CVE-2025-48561.
  2. Be cautious installing apps: Only install apps from trusted sources like Google Play and check reviews and permissions before installing. Avoid sideloading unknown APKs and ask yourself if the permissions an app asks for are really needed for what you want it to do.
  3. Review permissions: Android improved its permission system, but check regularly what apps can do, and don’t hesitate to remove permissions of the ones you don’t use often.
  4. Use app screenshots wisely: Don’t store or display sensitive info (like codes, addresses, or logins) in apps unless needed, and close apps after use.
  5. Monitor security newsLook for announcements from Google and Samsung about patches for this vulnerability, and act on them.
  6. Enable Play ProtectKeep Play Protect active to help spot malicious apps before they’re installed.
  7. Use up-to-date real-time anti-malware protection on your Android device, preferably with a web protection module.

If you’re worried about your 2FA codes getting stolen, consider switching to hardware token 2FA options.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Researchers break OpenAI guardrails

The maker of ChatGPT released a toolkit to help protect its AI from attack earlier this month. Almost immediately, someone broke it.

On October 6, OpenAI ran an event called DevDay where it unveiled a raft of new tools and services for software programmers who use its products. As part of that, it announced a tool called AgentKit that lets developers create AI agents using its ChatGPT AI technology. Agents are specialized AI programs that can tackle narrow sets of tasks on their own, making more autonomous decisions. They can also work together to automate tasks (such as, say, finding a good restaurant in a city you’re traveling to and then booking you a table).

Agents like this are more powerful than earlier versions of AI that would do one task and then come back to you for the next set of instructions. That’s partly what inspired OpenAI to include Guardrails in AgentKit.

Guardrails is a set of tools that help developers to stop agents from doing things they shouldn’t, either intentionally or unintentionally. For example, if you tried to tell an agent to tell you how to produce anthrax spores at scale, Guardrails would ideally detect that request and refuse it.

People often try to get AI to break its own rules using something called “jailbreaking”. There are various jailbreaking techniques, but one of the simplest is role-playing. If a person asked for instructions to make a bomb, the AI might have said no, but if they then tell the AI it’s just for a novel they’re writing, then it might have complied. Organizations like OpenAI that produce powerful AI models are constantly figuring out ways that people might try to jailbreak their models using techniques like these, and building new protections against them. Guardrails is their attempt to open those protections up to developers.

As with any new security mechanism, researchers quickly tried to break Guardrails. In this case, AI security company HiddenLayer had a go, and conquered the jailbreak protection pretty quickly.

ChatGPT is a large language model (LLM), which is a statistical model trained on so much text that it can answer your questions like a human. The problem is that Guardrails is also based on an LLM, which it uses to analyze requests that people send to the LLM it’s protecting. HiddenLayer realized that if an LLM is protecting an LLM, then you could use the same kind of attack to fool both.

To do this, they used what’s known as a prompt injection attack. That’s where you insert text into a prompt that contains carefully coded instructions for the AI.

The Guardrails LLM analyzes a user’s request and assigns a confidence score to decide whether it’s a jailbreak attempt. HiddenLayer’s team crafted a prompt that persuaded the LLM to lower its confidence score, so that they could get it to accept their normally unacceptable prompt.

OpenAI’s Guardrails offering also includes a prompt injection detector. So HiddenLayer used a prompt injection attack to break that as well.

This isn’t the first time that people have figured out ways to make LLMs do things they shouldn’t. Just this April, HiddenLayer created a ‘Policy Puppetry‘ technique that worked across all major models by convincing LLMs that they were actually looking at configuration files that governed how the LLM worked.

Jailbreaking is a widespread problem in the AI world. In March, Palo Alto Networks’ threat research team Unit 42 compared three major platforms and found that one of them barely blocked half of its jailbreak attempts (although others fared better).

OpenAI has been warning about this issue since at least December 2023, when it published a guide for developers on how they could use LLMs to create their own guardrails. It said:

“When using LLMs as a guardrail, be aware that they have the same vulnerabilities as your base LLM call itself.”

We certainly shouldn’t poke fun at the AI vendors’ attempts to protect their LLMs from attack. It’s a difficult problem to crack, and just as in other areas of cybersecurity, there’s a constant game of cat and mouse between attackers and defenders.

What this shows is that you should always be careful about what you tell an AI assistant or chatbot—because while it feels private, it might not be. There might be someone half a world away diligently trying to bend the AI to their will and extract all the secrets they can from it.


We don’t just report on vulnerabilities—we identify them, and prioritize action.

Cybersecurity risks should never spread beyond a headline. Keep vulnerabilities in tow by using ThreatDown Vulnerability and Patch Management.

Phishing scams exploit New York’s inflation refund program

A warning from the New York State on their website informs visitors that:

“Scammers are calling, mailing, and texting taxpayers about income tax refunds, including the inflation refund check.” 

Here’s the warning on the website:

New York State Department of Taxation and Finance warning

We can confirm that several phishing campaigns are exploiting a legitimate initiative from New York State, which automatically sends refund checks to eligible residents to help offset the effects of inflation.

Although eligible residents do not need to apply, sign up or provide personal information, the scammers are asking targets to provide payment information to receive their refund.

BleepingComputer reported an example of a SMS-based phishing (smishing) campaign with that objective.

text message example

“New York Department of Revenue

Your refund request has been processed and approved. Please provide accurate payment information by September 29, 2025. Funds will be deposited into your bank account or mailed to you via paper check within 1-2 business days.

URL (now offline)

  • Failure to submit the required payment information by September 29, 2025, will result in permanent forfeiture of this refund….”

As you can see, it uses all the classic phishing techniques: you need to act fast, or the consequences will be severe. The sending number is from outside the US (Philippines) and the URL they want you to follow is not an official one (Official New York State Tax Department website and online services are under tax.ny.gov).

If recipients click the link, they are directed to a fake site impersonating the tax department, which asks for personal data such as name, address, email, phone, and Social Security Number—enough information for identity theft.

Scammers typically jump at opportunities like these—situations where people expect to receive some kind of payment, but are uncertain about the process. By telling victims they need to act fast or they will miss out, they hope to catch targets off guard and act on impulse.

How to stay safe

  • Never reply to or click links in unsolicited tax refund texts, calls, or emails.
  • Do not provide your Social Security number or banking details to anyone claiming to process your tax refund.
  • Legitimate inflation refunds are sent automatically if you’re eligible, there are no actions required.
  • If in doubt, contact the alleged source through known legitimate lines of communication to ask for confirmation.
  • Report scam messages and suspicious contacts to the NYS Tax Department or IRS immediately.
  • Use an up-to-date real-time anti-malware solution, preferably with a web protection component.

Pro tip: Did you know that you can submit scams like these to Malwarebytes Scam Guard? It immediately identified the text shown above as a scam.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

A week in security (October 6 – October 12)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.