IT NEWS

Facial recognition: Where and how you can opt out

Our remote team recently took a trip to our Estonian office. When we arrived from our various destinations, we started chatting about how our travel had been. Our senior privacy advocate, David Ruiz, mentioned that he’d opted out of facial recognition while at San Francisco International Airport.

However, not everyone on the team knew this was even A Thing, and that made us think…maybe not all of our readers know either. So we looked into where and how you can opt out of facial recognition—a technology that’s becoming increasingly common in many aspects of everyday life.

Airports and border control

This one is relatively straightforward. The Transportation Security Administration (TSA) actively deploys one of the most visible and widespread uses of facial recognition in the US, especially at airports. Over 80 major airports have installed facial recognition cameras at security checkpoints to verify traveler identities quickly and without physical contact. The process involves a camera taking a photo of your face and matching it in real time to the photo on your passport or ID card.

What many people don’t realize is that participation in this facial recognition screening is voluntary. If you prefer not to have your face scanned, you can opt out. The TSA allows travelers to do this by choosing an alternative identity check, such as a manual ID inspection by a TSA officer. David said he simply asked, “Hey, can I opt out?” and there was no significant delay in his screening process.

The TSA officer must honor your request to opt out and provide an alternative method. Although signage about this choice exists, it’s often subtle or easy to miss, so it’s best to know your rights before you arrive.

US Customs and Border Protection (CBP) also uses facial recognition at departure gates and border crossings to verify identities. Like at TSA checkpoints, you can opt out. When you reach the gate, ask for manual identity verification instead of having your photo taken. This opt-out option applies to both US citizens and noncitizens.

Where else can you opt out?

Outside of airports and border control, facial recognition is increasingly found in various public and private settings, including stores, stadiums, and even some workplaces. However, the ability to opt out in these contexts varies widely depending on local laws, company policies, and the technology used.

Facial recognition technology has rapidly expanded across the United States, prompting a patchwork of national and state-level regulations, as well as growing public debate about privacy, civil rights, and the right to opt out. At the federal level, there is currently no comprehensive law that expressly regulates the use of facial recognition by government agencies.

However, in September 2024, the US Commission on Civil Rights highlighted the significant risks of unregulated facial recognition, particularly for marginalized communities, and called for rigorous testing, transparency, and prompt action in the event of any discovered discrepancies or biases.

Some federal agencies have implemented internal policies to safeguard privacy. For example, the Department of Homeland Security (DHS) has instituted requirements ensuring that US citizens can opt out of facial recognition for non-law-enforcement purposes unless otherwise required by law.

At the state level, regulations vary widely. States like Maryland have enacted some of the nation’s strongest laws, restricting law enforcement’s use of facial recognition to investigations of specific serious crimes and requiring agencies to document and disclose their use of the technology. Other states, such as Illinois, Texas, and Washington, require companies to notify individuals before collecting facial recognition data and, in some cases, obtain explicit consent.

In other parts of the world, the European Union’s Artificial Intelligence Act prohibits the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage.

Australia has implemented strict rules about the private use of facial recognition, and there has been backlash in Russia over Moscow’s new facial recognition-based metro payment system. The UK, on the other hand, is looking to reduce passport queues by using the technology.

In China, new regulations effective from June 2025 require businesses to be transparent about facial recognition use and allow individuals to refuse biometric data collection in many cases. However, this doesn’t stop the Chinese authorities from using facial recognition to identify people in the streets.

Challenges and considerations

As facial recognition technology continues to evolve, it’s important to know your rights. While opting out is possible in many official settings, there are challenges:

  • Awareness: Many people do not know they can opt out, as notices are often not clearly visible or explained.
  • Pressure to comply: Some travelers feel pressured to participate because facial recognition is faster and more convenient or because they are afraid of raising suspicion.
  • Limited opt-out options: For certain government or law enforcement uses, opting out may not be available or may require additional steps.
  • Data handling: Even when photos are taken, agencies like TSA claim they do not store images after verification, except in limited testing environments. However, concerns remain about how biometric data might be used or shared.

Privacy advocates and some lawmakers are pushing for stronger protections. For example, the Traveler Privacy Protection Act of 2025 was introduced in the US Senate to ensure Americans can opt out of involuntary facial recognition screenings at airports and to safeguard passenger data from misuse.

Meanwhile, organizations and governments are exploring better opt-out systems that respect privacy without compromising security. Some ideas include wearable tech that signals “do not scan” or comprehensive opt-out registries, though these raise their own privacy and technical challenges.

Summary

Facial recognition technology offers convenience but raises important privacy questions. Knowing how and where to opt out empowers you to protect your biometric privacy while navigating an increasingly digital world.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Sextortion email scammers increase their “Hello pervert” money demands

Every so often the sextortion emails that start with “Hello pervert” get a redesign.

You may have received one yourself: The emails claim that the sender has been watching your online behavior and caught you red-handed doing activities that you would like to keep private.

The email usually starts with “Hello pervert” and then goes on to claim that you have been watching porn. The sender often says they have footage of what you were watching and what you were doing while watching it.

To stop the sender from spreading the incriminating footage to your email contact list, you are asked to pay them money. The overall tone is threatening, manipulative, and designed to provoke fear and urgency.

We know these emails are a big problem. Thousands of people visit our website every week looking for information on sextortion emails like these. And now we’re seeing a new version with some features we haven’t seen in the past. Interestingly, just as the cost of food, travel, and—well—living has gone up, so has the amount of money that the scammers ask for in the email.

This most recent email we’ve seen also gives away the probable origin of the emails.

With all that we thought it would be interesting to take a closer look.

Here is the full text of the email:

“Hello pervert, I’ve sent thіs message from your Microsoft account.

I want to іnform you about a very bad sіtuatіon for you. However, you can benefіt from іt, іf you wіll act wіsely.

Have you heard of Pegasus? Thіs іs a spyware program that іnstalls on computers and smartphones and allows hackers to monіtor the actіvіty of devіce owners. It provіdes access to your webcam, messengers, emaіls, call records, etc. It works well on Androіd, іOS, macOS and Wіndows. I guess, you already fіgured out where I’m gettіng at.

It’s been a few months sіnce I іnstalled іt on all your devісes because you were not quіte choosy about what lіnks to clіck on the іnternet. Durіng thіs perіod, I’ve learned about all aspects of your prіvate lіfe, but one іs of specіal sіgnіfіcance to me.

I’ve recorded many vіdeos of you jerkіng off to hіghly controversіal рorn vіdeos. Gіven that the “questіonable” genre іs almost always the same, I can conclude that you have sіck рerversіon.

I doubt you’d want your frіends, famіly and co-workers to know about іt. However, I can do іt іn a few clіcks.

Every number іn your contact Iіst wіll suddenly receіve these vіdeos – on WhatsApp, on Telegram, on Instagram, on Facebook, on emaіl – everywhere. It іs goіng to be a tsunamі that wіll sweep away everythіng іn іts path, and fіrst of all, your former lіfe.

Don’t thіnk of yourself as an іnnocent vіctіm. No one knows where your рerversіon mіght lead іn the future, so consіder thіs a kіnd of deserved рunіshment to stop you.

I’m some kіnd of God who sees everythіng. However, don’t panіc. As we know, God іs mercіful and forgіvіng,  and so do I. But my merсy іs not free.

Transfer 1650$ to my Lіtecoіn (LTC) wallet: {redacted}

Once I receіve confіrmatіon of the transactіon, I wіll рermanently delete all vіdeos compromіsіng you, unіnstall Pegasus from all of your devіces, and dіsappear from your lіfe. You can be sure – my benefіt іs only money. Otherwіse, I wouldn’t be wrіtіng to you, but destroy your lіfe wіthout a word іn a second.

I’ll be notіfіed when you open my emaіl, and from that moment you have exactly 48 hours to send the money. If cryptocurrencіes are unchartered waters for you, don’t worry, іt’s very sіmple. Just google “crypto exchange” or “buy Litecoin” and then іt wіll be no harder than buyіng some useless stuff on Amazon.

I strongly warn you agaіnst the followіng:
* Do not reply to thіs emaіl. I’ve sent іt from your Mіcrosoft account.

* Do not contact the polіce. I have access to all your devісes, and as soon as I fіnd out you ran to the cops, vіdeos wіll be publіshed.

* Don’t try to reset or destroy your devісes. As I mentіoned above: I’m monіtorіng all your actіvіty, so you eіther agree to my terms or the vіdeos are рublіshed.

Also, don’t forget that cryptocurrencіes are anonymous, so іt’s іmpossіble to іdentіfy me usіng the provіded address.

Good luck, my perverted frіend. I hope thіs іs the last tіme we hear from each other.

And some frіendly advіce: from now on, don’t be so careless about your onlіne securіty.”

Spoofing your email address

One clever trick the scammers use is saying they’ve sent the email from your Microsoft account. The sender spoofs your email address, in the hope that it makes you think your device may indeed be compromised.

However, it’s easy for scammers to spoof (use a fake) email address because the email system doesn’t check if the sender is real. That’s why it’s important to be cautious. Even if an email looks like it’s from someone you know, or even yourself, it could be from a scammer.

If you’re technically savvy, one look at the authentication results in the email header would reveal that the email failed SPF checks, because the sending IP address is not authorized for the spoofed domain.


However, we can assume that most people receiving the email wouldn’t think to do this, and so the email spoofing might well work to add legitimacy to the email.
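For readers who want to try this themselves, here is a minimal sketch of that header check using only Python’s standard library. The raw message below is an illustrative mock-up, not the actual scam email:

```python
from email import message_from_string
from email.policy import default

# Parse a raw message and inspect its Authentication-Results header.
# The sample header values below are fabricated for illustration.
raw = """\
From: you@example.com
To: you@example.com
Subject: Hello pervert
Authentication-Results: mx.example.com;
 spf=fail (sender IP is not authorized for example.com) smtp.mailfrom=example.com;
 dkim=none
Content-Type: text/plain

(message body)
"""

msg = message_from_string(raw, policy=default)
auth = msg["Authentication-Results"] or ""
if "spf=fail" in auth or "dkim=fail" in auth:
    print("Authentication failed: the sender address is likely spoofed")
```

Real mail providers place this header in every delivered message, so viewing the “original message” or “source” in your mail client shows the same information without any code.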

Encoding errors

Looking at the source of the email, we got a little insight into its origin.

The intro of the email shows the repeated use of “=D1=96” and some other encoding errors. In fact, the whole text is riddled with encoding errors, which typically appear when Cyrillic or other non-Latin characters are misinterpreted as UTF-8 or quoted-printable, or when text is generated or processed by automated systems not properly handling character sets.


The sequence =D1=96 is the quoted-printable encoding of the UTF-8 bytes for the Unicode character U+0456, the Cyrillic letter “і”, a near-perfect look-alike of the Latin “i”. This strongly points towards the writer’s native language being one that uses the Cyrillic script, which is predominantly used in Eastern European and Central Asian countries, with Russian being the most prominent language using it.
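You can reproduce the decoding with Python’s standard quopri module; the sample phrase below is taken from the scam email’s style, with the Cyrillic substitutions intact:

```python
import quopri

# "=D1=96" is the quoted-printable form of the UTF-8 byte pair 0xD1 0x96,
# which decodes to U+0456: the Cyrillic letter "і", a look-alike of Latin "i".
encoded = b"I want to =D1=96nform you about a very bad s=D1=96tuat=D1=96on"
decoded = quopri.decodestring(encoded).decode("utf-8")

print(decoded)              # the "i"s here are actually Cyrillic U+0456
print(hex(ord("\u0456")))   # 0x456
```

Swapping Latin letters for Cyrillic look-alikes like this also happens to make the text harder for naive keyword-based spam filters to match.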

These errors also tell us that this scammer doesn’t use the most sophisticated tools. Although the awkward sentence structures and repetitive language are consistent with automated text generation or translation, they are classic signs of a low-effort, high-volume campaign—not the kind where an AI has been used to add personalization or a more natural voice.

Price hike

Back in April, the price that scammers were asking victims to pay for being a “pervert” was $1200, and in May it was $1450.


This time we’re asked to pay no less than $1650.

There could be several reasons for this. Maybe the costs of the operation have gone up. Or the scammers feel the value of their threat and its consequences have increased.

Scammers often start with what seems a “reasonable amount” to them and, if successful, incrementally increase it for future victims. This allows them to gauge the maximum amount that people are willing to pay to avoid the threatened consequences.

I’m happy to report that both the mentioned Litecoin wallets are empty. Let’s keep it that way.

How to spot a sextortion email

Once you’re aware of them, these emails are easy to recognize. Not every characteristic below will appear in each email you receive, but every one of them is a red flag in its own right.

  • They often look as if they were sent from your own email address.
  • The scammer accuses you of inappropriate behavior and claims to have footage of that behavior.
  • In the email the scammer claims to have used Pegasus or some Trojan to spy on you through your own computer.
  • The scammer says they know your password and may even offer one as “proof”. This password is likely to have been stolen in a separate data breach and is unrelated to the sextortion email itself.
  • You are urged to pay up quickly or the so-called footage will be spread to all your contacts. Often you’re only allowed one day to pay.
  • The actual message often arrives as an image or a PDF attachment. Scammers do this to bypass phishing filters.

How to react to sextortion emails

First and foremost, never reply to emails of this kind. Replying tells the sender that someone is reading the emails sent to that address, and they will repeatedly try other methods to defraud you.

  • Don’t let yourself get rushed into action or decisions. Scammers rely on the fact that you will not take the time to think this through and subsequently make mistakes.
  • Do not open unsolicited attachments. Especially when the sender’s address is suspicious or even your own.
  • If the email includes a password, make sure you are not using it anymore and if you are, change it as soon as possible.
  • If you are having trouble organizing your passwords, have a look at a password manager.
  • For your peace of mind, turn off your webcam or buy a webcam cover so you can cover it when you’re not using it.

Check your digital footprint

Sextortion emails often contain passwords that have been stolen in another data breach and posted online. If you want to find out what personal data of yours has been exposed online, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you most frequently use) and you’ll get a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Thousands of private camera feeds found online. Make sure yours isn’t one of them

If you have internet-connected cameras in or around your home, be sure to check their settings. Researchers just discovered 40,000 of them serving up images of homes and businesses to the internet.

Bitsight’s TRACE research team revealed the issue in a report released this month. The cameras were providing the images without any kind of password or authentication, it said. While some of them were connected to businesses, showing images of offices, retail stores, and factories, many were likely installed in private residences.

Many cameras contain their own web servers that people can access remotely using a browser or app so that they can monitor their premises while away. These are often completely exposed to the internet, according to the report. That means anyone could access the video feed by typing in the right IP address.

The highest number of exposed cameras by far was in the US, at around 14,000. Breaking down the states there showed the highest concentrations in California and Texas.

Japan, the second highest country, had just half that, at 7,000. After that came Austria, Czechia, South Korea, Germany, Italy, Russia, and Taiwan.

The big threat for such users is privacy. People put these cameras everywhere, including extra-private spaces in their homes like kids’ and adults’ bedrooms. Attackers might spy on people or even set them up for extortion if the images are compromising.

Aside from the obvious privacy implications, there are other security worries, the report said. Cameras could be used to gather surveillance data by someone planning a physical intrusion, it pointed out.

But access to admin interfaces is just one threat; getting SSH access (which allows someone to log into the device via a terminal and control it as they would a regular computer) could give an attacker total control over the camera’s hardware and software if they’re able to exploit vulnerabilities left there by the manufacturer.

If this happens, a camera (which is, after all, just a computer with a lens) could become a jumping-off point for the attacker to compromise other computers on the network. Or it could be joined to a botnet to do the attacker’s bidding.

Botnets made up of connected devices are common. One of the most famous such botnets, Mirai, co-opted cameras and other internet-connected systems to launch denial-of-service attacks, in which thousands of devices would try to connect to a target, flooding it with traffic and rendering it inoperable.

Bitsight’s report also cites one case where attackers used vulnerabilities in a camera to install ransomware on it.

A long history of camera compromise

Internet-enabled camera issues are nothing new. Finding exposed feeds, whether via Bitsight’s own scanning engine or via publicly accessible ones like Shodan.io, is like shooting fish in a barrel. Indeed, Bitsight did something similar in 2023. In the past, we’ve seen sites like Insecam (now offline), which streamed images from 40,000 unsecured video cameras around the world. Some of those cameras were doubtless there for public consumption, just as many were not.

Finding unsecured feeds is so easy because people tend to just plug these things in and turn them on, much as you might use a portable air conditioner. Vendors should force some basic cybersecurity hygiene, but they don’t, because they don’t want to introduce any costly friction. Regulation for connected smart devices like IP cameras has emerged in the US and the UK, but enforcement is another issue.

Some might advise you to only choose a respectable brand of IP camera, but you can’t always trust big-name vendors who claim to act responsibly. Last year, Amazon settled with the Federal Trade Commission, paying $5.6m over charges that its employees and contractors spied on users of its Ring cameras.

Ring allowed everyone working for it to see any customer’s feeds, the FTC said, which led to some employees repeatedly accessing feeds of young women in sensitive areas of the home. Ring also failed to adequately protect its cameras against intruders, the FTC said, and attackers who took control of cameras used their microphones to hurl racial slurs at children and swear at women lying in bed, the complaint alleged.

Other vendor missteps have included Wyze accidentally showing customers each other’s video feeds, and Eufy sending camera images to the cloud when it said it wouldn’t.

How to protect your internet-enabled camera

We can’t think of a worse privacy scenario than having someone snoop on you and your loved ones in what is supposed to be your safest space. Letting any connected device into your home is always risky, especially when it has video capabilities. Here is some advice to minimize that risk:

  • Use unique credentials. Make sure that you set unique logins and passwords for your cameras so that people can’t just stroll in and view them. That means taking some time to configure the camera through its admin interface and making sure to change the default password.
  • Restrict IP camera use to non-sensitive places as much as possible. While some Ring customers apparently needed cameras in the bathroom and bedroom, we urge you to think twice.
  • Research the camera for vulnerabilities. Check to see whether the brand you’re considering has had any security issues in the past, and how quickly the issues have been fixed.
  • Try accessing your camera insecurely. Try accessing your camera remotely without using your login credentials. If you can, then so can everyone else.
  • Patch regularly. Find out how to update your device with the latest security patches and check for updates regularly, or preferably set it to update automatically if you can.
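The “try accessing your camera insecurely” step in the checklist above can be scripted. Here is a minimal sketch using only Python’s standard library; the address is a hypothetical placeholder you would replace with your own camera’s public IP and port, and you should only run this against devices you own:

```python
import urllib.request
import urllib.error

# Hypothetical placeholder: replace with your own camera's address and port.
CAMERA_URL = "http://203.0.113.10:8080/"

def is_openly_accessible(url, timeout=5.0):
    """Return True if the URL serves content without asking for credentials."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        # A 401/403 response means the camera at least demanded credentials.
        return False
    except (urllib.error.URLError, OSError):
        # Nothing reachable at that address, or a firewall blocked us (good).
        return False

if is_openly_accessible(CAMERA_URL):
    print("Camera responds without credentials: lock it down!")
else:
    print("No unauthenticated response.")
```

Run it from outside your home network (for example, from a phone hotspot); a camera that only answers with a login prompt, or not at all, is what you want to see.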

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Gmail’s multi-factor authentication bypassed by hackers to pull off targeted attacks

Russian hackers have bypassed Google’s multi-factor authentication (MFA) in Gmail to pull off targeted attacks, according to security researchers at Google Threat Intelligence Group (GTIG).

The hackers pulled this off by posing as US Department of State officials in advanced social engineering attacks, building a rapport with the target and then persuading them to create app-specific passwords (app passwords).

App passwords are special 16-character codes that Google generates to allow certain apps or devices to access your Google Account, especially when you have MFA enabled.

Normally, when you sign in to your Google account, you use your regular password plus a second verification step like a code sent to your phone. But since some older or less secure apps and devices—like certain email clients, cameras, or older phones—are unable to handle this extra verification step, Google provides app passwords as an alternative way to sign in.

However, because app passwords skip the second verification step, hackers can steal or phish them more easily than a full MFA login.

In an example provided by Citizen Lab, the attackers initially made contact by posing as a State Department representative, inviting the target to a consultation in the setting of a private online conversation.

Although the invitation came from a Gmail account, it CCed four @state.gov addresses, creating a false sense of security and making the target believe that other people at the State Department were monitoring the email conversation.

Most likely, the attacker fabricated those email addresses, knowing that the State Department’s email server accepts all messages and does not send a bounce response even if the addresses do not exist.

As the conversation unfolded and the target showed interest, they received an official-looking document with instructions to register for an “MS DoS Guest Tenant” account. The document outlined the process of “adding your work account… to our MS DoS Guest Tenant platform,” which included creating an app password to “enable secure communications between internal employees and external partners.”

So, while the target believes they are creating and sharing an app password to access a State Department platform in a secure way, they are actually giving the attacker full access to their Google account.

The campaign, which ran for months, targeted prominent academics and critics of Russia, and it was executed with such attention to detail and skill that the researchers suspect the attacker was a Russian state-sponsored entity.

Be safe, avoid app passwords

Now that this bypass is known, we can expect more social engineering attacks leveraging app-specific passwords in the future. Here’s how to stay safe:

  • Only use app passwords when absolutely necessary. If you have the opportunity to change to apps and devices that support more secure sign-in methods, make that switch.
  • The advice to enable MFA still stands strong, but not all MFA is created equal. Authenticator apps (like Google Authenticator) or hardware security keys (FIDO2/WebAuthn) are more resistant to attacks than SMS-based codes, let alone app passwords.
  • Regularly educate yourself and others about recognizing phishing attempts. Attackers often bypass MFA by tricking users into revealing credentials or app passwords through phishing.
  • Keep an eye on unusual login attempts or suspicious behavior, such as logins from unfamiliar locations or devices. And limit those logins where possible.
  • Regularly update your operating system and the apps you use to patch vulnerabilities that attackers might exploit. Enable automatic updates whenever possible so you don’t have to remember to do it yourself.
  • Use security software that can block malicious domains and recognize scams.
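As an aside on why authenticator apps are a stronger second factor: the codes are derived locally from a shared secret and the current time (RFC 6238), so nothing usable travels over SMS. The sketch below is a minimal illustration using only the standard library, not Google’s implementation:

```python
import base64, hashlib, hmac, struct, time

def totp(secret_b32, t=None, digits=6, step=30):
    """Derive a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" (base32-encoded
# below), at t=59 seconds, with 8 digits, yields "94287082".
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))  # 94287082
```

Because the secret never leaves the device after enrollment, an attacker has to phish the short-lived code itself, which is harder than capturing an SMS or a long-lived app password.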


A week in security (June 15 – June 21)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Billions of logins for Apple, Google, Facebook, Telegram, and more found exposed online

When organizations, good or bad, start hoarding collections of login credentials, the numbers quickly add up. Take the 184 million logins for social media accounts we recently reported on. Now try to imagine 16 billion!

Researchers at Cybernews have discovered 30 exposed datasets containing anywhere from several million to over 3.5 billion records each. In total, the researchers uncovered an unimaginable 16 billion records.

The likely source: information stealers, or infostealers for short. Infostealers are malicious software designed specifically to gather sensitive information from infected devices. These malware variants silently extract credentials stored in browsers, email clients, messaging apps, and even crypto wallets, and send the data to cybercriminals.

And for those who are about to shrug it off as “probably old data”: it’s not. According to the researchers, these aren’t just old breaches being recycled. This is fresh, weaponizable intelligence at scale.

Once again, an unfortunate demonstration of how effective and widespread infostealers are.

The only silver lining here is that all of the datasets were exposed only briefly: long enough for researchers to uncover them, but not long enough to determine who was controlling these vast amounts of data.

But that doesn’t take away from the fact that these credentials are in the hands of cybercriminals who can use them for:

  • Account takeovers: Cybercriminals can use stolen credentials to hijack social media, banking, or corporate accounts.
  • Identity theft: Personal details enable fraud, fraudulent loan applications, or impersonation.
  • Targeted phishing: Combining leaked data allows cybercriminals to engage in very convincing and personalized scams.
  • Ransomware/business email compromise (BEC) attacks: Compromised business credentials facilitate network intrusions or fraudulent wire transfers.

The leak includes credentials for virtually every large online service. Apple, Google, Facebook, Telegram, developer platforms, VPNs, and more.

And the number is so massive it exceeds our imagination. If you printed each of the 16 billion credentials on its own line of standard paper and stacked the pages, the pile would stand roughly 20 miles tall, well into the stratosphere.

How to protect against infostealers

There are a few things you can do to limit the dangers of infostealers:

  • Use an up-to-date and active anti-malware solution that can detect and remove infostealers.
  • Do not reuse passwords across different sites and services. A password manager can be very helpful to create safe passwords and remember them for you.
  • Enable two-factor authentication (2FA) for every account you can. 2FA makes it much more difficult for an attacker to access your account with your login credentials. If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of 2FA can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.

Check your digital footprint

Data stolen by infostealers is often sold or posted online. If you want to find out what personal data of yours has been exposed online, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you most frequently use) and we’ll give you a free report.



Mattel’s going to make AI-powered toys, kids’ rights advocates are worried

Toy company Mattel has announced a deal with OpenAI to create AI-powered toys, but digital rights advocates have urged caution.

In a press release last week, the owner of the Barbie brand announced a “strategic collaboration” with the AI company, which makes ChatGPT. “By using OpenAI’s technology, Mattel will bring the magic of AI to age-appropriate play experiences with an emphasis on innovation, privacy, and safety,” it said.

Details on what might emerge were scarce, but Mattel said that it only integrates new technologies into its products in “a safe, thoughtful, and responsible way”.

Advocacy groups were quick to denounce the move. Robert Weissman, co-president of public rights advocacy group Public Citizen, commented:

“Mattel should announce immediately that it will not incorporate AI technology into children’s toys. Children do not have the cognitive capacity to distinguish fully between reality and play.

Endowing toys with human-seeming voices that are able to engage in human-like conversations risks inflicting real damage on children. It may undermine social development, interfere with children’s ability to form peer relationships, pull children away from playtime with peers, and possibly inflict long-term harm.”

The kids aren’t alright

Some are concerned about the effect of AI on young developing minds. Researchers from universities including Harvard and Carnegie Mellon have warned about negative social effects, along with a tendency for children to attribute human-like properties to AI.

One such child, 14-year-old Sewell Setzer III, took his own life after repeatedly talking to chatbots from Character.AI, which allows users to create their own AI characters.

In a lawsuit against the company, his mother Megan Garcia described how he began losing sleep and growing more depressed after using the service, to the point where he fell asleep in class. A therapist diagnosed him with anxiety and disruptive mood disorder. It emerged that he had become obsessed with an AI representing an adult character from Game of Thrones that purported to be in a real romantic relationship with him.

Past mistakes

We’re not suggesting Mattel would condone such activities. It cites “more than 80 years of earned trust from parents and families”, but that statement glosses over previous missteps.

These include Hello Barbie. Mattel launched this Wi-Fi connected doll in 2015 and encouraged kids to talk with it. It asked personal questions about children and their families, sending that audio to a third-party company that used AI to generate a response. Non-profit group Fairplay, which advocates for protecting children from inappropriate technology and brand marketing, launched a campaign protesting child surveillance. Subsequently, investigators found vulnerabilities that would allow intruders to eavesdrop on that audio. Mattel pulled the toy from shelves in 2017.

Fairplay executive Josh Golin slammed the OpenAI partnership announcement.

“Apparently, Mattel learned nothing from the failure of its creepy surveillance doll Hello Barbie a decade ago and is now escalating its threats to children’s privacy, safety and well-being.

Children’s creativity thrives when their toys and play are powered by their own imagination, not AI. And given how often AI ‘hallucinates’ or gives harmful advice, there is no reason to believe Mattel and OpenAI’s ‘guardrails’ will actually keep kids safe.”

Mattel lost parents’ trust again in November 2024, when a packaging mistake sent owners of its ‘Wicked’ dolls to an adult movie website (Wicked Pictures) instead of the promotional landing page for the Wicked movie.

Incidents like these show that even with the best intentions in the world, companies can make mistakes.

Ultimately, it’s up to parents to make decisions about whether they’ll expose their children to AI-powered toys. It’s perhaps inevitable that AI will reach every corner of our lives, but is it ready and polished enough to be used on our children?


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

5 riskiest places to get scammed online

Scammers love your smartphone.

They can text you fraudulent tracking links for packages you never bought. They can profess their empty love to you across your social media apps. They can bombard your email inbox with phishing attempts, impersonate a family member through a phone call, and even trick you into visiting malicious versions of legitimate websites.

But, according to new research from Malwarebytes, while scammers can reach people through just about any modern method of communication, they have at least five favored channels for finding new victims: emails, phone calls and voicemails, malicious websites, social media platforms, and text messages. These are where people are most likely to find phishing attempts, romance scams, sextortion threats, and more, and where everyday people should be most cautious about messages from unknown senders or supposedly urgent requests for money or information.

For this research, Malwarebytes surveyed 1,300 people over the age of 18 in the US, UK, Austria, Germany, and Switzerland, asking about the frequency, type, impact, and consequences of any scams they found on their smartphones. Capturing just how aggravating today’s online world is, a full 78% of people said they encountered or received a scam on their smartphone at least once a week.

Here are the top five places that people actually encountered those weekly scams:

  • 65% of people encountered a scam at least once a week through their email
  • 53% encountered a scam at least once a week through phone calls and voicemails
  • 50% encountered a scam at least once a week through text messages (SMS)
  • 49% encountered a scam at least once a week through malicious websites
  • 47% encountered a scam at least once a week through social media platforms

Unfortunately, scam prevention cannot fixate on only these five channels, as scammers change their tactics based on how they’re trying to trick their victims. For instance, though people were least likely to encounter a weekly scam through a buying or selling platform like Facebook Marketplace or Craigslist (36%), such platforms were the most likely place for scam victims to have their credit card details and passwords stolen by a scammer masquerading as a legitimate business.

All that noise has taken a toll on people’s confidence: just 15% of people strongly agreed that they could confidently identify a scam on their phone.

Daily dilemma

While 78% of people encountered a scam on their smartphone at least once a week, a shocking 44% of people encountered a scam at least daily. Similar to the weekly breakdown, here are the top five ways that people encountered scams once a day:

  • 34% of people encountered a scam at least once a day through their email
  • 25% encountered a scam at least once a day through malicious websites
  • 24% encountered a scam at least once a day through phone calls and voicemails
  • 24% encountered a scam at least once a day through social media platforms
  • 22% encountered a scam at least once a day through text messages (SMS)

This list encompasses so much of any person’s daily smartphone use. They use it to check emails, browse the internet, make phone calls, scroll through social media, and text family and friends. And yet, it is in these exact places that people have come to expect getting scammed. As if the 44% of people who encounter a daily scam weren’t depressing enough, 28% of people said they encounter scams “multiple times a day.”

But the frequency of scams can only reveal so much. How, exactly, are scammers trying to trick their targets?

Social engineering and extortion

Scams are so difficult to analyze because they vary both in their delivery method and their method of deceit. A message that tries to trick a person into clicking a package tracking link is a simple act of social engineering—relying on false urgency or faked identity to fool a victim. But that message itself can come through a text message or an email, and it can direct a person to a malicious website on the internet. A romance scam, similarly, can start on a social media platform but can move into a messaging service like WhatsApp. And sometimes, a threat to release private information—which can be categorized as “extortion”—can happen through a phone call, a text message, or any combination of other communication channels.

This is why, to understand how people were being harmed by scams, Malwarebytes asked respondents about roughly 20 types of cybercrime that they could encounter and experience.

Broadly, Malwarebytes found that 74% of people had “encountered” or come across a social engineering scam, and that 36% fell victim to such scams. These were the most common social engineering scams that people encountered and experienced:

  • Phishing/smishing/vishing: 53% encountered and 19% experienced
  • USPS/FedEx/postal scams: 42% encountered and 12% experienced
  • Impersonation scams: 35% encountered and 10% experienced
  • Marketplace or business scams: 33% encountered and 10% experienced
  • Romance scams: 33% encountered and 10% experienced

For respondents who experienced any type of scam—making them scam victims—Malwarebytes also asked where they had found or encountered that scam. Here, the results show a far more intimate picture of where scams are most likely to harm the public.

For instance, 26% of charity scam victims were originally tricked on social media platforms. 37% of postal notification scam victims were first reached, predictably, through SMS/text messages. And, interestingly, despite how frequently cryptocurrency scams spread through social media, the most likely place for such a scam victim to be contacted was through email (30% for email vs. 13% for social media).

In its research, Malwarebytes also discovered that 17% of people have fallen victim to extortion scams, which includes ransomware scares, virtual kidnapping schemes, and threats to release sexually explicit photos (sextortion) or deepfake images.

Here, scam victims again shared where these scams arrived. The most popular channels for deepfake scammers to victimize people were social media platforms and emails—both at 17%. For sextortion scam victims, the most popular channel was email, at 35%. And 24% of virtual kidnapping scam victims said they were contacted through text messages, making it the most popular way to deliver such a threat.

These numbers may look depressing, but they should instead educate. No, there is no such thing as a perfectly safe communication channel today. But that doesn’t mean there isn’t help.

Check if something is a scam

Malwarebytes Scam Guard is a free, AI-powered digital safety companion that reviews any concerning text, email, phone number, link, image, or online message and provides on-the-spot guidance to help users avoid and report scams. Just share a screenshot of any questionable message—like that strange email demanding a password reset or that alarming text flagging a traffic penalty—and Scam Guard will guide you to safety.

Fake bank ads on Instagram scam victims out of money

Ads on Instagram—including deepfake videos—are impersonating trusted financial institutions like Bank of Montreal (BMO) and EQ Bank (Equitable Bank) in order to scam people, according to BleepingComputer.

There are some variations in how the scammers approach this. Some use Artificial Intelligence (AI) to create deepfake videos aimed at gathering personal information, while others link to typosquatted domains that not only look the same but also have domain names very similar to that of the impersonated bank.
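Spotting a typosquatted domain can be partly automated by measuring how close a domain is to a known-good one. Below is a minimal, illustrative Python sketch using the standard library's `difflib`; the trusted list, the 0.8 similarity threshold, and the lookalike domain are all assumptions for illustration, not a production detector:

```python
from difflib import SequenceMatcher

# Illustrative sketch only: the trusted list and 0.8 threshold are
# assumptions, and "eq-bank.ca" is a made-up lookalike, not a real site.
TRUSTED_DOMAINS = {"eqbank.ca", "bmo.com"}

def is_lookalike(domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that are nearly, but not exactly, a trusted domain."""
    domain = domain.lower().strip()
    if domain in TRUSTED_DOMAINS:
        return False  # an exact match is the genuine domain
    return any(
        SequenceMatcher(None, domain, trusted).ratio() >= threshold
        for trusted in TRUSTED_DOMAINS
    )

print(is_lookalike("eq-bank.ca"))   # near-match of a trusted domain: True
print(is_lookalike("eqbank.ca"))    # the trusted domain itself: False
```

Real typosquat detection is harder than this (homoglyphs like "rn" for "m", different TLDs, subdomain tricks), which is why checking the advertiser and visiting the bank's site directly remain the safer habits.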

BleepingComputer shows an example of an advertisement, which claims to be from “Eq Marketing” and closely mimics EQ Bank’s branding and color scheme, while promising a rather optimistic interest yield of “4.5%”.

Advertisement leading to fake website
Image courtesy of BleepingComputer

In this example, using the “Yes, continue with my account” button presents the user with a fraudulent “EQ Bank” login screen, prompting the visitor to provide their banking credentials. From there, it’s likely the scammers will empty the bank account and move on to their next victim.

Another fraudulent ad impersonates Brian Belski, BMO’s Chief Investment Strategist and leader of its Investment Strategy Group. This may lead people to believe they are getting valuable financial advice, for example by luring them into a “private WhatsApp investment group”.

Impersonations of bank employees and authorities are increasing and can often sound very convincing. These scammers demand immediate payment or action to avoid further consequences, which can dupe individuals into inadvertently sending money to a fraudulent account.

It’s not just Instagram where WhatsApp investment groups are used as a lure by scammers. On X we see invites like these several times a week.

WhatsApp investment group invitation

Recommendations to stay safe

As cyberthreats and financial scams become more sophisticated, it is increasingly difficult for individuals to determine if a request coming via social media, email, text, phone call or even video call is authentic.

By staying alert and proactive, you can outsmart even the most convincing deepfake scams. Remember, a healthy dose of skepticism is your best companion in the digital age.

  • Verify before you trust: Always double-check the legitimacy of any ad or message claiming to be from your bank. Visit your bank’s official website or contact them directly using verified contact details before taking any action.
  • Double-check the advertiser account: BleepingComputer found that the advertiser accounts running the fake ads on Instagram only had pages on Facebook, not on Instagram itself.
  • Look for red flags: Be wary of ads that create a sense of urgency, promise unrealistic rewards, or ask for sensitive information like passwords or PINs. Authentic banks will never request such details through social media or ads.
  • Scrutinize visuals and language: Deepfakes can be convincing, but subtle inconsistencies in video quality, unnatural facial movements, or awkward phrasing can be giveaways. Trust your instincts if something feels off.
  • Enable Multi-Factor Authentication (MFA): Strengthen your account security by enabling MFA on your banking and social media accounts. This adds an extra layer of protection even if your credentials are compromised.
  • Report suspicious content: If you encounter a suspicious ad or message, report it to Instagram and notify your bank immediately. Your vigilance can help prevent others from falling victim.
  • Use web protection: This can range from programs that block known malicious sites, to browser extensions that can detect skimmers, to sophisticated assistants that you can ask if something is a scam.
  • Stay informed: Keep up to date with the latest scam tactics and security advice from your bank and reputable cybersecurity sources. Awareness is your best defense.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Scammers hijack websites of Bank of America, Netflix, Microsoft, and more to insert fake phone number

The examples in this post are actual fraud attempts found by Malwarebytes Senior Director of Research, Jérôme Segura.

Cybercriminals frequently use fake search engine listings to take advantage of our trust in popular brands, and then scam us. It often starts, as with so many attacks, with a sponsored search result on Google.

In the latest example of this type of scam, we found tech support scammers hijacking the results of people looking for 24/7 support for Apple, Bank of America, Facebook, HP, Microsoft, Netflix, and PayPal.

sponsored search result for Netflix

Here’s how it works: Cybercriminals pay for a sponsored ad on Google pretending to be a major brand. Often, this ad leads people to a fake website. However, in the cases we recently found, the visitor is taken to the legitimate site with a small difference.

Visitors are taken to the help/support section of the brand’s website, but instead of the genuine phone number, the hijackers display their scammy number instead.

The browser address bar shows the legitimate site’s address, so there’s no reason for suspicion. However, the information the visitor sees is misleading, because the page’s search results have been poisoned to display the scammer’s number prominently in what looks like an official search result.

Once the number is called, the scammers will pose as the brand with the aim of getting their victim to hand over personal data or card details, or even allow remote access to their computer. In the case of Bank of America or PayPal, the scammers want access to their victim’s financial account so they can empty it of money.

A more precise name for this type of attack is a search parameter injection attack, because the scammer has crafted a malicious URL that embeds their own fake phone number into the genuine site’s legitimate search functionality.

See the example below on Netflix:

Netflix Help Center with scammer's number

These tactics are very effective because:

  • Users see the legitimate Netflix URL in their address bar
  • The page layout looks authentic (again, because it is the real Netflix site)
  • The fake number appears in what looks like a search result, making it seem official.

This happens because Netflix’s search functionality blindly reflects whatever users put in the search query parameter, without proper sanitization or validation. This creates a reflected input vulnerability that scammers can exploit.
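To make the mechanics concrete, here is a minimal Python sketch of a reflected search parameter injection. The domain, parameter name, and phone number are invented for illustration; this is not any real brand's URL structure:

```python
from urllib.parse import quote, urlsplit, parse_qs

# Invented example values -- not a real brand's URL or phone number.
injected_text = "Call Now +1-800-555-0199 Emergency Support"
malicious_url = "https://help.example-brand.com/search?q=" + quote(injected_text)

# Note the encoded characters a wary visitor might spot in the address
# bar: %20 for spaces and %2B for the plus sign in the phone number.
print(malicious_url)

# A vulnerable page reflects the raw query straight into its results
# header, so the scammer's text appears on the genuine domain.
query = parse_qs(urlsplit(malicious_url).query)["q"][0]
page_header = f'Results for "{query}"'  # unsanitized reflection
print(page_header)
```

The fix on the site's side is equally simple in principle: validate or sanitize the query before echoing it, and never render user-supplied input as if it were trusted page content.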

Fortunately, Malwarebytes Browser Guard caught this and shows a warning about “Search Hijacking Detected,” and explains that unauthorized changes were made to search results with an overlaid phone number.

But Netflix is just one example. As we mentioned earlier, we found other brands, such as PayPal, Apple, Microsoft, Facebook, Bank of America, and HP, being abused in the same way by scammers.

HP Customer Service page with scammer's phone number

The HP example is a bit easier to identify as suspicious, as it shows “4 Results for” in front of the scammer’s text. But even then, if you’re on a genuine website you expect to see a genuine number, right?

Interestingly, Apple is the one where we found the scammer’s number was the hardest to identify as false.

Apple Support page with scammer's phone number

This looks as if the web page is telling the visitor they have no matches for their search, so they’d better call the number on display. That would drive them straight into the arms of the scammers.

How to stay safe from tech support scams

As demonstrated in these cases, Malwarebytes Browser Guard is a great defense mechanism against this kind of scam, and it is free to use.

There are also some other red flags to keep an eye out for:

  • A phone number in the URL
  • Suspicious search terms like “Call Now” or “Emergency Support” in the address bar of the browser
  • Lots of encoded characters like the %20 (space) and %2B (+ sign) along with phone numbers
  • The website showing a search result before you entered one
  • The urgent language (Call Now, Account suspended, Emergency support) displayed on the website
  • An in-browser warning for known scams (don’t ignore this).
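Several of these red flags can be checked mechanically. The sketch below is a rough, illustrative Python heuristic (the URLs and patterns are invented assumptions, not a production detector) that looks for phone numbers and urgent language hidden in a URL's query string:

```python
import re
from urllib.parse import urlsplit, unquote_plus

# Rough, illustrative heuristic only; the example URLs are invented.
URGENT_PHRASES = ("call now", "emergency support", "account suspended")

def query_looks_suspicious(url: str) -> bool:
    """Check a URL's query string for phone numbers or urgent language."""
    # Decode %20, %2B, etc. so the scammer's text is readable.
    query = unquote_plus(urlsplit(url).query).lower()
    has_phone = bool(re.search(r"\+?\d[\d\s().-]{7,}\d", query))
    has_urgency = any(phrase in query for phrase in URGENT_PHRASES)
    return has_phone or has_urgency

print(query_looks_suspicious(
    "https://help.example.com/search?q=Call%20Now%20%2B1-800-555-0199"))  # True
print(query_looks_suspicious(
    "https://help.example.com/search?q=how+to+reset+password"))  # False
```

A heuristic like this will miss plenty and occasionally misfire, which is exactly why dedicated web protection and a moment of skepticism before dialing remain the stronger defenses.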

And before you call any brand’s support number, look up the official number in previous communications you’ve had with the company (such as an email, or on social media) and compare it to the one you found in the search results. If they are different, investigate until you’re sure which one is the legitimate one.

If during the call, you are asked for personal information or banking details that have nothing to do with the matter you’re calling about, hang up.
