Archive for author: makoadmin

ALPRs are recording your daily drive (Lock and Code S06E26)

This week on the Lock and Code podcast…

There’s an entire surveillance network popping up across the United States that has likely already captured your information, all for the non-suspicion of driving a car.

Automated License Plate Readers, or ALPRs, are AI-powered cameras that scan and store an image of every single vehicle that passes their view. They are mounted onto street lights, installed under bridges, disguised in water barrels, and affixed onto telephone poles, lampposts, parking signs, and even cop cars.

Once installed, these cameras capture a vehicle’s license plate number, along with its make, model, and color, and any identifying features, like a bumper sticker, or damage, or even sport trim options. Because nearly every ALPR camera has an associated location, these devices can reveal where a car was headed, and at what time, and by linking data from multiple ALPRs, it’s easy to determine a car’s daylong route and, by proxy, it’s owner’s daily routine.

This deeply sensitive information has been exposed in recent history.

In 2024, the US Cybersecurity and Information Security Agency discovered seven vulnerabilities in cameras made by Motorola Solutions, and at the start of 2025, the outlet Wired reported that more than 150 ALPR cameras were leaking their live streams.

But there’s another concern with ALPRs besides data security and potential vulnerability exploits, and that’s with what they store and how they’re accessed.

ALPRs are almost uniformly purchased and used by law enforcement. These devices have been used to help solve crime, but their databases can be accessed by police who do not live in your city, or county, or even state, and who do not need a warrant before making a search.

In fact, when police access the databases managed by one major ALPR manufacturer, named Flock, one of the few guardrails those police encounter is needing to type a single word in a basic text box. When Electronic Frontier Foundation analyzed 12 million searches made by police in Flock’s systems, they learned that police sometimes filled that text box with the word “protest,” meaning that police were potentially investigating activity that is protected by the First Amendment.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Will Freeman, founder of the ALRP-tracking project DeFlock Me, about this growing tide of neighborhood surveillance and the flimsy protections afforded to everyday people.

“License plate readers are a hundred percent used to circumvent the Fourth Amendment because [police] don’t have to see a judge. They don’t have to find probable cause. According to the policies of most police departments, they don’t even have to have reasonable suspicion.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Grok apologizes for creating image of young girls in “sexualized attire”

Another AI system designed to be powerful and engaging ends up illustrating how guardrails routinely fail when development speed and feature races outrun safety controls.

In a post on X, AI chatbot Grok confirmed that it generated an image of young girls in “sexualized attire.”

Apologizing post by Grok

The potential violation of US laws regarding child sexual abuse material (CSAM) demonstrates the AI chatbot’s apparent lack of guardrails. Or, at least, the guardrails are far from as effective as we’d like them to be.

xAI, the company behind Musk’s chatbot, is reviewing the incident “to prevent future issues,” and the user responsible for the prompt reportedly had their account suspended. Reportedly, in a separate post on X, Grok described the incident as an isolated case and said that urgent fixes were being issued after “lapses in safeguards” were identified.

During the holiday period, we discussed how risks increased when AI developments and features are rushed out the door without adequate safety testing. We keep pushing the limits of what AI can do faster than we can make it safe. Visual models that can sexualize minors are precisely the kind of deployment that should never go live without rigorous abuse testing.

So, while on one hand we see geo-blocking due to national and state content restrictions, the AI linked to one of the most popular social media platforms failed to block content that many would consider far more serious than what lawmakers are currently trying to regulate. In effect, centralized age‑verification databases become breach targets while still failing to prevent AI tools from generating abusive material.

Women have also reported being targeted by Grok’s image-generation features. One X user tweeted:

“Literally woke up to so many comments asking Grok to put me in a thong / bikini and the results having so many bookmarks. Even worse I went onto the Grok page and saw slimy disgusting lowlifes doing that to pictures of CHILDREN. Genuinely disgusting.”

We can only imagine the devastating results when cybercriminals would abuse this type of weakness to defraud or extort parents with fabricated explicit content of their young ones. Tools for inserting real faces into AI-generated content are already widely available, and current safeguards appear unable to reliably prevent abuse.

Tips

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

Treat everything you see online—images, voices, text—as potentially AI-generated unless they can be independently verified. They’re not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (December 29 – January 4)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

How AI made scams more convincing in 2025

This blog is part of a series where we highlight new or fast-evolving threats in consumer security. This one focuses on how AI is being used to design more realistic campaigns, accelerate social engineering, and how AI agents can be used to target individuals.

Most cybercriminals stick with what works. But once a new method proves effective, it spreads quickly—and new trends and types of campaigns follow.

In 2025, the rapid development of Artificial Intelligence (AI) and its use in cybercrime went hand in hand. In general, AI allows criminals to improve the scale, speed, and personalization of social engineering through realistic text, voice, and video. Victims face not only financial loss, but erosion of trust in digital communication and institutions.

Social engineering

Voice cloning

One of the main areas where AI improved was in the area of voice-cloning, which was immediately picked up by scammers. In the past, they would mostly stick to impersonating friends and relatives. In 2025, they went as far as impersonating senior US officials. The targets were predominantly current or former US federal or state government officials and their contacts.

In the course of these campaigns, cybercriminals used test messages as well as AI-generated voice messages. At the same time, they did not abandon the distressed-family angle. A woman in Florida was tricked into handing over thousands of dollars to a scammer after her daughter’s voice was AI-cloned and used in a scam.

AI agents

Agentic AI is the term used for individualized AI agents designed to carry out tasks autonomously. One such task could be to search for publicly available or stolen information about an individual and use that information to compose a very convincing phishing lure.

These agents could also be used to extort victims by matching stolen data with publicly known email addresses or social media accounts, composing messages and sustaining conversations with people who believe a human attacker has direct access to their Social Security number, physical address, credit card details, and more.

Another use we see frequently is AI-assisted vulnerability discovery. These tools are in use by both attackers and defenders. For example, Google uses a project called Big Sleep, which has found several vulnerabilities in the Chrome browser.

Social media

As mentioned in the section on AI agents, combining data posted on social media with data stolen during breaches is a common tactic. Such freely provided data is also a rich harvesting ground for romance scams, sextortion, and holiday scams.

Social media platforms are also widely used to peddle fake products, AI generated disinformation, dangerous goods,  and drop-shipped goods.

Prompt injection

And then there are the vulnerabilities in public AI platforms such as ChatGPT, Perplexity, Claude, and many others. Researchers and criminals alike are still exploring ways to bypass the safeguards intended to limit misuse.

Prompt injection is the general term for when someone inserts carefully crafted input, in the form of an ordinary conversation or data, to nudge or force an AI into doing something it wasn’t meant to do.

Malware campaigns

In some cases, attackers have used AI platforms to write and spread malware. Researchers have documented campaign where attackers leveraged Claude AI to automate the entire attack lifecycle, from initial system compromise through to ransom note generation, targeting sectors such as government, healthcare, and emergency services.

Since early 2024, OpenAI says it has disrupted more than 20 campaigns around the world that attempted to abuse its AI platform for criminal operations and deceptive campaigns.

Looking ahead

AI is amplifying the capabilities of both defenders and attackers. Security teams can use it to automate detection, spot patterns faster, and scale protection. Cybercriminals, meanwhile, are using it to sharpen social engineering, discover vulnerabilities more quickly, and build end-to-end campaigns with minimal effort.

Looking toward 2026, the biggest shift may not be technical but psychological. As AI-generated content becomes harder to distinguish from the real thing, verifying voices, messages, and identities will matter more than ever.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

In 2025, age checks started locking people out of the internet

If 2024 was the year lawmakers talked about online age verification, 2025 was the year they actually flipped the switch.​

In 2025, across parts of Europe and the US, age checks for certain websites (especially pornography) turned long‑running child‑protection debates into real‑world access controls. Overnight, users found entire categories of sites locked behind ID checks, platforms geo‑blocking whole countries, and VPN traffic surging as people tried to get around the new walls.​

From France’s hardline stance on adult sites to the UK’s Online Safety Act, to a patchwork of new rules across multiple US states, these “show me your ID before you browse” systems are reshaping the web. The stated goal is to “protect the children,” but in practice the outcome is frequently a blunt national block, followed by users voting with their VPN buttons.​

The core tension: safety vs privacy

The fundamental challenge for websites and services is not checking age in principle, but how to do it without turning everyday browsing into an identity check. Almost every viable method asks users to hand over sensitive data, raising the stakes if (or more likely when) that data leaks in a breach.​

For ordinary users, the result is a confusing mess of blocks, prompts, and workarounds. On paper, countries want better protection for minors. In practice, adults discover that entire platforms are unavailable unless they are prepared to disclose personal information or disguise where they connect from. No website wants to be the one blamed after an age‑verification database is compromised, yet regulators continue to push for stronger identity links.​

How age checks actually work

Regulators such as Ofcom publish lists of acceptable age‑verification methods, each with its own privacy and usability trade‑offs. None are perfect, and many shift risk from governments and platforms onto users’ most sensitive personal data.​

  • Facial age estimation: Users upload a selfie or short video so an algorithm can guess whether they look over 18, which avoids storing documents but relies on sensitive biometrics and imperfect accuracy.​
  • Open banking: An age‑check service queries your bank for a simple “adult or not” answer. It may be convenient on paper but it’s a hard sell when the relying site is an adult platform.​
  • Digital identity services: Digital ID wallets can assert “over 18” without exposing full credentials, but they add yet another app and infrastructure layer that must be trusted and widely adopted.​
  • Credit card checks: Using a valid payment card as a proxy for adulthood is simple and familiar, but it excludes adults without cards and does not cover lower age thresholds like “over 13.”​
  • Email‑based estimation: Systems infer age from where an email address has been used (such as banks or utilities), effectively encouraging cross‑service profiling and “digital snooping.”​
  • Mobile network checks: Providers indicate whether an account has age‑related restrictions. This can be fast, but is unreliable for pay‑as‑you‑go accounts, burner SIMs, or poorly maintained records.​
  • Photo‑ID matching: Users upload an ID document plus a selfie so systems can match faces and ages. This is effective, but concentrates highly sensitive identity data in yet another attractive target for attackers.​

My personal preference would be double‑blind verification: a third‑party provider verifies your age, then issues a simple token like “18+” to sites without revealing your identity or learning which site you visit, offering stronger privacy than most current approaches.​

In almost every case, users must surrender personal information or documents to prove their age, increasing the risk that identity data ends up in the wrong hands. This turns age gates into long‑lived security liabilities rather than temporary access checks.​

Geoblocking, VPNs, and cross‑border frictions

Right now, most platforms comply by detecting user location via IP address and then either demanding age checks or denying access entirely to users in specific regions. France’s enforcement actions, for example, led several major adult sites to geo-block the entire country in 2025, while the UK’s Online Safety Act coincided with a sharp rise in VPN use rather than widespread cross-border blocking.

European regulators generally focus on domestic ISPs, Digital Services Act reporting, and large platform fines rather than on filtering traffic from other countries, partly because broad traffic blocking raises net‑neutrality and technical complexity concerns. In the US, some state proposals have explicitly targeted VPN circumventions, signalling a willingness to attack the workarounds rather than the underlying incentives.​

Meanwhile, network‑level filtering vendors advertise “cross‑border” controls and VPN detection for governments, hinting at future scenarios where unregulated inbound flows or anonymity tools are aggressively throttled. If enforcement pressure grows, these capabilities could evolve from niche offerings into standard state infrastructure.​

A future of less anonymity?

A common argument is that eroding online anonymity will also curb toxic behavior and abuse on social media, since people act differently when their real‑world identity is at stake. But tying everyday browsing to identity checks risks chilling legitimate speech and exploration long before it delivers any proven civility benefits.​

A world where every connection requires ID is unlikely to arrive overnight. Still, the direction of travel is clear: more countries are normalizing age gates that double as identity checks, and more users are learning to route around them. Unless privacy‑preserving systems like robust double‑blind verification become the norm, age‑verification policies intended to protect children may end up undermining both privacy and open access to information.​


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

2025 exposed the risks we ignored while rushing AI

This blog is part of a series where we highlight new or fast-evolving threats in the consumer security landscape. This one looks at how the rapid rise of Artificial Intelligence (AI) is putting users at risk.

In 2025 we saw an ever-accelerating race between AI providers to push out new features. We also saw manufacturers bolt AI onto products simply because it sounded exciting. In many cases, it really shouldn’t have.

Agentic browsers

Agentic or AI browsers that can act autonomously to execute tasks introduced a new set of vulnerabilities—especially to prompt injection attacks. With great AI power comes great responsibility, and risk. If you’re thinking about using an AI browser, it’s worth slowing down and considering the security and privacy implications first. Even experienced AI providers like OpenAI (the makers of ChatGPT) were unable to keep their agentic browser Atlas secure. By pasting a specially crafted link into the Omnibox, attackers were able to trick Atlas into treating a URL input as a trusted command.

Mimicry

The popularity of AI chatbots created the perfect opportunity for scammers to distribute malicious apps. Even if the AI engine itself worked perfectly, attackers have another way in: fake interfaces. According to BleepingComputer, scammers are already creating spoofed AI sidebars that look identical to real ones from browsers like OpenAI’s Atlas and Perplexity’s Comet. These fake sidebars mimic the real interface, making them almost impossible to spot.

Misconfiguration

And then there’s this special category of using AI in products because it sounds cooler with AI or you can ask for more money from buyers.

Toys

We saw a plush teddy bear promising “warmth, fun, and a little extra curiosity” that was taken off the market after researcher found its built-in AI responding with sexual content and advice about weapons. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults.

Misinterpretation

Sometimes we rely on AI systems too much and forget that they hallucinate. As in the case where a school’s AI system mistook a boy’s empty Doritos bag for a gun and triggered a full-blown police response. Multiple police cars arrived with officers drawing their weapons, all because of a false alarm.

Data breaches

Alongside all this comes a surge in privacy concerns. Some issues stem from the data used to train AI models; others come from mishandled chat logs. Two AI companion apps recently exposed private conversations because users weren’t clearly warned that certain settings would result in their conversations becoming searchable or result in targeted advertising.

So, what should we do?

We’ve said it before and we’ll probably say it again:  We keep pushing the limits of what AI can do faster than we can make it safe. As long as we keep chasing the newest features, companies will keep releasing new integrations, whether they’re safe or not.

As consumers, the best thing we can do is stay informed about new developments and the risks that come with them. Ask yourself: Do I really need this? What am I trusting AI with? What’s the potential downside? Sometimes it’s worth doing things the slower, safer way.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Malware in 2025 spread far beyond Windows PCs

This blog is part of a series highlighting new and concerning trends we noticed over the last year. Trends matter because they almost always provide a good indication of what’s coming next.

If there’s one thing that became very clear in 2025, it’s that malware is no longer focused on Windows alone. We’ve seen some major developments, especially in campaigns targeting Android and macOS. Unfortunately, many people still don’t realize that protecting smartphones, tablets, and other connected devices is just as essential as securing their laptops.

Android

Banking Trojans on Android are not new, but their level of sophistication continues to rise. These threats continue to be a major problem in 2025, often disguising themselves as fake apps to steal credentials or stealthily take over devices. A recent wave of advanced banking Trojans, such as Herodotus, can mimic human typing behaviors to evade detection, highlighting just how refined these attacks have become. Android malware also includes adware that aggressively pushes intrusive ads through free apps, degrading both the user experience and overall security.

Several Trojans were found to use overlays, which are fake login screens appearing on top of real banking and cryptocurrency apps. They can read what’s on the screen, so when someone enters their username and password, the malware steals them.

macOS

One of the most notable developments for Mac users was the expansion of the notorious ClickFix campaign to macOS. Early in 2025, I described how criminals used fake CAPTCHA sites and a clipboard hijacker to provide instructions that led visitors ro infect their own machines with the Lumma infostealer.

ClickFix is the name researchers have since given to this type of campaign, where users are tricked into running malicious commands themselves. On macOS, this technique is being used to distribute both AMOS stealers and the Rhadamanthys infostealer.

Cross-platform

Malware developers increasingly use cross-platform languages such as Rust and Go to create malware that can run on Windows, macOS, Linux, mobile, and even Internet of Things (IoT) devices. This enables flexible targeting and expands the number of potential victims. Malware-as-a-Service (MaaS) models are on the rise, offering these tools for rent or purchase on underground markets, further professionalizing malware development and distribution.

Social engineering

iPhone users have been found to be more prone to scams and less conscious about mobile security than Android owners. That brings us to the first line of defense, which has nothing to do with the device or operating system you use: education.

Social engineering exploits human behavior, and knowing what to look out for makes you far less likely to fall for a scam.

Fake apps that turn out to be malware, malicious apps in the Play Store, sextortion, and costly romance scams all prey on basic human emotions. They either go straight for the money or deliver Trojan droppers as the first step toward infecting a device.

We’ve also seen consistent growth in Remote Access Trojan (RAT) activity, often used as an initial infection method. There’s also been a rise in finance-focused attacks, including cryptocurrency and banking-related targets, alongside widespread stealer malware driving data breaches.

What does this mean for 2026?

Taken together, these trends point to a clear shift. Cybercriminals are increasingly focusing on operating systems beyond Windows, combining advanced techniques and social engineering tailored specifically to mobile and macOS.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (December 22 – December 28)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Hacktivists claim near-total Spotify music scrape

Hacktivist group Anna’s Archive claims to have scraped almost all of Spotify’s catalog and is now seeding it via BitTorrent, effectively turning a streaming platform into a roughly 300 TB pirate “preservation archive.”

On its blog, the group states:

“A while ago, we discovered a way to scrape Spotify at scale. We saw a role for us here to build a music archive primarily aimed at preservation.”

Spotify insists that the hacktivists obtained no user data. Still, the incident highlights how large‑scale scraping, digital rights management (DRM) circumvention, and weak abuse controls can turn major content platforms into high‑value targets.

Anna’s Archive claims it obtained metadata for around 256 million tracks and audio files for roughly 86 million songs, totaling close to 300 TB. Reportedly, this represents about 99.9% of Spotify’s catalog and roughly 99.6% of all streams.

Spotify says it has “identified and disabled the nefarious user accounts that engaged in unlawful scraping” and implemented new safeguards.

From a security perspective, this incident is a textbook example of how scraping can escalate beyond “just metadata” into industrial‑scale content theft. By combining public APIs, token abuse, rate‑limit evasion, and DRM bypass techniques, attackers can extract protected content at scale. If you can create or compromise enough accounts and make them appear legitimate, you can chip away at content protections over time.

The “Spotify scrape” will likely be framed as a copyright story. But from a security angle, it serves as a reminder: if a platform exposes content or metadata at scale, someone will eventually automate access to it, weaponize it, and redistribute it.

And hiding behind violations of terms and conditions—which have never stopped criminals—is not effective security control.

How does this affect you?

There is currently no indication that passwords, payment details, or private playlists were exposed. This incident is purely about content and metadata, not user databases. That said, scammers may still claim otherwise. Be cautious of messages alleging your account data was compromised and asking for your login details.

Some general Spotify security tips, to be on the safe side:

  • If you have reused your Spotify password elsewhere or shared your credentials, consider changing your password for peace of mind.
  • Regularly review active sessions on streaming services and revoke anything you do not recognize. Spotify does not offer per-device session management, but you can sign out of all devices via Account > Settings and privacy on the Spotify website.
  • Avoid unofficial downloaders, converters, or “Spotify mods” that ask for your login or broad OAuth permissions. These tools often rely on the same kind of scraping infrastructure—or worse, function as credential-stealing malware.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Pornhub tells users to expect sextortion emails after data exposure

After a recent data breach that affected Pornhub Premium members, Pornhub has updated its online statement to warn users about potential direct contact from cybercriminals.

“We are aware that the individuals responsible for this incident have threatened to contact impacted Pornhub Premium users directly. You may therefore receive emails claiming they have your personal information. As a reminder, we will never ask for your password or payment information by email.”

Pornhub is one of the world’s most visited adult video-sharing websites, allowing users to view content anonymously or create accounts to upload and interact with videos.

Pornhub has reported that on November 8, 2025, a security breach at third-party analytics provider Mixpanel exposed “a limited set of analytics events for certain users.” Pornhub stressed that this was not a breach of Pornhub’s own systems, and said that passwords, payment details, and financial information were not exposed.

Mixpanel confirmed it experienced a security incident on November 8, 2025, but disputes that the Pornhub data originated from that breach. The company stated there is:

 “No indication that this data was stolen from Mixpanel during our November 2025 security incident or otherwise.”

Regardless of the source, cybercriminals commonly attempt to monetize stolen user data through direct extortion. At the moment, it is unclear how many users are affected, although available information suggests that only Premium members had their data exposed.

In October, we reported that one in six mobile users are targeted by sextortion scams. Sextortion is a form of online blackmail where criminals threaten to share a person’s private, nude, or sexually explicit images or videos unless the victim complies with their demands—often for more sexual content, sexual favors, or money.

Having your email address included in a dataset of known Pornhub users makes you a likely target for this type of blackmail.

How to stay safe from sextortion

Unless you used a dedicated throwaway email address to sign up for Pornhub Premium, you should be prepared to receive a sextortion-type email. If one arrives:

  • Any message referencing your Pornhub use, searches, or payment should be treated as an attempt to exploit breached or previously leaked data.
  • Never provide passwords or payment information by email. Pornhub has stated it will not ask for these.
  • Do not respond to blackmail emails. Ignore demands, do not pay, and do not reply—responding confirms your address is actively monitored.
  • Save extortion emails, including headers, content, timestamps, and attachments, but do not open links or files. This information can support reports to your email provider, local law enforcement, or cybercrime units.
  • Change your Pornhub password (if your account is still active) and ensure it’s unique and not reused anywhere else.
  • Turn on multi-factor authentication (MFA) for your primary email account and any accounts that could be used for account recovery or identity verification.
  • Review your bank and card statements for unfamiliar charges and report any suspicious transactions at once.
  • If you used a real-name email address for Pornhub, consider moving sensitive subscriptions to a separate, pseudonymous email going forward.

Use STOP, our simple scam response framework to help protect against scams. 

  • SSlow down: Don’t let urgency or pressure push you into action. Take a breath before responding. Legitimate businesses like your bank or credit card don’t push immediate action.  
  • TTest them: If you answered the phone and are feeling panicked about the situation, likely involving a family member or friend, ask a question only the real person would know—something that can’t be found online. 
  • OOpt out: If it feels off, hang up or end the conversation. You can always say the connection dropped. 
  • PProve it: Confirm the person is who they say they are by reaching out yourself through a trusted number, website or method you have used before. 

Should you have doubts about the legitimacy of any communications, submit them to Malwarebytes Scam Guard. It will help you determine whether it’s a scam and provide advice on how to act.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.