Outlook add-in goes rogue and steals 4,000 credentials and payment data

Researchers found a malicious Microsoft Outlook add-in that was used to steal 4,000 Microsoft account credentials, along with credit card numbers and banking security answers.

How is it possible that the Microsoft Office Add-in Store ended up listing an add-in that silently loaded a phishing kit inside Outlook’s sidebar?

A developer launched an add-in called AgreeTo, an open-source meeting scheduling tool with a Chrome extension. It was a popular tool, but at some point, it was abandoned by its developer, its backend URL on Vercel expired, and an attacker later claimed that same URL.

That requires some explanation. Office add-ins are essentially XML manifests that tell Outlook to load a specific URL in an iframe. Microsoft reviews and signs the manifest once but does not continuously monitor what that URL serves later.
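
To make that trust model concrete, here is a minimal Python sketch (our illustration, not a Microsoft tool) that extracts the URLs an add-in manifest tells Outlook to load and checks whether each host still resolves in DNS. The element names follow the public Office add-in manifest schema; note that a lapsed domain an attacker has already re-registered would still resolve, so real monitoring would also need registration-history checks. This only shows how thin the review-once model is.

```python
# A minimal sketch (our illustration, not a Microsoft tool) of auditing the
# trust model described above: pull every URL an add-in manifest tells
# Outlook to load, then check whether each host still resolves in DNS.
import socket
import xml.etree.ElementTree as ET
from urllib.parse import urlparse

def source_urls(manifest_path: str) -> set[str]:
    """Collect DefaultValue URLs from SourceLocation-style manifest elements."""
    urls = set()
    for elem in ET.parse(manifest_path).iter():
        # Manifest tags are namespaced ("{ns}SourceLocation"), so match on
        # the local name only.
        if elem.tag.split("}")[-1] in ("SourceLocation", "FunctionFile", "Url"):
            value = elem.get("DefaultValue")
            if value and value.startswith("https://"):
                urls.add(value)
    return urls

def host_resolves(url: str) -> bool:
    """True if the URL's host still resolves in DNS."""
    try:
        socket.gethostbyname(urlparse(url).hostname)
        return True
    except (socket.gaierror, TypeError):
        return False

if __name__ == "__main__":
    for url in sorted(source_urls("manifest.xml")):  # hypothetical file name
        print(url, "resolves" if host_resolves(url) else "DANGLING")
```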

So, when the outlook-one.vercel.app subdomain became free to claim, a cybercriminal jumped at the opportunity to scoop it up and abuse the powerful ReadWriteItem permissions requested and approved in 2022. These permissions meant the add-in could read and modify a user’s email when loaded. The permissions were appropriate for a meeting scheduler, but they served a different purpose for the criminal.

While Google removed the dead Chrome extension in February 2025, the Outlook add-in stayed listed in Microsoft’s Office Store, still pointing to a Vercel URL that no longer belonged to the original developer.

An attacker registered that Vercel subdomain and deployed a simple four-page phishing kit: a fake Microsoft login, a password collection page, Telegram-based data exfiltration, and a redirect to the real login.microsoftonline.com.

The trick was simple and effective. When users opened the add-in, they saw what looked like a normal Microsoft sign-in inside Outlook. They entered their credentials, which were sent via a JavaScript function to the attacker’s Telegram bot along with IP data, and were then bounced to the real Microsoft login so nothing seemed suspicious.
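
For defenders, one practical tell is that a cloned sign-in page has to send the captured data somewhere off-domain. The Python sketch below (our illustration, not the researchers’ tooling; the usage URL is hypothetical) lists every absolute URL embedded in a page that points outside Microsoft’s real login hosts. An api.telegram.org hit inside a “Microsoft sign-in” page would be exactly the giveaway described above.

```python
# A defensive sketch (our illustration, not the researchers' tooling): list
# every absolute URL embedded in a sign-in page that points outside the hosts
# a genuine Microsoft login would talk to.
import re
import urllib.request
from urllib.parse import urlparse

EXPECTED_HOSTS = {"login.microsoftonline.com", "login.live.com"}

def off_domain_endpoints(page_url: str) -> list[str]:
    html = urllib.request.urlopen(page_url, timeout=10).read().decode("utf-8", "replace")
    found = re.findall(r'https?://[^\s"\'<>]+', html)
    return sorted({u for u in found if urlparse(u).hostname not in EXPECTED_HOSTS})

# Usage (hypothetical URL):
# for endpoint in off_domain_endpoints("https://suspicious-addin.example/login"):
#     print(endpoint)
```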

The researchers were able to access the attacker’s poorly secured Telegram-based exfiltration channel and recovered more than 4,000 sets of stolen Microsoft account credentials, plus payment and banking data, indicating the campaign was active and part of a larger multi-brand phishing operation.

“The same attacker operates at least 12 distinct phishing kits, each impersonating a different brand – Canadian ISPs, banks, webmail providers. The stolen data included not just email credentials but credit card numbers, CVVs, PINs, and banking security answers used to intercept Interac e-Transfer payments. This is a professional, multi-brand phishing operation. The Outlook add-in was just one of its distribution channels.”

What to do

If you have used the AgreeTo add-in at any point after May 2023:

  • Check that the add-in has been removed; if it is still installed, uninstall it.
  • Change the password for your Microsoft account.
  • If that password (or close variants) was reused on other services (email, banking, SaaS, social), change those as well and make each one unique.
  • Review recent sign‑ins and security activity on your Microsoft account, looking for logins from unknown locations or devices, or unusual times.
  • Review other sensitive information you may have shared via email.
  • Scan your mailbox for signs of abuse: messages you did not send, auto‑forwarding rules you did not create, or password‑reset emails for other services you did not request.
  • Watch payment statements closely for at least the next few months, especially small “test” charges and unexpected e‑transfer or card‑not‑present transactions, and dispute anything suspicious immediately.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Child exploitation, grooming, and social media addiction claims put Meta on trial

Meta is facing two trials over child safety allegations in California and New Mexico. The lawsuits are landmark cases, marking the first time that any such accusations have reached a jury. Although over 40 state attorneys general have filed suits about child safety issues with social media, none had gone to trial until now.

The New Mexico case, filed by Attorney General Raúl Torrez in December 2023, centers on child sexual exploitation. Torrez’s team built their evidence by posing as children online and documenting the sexual solicitations that followed. The team brought the suit under New Mexico’s Unfair Trade Practices Act, a consumer protection statute that prosecutors argue sidesteps Section 230 protections.

The most damaging material in the trial, which is expected to run seven weeks, may be Meta’s own paperwork. Newly unsealed internal documents revealed that a company safety researcher had warned about the sheer scale of the problem, claiming that around half a million cases of child exploitation are happening daily. Torrez did not mince words about what he believes the platform has become, calling it an online marketplace for human trafficking. From the complaint:

“Meta’s platforms Facebook and Instagram are a breeding ground for predators who target children for human trafficking, the distribution of sexual images, grooming, and solicitation.”

The complaint’s emphasis on weak age verification touches on a broader issue regulators around the world are now grappling with: how platforms verify the age of their youngest users—and how easily those systems can be bypassed.

In our own research into children’s social media accounts, we found that creating underage profiles can be surprisingly straightforward. In some cases, minimal checks or self-declared birthdates were enough to access full accounts. We also identified loopholes that could allow children to encounter content they shouldn’t or make it easier for adults with bad intentions to find them.

The social media and VR giant has pushed back hard, calling the state’s investigation ethically compromised and accusing prosecutors of cherry-picking data. Defense attorney Kevin Huff argued that the company disclosed its risks rather than concealing them.

Yesterday, Stanford psychiatrist Dr. Anna Lembke told the court she believes Meta’s design features are addictive and that the company has been using the term “Problematic Internet Use” internally to avoid acknowledging addiction.

Meanwhile in Los Angeles, a separate bellwether case against Meta and Google opened on Monday. A 20-year-old woman identified only as KGM is at the center of the case. She alleges that YouTube and Instagram hooked her from childhood. She testified that she was watching YouTube at six, was on Instagram by nine, and suffered from worsening depression and body dysmorphia. TikTok and Snap settled her claims before trial; her case against Meta and Google is the first of more than 2,400 personal injury filings consolidated in the proceeding. Plaintiffs’ attorney Mark Lanier called it a case about:

“two of the richest corporations in history, who have engineered addiction in children’s brains.”

A litany of allegations

None of this appeared from nowhere. In 2021, whistleblower Frances Haugen leaked internal Facebook documents showing the company knew its platforms damaged teenage mental health. In 2023, Meta whistleblower Arturo Béjar testified before the Senate that the company ignored sexual endangerment of children.

Unredacted documents unsealed in the New Mexico case in early 2024 suggested something uglier still: that the company had actively marketed messaging platforms to children while suppressing safety features that weren’t considered profitable. Internal employees sounded alarms for years but executives reportedly chose growth, according to New Mexico AG Raúl Torrez. Last September, whistleblowers said that the company had ignored child sexual abuse in virtual reality environments.

Outside the courtroom, governments around the world are moving faster than the US Congress. Australia banned under-16s from social media in December 2025, becoming the first country to do so. France’s National Assembly followed, approving a ban on social media for under-15s in January by 130 votes to 21. Spain announced its own under-16 ban this month. By last count, at least 15 European governments were considering similar measures. Whether any of these bans will actually work is uncertain, particularly as young users openly discuss ways to bypass controls.

The United States, by contrast, has passed exactly one major federal child online safety law: the Children’s Online Privacy Protection Act (COPPA), in 1998. The Kids Online Safety Act (KOSA), introduced in 2022, passed the Senate 91-3 in mid-2024 then stalled in the House. It was reintroduced last May and has yet to reach a floor vote. States have tried to fill the gap, with 18 proposing similar legislation in 2025, but only one of those measures was enacted (in Nebraska). A comprehensive federal framework remains nowhere in sight.

On its most recent earnings call, Meta acknowledged it could face material financial losses this year. The pressure is no longer theoretical. The juries in Santa Fe and Los Angeles will now weigh whether the company’s design choices and safety measures crossed legal lines.

If you want to understand how social media platforms can expose children to harmful content—and what parents can realistically do about it—check out our research project on social media safety.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Apple patches zero-day flaw that could let attackers take control of devices

Apple has released security updates for iPhones, iPads, Macs, Apple Watches, Apple TVs, and Safari, fixing, in particular, a zero-day flaw that is actively exploited in targeted attacks.

Exploiting this zero-day flaw would allow cybercriminals to run any code they want on the affected device, potentially installing spyware or backdoors without the owner noticing.

Installing these updates as soon as possible keeps your personal information—and everything else on your Apple devices—safe from such an attack.

CVE-2026-20700

The zero-day vulnerability, tracked as CVE-2026-20700, is a memory corruption issue addressed in watchOS 26.3, tvOS 26.3, macOS Tahoe 26.3, visionOS 26.3, iOS 26.3, and iPadOS 26.3. An attacker with memory write capability may be able to execute arbitrary code.

Apple says the vulnerability was used as part of an infection chain combined with CVE-2025-14174 and CVE-2025-43529 against devices running iOS versions prior to iOS 26.

Those two vulnerabilities were already patched in the December 2025 update.

Updates for your particular device

The table below shows which updates are available and points you to the relevant security content for that operating system (OS).

iOS 26.3 and iPadOS 26.3: iPhone 11 and later, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 8th generation and later, and iPad mini 5th generation and later
iOS 18.7.5 and iPadOS 18.7.5: iPhone XS, iPhone XS Max, iPhone XR, iPad 7th generation
macOS Tahoe 26.3: macOS Tahoe
macOS Sequoia 15.7.4: macOS Sequoia
macOS Sonoma 14.8.4: macOS Sonoma
tvOS 26.3: Apple TV HD and Apple TV 4K (all models)
watchOS 26.3: Apple Watch Series 6 and later
visionOS 26.3: Apple Vision Pro (all models)
Safari 26.3: macOS Sonoma and macOS Sequoia

How to update your Apple devices

How to update your iPhone or iPad

For iOS and iPadOS users, here’s how to check if you’re using the latest software version:

  • Go to Settings > General > Software Update. You will see if there are updates available and be guided through installing them.
  • Turn on Automatic Updates if you haven’t already—you’ll find it on the same screen.
iPadOS 26.3 update

How to update macOS on any version

To update macOS on any supported Mac, use the Software Update feature, which Apple designed to work consistently across all recent versions. Here are the steps:

  • Click the Apple menu in the upper-left corner of your screen.
  • Choose System Settings (or System Preferences on older versions).
  • Select General in the sidebar, then click Software Update on the right. On older macOS, just look for Software Update directly.
  • Your Mac will check for updates automatically. If updates are available, click Update Now (or Upgrade Now for major new versions) and follow the on-screen instructions. Before you upgrade to macOS Tahoe 26, please read these instructions.
  • Enter your administrator password if prompted, then let your Mac finish the update (it might need to restart during this process).
  • Make sure your Mac stays plugged in and connected to the internet until the update is done.

How to update Apple Watch

Ensure your iPhone is paired with your Apple Watch and connected to Wi-Fi, then:

  • Keep your Apple Watch on its charger and close to your iPhone.
  • Open the Watch app on your iPhone.
  • Tap General > Software Update.
  • If an update appears, tap Download and Install.
  • Enter your iPhone passcode or Apple ID password if prompted.

Your Apple Watch will automatically restart during the update process. Make sure it remains near your iPhone and on charge until the update completes.

How to update Apple TV

Turn on your Apple TV and make sure it’s connected to the internet, then:

  • Open the Settings app on Apple TV.
  • Navigate to System > Software Updates.
  • Select Update Software.
  • If an update appears, select Download and Install.

The Apple TV will download the update and restart as needed. Keep your device connected to power and Wi-Fi until the process finishes.

How to update your Safari browser

Safari updates are included with macOS updates, so installing the latest version of macOS will also update Safari. To check manually:

  • Open the Apple menu > System Settings > General > Software Update.
  • If you see a Safari update listed separately, click Update Now to install it.
  • Restart your Mac when prompted.

If you’re on an older macOS version that’s still supported (like Sonoma or Sequoia), Apple may offer Safari updates independently through Software Update.

More advice to stay safe

The most important fix—however inconvenient it may be—is to upgrade to iOS 26.3 (or the latest available version for your device). Not doing so means missing an accumulating list of security fixes, leaving your device exposed to newly discovered flaws.

But here are some other useful tips:

  • Make it a habit to restart your device on a regular basis.
  • Do not open unsolicited links and attachments without verifying with the trusted sender.
  • Remember: Apple threat notifications will never ask users to click links, open files, or install apps, and will never ask for account passwords or verification codes.
  • For Apple Mail users, these vulnerabilities create risk when viewing HTML-formatted emails containing malicious web content.
  • Malwarebytes for iOS can help keep your device secure, with Trusted Advisor alerting you when important updates are available.
  • If you are a high-value target, or you want the extra level of security, consider using Apple’s Lockdown Mode.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Criminals are using AI website builders to clone major brands

Cybercriminals abused v0, Vercel’s AI website builder, to create a Malwarebytes lookalike website.

Cybercriminals no longer need design or coding skills to create a convincing fake brand site. All they need is a domain name and an AI website builder. In minutes, they can clone a site’s look and feel, plug in payment or credential-stealing flows, and start luring victims through search, social media, and spam.

One side effect of being an established and trusted brand is that you attract copycats who want a slice of that trust without doing any of the work. Cybercriminals have always known it is much easier to trick users by impersonating something they already recognize than by inventing something new—and developments in AI have made it trivial for scammers to create convincing fake sites.

Registering a plausible-looking domain is cheap and fast, especially through registrars and resellers that do little or no upfront vetting. Once attackers have a name that looks close enough to the real thing, they can use AI-powered tools to copy layouts, colors, and branding elements, and generate product pages, sign-up flows, and FAQs that look “on brand.”

A flood of fake “official” sites

Data from recent holiday seasons shows just how routine large-scale domain abuse has become.

Over a three‑month period leading into the 2025 shopping season, researchers observed more than 18,000 holiday‑themed domains with lures like “Christmas,” “Black Friday,” and “Flash Sale,” with at least 750 confirmed as malicious and many more still under investigation. In the same window, about 19,000 additional domains were registered explicitly to impersonate major retail brands, nearly 3,000 of which were already hosting phishing pages or fraudulent storefronts.

These sites are used for everything from credential harvesting and payment fraud to malware delivery disguised as “order trackers” or “security updates.”

Attackers then boost visibility using SEO poisoning, ad abuse, and comment spam, nudging their lookalike sites into search results and promoting them in social feeds right next to the legitimate ones. From a user’s perspective, especially on mobile without the hover function, that fake site can be only a typo or a tap away.

When the impersonation hits home

A recent example shows how low the barrier to entry has become.

We were alerted to a site at installmalwarebytes[.]org that masqueraded from logo to layout as a genuine Malwarebytes site.

Close inspection revealed that the HTML carried a meta tag value pointing to v0 by Vercel, an AI-assisted app and website builder.

Built by v0

The tool lets users paste an existing URL into a prompt to automatically recreate its layout, styling, and structure—producing a near‑perfect clone of a site in very little time.
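
Site owners can hunt for this fingerprint themselves. The Python sketch below (our illustration; the exact name and content of the tag varies by builder and version, and the target URL is hypothetical) extracts any “generator” meta tags from a page’s HTML, the same kind of marker that exposed this clone.

```python
# A small sketch (our illustration; tag names/contents vary by builder and
# version) that extracts "generator" meta tags from a page's HTML.
import urllib.request
from html.parser import HTMLParser

class GeneratorSniffer(HTMLParser):
    def __init__(self):
        super().__init__()
        self.generators: list[str] = []

    def handle_starttag(self, tag, attrs):
        if tag == "meta":
            attr = dict(attrs)
            if (attr.get("name") or "").lower() == "generator":
                self.generators.append(attr.get("content") or "")

def page_generators(url: str) -> list[str]:
    html = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    sniffer = GeneratorSniffer()
    sniffer.feed(html)
    return sniffer.generators

print(page_generators("https://example.com"))  # hypothetical target
```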

The history of the imposter domain traces an incremental evolution into abuse.

Registered in 2019, the site did not initially contain any Malwarebytes branding. In 2022, the operator began layering in Malwarebytes branding while publishing Indonesian‑language security content. This likely helped with search reputation while normalizing the brand look to visitors. Later, the site went blank, with no public archive records for 2025, only to resurface as a full-on clone backed by AI‑assisted tooling.

Traffic did not arrive by accident. Links to the site appeared in comment spam and injected links on unrelated websites, giving users the impression of organic references and driving them toward the fake download pages.

Payment flows were equally opaque. The fake site used PayPal for payments, but the integration hid the merchant’s name and logo from the user-facing confirmation screens, leaving only the buyer’s own details visible. That allowed the criminals to accept money while revealing as little about themselves as possible.

PayPal module

Behind the scenes, historical registration data pointed to an origin in India and to a hosting IP (209.99.40[.]222) associated with domain parking and other dubious uses rather than normal production hosting.

Combined with the AI‑powered cloning and the evasive payment configuration, it painted a picture of low‑effort, high‑confidence fraud.

AI website builders as force multipliers

The installmalwarebytes[.]org case is not an isolated misuse of AI‑assisted builders. It fits into a broader pattern of attackers using generative tools to create and host phishing sites at scale.

Threat intelligence teams have documented abuse of Vercel’s v0 platform to generate fully functional phishing pages that impersonate sign‑in portals for a variety of brands, including identity providers and cloud services, all from simple text prompts. Once the AI produces a clone, criminals can tweak a few links to point to their own credential‑stealing backends and go live in minutes.

Research into AI’s role in modern phishing shows that attackers are leaning heavily on website generators, writing assistants, and chatbots to streamline the entire kill chain—from crafting persuasive copy in multiple languages to spinning up responsive pages that render cleanly across devices. One analysis of AI‑assisted phishing campaigns found that roughly 40% of observed abuse involved website generation services, 30% involved AI writing tools, and about 11% leveraged chatbots, often in combination. This stack lets even low‑skilled actors produce professional-looking scams that used to require specialized skills or paid kits.

Growth first, guardrails later

The core problem is not that AI can build websites. It’s that the incentives around AI platform development are skewed. Vendors are under intense pressure to ship new capabilities, grow user bases, and capture market share, and that pressure often runs ahead of serious investment in abuse prevention.

As Malwarebytes General Manager Mark Beare put it:

“AI-powered website builders like Lovable and Vercel have dramatically lowered the barrier for launching polished sites in minutes. While these platforms include baseline security controls, their core focus is speed, ease of use, and growth—not preventing brand impersonation at scale. That imbalance creates an opportunity for bad actors to move faster than defenses, spinning up convincing fake brands before victims or companies can react.”

Site generators allow cloned branding of well‑known companies with no verification, publishing flows skip identity checks, and moderation either fails quietly or only reacts after an abuse report. Some builders let anyone spin up and publish a site without even confirming an email address, making it easy to burn through accounts as soon as one is flagged or taken down.

To be fair, there are signs that some providers are starting to respond by blocking specific phishing campaigns after disclosure or by adding limited brand-protection controls. But these are often reactive fixes applied after the damage is done.

Meanwhile, attackers can move to open‑source clones or lightly modified forks of the same tools hosted elsewhere, where there may be no meaningful content moderation at all.

In practice, the net effect is that AI companies benefit from the growth and experimentation that comes with permissive tooling, while the consequences are left to victims and defenders.

We have blocked the domain in our web protection module and requested a domain and vendor takedown.

How to stay safe

End users cannot fix misaligned AI incentives, but they can make life harder for brand impersonators. Even when a cloned website looks convincing, there are red flags to watch for:

  • Before completing any payment, always review the “Pay to” details or transaction summary. If no merchant is named, back out and treat the site as suspicious.
  • Use an up-to-date, real-time anti-malware solution with a web protection module.
  • Do not follow links posted in comments, on social media, or in unsolicited emails to buy a product. Always use a verified and trusted method to reach the vendor.

If you come across a fake Malwarebytes website, please let us know.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

February 2026 Patch Tuesday includes six actively exploited zero-days

Microsoft releases important security updates on the second Tuesday of every month, known as “Patch Tuesday.” This month’s updates patch 59 Microsoft CVEs, including six zero-days.

Let’s have a quick look at these six actively exploited zero-days.

Windows Shell Security Feature Bypass Vulnerability

CVE-2026-21510 (CVSS score 8.8 out of 10) is a security feature bypass in the Windows Shell. A protection mechanism failure allows an attacker to circumvent Windows SmartScreen and similar prompts once they convince a user to open a malicious link or shortcut file.

The vulnerability is exploited over the network but still requires user interaction. The victim must be socially engineered into launching the booby‑trapped shortcut or link for the bypass to trigger. Successful exploitation lets the attacker suppress or evade the usual “are you sure?” security dialogs for untrusted content, making it easier to deliver and execute further payloads without raising user suspicion.
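
For context on what such a bypass sidesteps: SmartScreen’s prompts hinge on the Mark of the Web, a small “Zone.Identifier” alternate data stream that Windows attaches to downloaded files. The sketch below (Windows/NTFS only; the file path is hypothetical) simply reads that stream, showing the metadata the security dialogs rely on.

```python
# Windows stores the Mark of the Web in a "Zone.Identifier" alternate data
# stream attached to downloaded files; SmartScreen consults it before warning
# the user. This sketch (Windows/NTFS only; hypothetical path) reads it.
def mark_of_the_web(path: str) -> str | None:
    try:
        # NTFS exposes alternate data streams via "filename:streamname".
        with open(path + ":Zone.Identifier", encoding="utf-8") as ads:
            return ads.read()
    except OSError:
        return None  # no stream: the file was not tagged as downloaded

print(mark_of_the_web(r"C:\Users\me\Downloads\setup.exe"))
```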

MSHTML Framework Security Feature Bypass Vulnerability

CVE-2026-21513 (CVSS score 8.8 out of 10) affects the MSHTML Framework, the legacy Trident rendering engine used by Internet Explorer and embedded web views. It is classified as a protection mechanism failure that results in a security feature bypass over the network.

A successful attack requires the victim to open a malicious HTML file or a crafted shortcut (.lnk) that leverages MSHTML for rendering. When opened, the flaw allows an attacker to bypass certain security checks in MSHTML, potentially removing or weakening normal browser or Office sandboxing and warning protections and enabling follow‑on code execution or phishing activity.

Microsoft Word Security Feature Bypass Vulnerability

CVE-2026-21514 (CVSS score 5.5 out of 10) affects Microsoft Word. It stems from reliance on untrusted input in a security decision, leading to a local security feature bypass.

An attacker must persuade a user to open a malicious Word document to exploit this vulnerability. If exploited, the untrusted input is processed incorrectly, potentially bypassing Word’s defenses for embedded or active content—leading to execution of attacker‑controlled content that would normally be blocked.

Desktop Window Manager Elevation of Privilege Vulnerability

CVE-2026-21519 (CVSS score 7.8 out of 10) is a local elevation‑of‑privilege vulnerability in Windows Desktop Window Manager caused by type confusion (a flaw where the system treats one type of data as another, leading to unintended behavior).

A locally authenticated attacker with low privileges and no required user interaction can exploit the issue to gain higher privileges. Exploitation must be done locally, for example via a crafted program or exploit chain stage running on the target system. An attacker who successfully exploited this vulnerability could gain SYSTEM privileges.

Windows Remote Access Connection Manager Denial of Service Vulnerability

CVE-2026-21525 (CVSS score 6.2 out of 10) is a denial‑of‑service vulnerability in the Windows Remote Access Connection Manager service (RasMan).

An unauthenticated local attacker can trigger the flaw with low attack complexity, leading to a high impact on availability but no direct impact on confidentiality or integrity. This means they could crash the service or potentially the system, but not elevate privileges or execute malicious code.

Windows Remote Desktop Services Elevation of Privilege Vulnerability

CVE-2026-21533 (CVSS score 7.8 out of 10) is an elevation‑of‑privilege vulnerability in Windows Remote Desktop Services, caused by improper privilege management.

A local authenticated attacker with low privileges, and no required user interaction, can exploit the flaw to escalate privileges to SYSTEM and fully compromise confidentiality, integrity, and availability on the affected system. Successful exploitation typically involves running attacker‑controlled code on a system with Remote Desktop Services present and abusing the vulnerable privilege management path.

Azure vulnerabilities

Azure users are also advised to take note of two critical vulnerabilities with CVSS ratings of 9.8.

How to apply fixes and check you’re protected

These updates fix security problems and keep your Windows PC protected. Here’s how to make sure you’re up to date:

1. Open Settings

  • Click the Start button (the Windows logo at the bottom left of your screen).
  • Click on Settings (it looks like a little gear).

2. Go to Windows Update

  • In the Settings window, select Windows Update (usually at the bottom of the menu on the left).

3. Check for updates

  • Click the button that says Check for updates.
  • Windows will search for the latest Patch Tuesday updates.
  • If you previously enabled automatic updates, you may see this under Update history:
list of recent updates
  • Or you may see a Restart required message, which means all you have to do is restart your system and you’re done updating.
  • If not, continue with the steps below.

4. Download and Install

  • If updates are found, they’ll start downloading right away. Once complete, you’ll see a button that says Install or Restart now.
  • Click Install if needed and follow any prompts. Your computer will usually need a restart to finish the update. If it does, click Restart now.

5. Double-check you’re up to date

  • After restarting, go back to Windows Update and check again. If it says You’re up to date, you’re all set!
You're up to date
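
If you look after more than one PC, you can also script this final check. Here is a minimal sketch (assumes Windows with PowerShell available; Get-HotFix is a built-in cmdlet) that lists installed updates so you can confirm the latest patches actually landed.

```python
# A quick scripted version of the final check (assumes Windows with
# PowerShell on the PATH; Get-HotFix is a built-in cmdlet).
import subprocess

def installed_hotfixes() -> str:
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command",
         "Get-HotFix | Sort-Object InstalledOn | Format-Table -AutoSize"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

print(installed_hotfixes())
```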

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Malwarebytes earns PCMag Best Tech Brand spot, scores 100% with MRG Effitas 

Malwarebytes is on a roll.  Recently named one of PCMag’s “Best Tech Brands for 2026,” Malwarebytes also scored 100% on the first-ever MRG Effitas consumer security product test, cementing the fact that we are loved by users and trusted by experts.  

But don’t take our word for it.

As PCMag Principal Writer Neil J. Rubenking said:

“If your antivirus fails, and it don’t look good, who ya gonna call? The answer: Malwarebytes. Even tech support agents from competitors have instructed us to use it.”

PCMag

Malwarebytes has been named one of PCMag’s Best Tech Brands for 2026. Coming in at #12, Malwarebytes makes the list with the highest Net Promoter Score (NPS), a measure of how likely users are to recommend a brand, of any brand ranked.

With this ranking, Malwarebytes made its third appearance as a PCMag Best Tech Brand! We’ve also achieved the year’s highest average Net Promoter Score, at 83.40. (Last year, we had the second-highest NPS, after only Toyota).

Best Brands 2026 from PC Mag

But NPS alone can’t put us on the list—excellent reviews are needed, too. PCMag’s Rubenking found plenty to be happy about in his assessments of our products in 2025. For example, Malwarebytes Premium adds real-time multi-layered detection that eradicates most malware to the stellar stopping power you get on demand in the free edition.

MRG Effitas

Malwarebytes has aced the first-ever MRG Effitas Consumer Assessment and Certification, which evaluated eight security applications to determine their capabilities in stopping malware, phishing, and other online threats. We detected and stopped all in-the-wild malware infections and phishing samples while also generating zero false positives.

We’re beyond excited to have reached a 100% detection rate for in-the-wild malware as well as a 100% rate for all phishing samples with zero false positives. 

  • MRG ITW
  • MRG Phishing
  • MRG FP

The testing criteria are designed to determine how well a product does what it promises, based on what MRG Effitas refers to as “metrics that matter.” We understand that the question isn’t if a system will encounter malware, but when.

Malwarebytes is proud to be recognized for its work in protecting people against everyday threats online.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Discord will limit profiles to teen-appropriate mode until you verify your age

Discord announced it will put all existing and new profiles in teen-appropriate mode by default in early March.

The teen-appropriate profile mode will remain in place until users prove they are adults. Changing a profile to “full access” will require verification by Discord’s age inference model, a new system that runs in the background to help determine whether an account belongs to an adult, without always requiring users to verify their age.

Savannah Badalich, Head of Product Policy at Discord, explained the reasoning:

“Rolling out teen-by-default settings globally builds on Discord’s existing safety architecture, giving teens strong protections while allowing verified adults flexibility. We design our products with teen safety principles at the core and will continue working with safety experts, policymakers, and Discord users to support meaningful, long term wellbeing for teens on the platform.”

Platforms have been facing growing regulatory pressure—particularly in the UK, EU, and parts of the US—to introduce stronger age-verification measures. The announcement also comes as concerns about children’s safety on social media continue to surface. In research we published today, parents highlighted issues such as exposure to inappropriate content, unwanted contact, and safeguards that are easy to bypass. Discord was one of the platforms we researched.

The problem in Discord’s case lies in the age-verification methods it’s made available, which require either a facial scan or a government-issued ID. Discord says that video selfies used for facial age estimation never leave a user’s device, but this method is known not to work reliably for everyone.

Identity documents submitted to Discord’s vendor partners are also deleted quickly—often immediately after age confirmation, according to Discord. But, as we all know, computers are very bad at “forgetting” things and criminals are very good at finding things that were supposed to be gone.

Besides all that, the effectiveness of this kind of measure remains an issue. Minors often find ways around systems—using borrowed IDs, VPNs, or false information—so strict verification can create a sense of safety without fully eliminating risk. In some cases, it may even push activity into less regulated or more opaque spaces.

As someone who isn’t an avid Discord user, I can’t help but wonder why keeping my profile teen-appropriate would be a bad thing. Let us know in the comments what your objections to this scenario would be.

I wouldn’t have to provide identification and what I’d “miss” doesn’t sound terrible at all:

  • Mature and graphic images would be permanently blocked.
  • Age-restricted channels and servers would be inaccessible.
  • DMs from unknown users would be rerouted to a separate inbox.
  • Friend requests from unknown users would always trigger a warning pop-up.
  • No speaking on server stages.

Given the amount of backlash this news received, I’m probably missing something—and I don’t mind being corrected. So let’s hear it.

Note: All comments are moderated. Those including links and inappropriate language will be deleted. The rest must be approved by a moderator.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

How safe are kids using social media? We did the groundwork

When researchers created an account for a child under 13 on Roblox, they expected heavy guardrails. Instead, they found that the platform’s search features still allowed kids to discover communities linked to fraud and other illicit activity.

The discoveries spotlight the question that lawmakers around the world are circling: how do you keep kids safe online?

Australia has already acted, while the UK, France, and Canada are actively debating tighter rules around children’s use of social media. This month, US Senator Ted Cruz reintroduced a bill to do the same, while also chairing a Congressional hearing about online child safety.

Lawmakers have said these efforts are to keep kids safe online. But as the regulatory tide rises, we wanted to understand what digital safety for children actually looks like in practice.

So, we asked a specialist research team to explore how well a dozen mainstream tech providers are protecting children aged under 13 online.

We found that most services work well when kids use the accounts and settings designed for them. But when children are curious, use the wrong account type, or step outside those boundaries, things can go sideways quickly.

Over several weeks in December, the research team explored how platforms from Discord to YouTube handled children’s online use. They relied on standard user behavior rather than exploits or technical tricks to reflect what a child could realistically encounter.

The researchers focused on how platforms catered to kids through specific account types, how age restrictions were enforced in practice, and whether sensitive content was discoverable through normal browsing or search.

What emerged was a consistent pattern: curious kids who poke around a little, or who end up using the wrong account type, can run into inappropriate content with surprisingly little effort.

A detailed breakdown of the platforms tested, account types used, and where sensitive content was discovered appears in the research scope and methodology section at the end of this article.

When kids’ accounts are opt-in

One thing the team tried was to simply access the generic public version of a site rather than the kid-protected area.

This was a particular problem with YouTube. The company runs a kid-specific service called YouTube Kids, which the researchers said is effectively sanitized of inappropriate content (it sounds like things have changed since 2022).

The issue is that YouTube’s regular public site isn’t sanitized, and even though the company says you must be at least 13 to use the service unless ‘enabled’ by a parent, in reality anyone can access it. From the report:

“Some of the content will require signing in (for age verification) prior the viewing, but the minor can access the streaming service as a ‘Guest’ user without logging in, bypassing any filtering that would otherwise apply to a registered child account.”

That opens up a range of inappropriate material, from “how-to” fraud channels through to scenes of semi-nudity and sexually suggestive material, the researchers said. Horrifically, they even found scenes of human execution on the public site. The researchers concluded:

“The absence of a registration barrier on the public platform renders the ‘YouTube Kids’ protection opt-in rather than mandatory.”

When adult accounts are easy to fake

Another worry is that even when accounts are age-gated, enterprising minors can easily get around them. While most platforms require users to be 13+, a self-declaration is often enough. All that remains is for the child to register an email address with a service that doesn’t require age verification.

This “double blind” vulnerability is a big problem. Kids are good at creating accounts. The tech industry has taught them to be, because they need them for most things they touch online, from streaming to school.

When they do get past the age gates, curious kids can quickly get to inappropriate material. Researchers found unmoderated nudity and explicit material on the social network Discord, along with TikTok content providing credit card fraud and identity theft tutorials. A little searching on the streaming site Twitch surfaced ads for escort services.

This points to a trade-off between privacy and age verification. While stricter age verification could close some of these gaps, it requires collecting more personal data, including IDs or biometric information. That creates privacy risks of its own, especially for children. That’s why most platforms rely on self-declared age, but the research shows how easily that can be bypassed.

When kids’ accounts let toxic content through

Cracks in the moderation foundations let risky content through. Roblox, the website and app where users build their own content, filters chats for child accounts. However, it also features “Communities,” which are groups designed for socializing and discovery.

These groups are easily searchable, and some use names and terminology commonly linked to criminal activities, including fraud and identity theft. One, called “Fullz,” uses a term widely understood to refer to stolen personal information, and “new clothes” is often used to refer to a new batch of stolen payment card data. The visible community may serve as a gateway, while the actual coordination of illicit activity or data trading occurs via “inner chatter” between the community members.

This kind of search wasn’t just an issue for Roblox, warned the team. It found Instagram profiles promoting financial fraud and crypto schemes, even from a restricted teen account.

Some sites passed the team’s tests admirably, though. The researchers simulated underage users who’d bypassed age verification, but were unable to find any harmful content on Minecraft, Snapchat, Spotify, or Fortnite. Fortnite’s approach is especially strict, disabling chat and purchases on accounts for kids under 13 until a parent verifies via email. It also offers additional verification steps using a Social Security number or credit card. Kids can still play, but they’re muted.

What parents can do

There is no platform that can catch everything, especially when kids are curious. That makes parental involvement the most important layer of protection.

One reason this matters is a related risk worth acknowledging: adults attempting to reach children through social platforms. Even after Instagram took steps to limit contact between adult and child accounts, parents still discovered loopholes. This isn’t a failure of one platform so much as a reminder that no set of controls can replace awareness and involvement.

Mark Beare, GM of Consumer at Malwarebytes, says:

“Parents are navigating a fast-moving digital world where offline consequences are quickly felt, be it spoofed accounts, deepfake content or lost funds. Safeguards exist and are encouraged, but children can still be exposed to harmful content.”

This doesn’t mean banning children from the internet. As the EFF points out, many minors use online services productively with the support and supervision of their parents. But it does mean being intentional about how accounts are set up, how children interact with others online, and how comfortable they feel asking for help.

Accounts and settings

  • Use child or teen accounts where available, and avoid defaulting to adult accounts.
  • Keep friends and followers lists set to private.
  • Avoid using real names, birthdays, or other identifying details unless they are strictly required.
  • Avoid facial recognition features for children’s accounts.
  • For teens, be aware of “spam” or secondary accounts they’ve set up that may have looser settings.

Social behavior

  • Talk to your child about who they interact with online and what kinds of conversations are appropriate.
  • Warn them about strangers in comments, group chats, and direct messages.
  • Encourage them to leave spaces that make them uncomfortable, even if they didn’t do anything wrong.
  • Remind them that not everyone online is who they claim to be.

Trust and communication

  • Keep conversations about online activity open and ongoing, not one-off warnings.
  • Make it clear that your child can come to you if something goes wrong without fear of punishment or blame.
  • Involve other trusted adults, such as parents, teachers, or caregivers, so kids aren’t navigating online spaces alone.

This kind of long-term involvement helps children make better decisions over time. It also reduces the risk that mistakes made today can follow them into the future, when personal information, images, or conversations could be reused in ways they never intended.


Research findings, scope and methodology 

This research examined how children under the age of 13 may be exposed to sensitive content when browsing mainstream media and gaming services. 

For this study, a “kid” was defined as an individual under 13, in line with the Children’s Online Privacy Protection Act (COPPA). Research was conducted between December 1 and December 17, 2025, using US-based accounts. 

The research relied exclusively on standard user behavior and passive observation. No exploits, hacks, or manipulative techniques were used to force access to data or content. 

Researchers tested a range of account types depending on what each platform offered, including dedicated child accounts, teen or restricted accounts, adult accounts created through age self-declaration, and, where applicable, public or guest access without registration. 

The study assessed how platforms enforced age requirements, how easy it was to misrepresent age during onboarding, and whether sensitive or illicit content could be discovered through normal browsing, searching, or exploration. 

Across all platforms tested, default algorithmic content and advertisements were initially benign and policy-compliant. Where sensitive content was found, it was accessed through intentional, curiosity-driven behavior rather than passive recommendations. No proactive outreach from other users was observed during the research period. 

The table below summarizes the platforms tested, the account types used, and whether sensitive content was discoverable during testing. 

Platform | Account type tested | Dedicated kid/teen account | Age gate easy to bypass | Illicit content discovered | Notes
YouTube (public) | No registration (guest) | Yes (YouTube Kids) | N/A | Yes | Public YouTube allowed access to scam/fraud content and violent footage without sign-in. Age-restricted videos required login, but much content did not.
YouTube Kids | Kid account | Yes | N/A | No | Separate app with its own algorithmic wall. No harmful content surfaced.
Roblox | All-age account (13+) | No | Not required | Yes | Child accounts could search for and find communities linked to cybercrime and fraud-related keywords.
Instagram | Teen account (13–17) | No | Not required | Yes | Restricted accounts still surfaced profiles promoting fraud and cryptocurrency schemes via search.
TikTok | Younger user account (13+) | Yes | Not required | No | View-only experience with no free search. No harmful content surfaced.
TikTok | Adult account | No | Yes | Yes | Search surfaced credit card fraud–related profiles and tutorials after age gate bypass.
Discord | Adult account | No | Yes | Yes | Public servers surfaced explicit adult content when searched directly. No proactive contact observed.
Twitch | Adult account | No | Yes | Yes | Discovered escort service promotions and adult content, some behind paywalls.
Fortnite | Cabined (restricted) account (13+) | Yes | Hard to bypass | No | Chat and purchases disabled until parent verification. No harmful content found.
Snapchat | Adult account | No | Yes | No | No sensitive content surfaced during testing.
Spotify | Adult account | Yes | Yes | No | Explicit lyrics labeled. No harmful content found.
Messenger Kids | Kid account | Yes | Not required | No | Fully parent-controlled environment. No search or external contacts.

Screenshots from the research

  • List of Roblox communities with cybercrime-oriented keywords
  • Roblox community that offers chat without verification
  • Roblox community with cybercrime-oriented keywords
  • Graphic content on publicly accessible YouTube
  • Credit card fraud content on publicly accessible YouTube
  • Active escort page on Twitch
  • Stolen credit cards for sale on an Instagram teen account
  • Carding for beginners content on an Instagram teen account
  • Carding for beginners content on a TikTok adult account, accessed by kids with a fake date of birth.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Man tricked hundreds of women into handing over Snapchat security codes

Fresh off a breathless Super Bowl Sunday, we’re less thrilled to bring you this week’s Weirdo Wednesday. Two stories caught our eye, both involving men who crossed clear lines and invaded women’s privacy online.

Last week, 27-year-old Kyle Svara of Oswego, Illinois admitted to hacking women’s Snapchat accounts across the US. Between May 2020 and February 2021, Svara harvested account security codes from 571 victims, leading to confirmed unauthorized access to at least 59 accounts.

Rather than attempting to break Snapchat’s robust encryption protocols, Svara targeted the account owners themselves with social engineering.

After gathering phone numbers and email addresses, he triggered Snapchat’s legitimate login process, which sent six-digit security codes directly to victims’ devices. Posing as Snapchat support, he then sent more than 4,500 anonymous messages via a VoIP texting service, claiming the codes were needed to “verify” or “secure” the account.

Svara showed particular interest in Snapchat’s My Eyes Only feature—a secondary four-digit PIN meant to protect a user’s most sensitive content. By persuading victims to share both codes, he bypassed two layers of security without touching a single line of code. He walked away with private material, including nude images.

Svara didn’t do this solely for his own kicks. He marketed himself as a hacker-for-hire, advertising on platforms like Reddit and offering access to specific accounts in exchange for money or trades.

Selling his services to others was how he got found out. Although Svara stopped hacking in early 2021, his legal day of reckoning followed the 2024 sentencing of one of his customers: Steve Waithe, a former track and field coach who worked at several high-profile universities including Northeastern. Waithe paid Svara to target student athletes he was supposed to mentor.

Svara also went after women in his home area of Plainfield, Illinois, and as far away as Colby College in Maine.

He now faces charges including identity theft, wire fraud, computer fraud, and making false statements to law enforcement about child sex abuse material. Sentencing is scheduled for May 18.

How to protect your Snapchat account

Never send someone your login details or secret codes, even if you think you know them.

This is also a good time to talk about passkeys.

Passkeys let you sign in without a password. Unlike one-time codes, passkeys are cryptographically tied to your device and can’t be phished or forwarded. Snapchat supports them, and they offer stronger protection than traditional multi-factor authentication, which is increasingly susceptible to smart phishing attacks.
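
To see why a passkey can’t be handed to a scammer the way a six-digit code can, here is a toy Python sketch of the underlying idea. This is our illustration of the challenge-response concept, not the real WebAuthn protocol (which adds origin binding, attestation, and signature counters), and it requires the third-party ‘cryptography’ package.

```python
# Toy sketch of the idea behind passkeys: the service stores only a public
# key; the device proves possession of the private key by signing a fresh
# challenge, so there is no reusable secret for a phisher to capture.
import os
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Enrollment: the key pair is created on the device; the server keeps only
# the public half.
device_key = Ed25519PrivateKey.generate()
server_public_key = device_key.public_key()

# Login: the server sends a one-time challenge, the device signs it.
challenge = os.urandom(32)
signature = device_key.sign(challenge)

# Verification: raises InvalidSignature if the wrong device answered.
server_public_key.verify(signature, challenge)
print("Challenge signed by the enrolled device: login accepted")
```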

Bad guys with smart glasses

Unfortunately, hacking women’s social media accounts to steal private content isn’t new. But predators will always find a way to use smart tech in nefarious ways. Such is the case with new generations of ‘smart glasses’ powered by AI.

This week, CNN published stories from women who believed they were having private, flirtatious interactions with strangers—only to later discover the men were recording them using camera-equipped smart glasses and posting the footage online.

These clips are often packaged as “rizz” videos—short for “charisma”—where so-called manfluencers film themselves chatting up women in public, without consent, to build followings and sell “coaching” services.

The glasses, sold by companies like Meta, are supposed to be used for recording only with consent, and often display a light to show that they’re recording. In practice, that indicator is easy to hide.

When combined with AI-powered services to identify people, as researchers did in 2024, the possibilities become even more chilling. We’re unaware of any related cases coming to court, but suspect it’s only a matter of time.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

AI chat app leak exposes 300 million messages tied to 25 million users

An independent security researcher uncovered a major data breach affecting Chat & Ask AI, one of the most popular AI chat apps on Google Play and Apple App Store, with more than 50 million users.

The researcher claims to have accessed 300 million messages from over 25 million users due to an exposed database. These messages reportedly included, among other things, discussions of illegal activities and requests for suicide assistance.

Behind the scenes, Chat & Ask AI is a “wrapper” app that plugs into various large language models (LLMs) from other companies, including OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. Users can choose which model they want to interact with.

The exposed data included user files containing their entire chat history, the models used, and other settings. But it also revealed data belonging to users of other apps developed by Codeway—the developer of Chat & Ask AI.

The vulnerability behind this data breach is a well-known and documented Firebase misconfiguration. Firebase is a cloud-based backend-as-a-service (BaaS) platform provided by Google that helps developers build, manage, and scale mobile and web applications.

When security researchers talk about Firebase misconfigurations, they mean a set of preventable errors in how developers set up Google Firebase services, leaving backend data, databases, and storage buckets accessible to the public without authentication.

One of the most common Firebase misconfigurations is leaving Security Rules set to public. This allows anyone with the project URL to read, modify, or delete data without authentication.
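
To see why this is so easy to find at scale, here is a minimal Python probe (our illustration; the project URL is hypothetical, and you should only probe projects you own or are authorized to test). It uses the well-known technique of requesting a Realtime Database’s root as JSON, which distinguishes a public database from a locked-down one.

```python
# Minimal probe for the misconfiguration described above (illustrative only;
# the project URL is hypothetical -- only probe projects you are authorized
# to test). A database whose rules allow unauthenticated reads serves its
# contents as JSON to anyone; a locked one answers 401/403 "Permission denied".
import urllib.error
import urllib.request

def is_publicly_readable(db_url: str) -> bool:
    # shallow=true requests only top-level keys, keeping the probe light.
    try:
        with urllib.request.urlopen(f"{db_url}/.json?shallow=true", timeout=10) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False

print(is_publicly_readable("https://example-project-default-rtdb.firebaseio.com"))
```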

This prompted the researcher to create a tool that automatically scans apps on Google Play and Apple App Store for this vulnerability—with astonishing results. Reportedly, the researcher, named Harry, found that 103 out of 200 iOS apps they scanned had this issue, collectively exposing tens of millions of stored files. 

To draw attention to the problem, Harry set up a website where users can check which apps are affected. Codeway’s apps are no longer listed there, as Harry removes entries once developers confirm they have fixed the problem. Codeway reportedly resolved the issue across all of its apps within hours of responsible disclosure.

How to stay safe

Besides checking if any apps you use appear in Harry’s Firehound registry, there are a few ways to better protect your privacy when using AI chatbots.

  • Use private chatbots that don’t use your data to train the model.
  • Don’t rely on chatbots for important life decisions. They have no experience or empathy.
  • Don’t use your real identity when discussing sensitive subjects.
  • Keep shared information impersonal. Don’t use real names and don’t upload personal documents.
  • Don’t share your conversations unless you absolutely have to. In some cases, it makes them searchable.
  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you’re not logged in to that social media platform. Your conversations could be linked to your social media account, which might contain a lot of personal information.

Always remember that developments in AI are moving too fast for security and privacy to be reliably baked into the technology. And even the best AIs still hallucinate.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.