AI-generated passwords are a security risk

Using Artificial Intelligence (AI) to generate your passwords is a bad idea. The AI is likely to hand that same password to a criminal, who can then use it in a dictionary attack—which is when an attacker runs through a prepared list of likely passwords (words, phrases, patterns) with automated tools until one of them works, instead of trying every possible combination.

AI cybersecurity firm Irregular tested ChatGPT, Claude, and Gemini and found that the passwords they generate are “highly predictable,” and not truly random. When they tested Claude, 50 prompts produced just 23 unique passwords. One string appeared 10 times, while many others shared the same structure.

This could turn out to be a problem.

Traditionally, attackers build or download wordlists made of common passwords, real‑world leaks, and patterned variants (words plus numbers and symbols) to use in dictionary attacks. It requires almost no effort to add a thousand or so passwords commonly provided by AI chatbots.
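
To make the mechanics concrete, here is a minimal Python sketch of a dictionary attack against a stolen password hash. The wordlist and the hash are invented for illustration, and real attacks typically target salted hashes or live login forms; the point is only that a prepared candidate list makes guessing cheap.

```python
import hashlib

# Hypothetical wordlist: common passwords plus strings AI chatbots often produce.
wordlist = ["password123", "Summer2026!", "Tr0ub4dor&3", "K9#mPz$vQ2wX"]

# A stolen, unsalted SHA-256 hash (simplified for illustration).
stolen_hash = hashlib.sha256(b"K9#mPz$vQ2wX").hexdigest()

for candidate in wordlist:
    if hashlib.sha256(candidate.encode()).hexdigest() == stolen_hash:
        print(f"Cracked: {candidate}")
        break
else:
    print("No match in wordlist")
```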

AI chatbots are trained to provide answers based on what they’ve learned. They are good at predicting what comes next based on what they already have, not at inventing something completely new.

As the researchers put it:

“LLMs work by predicting the most likely next token, which is the exact opposite of what secure password generation requires: uniform, unpredictable randomness.”

In the past, we explained why computers are not very good at randomness in the first place. Password managers get around this fact by using dedicated cryptographic random number generators that mix in real‑world entropy, instead of the pattern‑based text generation you see with LLMs.

In other words, a good password manager doesn’t “invent” your password the way an AI does. It asks the operating system for cryptographic random bits and turns those directly into characters, so there’s no hidden pattern for attackers to learn.
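
As a rough sketch of that approach, Python's standard library exposes the same kind of OS-backed cryptographic randomness. This illustrates the principle, not any particular password manager's code:

```python
import secrets
import string

# secrets draws from the operating system's cryptographic RNG (os.urandom),
# the same class of source password managers build on.
alphabet = string.ascii_letters + string.digits + string.punctuation

def generate_password(length: int = 20) -> str:
    # Each character is picked independently and uniformly at random,
    # so there is no structure for an attacker's wordlist to capture.
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```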

A website or platform where you submit such a password may tell you it’s strong, but the same basic reasoning behind not reusing passwords applies: what use is a strong password if cybercriminals already have it?

As always, we prefer passkeys over passwords, but we realize this isn’t always an option. If you have to use a password, don’t let an AI make one up for you. It’s just not safe. And if you already did, consider changing it and adding two-factor authentication (2FA) to make the account more secure.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Intimate products maker Tenga spilled customer data

Tenga has confirmed reports published by several outlets that it notified customers of a data breach.

The Japanese manufacturer of adult products appears to have fallen victim to a phishing attack targeting one of its employees. Tenga reportedly wrote in the data breach notification:

“An unauthorized party gained access to the professional email account of one of our employees.”

This unauthorized access exposed the contents of said account’s inbox, potentially including customer names, email addresses, past correspondence, order details, and customer service inquiries.

In its official statement, Tenga said a “limited segment” of US customers who interacted with the company were impacted by the incident. Regarding the scope of the stolen data, it stated:

“The information involved was limited to customer email addresses and related correspondence history. No sensitive personal data, such as Social Security numbers, billing/credit card information, or TENGA/iroha Store passwords were jeopardized in this incident.”

From the wording of Tenga’s online statement, it seems the compromised account was used to send spam emails that included an attachment.

“Attachment Safety: We want to state clearly that there is no risk to your device or data if the suspicious attachment was not opened. The risk was limited to the potential execution of the attachment within the specific ‘spam’ window (February 12, 2026, between 12am and 1am PT).”



We reached out to Tenga about this “suspicious attachment” but have not heard back at the time of writing. We’ll keep you posted.

Tenga proactively contacted potentially affected customers. It advises them to change passwords and remain vigilant about any unusual activity. We would add that affected customers should be on the lookout for sextortion-themed phishing attempts.

What to do if your data was in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.
  • Use our free Digital Footprint scan to see whether your personal information has been exposed online.

What do cybercriminals know about you?

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.

Meta patents AI that could keep you posting from beyond the grave

Tech bros have been wanting to become immortal for years. Until they get there, their fallback might be continuing to post nonsense on social media from the afterlife.

On December 30, 2025, Meta was granted US patent 12513102B2: Simulation of a user of a social networking system using a language model. It describes a system that trains an AI on a user’s posts, comments, chats, voice messages, and likes, then deploys a bot to respond to newsfeeds, DMs, and even simulated audio or video calls.

Filed in November 2023 by Meta CTO Andrew Bosworth, it sounds innocuous enough. Perhaps some people would use it to post their political hot takes while they’re asleep.

Dig deeper, though, and the patent veers from absurd to creepy. It’s designed to be used not just from beyond the pillow but beyond the grave.

From the patent:

“The language model may be used for simulating the user when the user is absent from the social networking system, for example, when the user takes a long break or if the user is deceased.”

A Meta spokesperson told Business Insider that the company has no plans to act on the patent. And tech companies have a habit of laying claim to bizarre ideas that never materialize. But Facebook’s user numbers have stalled, and it presumably needs all the engagement it can get. We already know that the company loves the idea of AI ‘users’, having reportedly piloted them in late 2024, much to human users’ annoyance.

If the company ever did decide to pull the trigger on this technology, it would be a departure from its own memorialization policy, which preserves accounts without changes. One reason the company might not be willing to step over the line is that the world simply isn’t ready for AI conversations with the dead. Other companies have considered and even tested similar systems. In 2020, Microsoft patented a chatbot that would let you talk to AI versions of deceased individuals; its own AI general manager called it disturbing, and it never went into production. In 2022, Amazon demonstrated Alexa mimicking a dead grandmother’s voice from under a minute of audio, framing it as preserving memories. That never launched either.

Some projects that did ship left people wishing they hadn’t. Startup 2Wai’s avatar app originally offered the chance to preserve loved ones as AI avatars. Users called it “nightmare fuel” and “demonic”. The company seems to have pivoted to safer ground like social avatars and personal AI coaches now.

The other thing holding Meta back could be the legal questions. Unsurprisingly for such a new idea, there isn’t a uniform US framework on the use of AI to represent the dead. Several states recognize post-mortem right of publicity, although states like New York limit that to people whose voices and images have commercial value (typically meaning celebrities). California’s AB 1836 specifically targets AI-generated impersonations of the deceased, though.

Meta would also need to tiptoe carefully around the law in Europe. The company had to pause AI training on European users in 2024 under regulatory pressure, but then launched it anyway in March last year. Then it refused to sign the EU’s GPAI Code of Practice last July (the only major AI firm to do so). Meta’s relationship with EU regulators is strained at best.

Europe’s General Data Protection Regulation (GDPR) excludes deceased persons’ data, but Article 85 of the French Data Protection law lets anyone leave instructions about the retention, deletion and communication of their personal data after death. The EU AI Act’s Article 50 (fully applicable this August) will also require AI systems to disclose they are AI, with penalties up to €15 million or 3% of worldwide turnover for companies that don’t comply.

Hopefully Meta really will file this in the “just because we can do it doesn’t mean we should” drawer, and leave erstwhile social media sharers to rest in peace.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Betterment data breach might be worse than we thought

Betterment LLC is an investment advisor registered with the US Securities and Exchange Commission (SEC). The company disclosed a January 2026 incident in which an attacker used social engineering to access a third‑party platform used for customer communications, then abused it to send crypto‑themed phishing messages and exfiltrate contact and identity data for more than a million people.

What makes this particularly concerning is the depth of the exposed information. This isn’t just a list of email addresses. The leaked files include retirement plan details, financial interests, internal meeting notes, and pipeline data. It’s information that gives cybercriminals real context about a person’s finances and professional life.

What’s worse, ransomware group Shiny Hunters says that because Betterment refused to pay its ransom demand, it is publishing the stolen data.

Shiny Hunters claim

While Betterment has not revealed the number of affected customers in its online communications, the general consensus is that data belonging to 1.4 million customers was involved. And now every cybercriminal can download this information at their leisure.

We analyzed some of the data and found one particularly worrying CSV file with detailed data on 181,487 people. This file included information such as:

  • Full names (first and last)
  • Personal email addresses (e.g., Gmail)
  • Work email addresses
  • Company name and employer info
  • Job titles and roles
  • Phone numbers (both mobile and work numbers)
  • Addresses and company websites
  • Plan details—company retirement/401k plans, assets, participants
  • Survey responses, deal and client pipeline details, meeting notes
  • Financial needs/interests (e.g., requesting a securities-backed line of credit for a house purchase)


This kind of data is a gold mine for phishers, who can use it in targeted attacks. It has enough context to craft convincing, individually tailored phishing emails. For example:

  • Addressing someone by their real name, company, and job title
  • Referencing the company’s retirement or financial plans
  • Impersonating Betterment advisors or plan administrators
  • Initiating scam calls about financial advice

Combined with data from other breaches, it could be even worse, opening the door to identity theft.

What to do if your data was in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Job scam uses fake Google Forms site to harvest Google logins

As part of our investigation into a job-themed phishing campaign, we came across several suspicious URLs that all looked like this:

https://forms.google.ss-o[.]com/forms/d/e/{unique_id}/viewform?form=opportunitysec&promo=

The subdomain forms.google.ss-o[.]com is a clear attempt to impersonate the legitimate forms.google.com. The “ss-o” is likely introduced to look like “single sign-on,” an authentication method that allows users to securely log in to multiple, independent applications or websites using one single set of credentials (username and password).
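
Tricks like this work because the part of a hostname that actually matters is the registered domain at the end, not the familiar-looking labels in front. Here's a minimal Python sketch of that check; it naively treats the last two labels as the registered domain, whereas production tools should consult the Public Suffix List:

```python
from urllib.parse import urlparse

def registrable_domain(url: str) -> str:
    # Naive heuristic: take the hostname's last two labels.
    # Real-world code should use the Public Suffix List instead.
    host = urlparse(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

print(registrable_domain("https://forms.google.ss-o.com/viewform"))  # ss-o.com (not Google)
print(registrable_domain("https://forms.google.com/viewform"))       # google.com
```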

Unfortunately, when we tried to visit the URLs, we were redirected to the local Google search website. This is a common phisher’s tactic to stop victims from sharing their personalized links with researchers or online analysis services.

After some digging, we found a file called generation_form.php on the same domain, which we believe the phishing crew used to create these links. The landing page for the campaign was: https://forms.google.ss-o[.]com/generation_form.php?form=opportunitysec

The generation_form.php script does what the name implies: It creates a personalized URL for the person clicking that link.

With that knowledge in hand, we could check what the phish was all about. Our personalized link brought us to this website:

Fake Google Forms site

The greyed-out “form” behind the prompt promises:

  • We’re Hiring! Customer Support Executive (International Process)
  • Are you looking to kick-start or advance your career…
  • The fields in the form: Full Name, Email address, and an essay field “Please describe in detail why we should choose you”
  • Buttons: “Submit” and “Clear form.”

The whole web page emulates Google Forms, including logo images, color schemes, a notice about not “submitting passwords,” and legal links. At the bottom, it even includes the typical Google Forms disclaimer (“This content is neither created nor endorsed by Google.”) for authenticity.

Clicking the “Sign in” button took us to https://id-v4[.]com/generation.php, which has now been taken down. The domain id-v4[.]com has been used in several phishing campaigns for almost a year. In this case, it asked for Google account credentials.

Given the “job opportunity” angle, we suspect links were distributed through targeted emails or LinkedIn messages.

How to stay safe

Lures that promise remote job opportunities are very common these days. Here are a few pointers to help keep you safe from targeted attacks like this:

  • Do not click on links in unsolicited job offers.
  • Use a password manager, which would not have filled in your Google username and password on a fake website.
  • Use an up-to-date, real-time anti-malware solution with a web protection component.

Pro tip: Malwarebytes Scam Guard identified this attack as a scam just by looking at the URL.

IOCs

id-v4[.]com

forms.google.ss-o[.]com

forms.google.ss-o[.]com blocked by Malwarebytes

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Scammers use fake “Gemini” AI chatbot to sell fake “Google Coin”

Scammers have found a new use for AI: creating custom chatbots posing as real AI assistants to pressure victims into buying worthless cryptocurrencies.

We recently came across a live “Google Coin” presale site featuring a chatbot that claimed to be Google’s Gemini AI assistant. The bot guided visitors through a polished sales pitch, answered their questions about investing, projected returns, and ultimately steered victims toward sending an irreversible crypto payment to the scammers.

Google does not have a cryptocurrency. But “Google Coin” has appeared in scams before, so anyone looking it up might find earlier mentions and assume it’s real. And the chatbot was very convincing.

Google Coin Pre-Market

AI as the closer

The chatbot introduced itself as:

“Gemini — your AI assistant for the Google Coin platform.”

It used Gemini-style branding, including the sparkle icon and a green “Online” status indicator, creating the immediate impression that it was an official Google product.

When asked, “Will I get rich if I buy 100 coins?”, the bot responded with specific financial projections. A $395 investment at the current presale price would be worth $2,755 at listing, it claimed, representing “approximately 7x” growth. It cited a presale price of $3.95 per token, an expected listing price of $27.55, and invited further questions about “how to participate.”
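
The bot's arithmetic is internally consistent, which is part of what makes the script sound plausible. A quick check in Python, using only the figures the bot quoted:

```python
presale_price = 3.95    # quoted presale price per token
listing_price = 27.55   # "expected" listing price the bot claimed
investment = 395.00

tokens = investment / presale_price        # 100 tokens
claimed_value = tokens * listing_price     # $2,755
multiple = claimed_value / investment      # ~6.97, the "approximately 7x"
print(f"{tokens:.0f} tokens -> ${claimed_value:,.2f} ({multiple:.2f}x)")
```

The math holds up; it's the listing price that is pure fiction.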

This is the kind of personalized, responsive engagement that used to require a human scammer on the other end of a Telegram chat. Now the AI does it automatically.

Fake Gemini chatbot

A persona that never breaks

What stood out during our analysis was how tightly controlled the bot’s persona was. We found that it:

  • Claimed consistently to be “the official helper for the Google Coin platform”
  • Refused to provide any verifiable company details, such as a registered entity, regulator, license number, audit firm, or official email address
  • Dismissed concerns and redirected them to vague claims about “transparency” and “security”
  • Refused to acknowledge any scenario in which the project could be a scam
  • Redirected tougher questions to an unnamed “manager” (likely a human closer waiting in the wings)

When pressed, the bot didn’t get confused or break character. It looped back to the same scripted claims: a “detailed 2026 roadmap,” “military-grade encryption,” “AI integration,” and a “growing community of investors.”

Whoever built this chatbot locked it into a sales script designed to build trust, overcome doubt, and move visitors toward one outcome: sending cryptocurrency.

Scripted fake Gemini chatbot

Why AI chatbots change the scam model

Scammers have always relied on social engineering. Build trust. Create urgency. Overcome skepticism. Close the deal.

Traditionally, that required human operators, which limited how many victims could be engaged at once. AI chatbots remove that bottleneck entirely.

A single scam operation can now deploy a chatbot that:

  • Engages hundreds of visitors simultaneously, 24 hours a day
  • Delivers consistent, polished messaging that sounds authoritative
  • Impersonates a trusted brand’s AI assistant (in this case, Google’s Gemini)
  • Responds to individual questions with tailored financial projections
  • Escalates to human operators only when necessary

This matches a broader trend identified by researchers. According to Chainalysis, roughly 60% of all funds flowing into crypto scam wallets were tied to scammers using AI tools. AI-powered scam infrastructure is becoming the norm, not the exception. The chatbot is just one piece of a broader AI-assisted fraud toolkit—but it may be the most effective piece, because it creates the illusion of a real, interactive relationship between the victim and the “brand.”

The bait: a polished fake

The chatbot sits on top of a convincing scam operation. The Google Coin website mimics Google’s visual identity with a clean, professional design, complete with the “G” logo, navigation menus, and a presale dashboard. It claims to be in “Stage 5 of 5” with over 9.9 million tokens sold and a listing date of February 18—all manufactured urgency.

To borrow credibility, the site displays logos of major companies—OpenAI, Google, Binance, Squarespace, Coinbase, and SpaceX—under a “Trusted By Industry” banner. None of these companies have any connection to the project.

If a visitor clicks “Buy,” they’re taken to a wallet dashboard that looks like a legitimate crypto platform, showing balances for “Google” (on a fictional “Google-Chain”), Bitcoin, and Ethereum.

The purchase flow lets users buy any number of tokens they want and generates a corresponding Bitcoin payment request to a specific wallet address. The site also layers on a tiered bonus system that kicks in at 100 tokens and scales up to 100,000: buy more and the bonuses climb from 5% up to 30% at the top tier. It’s a classic upsell tactic designed to make you think it’s smarter to spend more.
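
As a sketch of how that upsell ladder works, here's a small Python model. Only the 100-token entry point and the 30% top tier at 100,000 tokens come from the site; the intermediate breakpoints below are hypothetical:

```python
# Tier table: (minimum tokens, bonus rate). The entry and top tiers are from
# the scam site; the middle steps are invented for illustration.
TIERS = [(100_000, 0.30), (10_000, 0.20), (1_000, 0.10), (100, 0.05)]

def bonus_rate(tokens: int) -> float:
    for threshold, rate in TIERS:
        if tokens >= threshold:
            return rate
    return 0.0

for qty in (50, 100, 1_000, 10_000, 100_000):
    print(f"{qty:>7} tokens -> {bonus_rate(qty):.0%} bonus")
```

Each threshold dangles a bigger bonus just out of reach, nudging the victim to raise their spend.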

Every payment is irreversible. There is no exchange listing, no token with real value, and no way to get your money back.

Waiting for payment

What to watch for

We’re entering an era where the first point of contact in a scam may not be a human at all. AI chatbots give scammers something they’ve never had before: a tireless, consistent, scalable front-end that can engage victims in what feels like a real conversation. When that chatbot is dressed up as a trusted brand’s official AI assistant, the effect is even more convincing.

According to the FTC’s Consumer Sentinel data, US consumers reported losing $5.7 billion to investment scams in 2024 (more than any other type of fraud, and up 24% on the previous year). Cryptocurrency remains the second-largest payment method scammers use to extract funds, because transactions are fast and irreversible. Now add AI that can pitch, persuade, and handle objections without a human operator—and you have a scalable fraud model.

AI chatbots on scam sites will become more common. Here’s how to spot them:

They impersonate known AI brands. A chatbot calling itself “Gemini,” “ChatGPT,” or “Copilot” on a third-party crypto site is almost certainly not what it claims to be. Anyone can name a chatbot anything.

They won’t answer due diligence questions. Ask what legal entity operates the platform, what financial regulator oversees it, or where the company is registered. Legitimate operations can answer those questions; scam bots try to avoid them (and if they do answer, verify it).

They project specific returns. No legitimate investment product promises a specific future price. A chatbot telling you that your $395 will become $2,755 is not giving you financial information—it’s running a script.

They create urgency. Pressure tactics like “stage 5 ends soon,” “listing date approaching,” and “limited presale” are designed to push you into making fast decisions.

How to protect yourself

Google does not have a cryptocurrency. It has not launched a presale. And its Gemini AI is not operating as a sales assistant on third-party crypto sites. If you encounter anything suggesting otherwise, close the tab.

  • Verify claims on the official website of the company being referenced.
  • Don’t rely on a chatbot’s branding. Anyone can name a bot anything.
  • Never send cryptocurrency based on projected returns.
  • Search the project name along with “scam” or “review” before sending any money.
  • Use web protection tools like Malwarebytes Browser Guard, which is free to use and blocks known and unknown scam sites.

If you’ve already sent funds, report it to your local law enforcement, the FTC at reportfraud.ftc.gov, and the FBI’s IC3 at ic3.gov.

IOCs

0xEc7a42609D5CC9aF7a3dBa66823C5f9E5764d6DA

98388xymWKS6EgYSC9baFuQkCpE8rYsnScV4L5Vu8jt

DHyDmJdr9hjDUH5kcNjeyfzonyeBt19g6G

TWqzJ9sF1w9aWwMevq4b15KkJgAFTfH5im

bc1qw0yfcp8pevzvwp2zrz4pu3vuygnwvl6mstlnh6

r9BHQMUdSgM8iFKXaGiZ3hhXz5SyLDxupY


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Chrome “preloading” could be leaking your data and causing problems in Browser Guard

This article explains why Chrome’s “preloading” feature can cause scary-looking blocks in Malwarebytes Browser Guard and how to turn it off.

Modern browsers want to provide content instantly. To do that, Chrome includes a feature called page preloading. When this is enabled, Chrome doesn’t just wait for you to click a link. It guesses what you’re likely to click next and starts loading those pages in the background—before you decide whether to visit them.

That guesswork happens in several places. When you type a search into the address bar, Chrome may start preloading one or more of the top search results so that, if you click them, they open almost immediately. It can also preload pages that are linked from the site you’re currently on, based on Google’s prediction that they’re “likely next steps.” All of this happens quietly, without any extra tabs opening, and often without any obvious sign that more pages are being fetched.

From a performance point of view, that’s clever. From a privacy and security point of view, it’s more complicated.

Those preloaded pages can run code, drop cookies, and contact servers, even if you never actually visit them in the traditional sense. In other words, your browser can talk to a site you didn’t consciously choose to open.

Malwarebytes Browser Guard inspects web traffic and blocks connections to domains it considers malicious or suspicious. So, if Chrome decides to preload a search result that leads to a site on our blocklist, Browser Guard will still do its job and stop that background connection. The result can be confusing: You see a warning page (called a block page) for a site you don’t recognize and are sure you never clicked.

Nothing unusual is happening there, and it does not mean your browser is “clicking links by itself.” It simply means Chrome’s preloading feature made a behind-the-scenes request, and Browser Guard intercepted it as designed. Other privacy tools take a similar approach. Some popular content blockers disable preloading by default because it leaks more data and can contact unwanted sites.

For now, the simplest way to stop these unexpected block pages is to turn off preloading in Chrome’s settings, which prevents those speculative background requests.

How to manage Chrome’s preloading setting

We recommend turning off page preloading in Chrome to protect your browsing privacy and to stop seeing unexpected block pages when searching the web. If you don’t want to turn off page preloading, you can try using a different browser and repeating your search.

To turn off page preloading:

  1. In your browser’s address bar, enter: chrome://settings
  2. In the left sidebar, click Performance.
  3. Scroll down to Speed, then toggle Preload pages off.
How to turn preload pages on and off

We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Scam Guard for desktop: A second set of eyes for suspicious moments 

Scams aren’t so obvious anymore. They’re well written, grammatically clean, and can lead victims to very convincing branded webpages. Scammers increasingly use AI tools to clone sites and create highly sophisticated scams at scale, so don’t expect to rely on spotting obvious typos anymore.

That’s why Scam Guard, Malwarebytes’ free, AI-powered scam detection assistant, is now available on Windows and Mac. Previously mobile-only, Scam Guard helps you quickly figure out whether something you’re looking at is risky, all before you click, reply, or share. 

When something feels off but you’re not sure what

Scams show up everywhere: emails, texts, pop-ups, messages, and websites that look legitimate. But when you’re moving fast, it’s easy to slip up. 

Scam Guard is designed for exactly those moments. If you’re unsure about a message or link, you can ask Scam Guard to take a look. It uses AI to analyze the content and give you a clear, fast assessment so you can decide what to do next with more confidence. 

How Scam Guard helps 

Scam Guard provides a quick reality check just when you need it: 

  • Real-time threat intelligence: An AI-powered chat companion backed by decades of Malwarebytes threat intelligence and cybersecurity expertise. Get instant verdicts you can trust, plus clear next steps.
  • Comprehensive scam detection: It flags suspicious messages and links, spots common scam tactics, and explains why something is risky. It covers romance, phishing, financial, text, robocall, and shipping scams, and more.
  • Built for where scams start: Works right on your desktop, where many scams begin.
  • 24/7 support: Available around the clock, so you can get help anytime you need it.

Stop wondering to yourself “Is this legit?” With Scam Guard, you get an answer you can trust.  

Now on desktop, right where scams happen 

Many scams target users while they’re on their computers—checking email, browsing the web, or managing accounts. Bringing Scam Guard to Windows and Mac helps where it’s most useful. 

Whether you’re reviewing an unexpected message, a pop-up that feels urgent, or a deal that sounds a little too good, Scam Guard gives you a smarter way to pause and check before reacting.  Here’s how to share a scam with Scam Guard on your computer.

Extra protection, without extra stress 

Staying safe online shouldn’t mean becoming suspicious of everything. It should mean having the right tools when something doesn’t add up. Scam Guard is there to help you slow down, spot warning signs, and avoid costly mistakes. It makes it easier to protect yourself from scams.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard. Submit a screenshot, paste suspicious content, or share a link, text or phone number, and we’ll tell you if it’s a scam or legit. Available with Malwarebytes Premium Security for all your devices, and in the Malwarebytes app for iOS and Android.

Update Chrome now: Zero-day bug allows code execution via malicious webpages

Google has issued a patch for a high‑severity Chrome zero‑day, tracked as CVE‑2026‑2441, a memory bug in how the browser handles certain font features that attackers are already exploiting.

CVE-2026-2441 has the questionable honor of being the first Chrome zero-day of 2026. Google considered it serious enough to issue a separate update of the stable channel for it, rather than wait for the next major release.

How to update Chrome

The latest version number is 145.0.7632.75/76 for Windows and macOS, and 145.0.7632.75 for Linux. So, if your Chrome is on version 145.0.7632.75 or later, it’s protected from this vulnerability.

The easiest way to update is to allow Chrome to update automatically. But you can end up lagging behind if you never close your browser or if something goes wrong, such as an extension preventing the update.

To update manually, click the More menu (three dots), then go to Settings > About Chrome. If an update is available, Chrome will start downloading it. Restart Chrome to complete the update, and you’ll be protected against this vulnerability.

Chrome is up to date
Chrome at version 145.0.7632.76 is up to date

You can also find step-by-step instructions in our guide to how to update Chrome on every operating system.

Technical details

Google confirms it has seen active exploitation but is not sharing who is being targeted, how often, or detailed indicators yet.

But we can derive some information from what we know.

The vulnerability is a use‑after‑free issue in Chrome’s CSS font feature handling (CSSFontFeatureValuesMap), which is part of how websites display and style text. More specifically, the root cause is an iterator invalidation bug: Chrome would loop over a set of font feature values while also changing that set, leaving the loop pointing at stale data, which an attacker then managed to turn into code execution.
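
To illustrate the pattern (this is not Chrome's actual code), here is what mutating a collection mid-iteration looks like in Python. CPython detects the mutation and raises an error; C++ offers no such guardrail, so the same pattern silently leaves the iterator pointing at freed memory, which is what attackers exploit:

```python
# Illustrative only: mutating a set while iterating over it.
font_features = {"liga", "smcp", "swsh"}

try:
    for feature in font_features:
        font_features.discard("liga")  # changes the set mid-iteration
except RuntimeError as err:
    print(f"Caught: {err}")  # "Set changed size during iteration"
```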

Use-after-free (UAF) is a type of software vulnerability where a program attempts to access a memory location after it has been freed. That can lead to crashes or, in some cases, lets an attacker run their own code.

The CVE record says, “Use after free in CSS in Google Chrome prior to 145.0.7632.75 allowed a remote attacker to execute arbitrary code inside a sandbox via a crafted HTML page.” (Chromium security severity: High)

This means an attacker could craft a special website, or other HTML content, that would run code inside the Chrome browser’s sandbox.

Chrome’s sandbox is like a secure box around each website tab. Even if something inside the tab goes rogue, it should be confined and not able to tamper with the rest of your system. It limits what website code can touch in terms of files, devices, and other apps, so a browser bug ideally only gives an attacker a foothold in that restricted environment, not full control of the machine.

Running arbitrary code inside the sandbox is still dangerous because the attacker effectively “becomes” that browser tab. They can see and modify anything the tab can access. Even without escaping to the operating system, this is enough to steal accounts, plant backdoors in cloud services, or reroute sensitive traffic.

If chained with a vulnerability that allows a process to escape the sandbox, an attacker can move laterally, install malware, or encrypt files, as with any other full system compromise.

How to stay safe

To protect your device against attacks exploiting this vulnerability, you’re strongly advised to update as soon as possible. Here are some more tips to avoid becoming a victim, even before a zero-day is patched:

  • Don’t click on unsolicited links in emails, messages, unknown websites, or on social media.
  • Enable automatic updates and restart regularly. Many users leave browsers open for days, which delays protection even if the update is downloaded in the background.
  • Use an up-to-date, real-time anti-malware solution which includes a web protection component.

Users of other Chromium-based browsers can expect to see a similar update.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Hobby coder accidentally creates vacuum robot army

Sammy Azdoufal wanted to steer his robot vacuum with a PS5 controller. Like any good maker, he thought it would be fun to drive a new DJI Romo around manually. He ended up gaining access to an army of robotic cleaners that gave him eyes into thousands of homes.

For purely playful reasons, Azdoufal used Anthropic’s Claude Code AI coding assistant to reverse-engineer his Romo’s communication protocols. But when his homebrew app connected to DJI’s servers, roughly 7,000 robot vacuums across 24 countries started answering.

He could watch their live camera feeds, listen through onboard microphones, and generate floor plans of homes he’d never visited. With just a 14-digit serial number, he pinpointed a Verge journalist’s robot, confirmed it was cleaning the living room at 80% battery, and produced an accurate map of the house from another country.

The technical failure was almost comically basic. DJI’s MQTT message broker had no topic-level access controls. Once you authenticated with a single device token, you could see traffic from other devices in plaintext.
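
For context, MQTT is a publish/subscribe protocol in which devices post messages to named topics on a central broker. A broker with proper access controls restricts each client to its own topics; without them, one valid login can subscribe to everything. Here's a hedged sketch using the open-source paho-mqtt library; the broker hostname and credentials are placeholders, not DJI's:

```python
import paho.mqtt.client as mqtt  # pip install paho-mqtt

def on_message(client, userdata, msg):
    # With no topic-level ACLs, every device's traffic lands here.
    print(msg.topic, msg.payload[:80])

client = mqtt.Client(mqtt.CallbackAPIVersion.VERSION2)
client.username_pw_set("DEVICE_SERIAL", "DEVICE_TOKEN")  # one device's own credentials
client.on_message = on_message
client.tls_set()  # TLS protects the connection, not what the broker lets you read
client.connect("broker.example.com", 8883)
client.subscribe("#")  # "#" is the MQTT wildcard: every topic on the broker
client.loop_forever()
```

A correctly configured broker would reject that wildcard subscription, or deliver only the topics this one device is authorized to read.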

It wasn’t only vacuums that answered back. DJI’s Power portable battery stations, which run on the same MQTT infrastructure, also showed up. These are home-backup generators expandable to 22.5kWh, marketed for keeping your house running during outages.

What makes this different from a conventional security discovery is how it happened. Azdoufal used Claude Code to decompile DJI’s mobile app, understand its protocol, extract his own authentication token, and build a custom client.

AI coding tools are lowering the bar for advanced offensive security. The population capable of probing Internet of Things (IoT) protocols just got much, much larger, further eroding any remaining faith in security through obscurity.

Why plenty of IoT vacuum cleaners suck

This isn’t the first time someone has remotely pwned a robot vacuum cleaner. In 2024, hackers commandeered Ecovacs Deebot X2 vacuums across US cities, shouting slurs through speakers and chasing pets around. Ecovacs’s PIN protection was checked only by the app, never by the server or the device.

Last September, South Korea’s consumer watchdog tested six brands. Samsung and LG fared well, but the watchdog found serious flaws in three Chinese models. Dreame’s X50 Ultra allowed remote camera activation. Researcher Dennis Giese later reported a TLS vulnerability in Dreame’s app to CISA; Dreame didn’t respond to CISA’s queries.

The pattern keeps repeating: manufacturers ship vacuums with textbook security failures, ignore researchers, then scramble when journalists publish.

DJI’s initial response made things worse. Spokesperson Daisy Kong told The Verge the flaw had been fixed the prior week. That statement arrived about thirty minutes before Azdoufal demonstrated thousands of robots, including the journalist’s own review unit, still reporting in live. DJI later issued a fuller statement acknowledging a backend permission validation issue and two patches, on February 8 and 10.

DJI said that TLS encryption was always in place, but Azdoufal says that protects the connection, not what’s inside it. He also told The Verge that additional vulnerabilities remain unpatched, including a PIN bypass on the camera feed.

Regulators are applying pressure

Regulation is arriving, slowly. The EU’s Cyber Resilience Act will require mandatory security-by-design for all connected products sold in the bloc by December 2027, with fines up to €15 million. The UK’s PSTI Act, in force since April 2024, became the world’s first law banning default passwords on smart devices. The US Cyber Trust Mark, by contrast, is voluntary. These frameworks technically apply regardless of where the manufacturer sits. In practice, enforcing fines on a Shenzhen company that ignores CISA coordination requests is a different proposition entirely.

How to stay safe

There are practical steps you can take:

  • Check independent security testing before buying connected devices
  • Place IoT devices on a separate guest network
  • Keep firmware updated
  • Disable features you don’t need

And ask yourself whether a vacuum really needs a camera. Many LiDAR-only models navigate effectively without video. If your device includes a camera or microphone, consider whether you’re comfortable with that exposure—or physically cover the lens when not in use.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.