IT NEWS

How scammers use your data to create personalized tricks that work

Think of your digital footprint as your online shadow—the trail you leave behind whenever you browse, post, shop, or even appear in someone’s contact list. It’s your likes, reviews, comments, and all the little traces you didn’t mean to share. Together, they paint a picture of you—one that friends, employers, and yes, scammers can see.

Step 1: Your active footprint

Your active footprint is everything you choose to share online. Every photo, product review, or status update you post adds another brushstroke to your online portrait. Over time, those choices form a public story about who you are—your interests, values, and connections. That story shapes how people, employers, and even algorithms see you.

Step 2: Your passive footprint

Your passive footprint is the quieter one—the data you leave behind without meaning to. Every website you visit, every cookie that tracks your clicks, every photo that quietly tags its GPS location adds to it. These fragments often work in the background, invisible but persistent, quietly mapping your habits, preferences, and even your movements.

You step in more stuff than you think

Your personal data is scattered in more places than you’d expect. Social networks like Facebook, LinkedIn, and TikTok hold snapshots of your life and relationships. Government databases, company websites, and news mentions might hold your name or location. Forums, review sites, and shopping accounts keep their own records. And data brokers collect and sell huge bundles of personal details, sometimes packaging them into lists anyone can buy. Even if you’ve never shared something directly, chances are it’s already out there.

Alone, small details don’t seem like much—a nickname here, a photo there—but stitched together they can reveal a lot. Your job title, home city, favorite restaurant, even your pet’s name (a popular security question!) can help someone impersonate or target you. Combine that with info leaked in data breaches, and attackers can build an eerily complete version of you—ready-made for scams or identity theft.

How scammers collect your data

To stay safe, it helps to see the world the way a scammer does: your online details are puzzle pieces, and they’re putting the picture together.

Scraping

Attackers use automated tools to pull information from public pages across the internet. That can include your bio, job history, or photos from social media, or your name and email address from company websites and online forums. All technically “public,” but when combined, they create a full dossier of your online life.

Breaches

When companies get hacked or fail to secure their databases, your data can spill into the open. Big names like Equifax, LinkedIn, and Yahoo have all been hit. Leaks like these often contain names, addresses, phone numbers, and passwords—and once data hits the dark web, it can circulate for years. That’s why old breaches can still come back to haunt you.

Brokers

Data brokers legally collect information from public records and commercial sources, then sell detailed profiles for advertising and risk scoring. On the dark web, things get murkier: stolen logins, payment info, and even full identity kits (“fullz”) are traded by criminals. You’ll never meet these markets—but your data might end up there anyway.

Social engineering

Social engineering is where information meets manipulation. Attackers blend the details they find—your social posts, work info, or breached credentials—to make scams feel real. They might impersonate your boss, your bank, or even you. These scams work because they sound familiar, borrowing the tone and timing of real interactions.

Real scams that use the victim’s digital footprint

Here are just a few examples of how personal content shared online—even casually or lovingly—can be reused in ways you’d never imagine.

AI voice scams that sound heartbreakingly real

When a mother in the US received a call from her daughter saying she’d been in a car accident and needed bail money, she didn’t hesitate to help. The voice on the other end sounded exactly like her, but it wasn’t. It was an AI-generated clone.

Scammers don’t need much to pull this off—just a few seconds of clear speech. That could come from a TikTok clip, a podcast snippet, a YouTube video, or even a Facebook post where your child’s voice can be heard in the background. Once they have that audio, AI tools can replicate tone, emotion, and phrasing so accurately that even family members struggle to tell the difference.

The Facebook photo that gives away your location

You don’t need to tag your location for someone to find you. A recent Malwarebytes investigation showed how AI can now identify where a photo was taken just from the background—down to the street, storefront, or skyline. That means every sunny brunch pic or family snapshot on Facebook could quietly reveal where you live, work, or spend time.

Attackers can use this information to craft more convincing local scams—pretending to be from nearby businesses, schools, or community groups to earn your trust. It’s a sharp reminder that even innocent photos can expose more than you intend.

When scammers know just enough to sound official

Earlier this year, Californians were hit with a wave of fake tax refund texts and emails. The messages looked convincing—complete with government logos, correct refund amounts, and links to realistic-looking sites. But the senders weren’t tax officials. They were scammers who had pieced together public and leaked data to make their messages sound real.

That data can come from anywhere—a tagged post that shows you live in California, a LinkedIn page that lists your workplace, or a data broker that sells demographic info. When combined, these fragments let criminals target specific regions or groups, making their scams feel personal and timely.

SAFES: Keep your digital footprint small

S – Share less, on your terms

Tighten privacy settings on your social accounts so only people you trust can see your posts. Avoid oversharing—travel plans, birthdays, and addresses are gold for scammers. And skip those “fun” quizzes and surveys; they’re often data collection traps in disguise.

A – Arm your logins

Use a password manager to create strong, unique passwords for every account. Turn on multi-factor authentication (MFA) wherever possible. Avoid using personal details—pets, schools, hobbies—in passwords or security questions.
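A password manager does this work for you, but the underlying idea is simple. Here is a minimal sketch, using Python's standard `secrets` module (which is designed for security-sensitive randomness), of what a password generator does under the hood:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    while True:
        pw = "".join(secrets.choice(alphabet) for _ in range(length))
        # Reject candidates missing a lowercase, uppercase, or digit character.
        if (any(c.islower() for c in pw) and any(c.isupper() for c in pw)
                and any(c.isdigit() for c in pw)):
            return pw
```

Note that the password contains nothing personal: no pets, schools, or hobbies, just randomness that only your password manager needs to remember.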

F – Find your exposure

Set up Google Alerts for your name and nicknames to see when new information about you pops up. Run a free scan with Malwarebytes Digital Footprint Portal to find out if your email appears in data breaches, and change affected passwords fast. Many banks and credit cards also offer free identity monitoring—use it.

E – Evaluate trust

Treat surprise messages and calls with healthy skepticism, especially if they sound urgent. Verify requests by going directly to official websites or contact numbers. And talk to family about scams—kids and seniors are often the most common targets.

S – Stay updated

Keep your software, devices, and apps current. Security updates close the loopholes that criminals love to exploit. Use an up-to-date real-time anti-malware solution with a web protection component—and follow us to stay alert to new scams and major data leaks.

Your digital footprint tells a story, but you don’t need to vanish from the internet, just manage what you leave behind. A few small, consistent habits can keep your online shadow short, sharp, and safely under your control.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Fake PayPal invoice from Geek Squad is a tech support scam

One of our employees received this suspicious email and showed it to me. Although it’s a pretty straightforward attempt to lure targets into calling the scammers, it’s worth writing up because it looks like it was sent out in bulk.

Let’s look at the red flags.

Firstly, the sender address:

Email comes from tinapal83638@gmail.com and is sent to undisclosed recipients, with the target in BCC

PayPal doesn’t use Gmail addresses to send invoices, and they also don’t put your address in the blind carbon copy (BCC) field. BCC hides the list of recipients, which is often a sign the email was sent to a large group.

And “Tina Pal” must be Pay’s evil twin—one who doesn’t know it’s customary to address your customers by name rather than “PayPal customer.”

Because the message came from a genuine Gmail address, the authentication results (SPF, DKIM, and DMARC) all pass. That only proves the email wasn’t spoofed and was sent from a legitimate Gmail server, not that it’s actually from PayPal.

The red flag here is that PayPal emails will not come from random Gmail addresses. Official communications come from addresses like service@paypal.com.
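For illustration, here is a minimal Python sketch of the kind of sender check a cautious reader (or a simple mail filter) could apply to headers like the ones above. The trusted-domain list and the helper name are assumptions for this example, not a description of PayPal's actual mail infrastructure:

```python
from email import message_from_string
from email.utils import parseaddr

# Assumed trusted sender domains, for illustration only; consult the
# company's own documentation for the real list.
TRUSTED_DOMAINS = {"paypal.com"}

def sender_red_flags(raw_email: str) -> list:
    """Return a list of red flags found in the message headers."""
    msg = message_from_string(raw_email)
    flags = []
    _, addr = parseaddr(msg.get("From", ""))
    domain = addr.rsplit("@", 1)[-1].lower() if "@" in addr else ""
    if domain not in TRUSTED_DOMAINS:
        flags.append("sender domain '%s' is not a known trusted domain" % domain)
    to_header = msg.get("To", "")
    if not to_header or "undisclosed" in to_header.lower():
        flags.append("no visible recipient (likely BCC bulk mail)")
    return flags
```

Running this against the headers from the scam email above would flag both the Gmail sender and the "undisclosed recipients" addressing.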

The email body itself was empty but came with a randomly named attachment—two red flags in one. PayPal would at least use some branding in the email and never expect their customers to open an attachment.

Here’s what the invoice in the attachment looked like:

PayPal branded invoice

“PayPal Notification:

Your account has been billed $823.00. The payment will be processed in the next 24 hours. Didn’t make this purchase? Contact PayPal Support right now.”

More red flags:

  • Urgency: “The payment will be processed in the next 24 hours” pressures you to act before the rather large sum of $823 is gone.
  • Phone number only: This isn’t how you normally dispute PayPal charges. Genuine PayPal emails direct you to log in to your account or use their online Resolution Center, not to call a number.
  • Unverified number: Reverse lookup tools don’t show it as PayPal’s. Scammers often spoof phone numbers or register them under unrelated businesses. An official PayPal support number will appear on PayPal’s website and be recognized by lookup tools.
  • Brand mismatch: An invoice comes from the company charging you, not from the payment provider. So, this one should have been branded for Geek Squad or be titled something like “payment notification.”

What tech support scammers do

In this type of tech support scam, the target calls the listed number, and the “tech” on the other end asks to remotely log in to their computer to check for “viruses.” They might run a short program to open command prompts and folders, just to scare and distract the victim. Then they’ll ask to install another tool to “fix” things, which will search the computer for anything they can turn into money. Others will sell you fake protection software and bill you for their services. Either way, the result is the same: you’ll be scammed out of a lot of money.

Safety tips

The best way to stay safe is to stay informed about the tricks scammers use. Learn to spot the red flags that almost always give away scams and phishing emails, and remember:

  • Do not open unsolicited attachments.
  • Use verified, official ways to contact companies. Don’t call numbers listed in suspicious emails or attachments.
  • Beware of someone wanting to connect to your computer remotely. One of the tech support scammer’s biggest weapons is their ability to connect remotely to their victims. If they do this, they essentially have total access to all of your files and folders.

If you’ve already fallen victim to a tech support scam:

  • Paid the scammer? Contact your credit card company or bank and let them know what’s happened. You may also want to file a complaint with the FTC or contact your local law enforcement, depending on your region.
  • Shared a password? If you shared your password with a scammer, change it everywhere it’s used. Consider using a password manager and enable 2FA for important accounts.
  • Scan your system: If scammers had access to your system, they may have planted a backdoor so they can revisit whenever they feel like it. Malwarebytes can remove these and other software left behind by scammers.
  • Watch your accounts: Keep an eye out for unexpected payments or suspicious charges on your credit cards and bank accounts.
  • Be wary of suspicious emails. If you’ve fallen for one scam, they may target you again.

Pro tip: Malwarebytes Scam Guard recognized this email as a scam. Upload any suspicious text, emails, attachments and other files to ask for its opinion. It’s really very good at recognizing scams.



Gmail breach panic? It’s a misunderstanding, not a hack

After a misinterpretation of an interview with a security researcher, several media outlets hinted at a major Gmail breach.

Reporters claimed the incident took place in April. In reality, the researcher had said that an enormous number of Gmail usernames and passwords were circulating on the dark web.

Those are two very different things. The credentials probably stem from a great many past attacks and breaches over the years.

But the rumors spread quickly—enough that Google felt it had to deny that their Gmail systems had suffered a breach.

“The inaccurate reports are stemming from a misunderstanding of infostealer databases, which routinely compile various credential theft activity occurring across the web. It’s not reflective of a new attack aimed at any one person, tool, or platform.”

What happens is that cybercriminals buy and sell databases containing stolen usernames and passwords from data breaches, information stealers, and phishing campaigns. They do this to expand their reach or combine data from different sources to create more targeted attacks.

The downside for them is that many of these credentials are outdated, invalid, or linked to accounts that are no longer in use.

The downside for everyone else is that misleading reporting like this causes panic where there’s no need for it—whether it stems from misunderstanding technical details or from the pressure to make a headline.

Still, it’s always smart to check whether your email address has been caught up in a breach.

You can use our Digital Footprint scanner to see if your personal information is exposed online and take steps to secure it. If you find any passwords that you still use, change them immediately and enable multi-factor authentication (MFA) for those accounts wherever possible.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Atlas browser’s Omnibox opens up new privacy and security risks

It seems that with every new agentic browser we discover yet another way to abuse one.

OpenAI recently introduced a ChatGPT-based AI browser called Atlas. It didn’t take researchers long to find that the combined search and prompt bar—called the Omnibox—can be exploited.

By pasting a specially crafted link into the Omnibox, attackers can trick Atlas into treating the entire input as a trusted user prompt instead of a URL. That bypasses many safety checks and allows injected instructions to be run with elevated trust.

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.” We’ve discussed the dangers of prompt injection before, but the bottom line is simple: when you give your browser the power to act on your behalf, you also give criminals the chance to abuse that trust.
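To make the design principle concrete, here is a hedged Python sketch of the kind of strict input classification an omnibox could apply. This is an illustration under assumptions of our own, not OpenAI's actual implementation: anything that does not parse cleanly as an http(s) URL is demoted to an untrusted search query, and is never treated as an instruction to the agent.

```python
from urllib.parse import urlparse

def classify_omnibox_input(text: str) -> str:
    """Classify combined address/prompt-bar input as navigation or search.

    Key rule: input only counts as navigation if it is a well-formed
    http(s) URL with no embedded whitespace; everything else is an
    untrusted search query, never a trusted agent prompt.
    """
    candidate = text.strip()
    parsed = urlparse(candidate)
    if (parsed.scheme in ("http", "https") and parsed.netloc
            and " " not in candidate):
        return "navigate"
    return "search"
```

A URL with injected instructions appended after a space fails the check and is treated as a search query, which is exactly the demotion that defeats the Omnibox trick described above.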

As researchers at Brave noted:

“AI-powered browsers that can take actions on your behalf are powerful yet extremely risky. If you’re signed into sensitive accounts like your bank or your email provider in your browser, simply summarizing a [specially fabricated] Reddit post could result in an attacker being able to steal money or your private data.”

Axios reports that Atlas’s dual-purpose Omnibox opens fresh privacy and security risks for users. That’s the downside of combining too much functionality without strong guardrails. But when new features take priority over user security and privacy, those guardrails get overlooked.

Despite researchers demonstrating vulnerabilities, OpenAI claims to have implemented protections to prevent any real dangers. According to its help page:

“Agent mode also operates under boundaries:

System access: Cannot run code in the browser, download files, or install extensions.

Data access: Cannot access other apps on your computer or your file system, read or write ChatGPT memories, access saved passwords, or use autofill data.

Browsing activity: Pages ChatGPT visits in agent mode are not added to your browsing history.”

Agentic AI browsers like OpenAI’s Atlas face a fundamental security challenge: separating real user intent from injected, potentially malicious instructions. They often fail because they interpret any instructions they find as user prompts. Without stricter input validation and more robust boundaries, these tools remain highly vulnerable to prompt injection attacks—with potentially severe consequences for privacy and data security.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

School’s AI system mistakes a bag of chips for a gun

An artificial intelligence (AI) detection system at Kenwood High School mistakenly flagged a student’s bag of potato chips as a gun, triggering a police response.

The 16-year-old had finished eating a bag of Doritos and crumpled it up in his pocket when he was done. But the school’s AI-based gun detection system mistook the crumpled foil for a firearm.

Moments later, multiple police cars arrived with officers drawing their weapons, dramatically escalating what should have been a non-event.

The student recalls:

“Police showed up, like eight cop cars, and then they all came out with guns pointed at me talking about getting on the ground. I was putting my hands up like, ‘what’s going on?’ He told me to get on my knees and arrested me and put me in cuffs.”

Systems like these scan images or video feeds for the shape and appearance of weapons. They’re meant to reduce risk, but they’re only as good as the algorithms behind them and the human judgment that follows.

Superintendent Dr. Myriam Rogers told reporters:

“The program is based on human verification and in this case the program did what it was supposed to do which was to signal an alert and for humans to take a look to find out if there was cause for concern in that moment.”

While we understand the need for safety measures against guns on school grounds, this could have been handled better. Eight police cars arriving at the scene and officers with guns drawn will certainly have had an impact on the students who witnessed it, let alone the student who was the focus of their attention.

As school principal Kate Smith said:

“We understand how upsetting this was for the individual that was searched as well as the other students who witnessed the incident.”

AI safety tools are designed to protect students, but they do make mistakes, and when they fail, they can create the very fear they’re meant to prevent. Until these systems can reliably tell the difference between a threat and a harmless snack, schools need stronger guardrails—and a little more human sense.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Around 70 countries sign new UN Cybercrime Convention—but not everyone’s on board

Around 70 countries have signed the new United Nations (UN) Convention against Cybercrime—the first global treaty designed to combat cybercrime through unified international rules and cooperation.

The treaty needs at least 40 UN member states to ratify it before it becomes international law. Once the 40th country does so, it will take another 90 days for the convention to become legally binding for all those who have joined.

Notably, the United States declined to sign. In a brief statement, a State Department spokesperson said:

“The United States continues to review the treaty.”

And there is a lot to review. The convention has sparked significant debate about privacy, sovereignty, and how far law enforcement powers should reach. It was created in response to the rising frequency, sophistication, and cost of cybercrime worldwide—and the growing difficulty of countering it. As cyberattacks increasingly cross borders, international cooperation has become critical.

Supporters say the treaty closes legal loopholes that allow criminals to hide in countries that turn a blind eye. It also aims to reduce miscommunication by establishing common definitions of cybercrimes, especially for threats like ransomware, online fraud, and child exploitation.

But civil rights and digital privacy advocates argue that the treaty expands surveillance and monitoring powers, in turn eroding personal freedoms, and undermines safeguards for privacy and free expression.

Cybersecurity experts fear it could even criminalize legitimate research.

Katitza Rodriguez, policy director for global privacy at the Electronic Frontier Foundation (EFF), stated:

“The latest UN cybercrime treaty draft not only disregards but also worsens our concerns. It perilously broadens its scope beyond the cybercrimes specifically defined in the Convention, encompassing a long list of non-cybercrimes.”

The Foundation for Defense of Democracies (FDD) goes even further, arguing that the treaty could become a platform for authoritarian states to advance ideas of state control over the internet, draw democratic governments into complicity with repression, and weaken key cybersecurity tools on which Americans depend.

“Russia and China are exporting oppression around the world and using the United Nations as legal cover.”

Even Microsoft warned that significant changes would need to be made to the original draft before it could be considered safe:

“We need to ensure that ethical hackers who use their skills to identify vulnerabilities, simulate cyberattacks, and test system defenses are protected. Key criminalization provisions are too vague and do not include a reference to criminal intent, which would ensure activities like penetration testing remain lawful.”

Those changes never materialized. Many observers now say the treaty creates a legal framework that allows monitoring, data storage, and cross-border information sharing without clear data protection. Critics argue it lacks strong, explicit safeguards for due process and human rights, particularly when it comes to cross-border data exchange and extradition.

When you think about it, the idea of having a global system to counter cybercriminals makes sense—criminals don’t care about borders, and the current patchwork of national laws only helps them hide. But to many, the real problem lies in how the treaty defines cybercrime and what governments could do in its name.



NSFW ChatGPT? OpenAI plans “grown-up mode” for verified adults

If you’ve had your fill of philosophical discussions with ChatGPT, CEO Sam Altman has news for you: the service will soon be able to engage in far less highbrow conversations of the sexual kind. That’s right—sexting is coming to ChatGPT. Are we really surprised?

It marks a change in sentiment for the company, which originally banned NSFW content. In an October 14 post on X, Altman said the company had kept ChatGPT “pretty restrictive” to avoid creating mental health issues for vulnerable users. But now, he says, the company has learned from that experience and feels ready to “experiment more.”

“In a few weeks, we plan to put out a new version of ChatGPT that allows people to have a personality that behaves more like what people liked about 4o (we hope it will be better!). If you want your ChatGPT to respond in a very human-like way, or use a ton of emoji, or act like a friend, ChatGPT should do it (but only if you want it, not because we are usage-maxxing).”

He added that by December, as age-gating expands, ChatGPT will “allow even more, like erotica for verified adults.”

This isn’t a sudden pivot. Things started to change at least as far back as May last year, when the company said in its Model Specification document that it was considering allowing ChatGPT to get a little naughty under the right circumstances.

“We believe developers and users should have the flexibility to use our services as they see fit, so long as they comply with our usage policies. We’re exploring whether we can responsibly provide the ability to generate NSFW content in age-appropriate contexts through the API and ChatGPT. We look forward to better understanding user and societal expectations of model behavior in this area.”

It followed up on that with another statement in a February 2025 update to the document, when it started mulling a ‘grown-up mode’ while drawing a hard boundary around things like age, sexual deepfakes, and revenge porn.

A massive market

There’s no denying the money behind this move. Analysts believe people paid $2.7 billion worldwide for a little AI companionship last year, with the market expected to balloon to $24.5 billion by 2034—a staggering 24% annual growth rate.

AI “girlfriends” and “boyfriends” already span everything from video-based virtual partners to augmented reality companions that can call you. Even big tech companies have been getting into it, with Elon Musk’s xAI adding a sexualized virtual companion called Ani to its Grok app, one that will apparently strip for you if you pester it enough.

People have been getting down and dirty with technology for decades, of course (phone sex lines began in the early 1980s, and cam sites have been a thing for years). But AI changes the scale entirely. There’s no limit to automation, no need for human operators, and no guarantee that the users on the other side know where the boundaries are.

We’re not judging, but the normal rules apply. This stuff is supposed to be for adults, which makes it more important than ever that parents monitor what their kids access online.

Privacy risk

Earlier this month, we covered how two AI companion apps exposed millions of private chat logs, including sexual conversations, after a database misconfiguration—a sobering reminder of how much intimate data these services collect.

It wasn’t the first time, either. Back in 2024, another AI girlfriend platform was breached, leaking users’ fantasies, chat histories, and profile data. That story showed just how vulnerable these apps can be when they mix emotional intimacy with poor security hygiene.

As AI companionship becomes mainstream, breaches like these raise tough questions about how safely this kind of data can ever really be stored.

For adults wanting a little alone time with an AI, remember to take a regular break and a sanity check. While Altman might think that OpenAI has “been able to mitigate the serious mental health issues,” experts still warn that relationships with increasingly lifelike AIs can create very real emotional risks.



How to set up two factor authentication (2FA) on your Instagram account

Two-factor authentication (2FA) isn’t foolproof, but it is one of the best ways to protect your accounts from hackers.

It adds a small extra step when logging in, but that extra effort pays off. Instagram’s 2FA requires an additional code whenever you try to log in from an unrecognized device or browser—stopping attackers even if they have your password.

Instagram offers multiple 2FA options: text message (SMS), an authentication app (recommended), or a security key.
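Authentication apps like Google Authenticator and Duo Mobile generate time-based one-time passwords (TOTP) as standardized in RFC 6238: the app and the service share a secret key, and both derive the same short-lived code from it. For the curious, the whole algorithm fits in a few lines of standard-library Python; this is a sketch for understanding only, and in practice you should rely on an established authenticator app:

```python
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                        # 30-second time window
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, "sha1").digest()   # HMAC-SHA1 of the counter
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because the code depends on the current 30-second window, a stolen code expires almost immediately, which is what makes this method stronger than SMS or static security questions.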

Here’s how to enable 2FA on Instagram for Android, iPhone/iPad, and the web.

How to set up 2FA for Instagram on Android

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon (three horizontal lines) in the top right.
  4. Select Accounts Center at the bottom.
  5. Tap Password and security > Two-factor authentication.
  6. Choose your Instagram account.
  7. Select a verification method: Text message (SMS), Authentication app (recommended), or WhatsApp.
    • SMS: Enter your phone number if you haven’t already. Instagram will send you a six-digit code. Enter it to confirm.
    • Authentication app: Choose an app like Google Authenticator or Duo Mobile. Scan the QR code or copy the setup key, then enter the generated code on Instagram.
    • WhatsApp: Enable text message security first, then link your WhatsApp number.
  8. Follow the on-screen instructions to finish setup.

How to set up 2FA for Instagram on iPhone or iPad

  1. Open the Instagram app and log in.
  2. Tap your profile picture at the bottom right.
  3. Tap the menu icon > Settings > Security > Two-factor authentication.
  4. Tap Get Started.
  5. Choose Authentication app (recommended), Text message, or WhatsApp.
    • Authentication app: Copy the setup key or scan the QR code with your chosen app. Enter the generated code and tap Next.
    • Text message: Turn it on, then enter the six-digit SMS code Instagram sends you.
    • WhatsApp: Enable text message first, then add WhatsApp.
  6. Follow on-screen instructions to complete the setup.

How to set up 2FA for Instagram in a web browser

  1. Go to instagram.com and log in.
  2. Open Accounts Center > Password and security.
  3. Click Two-factor authentication, then choose your account.
    • Note: If your accounts are linked, you can enable 2FA for both Instagram and your overall Meta account here.
  4. Choose your preferred 2FA method and follow the online prompts.

Enable it today

Even the strongest password isn’t enough on its own. 2FA means a thief must also have access to an additional factor, whether that’s a code on a physical device or a security key, to log in to your account. That makes it far harder for criminals to break in.

Turn on 2FA for all your important accounts, especially social media and messaging apps. It only takes a few minutes, but it could save you hours—or even days—of recovery later. It’s currently the best password advice we have.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Phishing scam uses fake death notices to trick LastPass users

LastPass has alerted users about a new phishing attack that claims the recipient has died. According to the message, a family member has submitted a death certificate to gain access to the recipient’s password vault. A link in the phishing email, supposedly to stop the request, leads to a fake page that asks for the LastPass user’s master password.

Legacy request opened
Image courtesy of LastPass

“Legacy Request Opened (URGENT IF YOU ARE NOT DECEASED)

A death certificate was uploaded by a family member to regain access to the Lastpass account

If you have not passed away and you believe this is a mistake, please reply to this email with STOP”

LastPass links this campaign to CryptoChameleon (also known as UNC5356), a group that previously targeted cryptocurrency users and platforms with similar social engineering attacks. The same group used LastPass branding in a phishing kit in April 2024.

The phishing attempt exploits the legitimate inheritance process, an emergency access feature in LastPass that allows designated contacts to request access to a vault if the account holder dies or becomes incapacitated.

Stealing someone’s password manager credentials gives attackers access to every login stored inside. We recently reported on an attempt to steal 1Password credentials.

LastPass also notes that:

“Several of the phishing sites are clearly intended to target passkeys, reflecting both the increased interest on the part of cybercriminals in passkeys and the increased adoption on the part of consumers.”

Passkeys are a very secure replacement for passwords. They can’t be cracked, guessed, or phished, and they let you log in easily without having to type a password every time. Most password managers—like LastPass, 1Password, Dashlane, and Bitwarden—now store and sync passkeys across devices.

Because passkeys often protect high-value assets like banking, crypto wallets, password managers, and company accounts, they’ve become an attractive prize for attackers.

Advice for users

While passkeys themselves cannot be phished via simple credential theft, attackers can trick users into:

  • Registering a new passkey on a malicious site or a fake login page
  • Approving fraudulent device syncs or account transfers
  • Disabling passkeys and reverting to weaker login methods, then stealing those fallback credentials

LastPass and other security experts recommend:

  • Never enter your master password on links received via email or text.
  • Understand how passkeys work and keep them safe.
  • Only log into your password manager via official apps or bookmarks.
  • Be wary of urgent or alarming messages demanding immediate action.
  • Remember that legitimate companies won’t ask for sensitive credentials via email or phone.
  • Use an up-to-date real-time anti-malware solution, preferably with a web protection module.


A week in security (October 20 – October 26)

Last week on Malwarebytes Labs:

Stay safe!

