IT NEWS

Deepfakes, AI resumes, and the growing threat of fake applicants

Recruiters expect the odd exaggerated resume, but many companies, including us here at Malwarebytes, are now dealing with something far more serious: job applicants who aren’t real people at all.

From fabricated identities to AI-generated resumes and outsourced impostor interviews, hiring pipelines have become a new way for attackers to sneak into organizations.

Fake applicants aren’t just a minor HR inconvenience anymore; they’re a genuine security risk. So, what’s the purpose behind it, and what should you look out for?

How these fake applicants operate

These applicants don’t just fire off a sketchy resume and hope for the best. Many use polished, coordinated tactics designed to slip through screening.

AI-generated resumes

AI-generated resumes are now one of the most common signs of a fake applicant. Language models can produce polished, keyword-heavy resumes in seconds, and scammers often generate dozens of variations to see which one gets past an Applicant Tracking System. In some cases, entire profiles are generated at the same time.

These resumes often look flawless on paper but fall apart when you ask about specific projects, timelines, or achievements. Hiring teams have reported waves of nearly identical resumes for unrelated positions, or applicants whose written materials are far more detailed than anything they can explain in conversation. Some have even received multiple resumes with the same formatting quirks, phrasing, or project descriptions.

Fake or borrowed identities

Impersonation is common. Scammers use AI-generated or stolen profile photos, fake addresses, and VoIP phone numbers to look legitimate. LinkedIn activity is usually sparse, or you’ll find several nearly identical profiles using the same name with slightly different skills.

At Malwarebytes, as in this Register article, we’ve noticed that the details applicants provide don’t always match what we see during the interview. In some cases, the same name and phone number have appeared across multiple applications, each supported by a freshly tailored resume. In many instances, the applicant claims to be located in one country but calls from another entirely, usually somewhere in Asia.

Outsourced, scripted, and deepfake interviews

Fraudulent interviews tend to follow a familiar pattern. Introductions are short and vague, and answers arrive after long, noticeable pauses, as if the person is being coached off-screen. Many try to keep the camera off, or ask to complete tests offline instead of live.

In more advanced cases, you might see the telltale signs of real-time filters or deepfake tools, like mismatched lip-sync, unnatural blinking, or distorted edges. Most scammers still rely on simpler tricks like camera avoidance or off-screen coaching, but there have been reports of attackers using deepfake video or voice clones in interviews. It’s still rare, but it shows how quickly these tools are evolving.

Why they’re doing it

Scammers have a range of motives, from fraud to full system access.

Financial gain

For some groups, the goal is simple: money. They target remote, well-paid roles and then subcontract the work to cheaper labor behind the scenes. The fraudulent applicant keeps the salary while someone else quietly does the job at a fraction of the cost. It’s a volume game, and the more applications they get through, the more income they can generate.

Identity or documentation fraud

Others are trying to build a paper trail. A “successful hire” can provide employment verification, payroll history, and official contract letters. These documents can later support visa applications, bank loans, or other kinds of identity or financial fraud. In these cases, the scammer may never even intend to start work. They just need the paperwork that makes them look legitimate.

Algorithm testing and data harvesting

Some operations use job applications as a way to probe and learn. They send out thousands of resumes to test how screening software responds, to reverse-engineer what gets past filters, and to capture recruiter email patterns for future campaigns. By doing this at scale, they train automation that can mimic real applicants more convincingly over time.

System access for cybercrime

This is where the stakes get higher. Landing a remote role can give scammers access to internal systems, company data, and intellectual property—anything the job legitimately touches.

Even when the scammer isn’t hired, simply entering your hiring pipeline exposes internal details: how your team communicates, who makes what decisions, which roles have which tools. That information can be enough to craft a convincing impersonation later. At that point, the hiring process becomes an unguarded door into the organization.

The wider risk (not just to recruiters)

Recruiters aren’t the only ones affected. Everyday people on LinkedIn or job sites can get caught in the fallout too.

Fake applicant networks rely on scraping public profiles to build believable identities. LinkedIn added anti-bot checks in 2023, but fake profiles still get through, which means your name, photo, or job history could be copied and reused without your knowledge.

They also send out fake connection requests that lead to phishing messages, malicious job offers, or attempts to collect personal information. Recent research from the University of Portsmouth found that fake social media profiles are more common than many people realize:

80% of respondents said they’d encountered suspicious accounts, and 77% had received link requests from strangers.

It’s a reminder that anyone on LinkedIn can be targeted, not just recruiters, and that these profiles often work by building trust first and slipping in malicious links or requests later.

How recruiters can protect themselves

You can tighten screening without discriminating or adding friction by following these steps:

Verify identity earlier

Start with a camera-on video call whenever you can. Look for the subtle giveaways of filters or deepfakes: unnatural blinking, lip-sync that’s slightly off, or edges of the face that seem to warp or lag. If something feels odd, a simple request like “Please adjust your glasses” or “touch your cheek for a moment” can quickly show whether you’re speaking to a real person.

Cross-check details

Make sure the basics line up. The applicant’s face should match their documents, and their time zone should match where they say they live. Work history should hold up when you check references. A quick search can reveal duplicate resumes, recycled profiles, or LinkedIn accounts with only a few months of activity.

Watch for classic red flags

Most fake applicants slip when the questions get personal or specific. A resume that’s polished but hollow, a communication style that changes between messages, or hesitation when discussing timelines or past roles can all signal coaching. Long pauses before answers often hint that someone off-screen may be feeding responses.

Secure onboarding

If someone does pass the process, treat early access carefully. Limit what new hires can reach, require multi-factor authentication from day one, and make sure their device has been checked before it touches your network. Bringing in your security team early helps ensure that recruitment fraud doesn’t become an accidental entry point.


Final thoughts

Recruiting used to be about finding the best talent. Today, it often includes identity verification and security awareness.

As remote work becomes the norm, scammers are getting smarter. Fake applicants might show up as a nuisance, but the risks range from compliance issues to data loss—or even full-scale breaches.

Spotting the signs early, and building stronger screening processes, protects not just your hiring pipeline, but your organization as a whole.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

How phishers hide banking scams behind free Cloudflare Pages

During a recent investigation, we uncovered a phishing operation that combines free hosting on developer platforms with compromised legitimate websites to build convincing banking and insurance login portals. These fake pages don’t just grab a username and password; they also ask for answers to secret questions and other “backup” data that attackers can use to bypass multi-factor authentication and account recovery protections.

Instead of sending stolen data to a traditional command-and-control server, the kit forwards every submission to a Telegram bot. That gives the attackers a live feed of fresh logins they can use right away. It also sidesteps many domain-based blocking strategies and makes swapping infrastructure very easy.

Phishing groups increasingly use services like Cloudflare Pages (*.pages.dev) to host their fake portals, sometimes copying a real login screen almost pixel for pixel. In this case, the actors spun up subdomains impersonating financial and healthcare providers. The first one we found was impersonating the heartland bank Arvest.

Fake Arvest login page

On closer look, the phishing site shows visitors two “failed login” screens, prompts for security questions, and then sends all credentials and answers to a Telegram bot.

Comparing their infrastructure with other sites, we found one impersonating a much more widely known brand: United Healthcare.

HealthSafe ID overpayment refund

In this case, the phishers abused a compromised website as a redirector. Attackers took over a legitimate-looking domain like biancalentinidesigns[.]com and saddled it with long, obscure paths for phishing or redirection. Emails link to the real domain first, which then forwards the victim to the active Cloudflare Pages phishing site. Messages containing a familiar or benign-looking domain are more likely to slip past spam filters than links that go straight to an obviously new cloud-hosted subdomain.

Cloud-based hosting also makes takedowns harder. If one *.pages.dev hostname gets reported and removed, attackers can quickly deploy the same kit under another random subdomain and resume operations.

The phishing kit at the heart of this campaign follows a multi-step pattern designed to look like a normal sign-in flow while extracting as much sensitive data as possible.

Instead of using a regular form submission to a visible backend, JavaScript harvests the fields and bundles them into a message sent straight to the Telegram API. That message can include the victim’s IP address, user agent, and all captured fields, giving criminals a tidy snapshot they can use to bypass defenses or sign in from a similar environment.

The exfiltration mechanism is one of the most worrying parts. Rather than pushing credentials to a single hosted panel, the kit posts them into one or more Telegram chats using bot tokens and chat IDs hardcoded in the JavaScript. As soon as a victim submits a form, the operator receives a message in their Telegram client with the details, ready for immediate use or resale.

This approach offers several advantages for the attackers: they can change bots and chat IDs frequently, they do not need to maintain their own server, and many security controls pay less attention to traffic that looks like a normal connection to a well-known messaging platform. Cycling multiple bots and chats gives them redundancy if one token is reported and revoked.
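Because the tokens are hardcoded, defenders can hunt for them. Telegram bot tokens follow a publicly documented shape (a numeric bot ID, a colon, then a 35-character secret), so a page’s JavaScript can be scanned for that pattern. This is a minimal defensive sketch; the sample string below is invented, not taken from the actual kit:

```python
import re

# Telegram bot tokens have a documented shape: numeric bot ID, a colon,
# then a 35-character secret. Scanning page JavaScript for that pattern
# (or for api.telegram.org/bot URLs) can flag kits like this one.
TOKEN_RE = re.compile(r"\d{6,12}:[A-Za-z0-9_-]{35}")

def find_telegram_tokens(js_source: str) -> list[str]:
    """Return any hardcoded Telegram bot tokens found in JavaScript source."""
    return TOKEN_RE.findall(js_source)

# Invented sample resembling a phishing kit's exfiltration call.
sample = 'fetch("https://api.telegram.org/bot111222333:AAH' + "x" * 32 + '/sendMessage")'
print(find_telegram_tokens(sample))
```

A match in third-party or injected scripts is a strong signal, since legitimate sites rarely embed bot credentials client-side.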

What an attack might look like

Putting all the pieces together, a victim’s experience in this kind of campaign often looks like this:

  • They receive a phishing email about banking or health benefits: “Your online banking access is restricted,” or “Urgent: United Health benefits update.”
  • The link points to a legitimate but compromised site, using a long or strange path that does not raise instant suspicion.
  • That hacked site redirects, silently or after a brief delay, to a *.pages.dev phishing site that looks almost identical to the impersonated brand.
  • After entering their username and password, the victim sees an error or extra verification step and is asked to provide answers to secret questions or more personal and financial information.
  • Behind the scenes, each submitted field is captured in JavaScript and sent to a Telegram bot, where the attacker can use or sell it immediately.

From the victim’s point of view, nothing seems unusual beyond an odd-looking link and a failed sign-in. For the attackers, the mix of free hosting, compromised redirectors, and Telegram-based exfiltration gives them speed, scale, and resilience.

The bigger trend behind this campaign is clear: by leaning on free web hosting and mainstream messaging platforms, phishing actors avoid many of the choke points defenders used to rely on, like single malicious IPs or obviously shady domains. Spinning up new infrastructure is cheap, fast, and largely invisible to victims.

How to stay safe

Education and a healthy dose of skepticism are key to staying safe. A few habits can help you avoid these portals:

  • Always check the full domain name, not just the logo or page design. Banks and health insurers don’t host sign-in pages on generic developer domains like *.pages.dev or *.netlify.app, or on strange paths on unrelated sites.
  • Don’t click sign-in or benefits links in unsolicited emails or texts. Instead, go to the institution’s site via a bookmark or by typing the address yourself.
  • Treat surprise “extra security” prompts after a failed login with caution, especially if they ask for answers to security questions, card numbers, or email passwords.
  • If anything about the link, timing, or requested information feels wrong, stop and contact the provider using trusted contact information from their official site.
  • Use an up-to-date anti-malware solution with a web protection component.
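The first habit above, checking the full domain rather than the page design, can be sketched as a simple hostname check. The suffix list here is only a small illustrative sample, not an exhaustive blocklist, and the URLs are invented:

```python
from urllib.parse import urlparse

# Sample free developer-hosting suffixes abused for phishing portals;
# a real deployment would use a maintained, much longer list.
DEV_HOSTING_SUFFIXES = (".pages.dev", ".netlify.app", ".workers.dev")

def looks_like_dev_hosted(url: str) -> bool:
    """Flag URLs whose hostname sits on a generic developer-hosting domain."""
    host = (urlparse(url).hostname or "").lower()
    return host.endswith(DEV_HOSTING_SUFFIXES)

print(looks_like_dev_hosted("https://arvest-login.pages.dev/secure"))  # True
print(looks_like_dev_hosted("https://www.arvest.com/login"))           # False
```

No bank or insurer signs customers in from these domains, so a hit is worth treating as hostile until proven otherwise.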

Pro tip: Malwarebytes free Browser Guard extension blocked these websites.

Browser Guard Phishing block


Scammers harvesting Facebook photos to stage fake kidnappings, warns FBI

The FBI has warned about a new type of scam where your Facebook pictures are harvested to act as “proof-of-life” pictures in a virtual kidnapping.

The scammers pretend they have kidnapped somebody and contact friends and next of kin to demand a ransom for their release. While the alleged victim is really just going about their normal day, criminals show the family real Facebook photos to “prove” that person is still alive but in their custody.

This attack resembles Facebook cloning but with a darker twist. Instead of just impersonating you to scam your friends, attackers weaponize your pictures to stage fake proof‑of‑life evidence.

Both scams feed on oversharing. Public posts give criminals more than enough information to impersonate you, copy your life, and convince your loved ones something is wrong.

This alert focuses on criminals scraping photos from social media (usually Facebook, but also LinkedIn, X, or any public profile) then manipulating those images with AI or simple editing to use during extortion attempts. If you know what to look for, you might spot inconsistencies like missing tattoos, unusual lighting, or proportions that don’t quite match.

Scammers rely on panic. They push tight deadlines, threaten violence, and try to force split-second decisions. That emotional pressure is part of their playbook.

In recent years, the FBI has also warned about synthetic media and deepfakes, like explicit images generated from benign photos and then used for sextortion, which is a closely related pattern of abuse of user‑posted pictures. Together, these warnings point to a trend: ordinary profile photos, holiday snaps, and professional headshots are increasingly weaponized for extortion rather than classic account hacking.

What you can do

To make it harder for criminals to use these tactics, be mindful of what information you share on social media. Share pictures of yourself, or your children, only with actual friends and not for the whole world to find. And when you’re travelling, post the beautiful pictures you have taken when you’re back, not while you’re away from home.

Facebook’s built-in privacy tool lets you quickly adjust:

  • Who can see your posts.
  • Who can see your profile information.
  • App and website permissions.

If you’re on the receiving end of a virtual kidnapping attempt:

  • Establish a code word only you and your loved ones know that you can use to prove it’s really you.
  • Always attempt to contact the alleged victim before considering paying any ransom demand.
  • Keep records of every communication with the scammers. They can be helpful in a police investigation.
  • Report the incident to the FBI’s Internet Crime Complaint Center at www.ic3.gov.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (December 1 – December 7)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Leaks show Intellexa burning zero-days to keep Predator spyware running

Intellexa is a well-known commercial spyware vendor, servicing governments and large corporations. Its main product is the Predator spyware.

An investigation by several independent parties describes Intellexa as one of the most notorious mercenary spyware vendors, still operating its Predator platform and hitting new targets even after being placed on US sanctions lists and being under active investigation in Greece.

The investigation draws on highly sensitive documents and other materials leaked from the company, including internal records, sales and marketing material, and training videos. Amnesty International researchers reviewed the material to verify the evidence.

To me, the most interesting part is Intellexa’s continuous use of zero-days against mobile browsers. Google’s Threat Analysis Group (TAG) posted a blog about that, including a list of 15 unique zero-days.

Intellexa can afford to buy and burn zero-day vulnerabilities. They buy them from hackers and use them until the bugs are discovered and patched, at which point they are “burned” because they no longer work against updated systems.

The price for such vulnerabilities depends on the targeted device or application and the impact of exploitation. For example, you can expect to pay in the range of $100,000 to $300,000 for a robust, weaponized Remote Code Execution (RCE) exploit against Chrome with sandbox bypass suitable for reliable, at-scale deployment in a mercenary spyware platform. And in 2019, zero-day exploit broker Zerodium offered millions for zero-click full chain exploits with persistence against Android and iPhones.

Which is why only governments and well-resourced organizations can afford to hire Intellexa to spy on the people they’re interested in.

The Google TAG blog states:

“Partnering with our colleagues at CitizenLab in 2023, we captured a full iOS zero-day exploit chain used in the wild against targets in Egypt. Developed by Intellexa, this exploit chain was used to install spyware publicly known as Predator surreptitiously onto a device.”

To slow down the “burn” rate of its exploits, Intellexa delivers one-time links directly to targets through end-to-end encrypted messaging apps. This is a common method: last year we reported how the NSO Group was ordered to hand over the code for Pegasus and other spyware products that were used to spy on WhatsApp users.

The fewer people who see an exploit link, the harder it is for researchers to capture and analyze it. Intellexa also uses malicious ads on third-party platforms to fingerprint visitors and redirect those who match its target profiles to its exploit delivery servers.

This zero-click infection mechanism, dubbed “Aladdin,” is believed to still be operational and actively developed. It leverages the commercial mobile advertising system to deliver malware. That means a malicious ad could appear on any website that serves ads, such as a trusted news website or mobile app, and look completely ordinary. If you’re not in the target group, nothing happens. If you are, simply viewing the ad is enough to trigger the infection on your device, no need to click.

Zero-click infection chain
Image courtesy of Amnesty International

How to stay safe

While most of us will probably never have to worry about being in the target group, there are still practical steps you can take:

  • Use an ad blocker. Malwarebytes Browser Guard is a good start. Did I mention it’s a free browser extension that works on Chrome, Firefox, Edge, and Safari? And it should work on most other Chromium-based browsers (I even use it on Comet).
  • Keep your software updated. When it comes to zero-days, updating your software only helps after researchers discover the vulnerabilities. However, once the flaws become public, less sophisticated cybercriminals often start exploiting them, so patching remains essential to block these more common attacks.
  • Use a real-time anti-malware solution on your devices.
  • Don’t open unsolicited messages from unknown senders. Opening them could be enough to start a compromise of your device.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Canadian police trialing facial recognition bodycams

A municipal police force in Canada is now using facial recognition bodycams, it was revealed this week. The police service in the prairie city of Edmonton is trialing technology from US-based Axon, which makes products for the military and law enforcement.

Up to 50 officers are taking part in the trial this month, according to reports. Officers won’t turn the cameras on in the field until they’re actively investigating or enforcing, representatives from Axon said.

When the cameras are activated, the recognition software will run in the background, not reporting anything to the wearer. The camera captures images of anyone within roughly four feet of the officer and sends them to a cloud service, where it will be compared against 6,341 people already flagged in the police system. According to police and Axon, images that don’t match the list will be deleted, and the database is entirely owned by the Police Service, meaning that Axon doesn’t get to see it.

This represents a turnaround for Axon. In 2019, its first ethics board report said that facial recognition wasn’t reliable enough for body cameras.

CEO Rick Smith said at the time:

“Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board’s recommendation, Axon will not be commercializing face matching products on our body cameras at this time.”

Two years later, nine of the board’s members resigned after the company reportedly went against their recommendations by pursuing plans for taser-equipped drones. Axon subsequently put the drone project on hold.

Gideon Christian, an associated law professor at the University of Calgary (in Alberta, the same province as Edmonton), told Yahoo News that the Edmonton Police Service’s move would transform bodycams from a tool making police officers accountable to a tool of mass surveillance:

“This tool is basically now being thrown from a tool for police accountability and transparency to a tool for mass surveillance of members of the public.”

Policy spaghetti in the US and further afield

This wouldn’t be the first time that police have tried facial recognition, often with lamentable results. The American Civil Liberties Union had identified at least seven wrongful arrests in the US thanks to inaccurate facial recognition results as of April 2024. Most if not all of those incidents involved black people, it said. Facial recognition datasets have been found to be racially biased.

In June 2024, police in Detroit agreed not to make arrests based purely on facial recognition as part of a settlement for the wrongful arrest of Robert Williams. Williams, a person of color, was arrested for theft in front of his wife and daughter after detectives relied heavily on an inaccurate facial recognition match.

More broadly in the US, 15 states had limited police use of facial recognition as of January this year, although some jurisdictions are reversing course. New Orleans reinstated its use in 2022 after a spike in homicides. Police have also been known to request searches from law enforcement in neighboring cities if they are banned from using the technology in their own municipality.

Across the Atlantic, things are equally mixed. The EU AI Act bans live facial recognition in public spaces for law enforcement, with narrow exceptions. The UK, meanwhile, which left the EU in 2020, doesn’t have any dedicated facial recognition legislation. Some UK police forces have already deployed the technology, which has reportedly even been used to track children. UK prime minister Keir Starmer announced plans to use facial recognition tech more widely last year, prompting rebuke from privacy advocates.

The Edmonton Police Service will review the results of the trial and decide whether to move forward with broader use of the technology in 2026.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

How scammers use fake insurance texts to steal your identity

Sometimes it’s hard to understand how some scams work or why criminals would even try them on you.

In this case it may have been a matter of timing. One of my co-workers received this one:

text message insurance scam

“Insurance estimates for certain age ranges:

20-30 ~ 200 – 300/mo
31-40 ~ 270 – 450/mo
41-64 ~ 350 – 500/mo

Please respond with your age and gender for a tailored pricing.”

A few red flags:

  • No company name
  • Unsolicited message from an unknown number
  • They ask for personal information (age, gender)

First off, don’t respond to this kind of message, not even to tell them to get lost. A reply tells the scammer that the number is “responsive,” which only encourages more texts.

And if you provide the sender with the personal details they ask for, those can be used later for social engineering, identity theft, or building a profile for future scams.

How these insurance scams work

Insurance scams fall into two broad groups: scams targeting consumers (to steal money or data) and fraud against insurers (fake or inflated claims). Both ultimately raise premiums and can expose victims to identity theft or legal trouble. Criminals like insurance-themed lures because policies are complex, interactions are infrequent, and high-value payouts make fraud profitable.

Here, we’re looking at the consumer-focused attacks.

Different criminal groups have their own goals and attack methods, but broadly speaking they’re after one of three goals: sell your data to other criminals, scam you out of money, or steal your identity.

Any reply with your details usually leads to bigger asks, like more texts, or a link to a form that wants even more information. For example, the scammer will promise “too good to be true” premiums and all you have to do is fill out this form with your financial details and upload a copy of your ID to prove who you are. That’s everything needed for identity theft.

Scammers also time these attacks around open enrollment periods. During health insurance enrollment windows, it’s common for criminals to pose as licensed agents to sell fake policies or harvest personal and financial information.

How to stay safe from insurance scams

The first thing to remember is not to respond. But if you feel you have to look into it, do some research first. Some good questions to ask yourself before you proceed:

  • Does the sender’s number belong to a trusted organization?
  • Are they offering something sensible or is it really too good to be true?
  • If you’re sent to a website, does the URL in the address bar belong to the organization you expected to visit?
  • Is the information they’re asking for actually required?

You can protect yourself further by:

  • Keeping your browser and other important apps up to date.
  • Using a real-time anti-malware solution with a web protection component.
  • Consulting friends or family to check whether you’re doing the right thing.

After engaging with a suspicious sender, use STOP, our simple scam response framework, to help protect against scams.

  • Slow down: Don’t let urgency or pressure push you into action. Take a breath before responding. Legitimate businesses, like your bank or credit card provider, don’t push immediate action.  
  • Test them: If you’re on a call and feel pressured, ask a question only the real person would know, preferably something that can’t easily be found online. 
  • Opt out: If something feels wrong, hang up or end the conversation. You can always say the connection dropped. 
  • Prove it: Confirm the person is who they say they are by reaching out yourself through a trusted number, website, or method you have used before. 

Pro tip: You can upload suspicious messages of any kind to Malwarebytes Scam Guard. It will tell you whether it’s likely to be a scam and advise you what to do.



Update Chrome now: Google fixes 13 security issues affecting billions

Google has released an update for its Chrome browser that includes 13 security fixes, four of which are classified as high severity. One of these was found in Chrome’s Digital Credentials feature, a tool that lets you share verified information from your digital wallet with websites so you can prove who you are across devices.

Chrome is by far the world’s most popular browser, with an estimated 3.4 billion users. That scale means when Chrome has a security flaw, billions of users are potentially exposed until they update.

That’s why it’s important to install these patches promptly. Staying unpatched means you could be at risk just by browsing the web, and attackers often exploit these kinds of flaws before most users have a chance to update. Always let your browser update itself, and don’t delay restarting the browser as updates usually fix exactly this kind of risk.

How to update Chrome

The latest version number is 143.0.7499.40/.41 for Windows and macOS, and 143.0.7499.40 for Linux. So, if your Chrome is on version 143.0.7499.40 or later, it’s protected from these vulnerabilities.
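Since Chrome versions are dotted numeric strings, "this version or later" can be checked by comparing the components numerically rather than as text. A minimal sketch (the helper name is ours, not a Chrome API):

```python
# Patched release for this update: 143.0.7499.40.
PATCHED = (143, 0, 7499, 40)

def is_patched(version: str) -> bool:
    """Return True if a dotted Chrome version string is 143.0.7499.40 or later."""
    # Compare component-wise as integers; plain string comparison would
    # wrongly rank "143.0.7499.9" above "143.0.7499.40".
    parts = tuple(int(p) for p in version.split("."))
    return parts >= PATCHED

print(is_patched("143.0.7499.41"))   # True: patched
print(is_patched("142.0.7444.176"))  # False: update needed
```

You can read your own version from Settings > About Chrome and compare it the same way.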

The easiest way to update is to allow Chrome to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To update manually, click the More menu (three dots), then go to Settings > About Chrome. If an update is available, Chrome will start downloading it. Restart Chrome to complete the update, and you’ll be protected against these vulnerabilities.

You can also find step-by-step instructions in our guide to how to update Chrome on every operating system.

Chrome is up to date

Technical details

One of the vulnerabilities was found in the Digital Credentials feature and is tracked as CVE-2025-13633. As usual, Google is keeping the details sparse until most users have updated. The description says:

Use after free in Digital Credentials in Google Chrome prior to 143.0.7499.41 allowed a remote attacker who had compromised the renderer process to potentially exploit heap corruption via a crafted HTML page.

That sounds complicated so let’s break it down.

Use after free (UAF) is a specific type of software vulnerability where a program accesses a memory location after it has been freed. That can lead to crashes or, in some cases, let an attacker run their own code.

The renderer process is the part of modern browsers like Chrome that turns HTML, CSS, and JavaScript into the visible webpage you see in a tab. It’s sandboxed for safety, separate from the browser’s main “browser process” that manages tabs, URLs, and network requests. So, for HTML pages, this is essentially the browser’s webpage display engine.

The heap is an area of memory made available for a program’s dynamic use. When the program needs a block of some size, it makes an explicit request to the heap allocator, and it hands the block back by freeing it when it’s done.

A “remote attacker who had compromised the renderer” means the attacker would already need a foothold (for example, via a malicious browser extension) and then lure you to a site containing specially crafted HTML code.

So, my guess is that this vulnerability could be abused by a malicious extension to steal the information handled through Digital Credentials. The attacker could access information normally requiring a passkey, making it a tempting target for anyone trying to steal sensitive information.

Some of the fixes also apply to other Chromium browsers, so if you use Brave, Edge, or Opera, for example, you should keep an eye out for updates there too.


Attackers have a new way to slip past your MFA

Attackers are using a tool called Evilginx to steal session cookies, letting them bypass the need for a multi-factor authentication (MFA) token.

Researchers are warning about a rise in cases where this method is used against educational institutions.

Evilginx is an attacker-in-the-middle phishing toolkit that sits between you and the real website, relaying the genuine sign-in flow so everything looks normal while it captures what it needs. Because it sends your input to the real service, it can collect your username and password, as well as the session cookie issued after you complete MFA.

Session cookies are small, temporary pieces of data websites use to remember what you’re doing during a single browsing session, like staying signed in or keeping items in a shopping cart. They are stored in the browser’s memory and automatically deleted when you close the browser or log out, which makes them less of a security risk than persistent cookies. But with a valid session cookie, an attacker can keep the session alive and carry on as if they were you, which on a web shop or banking site could turn out to be costly.

Attack flow

The attacker sends you a link to a fake page that looks exactly the same as, for example, a bank login page, web shop, or your email or company’s single sign-on (SSO) page. In reality, the page is a live proxy to the real site.

Unaware of the difference, you enter your username, password, and MFA code as usual. The proxy relays this to the real site which grants access and sets a session cookie that says “this user is authenticated.”

But Evilginx isn’t just stealing your login details; it also captures the session cookie. The attacker can reuse it to impersonate you, often without triggering another MFA prompt.

Once inside, attackers can browse your email, change security settings, move money, and steal data. And because the session cookie says you’re already verified, you may not see another MFA challenge. They stay in until the session expires or is revoked.

Banks often add extra checks here. They may ask for another MFA code when you approve a payment, even if you’re already signed in. It’s called step-up authentication. It helps reduce fraud and meets Strong Customer Authentication rules by adding friction to high-risk actions like transferring money or changing payment details.

How to stay safe

Because Evilginx proxies the real site with valid TLS and live content, the page looks and behaves correctly, defeating simple “look for the padlock” advice and some automated checks.

Attackers often use links that live only for a very short time, so they disappear before anyone can add them to a block list. Security tools then have to rely on how these links and sites behave in real time, but behavior-based detection is never perfect and can still miss some attacks.

So, what you can and should do to stay safe is:

  • Be careful with links that arrive in an unusual way. Don’t click until you’ve checked the sender and hovered over the destination. When in doubt, use Malwarebytes Scam Guard on mobile to find out whether it’s a scam. It will give you actionable advice on how to proceed.
  • Use up-to-date real-time anti-malware protection with a web component.
  • Use a password manager. It only auto-fills passwords on the exact domain they were saved for, so it will usually refuse to fill them on look-alike phishing domains such as paypa1[.]com or micros0ft[.]com. But Evilginx is trickier because it sits in the middle while you talk to the real site, so this is not always enough.
  • Where possible, use phishing-resistant MFA. Passkeys and hardware security keys, which bind authentication to your device, are resistant to this type of replay.
  • Revoke sessions if you notice something suspicious. Sign out of all sessions and log back in with MFA, then change your password and review your account recovery settings.

Pro tip: Malwarebytes Browser Guard is a free browser extension that can detect malicious behavior on websites.


How attackers use real IT tools to take over your computer

A new wave of attacks is exploiting legitimate Remote Monitoring and Management (RMM) tools like LogMeIn Resolve (formerly GoToResolve) and PDQ Connect to remotely control victims’ systems. Instead of dropping traditional malware, attackers trick people into installing these trusted IT support programs under false pretenses, disguising them as everyday utilities. Once installed, the tool gives attackers full remote access to the victim’s machine, evading many conventional security detections because the software itself is legitimate.

We’ve recently noticed an uptick in our telemetry for the detection name RiskWare.MisusedLegit.GoToResolve, which flags suspicious use of the legitimate GoToResolve/LogMeIn Resolve RMM tool.

Our data shows the tool was detected with several different filenames. Here are some examples from our telemetry:

all different filenames for the same file

The filenames also provide us with clues about how the targets were likely tricked into downloading the tool.

Here’s an example of a translated email sent to someone in Portugal:

translated email

As you can see, hovering over the link shows that it points to a file uploaded to Dropbox. Using a legitimate RMM tool and a legitimate domain like dropbox[.]com makes it harder for security software to intercept such emails.

Other researchers have also described how attackers set up fake websites that mimic the download pages for popular free utilities like Notepad++ and 7-Zip.

Clicking that malicious link delivers an RMM installer that’s been pre-configured with the attacker’s unique “CompanyId”, a hardcoded identifier tying the victim machine directly to the attacker’s control panel.

hex code with CompanyId

This ID lets them instantly spot and connect to the newly infected system without needing extra credentials or custom malware, as the legitimate tool registers seamlessly with their account. Firewalls and other security tools often allow their RMM traffic, especially because RMMs are designed to run with admin privileges. The result is that malicious access blends in with normal IT admin traffic.

How to stay safe

By misusing trusted IT tools rather than conventional malware, attackers are raising the bar on stealth and persistence. Awareness and careful attention to download sources are your best defense.

  • Always download software directly from official websites or verified sources.
  • Check file signatures and certificates before installing anything.
  • Verify unexpected update prompts through a separate, trusted channel.
  • Keep your operating system and software up to date.
  • Use an up-to-date, real-time anti-malware solution. Malwarebytes for Windows now includes Privacy Controls that alert you to any remote-access tools it finds on your desktop.
  • Learn how to spot social engineering tricks used to push malicious downloads.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.