IT NEWS

December Patch Tuesday fixes three zero-days, including one that hijacks Windows devices

These updates from Microsoft fix serious security issues, including three that attackers are already exploiting to take control of Windows systems.

In total, the security update resolves 57 Microsoft security vulnerabilities. Microsoft isn’t releasing new features for Windows 10 anymore, so Windows 10 users will only see security updates and fixes for bugs introduced by previous security updates.

What’s been fixed

Microsoft releases important security updates on the second Tuesday of every month—known as “Patch Tuesday.” This month’s patches fix critical flaws in Windows 10, Windows 11, Windows Server, Office, and related services.

There are three zero‑days: CVE‑2025‑62221 is an actively exploited privilege‑escalation bug in the Windows Cloud Files Mini Filter Driver. Two are publicly disclosed flaws: CVE-2025-64671, which is a GitHub Copilot for JetBrains remote code execution (RCE) vulnerability, and CVE‑2025‑54100, an RCE issue in Windows PowerShell.

PowerShell received some extra attention, as from now on users will be warned whenever the Invoke‑WebRequest command fetches web pages without safe parameters.​

The warning is to prevent accidental script execution from web content. It highlights the risk that script code embedded in a downloaded page might run during parsing, and recommends using the -UseBasicParsing switch to avoid running any page scripts.

There is no explicit statement from Microsoft tying the new Invoke‑WebRequest warning directly to ClickFix, but it clearly addresses the abuse pattern that ClickFix and similar campaigns rely on: tricking users into running web‑fetched PowerShell code without understanding what it does.

How to apply fixes and check you’re protected

These updates fix security problems and keep your Windows PC protected. Here’s how to make sure you’re up to date:

1. Open Settings

  • Click the Start button (the Windows logo at the bottom left of your screen).
  • Click on Settings (it looks like a little gear).

2. Go to Windows Update

  • In the Settings window, select Windows Update (usually at the bottom of the menu on the left).

3. Check for updates

  • Click the button that says Check for updates.
  • Windows will search for the latest Patch Tuesday updates.
  • If you have selected automatic updates earlier, you may see this under Update history:
Successfully installed security updates
  • Or you may see a Restart required message, which means all you have to do is restart your system and you’re done updating.
  • If not, continue with the steps below.

4. Download and Install

  • If updates are found, they’ll start downloading right away. Once complete, you’ll see a button that says Install or Restart now.
  • Click Install if needed and follow any prompts. Your computer will usually need a restart to finish the update. If it does, click Restart now.

5. Double-check you’re up to date

  • After restarting, go back to Windows Update and check again. If it says You’re up to date, you’re all set!
You're up to date

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

GhostFrame phishing kit fuels widespread attacks against millions

GhostFrame is a new phishing-as-a-service (PhaaS) kit, tracked since September 2025, that has already powered more than a million phishing attacks.

Threat analysts spotted a series of phishing attacks featuring tools and techniques they hadn’t seen before. A few months later, they had linked over a million attempts to this same kit, which they named GhostFrame for its stealthy use of iframes. The kit hides its malicious activity inside iframes loaded from constantly changing subdomains.

An iframe is a small browser window embedded inside a web page, allowing content to load from another site without sending you away–like an embedded YouTube video or a Google Map. That embedded bit is usually an iframe and is normally harmless.

GhostFrame abuses it in several ways. It dynamically generates a unique subdomain for each victim and can rotate subdomains even during an active session, undermining domain‑based detection and blocking. It also includes several anti‑analysis tricks: disabling right‑click, blocking common keyboard shortcuts, and interfering with browser developer tools, which makes it harder for analysts or cautious users to inspect what is going on behind the scenes.

As a PhaaS kit, GhostFrame is able to spoof legitimate services by adjusting page titles and favicons to match the brand being impersonated. This and its detection-evasion techniques show how PhaaS developers are innovating around web architecture (iframes, subdomains, streaming features) and not just improving email templates.

Hiding sign-in forms inside non‑obvious features (like image streaming or large‑file handlers) is another attempt to get around static content scanners. Think of it as attackers hiding a fake login box inside a “video player” instead of putting the login box directly on the page, so many security tools don’t realize it’s a login box at all. Those tools are often tuned to look for normal HTML forms and password fields in the page code, and here the sensitive bits are tucked away in a feature that is supposed to handle big image or file data streams.

Normally, an image‑streaming or large‑file function is just a way to deliver big images or other “binary large objects” (BLOBs) efficiently to the browser. Instead of putting the login form directly on the page, GhostFrame turns it into what looks like image data. To the user, it looks just like a real Microsoft 365 login screen, but to a basic scanner reading the HTML, it looks like regular, harmless image handling.

Generally speaking, the rise of GhostFrame illuminates a trend that PhaaS is arming less-skilled cybercriminals while raising the bar for defenders. We recently covered Sneaky 2FA and Lighthouse as examples of PhaaS kits that are extremely popular among attackers.

So, what can we do?

Pairing a password manager with multi-factor authentication (MFA) offers the best protection.

But as always, you’re the first line of defense. Don’t click on links in unsolicited messages of any type before verifying and confirming they were sent by someone you trust. Staying informed is important as well, because you know what to expect and what to look for.

And remember: it’s not just about trusting what you see on the screen. Layered security stops attackers before they can get anywhere.

Another effective security layer to defend against phishing attacks is Malwarebytes’ free browser extension, Browser Guard, which detects and blocks phishing attacks heuristically.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Prompt injection is a problem that may never be fixed, warns NCSC

Prompt injection is shaping up to be one of the most stubborn problems in AI security, and the UK’s National Cyber Security Centre (NCSC) has warned that it may never be “fixed” in the way SQL injection was.

Two years ago, the NCSC said prompt injection might turn out to be the “SQL injection of the future.” Apparently, they have come to realize it’s even worse.

Prompt injection works because AI models can’t tell the difference between the app’s instructions and the attacker’s instructions, so they sometimes obey the wrong one.

To avoid this, AI providers set up their models with guardrails: tools that help developers stop agents from doing things they shouldn’t, either intentionally or unintentionally. For example, if you tried to tell an agent to explain how to produce anthrax spores at scale, guardrails would ideally detect that request as undesirable and refuse to acknowledge it.

Getting an AI to go outside those boundaries is often referred to as jailbreaking. Guardrails are the safety systems that try to keep AI models from saying or doing harmful things. Jailbreaking is when someone crafts one or more prompts to get around those safety systems and make the model do what it’s not supposed to do. Prompt injection is a specific way of doing that: An attacker hides their own instructions inside user input or external content, so the model follows those hidden instructions instead of the original guardrails.

The danger grows when Large Language Models (LLMs), like ChatGPT, Claude or Gemini, stop being chatbots in a box and start acting as “autonomous agents” that can move money, read email, or change settings. If a model is wired into a bank’s internal tools, HR systems, or developer pipelines, a successful prompt injection stops being an embarrassing answer and becomes a potential data breach or fraud incident.

We’ve already seen several methods of prompt injection emerge. For example, researchers found that posting embedded instructions on Reddit could potentially get agentic browsers to drain the user’s bank account. Or attackers could use specially crafted dodgy documents to corrupt an AI. Even seemingly harmless images can be weaponized in prompt injection attacks.

Why we shouldn’t compare prompt injection with SQL injection

The temptation to frame prompt injection as “SQL injection for AI” is understandable. Both are injection attacks that smuggle harmful instructions into something that should have been safe. But the NCSC stresses that this comparison is dangerous if it leads teams to assume that a similar one‑shot fix is around the corner.

The comparison to SQL injection attacks alone was enough to make me nervous. The first documented SQL injection exploit was in 1998 by cybersecurity researcher Jeff Forristal, and we still see them today, 27 years later. 

SQL injection became manageable because developers could draw a firm line between commands and untrusted input, and then enforce that line with libraries and frameworks. With LLMs, that line simply does not exist inside the model: Every token is fair game for interpretation as an instruction. That is why the NCSC believes prompt injection may never be totally mitigated and could drive a wave of data breaches as more systems plug LLMs into sensitive back‑ends.

Does this mean we have set up our AI models wrong? Maybe. Under the hood of an LLM, there’s no distinction made between data or instructions; it simply predicts the most likely next token from the text so far. This can lead to “confused deputy attacks.”

The NCSC warns that as more organizations bolt generative AI onto existing applications without designing for prompt injection from the start, the industry could see a surge of incidents similar to the SQL injection‑driven breaches of 10—15 years ago. Possibly even worse, because the possible failure modes are uncharted territory for now.

What can users do?

The NCSC provides advice for developers to reduce the risks of prompt injection. But how can we, as users, stay safe?

  • Take advice provided by AI agents with a grain of salt. Double-check what they’re telling you, especially when it’s important.
  • Limit the powers you provide to agentic browsers or other agents. Don’t let them handle large financial transactions or delete files. Take warning from this story where a developer found their entire D drive deleted.
  • Only connect AI assistants to the minimum data and systems they truly need, and keep anything that would be catastrophic to lose out of their control.
  • Treat AI‑driven workflows like any other exposed surface and log interactions so unusual behavior can be spotted and investigated.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

EU fines X $140m, tied to verification rules that make impostor scams easier

The European Commission slapped social networking company X with a €120 million ($140 million) fine last week for what it says was a lack of transparency with its European users.

The fine, the first ever penalty under the EU’s landmark Digital Services Act, addressed three specific violations with allocated penalties.

The first was a deceptive blue checkmark system. X touted this feature, first introduced by Musk when he bought Twitter in 2022, as a way to verify your identity on X. However, the Commission accused it of failing to actually verify users. It said:

“On X, anyone can pay to obtain the ‘verified’ status without the company meaningfully verifying who is behind the account, making it difficult for users to judge the authenticity of accounts and content they engage with.”

The company also blocked researchers from accessing its public data, the Commission complained, arguing that it undermined research into systemic risks in the EU.

Finally, the fine covers a lack of transparency around X’s advertising records. Its advertising repository doesn’t support the DSA’s standards, the Commission said, accusing it of lacking critical information such as advertising topic and content.

This makes it more difficult for researchers and the public to evaluate potential risks in online advertising according to the Commission.

Before Musk took over Twitter and renamed it to X, the company would independently verify select accounts using information including institutional email addresses to prove the owners’ identities. Today, you can get a blue checkmark that says you’re verified for $8 per month if you have an account on X that has been active for 30 days and can prove you own your phone number. X killed off the old verification system, with its authentic, notable, and active requirement, on April 1, 2023.

An explosion in imposter accounts

The tricky thing about weaker verification measures is that people can abuse them. Within days of Musk announcing the new blue checkmark verifications, someone registered a fake account for pharmaceutical company Eli Lilly and tweeted “insulin is free now”, tanking the stock over 4%.

Other impersonators verifying fake accounts at the time targeted Tesla, Trump, and Tony Blair, among others.

Weak verification measures are especially dangerous in an era where fake accounts are rife. Many people have fallen victim to fake social media accounts that scammers set up to impersonate legitimate brands’ customer support.

Musk, who threatened a court battle when the EC released its preliminary findings on the investigation last year, confirmed that X deactivated the EC’s advertising account in retaliation, but also called for the abolition of the EU.

This isn’t the social media company’s first tussle with regulators. In May 2022, before Musk bought it, Twitter settled with the FTC and DoJ for $150 million over allegations that it used peoples’ non-public security numbers for targeted advertising.

There are also other ongoing DSA-related investigations into X. The EU is probing its recommendation system. Ireland is looking into its handling of customer complaints about online content.

What comes next

X has 60 working days to address the checkmark violations and 90 days for advertising and researcher access, although given Musk’s previous commentary we wouldn’t be surprised to see him take the EU to court.

Failure to comply would trigger additional periodic penalties. The DSA allows fines up to 6% of global revenue.

Meanwhile, the core problem persists: anyone can still buy a ‘verified’ checkmark from X with extremely weak verification. So if anyone with a blue checkmark contacts you on the platform, don’t take their authenticity for granted.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Deepfakes, AI resumes, and the growing threat of fake applicants

Recruiters expect the odd exaggerated resume, but many companies, including us here at Malwarebytes, are now dealing with something far more serious: job applicants who aren’t real people at all.

From fabricated identities to AI-generated resumes and outsourced impostor interviews, hiring pipelines have become a new way for attackers to sneak into organizations.

Fake applicants aren’t just a minor HR inconvenience anymore but a genuine security risk. So, what’s the purpose behind it, and what should you look out for?

How these fake applicants operate

These applicants don’t just fire off a sketchy resume and hope for the best. Many use polished, coordinated tactics designed to slip through screening.

AI-generated resumes

AI-generated resumes are now one of the most common signs of a fake applicant. Language models can produce polished, keyword-heavy resumes in seconds, and scammers often generate dozens of variations to see which one gets past an Applicant Tracking System. In some cases, entire profiles are generated at the same time.

These resumes often look flawless on paper but fall apart when you ask about specific projects, timelines, or achievements. Hiring teams have reported waves of nearly identical resumes for unrelated positions, or applicants whose written materials are far more detailed than anything they can explain in conversation. Some have even received multiple resumes with the same formatting quirks, phrasing, or project descriptions.

Fake or borrowed identities

Impersonation is common. Scammers use AI-generated or stolen profile photos, fake addresses, and VoIP phone numbers to look legitimate. LinkedIn activity is usually sparse, or you’ll find several nearly identical profiles using the same name with slightly different skills.

At Malwarebytes, as in this Register article, we’ve noticed that the details applicants provide don’t always match what we see during the interview. In some cases, the same name and phone number have appeared across multiple applications, each supported by a freshly tailored resume. Further discrepancies occur in many instances where the applicant claims to be located in one country, but calls from another country entirely, usually in Asia.

Outsourced, scripted, and deepfake interviews

Fraudulent interviews tend to follow a familiar pattern. Introductions are short and vague, and answers arrive after long, noticeable pauses, as if the person is being coached off-screen. Many try to keep the camera off, or ask to complete tests offline instead of live.

In more advanced cases, you might see the telltale signs of real-time filters or deepfake tools, like mismatched lip-sync, unnatural blinking, or distorted edges. Most scammers still rely on simpler tricks like camera avoidance or off-screen coaching, but there have been reports of attackers using deepfake video or voice clones in interviews. It’s still rare, but it shows how quickly these tools are evolving.

Why they’re doing it

Scammers have a range of motives, from fraud to full system access.

Financial gain

For some groups, the goal is simple: money. They target remote, well-paid roles and then subcontract the work to cheaper labor behind the scenes. The fraudulent applicant keeps the salary while someone else quietly does the job at a fraction of the cost. It’s a volume game, and the more applications they get through, the more income they can generate.

Identity or documentation fraud

Others are trying to build a paper trail. A “successful hire” can provide employment verification, payroll history, and official contract letters. These documents can later support visa applications, bank loans, or other kinds of identity or financial fraud. In these cases, the scammer may never even intend to start work. They just need the paperwork that makes them look legitimate.

Algorithm testing and data harvesting

Some operations use job applications as a way to probe and learn. They send out thousands of resumes to test how screening software responds, to reverse-engineer what gets past filters, and to capture recruiter email patterns for future campaigns. By doing this at scale, they train automation that can mimic real applicants more convincingly over time.

System access for cybercrime

This is where the stakes get higher. Landing a remote role can give scammers access to internal systems, company data, and intellectual property—anything the job legitimately touches.

Even when the scammer isn’t hired, simply entering your hiring pipeline exposes internal details: how your team communicates, who makes what decisions, which roles have which tools. That information can be enough to craft a convincing impersonation later. At that point, the hiring process becomes an unguarded door into the organization.

The wider risk (not just to recruiters)

Recruiters aren’t the only ones affected. Everyday people on LinkedIn or job sites can get caught in the fallout too.

Fake applicant networks rely on scraping public profiles to build believable identities. LinkedIn added anti-bot checks in 2023, but fake profiles still get through, which means your name, photo, or job history could be copied and reused without your knowledge.

They also send out fake connection requests that lead to phishing messages, malicious job offers, or attempts to collect personal information. Recent research from the University of Portsmouth found that fake social media profiles are more common than many people realise:

80% of respondents said they’d encountered suspicious accounts, and 77% had received link requests from strangers.

It’s a reminder that anyone on LinkedIn can be targeted, not just recruiters, and that these profiles often work by building trust first and slipping in malicious links or requests later.

How recruiters can protect themselves

You can tighten screening without discriminating or adding friction by following these steps:

Verify identity earlier

Start with a camera-on video call whenever you can. Look for the subtle giveaways of filters or deepfakes: unnatural blinking, lip-sync that’s slightly off, or edges of the face that seem to warp or lag. If something feels odd, a simple request like “Please adjust your glasses” or “touch your cheek for a moment” can quickly show whether you’re speaking to a real person.

Cross-check details

Make sure the basics line up. The applicant’s face should match their documents, and their time zone should match where they say they live. Work history should hold up when you check references. A quick search can reveal duplicate resumes, recycled profiles, or LinkedIn accounts with only a few months of activity.

Watch for classic red flags

Most fake applicants slip when the questions get personal or specific. A resume that’s polished but hollow, a communication style that changes between messages, or hesitation when discussing timelines or past roles can all signal coaching. Long pauses before answers often hint that someone off-screen may be feeding responses.

Secure onboarding

If someone does pass the process, treat early access carefully. Limit what new hires can reach, require multi-factor authentication from day one, and make sure their device has been checked before it touches your network. Bringing in your security team early helps ensure that recruitment fraud doesn’t become an accidental entry point.


Final thoughts

Recruiting used to be about finding the best talent. Today, it often includes identity verification and security awareness.

As remote work becomes the norm, scammers are getting smarter. Fake applicants might show up as a nuisance, but the risks range from compliance issues to data loss—or even full-scale breaches.

Spotting the signs early, and building stronger screening processes, protects not just your hiring pipeline, but your organization as a whole.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

How phishers hide banking scams behind free Cloudflare Pages

During a recent investigation, we uncovered a phishing operation that combines free hosting on developer platforms with compromised legitimate websites to build convincing banking and insurance login portals. These fake pages don’t just grab a username and password–they also ask for answers to secret questions and other “backup” data that attackers can use to bypass multi-factor authentication and account recovery protections.

Instead of sending stolen data to a traditional command-and-control server, the kit forwards every submission to a Telegram bot. That gives the attackers a live feed of fresh logins they can use right away. It also sidesteps many domain-based blocking strategies and makes swapping infrastructure very easy.​

Phishing groups increasingly use services like Cloudflare Pages (*.pages.dev) to host their fake portals, sometimes copying a real login screen almost pixel for pixel. In this case, the actors spun up subdomains impersonating financial and healthcare providers. The first one we found was impersonating Heartland bank Arvest.

fake Arvest log in page
Fake Arvest login page

On closer look, the phishing site shows visitors two “failed login” screens, prompts for security questions, and then sends all credentials and answers to a Telegram bot.

Comparing their infrastructure with other sites, we found one impersonating a much more widely known brand: United Healthcare.

HealthSafe ID overpayment refund
HealthSafe ID overpayment refund

In this case, the phishers abused a compromised website as a redirector. Attackers took over a legitimate-looking domain like biancalentinidesigns[.]com and saddle it with long, obscure paths for phishing or redirection. Emails link to the real domain first, which then forwards the victim to the active Cloudflare pages phishing site. Messages containing a familiar or benign-looking domain are more likely to slip past spam filters than links that go straight to an obviously new cloud-hosted subdomain.​

Cloud-based hosting also makes takedowns harder. If one *.pages.dev hostname gets reported and removed, attackers can quickly deploy the same kit under another random subdomain and resume operations.​

The phishing kit at the heart of this campaign follows a multi-step pattern designed to look like a normal sign-in flow while extracting as much sensitive data as possible.​

Instead of using a regular form submission to a visible backend, JavaScript harvests the fields and bundles them into a message sent straight to the Telegram API.. That message can include the victim’s IP address, user agent, and all captured fields, giving criminals a tidy snapshot they can use to bypass defenses or sign in from a similar environment.​

The exfiltration mechanism is one of the most worrying parts. Rather than pushing credentials to a single hosted panel, the kit posts them into one or more Telegram chats using bot tokens and chat IDs hardcoded in the JavaScript. As soon as a victim submits a form, the operator receives a message in their Telegram client with the details, ready for immediate use or resale.​

This approach offers several advantages for the attackers: they can change bots and chat IDs frequently, they do not need to maintain their own server, and many security controls pay less attention to traffic that looks like a normal connection to a well-known messaging platform. Cycling multiple bots and chats gives them redundancy if one token is reported and revoked.​

What an attack might look like

Putting all the pieces together, a victim’s experience in this kind of campaign often looks like this:​

  • They receive a phishing email about banking or health benefits: “Your online banking access is restricted,” or “Urgent: United Health benefits update.”
  • The link points to a legitimate but compromised site, using a long or strange path that does not raise instant suspicion.​
  • That hacked site redirects, silently or after a brief delay, to a *.pages.dev phishing site that looks almost identical to the impersonated brand.​
  • After entering their username and password, the victim sees an error or extra verification step and is asked to provide answers to secret questions or more personal and financial information.​
  • Behind the scenes, each submitted field is captured in JavaScript and sent to a Telegram bot, where the attacker can use or sell it immediately.​

From the victim’s point of view, nothing seems unusual beyond an odd-looking link and a failed sign-in. For the attackers, the mix of free hosting, compromised redirectors, and Telegram-based exfiltration gives them speed, scale, and resilience.

The bigger trend behind this campaign is clear: by leaning on free web hosting and mainstream messaging platforms, phishing actors avoid many of the choke points defenders used to rely on, like single malicious IPs or obviously shady domains. Spinning up new infrastructure is cheap, fast, and largely invisible to victims.

How to stay safe

Education and a healthy dose of skepticism are key components to staying safe. A few habits can help you avoid these portals:​

  • Always check the full domain name, not just the logo or page design. Banks and health insurers don’t host sign-in pages on generic developer domains like *.pages.dev*.netlify.app, or on strange paths on unrelated sites.​
  • Don’t click sign-in or benefits links in unsolicited emails or texts. Instead, go to the institution’s site via a bookmark or by typing the address yourself.​
  • Treat surprise “extra security” prompts after a failed login with caution, especially if they ask for answers to security questions, card numbers, or email passwords.​
  • If anything about the link, timing, or requested information feels wrong, stop and contact the provider using trusted contact information from their official site.
  • Use an up-to-date anti-malware solution with a web protection component.

Pro tip: Malwarebytes free Browser Guard extension blocked these websites.

Browser Guard Phishing block

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Scammers harvesting Facebook photos to stage fake kidnappings, warns FBI

The FBI has warned about a new type of scam where your Facebook pictures are harvested to act as “proof-of-life” pictures in a virtual kidnapping.

The scammers pretend they have kidnapped somebody and contact friends and next of kin to demand a ransom for their release. While the alleged victim is really just going about their normal day, criminals show the family real Facebook photos to “prove” that person is still alive but in their custody.

This attack resembles Facebook cloning but with a darker twist. Instead of just impersonating you to scam your friends, attackers weaponize your pictures to stage fake proof‑of‑life evidence.

Both scams feed on oversharing. Public posts give criminals more than enough information to impersonate you, copy your life, and convince your loved ones something is wrong.

This alert focuses on criminals scraping photos from social media (usually Facebook, but also LinkedIn, X, or any public profile) then manipulating those images with AI or simple editing to use during extortion attempts. If you know what to look for, you might spot inconsistencies like missing tattoos, unusual lighting, or proportions that don’t quite match.

Scammers rely on panic. They push tight deadlines, threaten violence, and try to force split-second decisions. That emotional pressure is part of their playbook.

In recent years, the FBI has also warned about synthetic media and deepfakes, like explicit images generated from benign photos and then used for sextortion, which is a closely related pattern of abuse of user‑posted pictures. Together, these warnings point to a trend: ordinary profile photos, holiday snaps, and professional headshots are increasingly weaponized for extortion rather than classic account hacking.

What you can do

To make it harder for criminals to use these tactics, be mindful of what information you share on social media. Share pictures of yourself, or your children, only with actual friends and not for the whole world to find. And when you’re travelling, post the beautiful pictures you have taken when you’re back, not while you’re away from home.

Facebook’s built-in privacy tool lets you quickly adjust:

  • Who can see your posts.
  • Who can see your profile information.
  • App and website permissions.

If you’re on the receiving end of a virtual kidnapping attempt:

  • Establish a code word only you and your loved ones know that you can use to prove it’s really you.
  • Always attempt to contact the alleged victim before considering paying any ransom demand.
  • Keep records of every communication with the scammers. They can be helpful in a police investigation.
  • Report the incident to the FBI’s Internet Crime Complaint Center at www.ic3.gov.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (December 1 – December 7)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Leaks show Intellexa burning zero-days to keep Predator spyware running

Intellexa is a well-known commercial spyware vendor, servicing governments and large corporations. Its main product is the Predator spyware.

An investigation by several independent parties describes Intellexa as one of the most notorious mercenary spyware vendors, still operating its Predator platform and hitting new targets even after being placed on US sanctions lists and being under active investigation in Greece.

The investigation draws on highly sensitive documents and other materials leaked from the company, including internal records, sales and marketing material, and training videos. Amnesty International researchers reviewed the material to verify the evidence.

To me, the most interesting part is Intellexa’s continuous use of zero-days against mobile browsers. Google’s Threat Analysis Group (TAG) posted a blog about that, including a list of 15 unique zero-days.

Intellexa can afford to buy and burn zero-day vulnerabilities. They buy them from hackers and use them until the bugs are discovered and patched–at which point they are “burned” because they no longer work against updated systems.

The price for such vulnerabilities depends on the targeted device or application and the impact of exploitation. For example, you can expect to pay in the range of $100,000 to $300,000 for a robust, weaponized Remote Code Excecution (RCE) exploit against Chrome with sandbox bypass suitable for reliable, at‑scale deployment in a mercenary spyware platform. And in 2019, zero-day exploit broker Zerodium offered millions for zero-click full chain exploits with persistence against Android and iPhones.

Which is why only governments and well-resourced organizations can afford to hire Intellexa to spy on the people they’re interested in.

The Google TAG blog states:

“Partnering with our colleagues at CitizenLab in 2023, we captured a full iOS zero-day exploit chain used in the wild against targets in Egypt. Developed by Intellexa, this exploit chain was used to install spyware publicly known as Predator surreptitiously onto a device.”

To slow down the “burn” rate of its exploits, Intellexa delivers one-time links directly to targets through end-to-end encrypted messaging apps. This is a common method: last year we reported how the NSO Group was ordered to hand over the code for Pegasus and other spyware products that were used to spy on WhatsApp users.

The fewer people who see an exploit link, the harder it is for researchers to capture and analyze it. Intellexa also uses malicious ads on third-party platforms to fingerprint visitors and redirect those who match its target profiles to its exploit delivery servers.

This zero-click infection mechanism, dubbed “Aladdin,” is believed to still be operational and actively developed. It leverages the commercial mobile advertising system to deliver malware. That means a malicious ad could appear on any website that serves ads, such as a trusted news website or mobile app, and look completely ordinary. If you’re not in the target group, nothing happens. If you are, simply viewing the ad is enough to trigger the infection on your device, no need to click.

zero click infection chain
Zero-click infection chain
Image courtesy of Amnesty International

How to stay safe

While most of us will probably never have to worry about being in the target group, there are still practical steps you can take:

  • Use an ad blocker. Malwarebytes Browser Guard is a good start. Did I mention it’s a free browser extension that works on Chrome, Firefox, Edge, and Safari? And it should work on most other Chromium based browsers (I even use it on Comet).
  • Keep your software updated. When it comes to zero-days, updating your software only helps after researchers discover the vulnerabilities. However, once the flaws become public, less sophisticated cybercriminals often start exploiting them, so patching remains essential to block these more common attacks.
  • Use a real-time anti-malware solution on your devices.
  • Don’t open unsolicited messages from unknown senders. Opening them could be enough to start a compromise of your device.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Canadian police trialing facial recognition bodycams

A municipal police force in Canada is now using facial recognition bodycams, it was revealed this week. The police service in the prairie city of Edmonton is trialing technology from US-based Axon, which makes products for the military and law enforcement.

Up to 50 officers are taking part in the trial this month, according to reports. Officers won’t turn the cameras on in the field until they’re actively investigating or enforcing, representatives from Axon said.

When the cameras are activated, the recognition software will run in the background, not reporting anything to the wearer. The camera captures images of anyone within roughly four feet of the officer and sends them to a cloud service, where it will be compared against 6,341 people already flagged in the police system. According to police and Axon, images that don’t match the list will be deleted, and the database is entirely owned by the Police Service, meaning that Axon doesn’t get to see it.

This represents a turnaround for Axon. In 2019, its first ethics board report said that facial recognition wasn’t reliable enough for body cameras.

CEO Rick Smith said at the time:

“Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board’s recommendation, Axon will not be commercializing face matching products on our body cameras at this time.”

Two years later, nine of the board’s members resigned after the company reportedly went against their recommendations by pursuing plans for taser-equipped drones. Axon subsequently put the drone project on hold.

Gideon Christian, an associated law professor at the University of Calgary (in Alberta, the same province as Edmonton), told Yahoo News that the Edmonton Police Service’s move would transform bodycams from a tool making police officers accountable to a tool of mass surveillance:

“This tool is basically now being thrown from a tool for police accountability and transparency to a tool for mass surveillance of members of the public.”

Policy spaghetti in the US and further afield

This wouldn’t be the first time that police have tried facial recognition, often with lamentable results. The American Civil Liberties Union identified at least seven wrongful arrests in the US thanks to inaccurate facial recognition results, and that was in April 2024. Most if not all of those incidents involved black people, it said. Facial recognition datasets have been found to be racially biased.

In June 2024, police in Detroit agreed not to make arrests based purely on facial recognition as part of a settlement for the wrongful arrest of Robin Williams. Williams, a person of color, was arrested for theft in front of his wife and daughter after detectives relied heavily on an inaccurate facial recognition match.

More broadly in the US, 15 states had limited police use of facial recognition as of January this year, although some jurisdictions are reversing course. New Orleans reinstated its use in 2022 after a spike in homicides. Police have also been known to request searches from law enforcement in neighboring cities if they are banned from using the technology in their own municipality.

Across the Atlantic, things are equally mixed. The EU AI Act bans live facial recognition in public spaces for law enforcement, with narrow exceptions. The UK, meanwhile, which hasn’t been a part of Europe since 2018, doesn’t have any dedicated facial recognition legislation. It has already deployed the technology for some police forces, which are often used to track children. UK prime minister Keir Starmer announced plans to use facial recognition tech more widely last year, prompting rebuke from privacy advocates.

The Edmonton Police Force will review the results of the trial and decide whether to move forward with broader use of the technology in 2026.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.