IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Prompt injection is a problem that may never be fixed, warns NCSC

Prompt injection is shaping up to be one of the most stubborn problems in AI security, and the UK’s National Cyber Security Centre (NCSC) has warned that it may never be “fixed” in the way SQL injection was.

Two years ago, the NCSC said prompt injection might turn out to be the “SQL injection of the future.” Apparently, they have come to realize it’s even worse.

Prompt injection works because AI models can’t tell the difference between the app’s instructions and the attacker’s instructions, so they sometimes obey the wrong one.

To avoid this, AI providers set up their models with guardrails: tools that help developers stop agents from doing things they shouldn’t, either intentionally or unintentionally. For example, if you tried to tell an agent to explain how to produce anthrax spores at scale, guardrails would ideally detect that request as undesirable and refuse to acknowledge it.

Getting an AI to go outside those boundaries is often referred to as jailbreaking. Guardrails are the safety systems that try to keep AI models from saying or doing harmful things. Jailbreaking is when someone crafts one or more prompts to get around those safety systems and make the model do what it’s not supposed to do. Prompt injection is a specific way of doing that: An attacker hides their own instructions inside user input or external content, so the model follows those hidden instructions instead of the original guardrails.

The danger grows when Large Language Models (LLMs), like ChatGPT, Claude or Gemini, stop being chatbots in a box and start acting as “autonomous agents” that can move money, read email, or change settings. If a model is wired into a bank’s internal tools, HR systems, or developer pipelines, a successful prompt injection stops being an embarrassing answer and becomes a potential data breach or fraud incident.

We’ve already seen several methods of prompt injection emerge. For example, researchers found that posting embedded instructions on Reddit could potentially get agentic browsers to drain the user’s bank account. Or attackers could use specially crafted dodgy documents to corrupt an AI. Even seemingly harmless images can be weaponized in prompt injection attacks.

Why we shouldn’t compare prompt injection with SQL injection

The temptation to frame prompt injection as “SQL injection for AI” is understandable. Both are injection attacks that smuggle harmful instructions into something that should have been safe. But the NCSC stresses that this comparison is dangerous if it leads teams to assume that a similar one‑shot fix is around the corner.

The comparison to SQL injection attacks alone was enough to make me nervous. The first documented SQL injection exploit was in 1998 by cybersecurity researcher Jeff Forristal, and we still see them today, 27 years later. 

SQL injection became manageable because developers could draw a firm line between commands and untrusted input, and then enforce that line with libraries and frameworks. With LLMs, that line simply does not exist inside the model: Every token is fair game for interpretation as an instruction. That is why the NCSC believes prompt injection may never be totally mitigated and could drive a wave of data breaches as more systems plug LLMs into sensitive back‑ends.

Does this mean we have set up our AI models wrong? Maybe. Under the hood of an LLM, there’s no distinction made between data or instructions; it simply predicts the most likely next token from the text so far. This can lead to “confused deputy attacks.”

The NCSC warns that as more organizations bolt generative AI onto existing applications without designing for prompt injection from the start, the industry could see a surge of incidents similar to the SQL injection‑driven breaches of 10—15 years ago. Possibly even worse, because the possible failure modes are uncharted territory for now.

What can users do?

The NCSC provides advice for developers to reduce the risks of prompt injection. But how can we, as users, stay safe?

  • Take advice provided by AI agents with a grain of salt. Double-check what they’re telling you, especially when it’s important.
  • Limit the powers you provide to agentic browsers or other agents. Don’t let them handle large financial transactions or delete files. Take warning from this story where a developer found their entire D drive deleted.
  • Only connect AI assistants to the minimum data and systems they truly need, and keep anything that would be catastrophic to lose out of their control.
  • Treat AI‑driven workflows like any other exposed surface and log interactions so unusual behavior can be spotted and investigated.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

EU fines X $140m, tied to verification rules that make impostor scams easier

The European Commission slapped social networking company X with a €120 million ($140 million) fine last week for what it says was a lack of transparency with its European users.

The fine, the first ever penalty under the EU’s landmark Digital Services Act, addressed three specific violations with allocated penalties.

The first was a deceptive blue checkmark system. X touted this feature, first introduced by Musk when he bought Twitter in 2022, as a way to verify your identity on X. However, the Commission accused it of failing to actually verify users. It said:

“On X, anyone can pay to obtain the ‘verified’ status without the company meaningfully verifying who is behind the account, making it difficult for users to judge the authenticity of accounts and content they engage with.”

The company also blocked researchers from accessing its public data, the Commission complained, arguing that it undermined research into systemic risks in the EU.

Finally, the fine covers a lack of transparency around X’s advertising records. Its advertising repository doesn’t support the DSA’s standards, the Commission said, accusing it of lacking critical information such as advertising topic and content.

This makes it more difficult for researchers and the public to evaluate potential risks in online advertising according to the Commission.

Before Musk took over Twitter and renamed it to X, the company would independently verify select accounts using information including institutional email addresses to prove the owners’ identities. Today, you can get a blue checkmark that says you’re verified for $8 per month if you have an account on X that has been active for 30 days and can prove you own your phone number. X killed off the old verification system, with its authentic, notable, and active requirement, on April 1, 2023.

An explosion in imposter accounts

The tricky thing about weaker verification measures is that people can abuse them. Within days of Musk announcing the new blue checkmark verifications, someone registered a fake account for pharmaceutical company Eli Lilly and tweeted “insulin is free now”, tanking the stock over 4%.

Other impersonators verifying fake accounts at the time targeted Tesla, Trump, and Tony Blair, among others.

Weak verification measures are especially dangerous in an era where fake accounts are rife. Many people have fallen victim to fake social media accounts that scammers set up to impersonate legitimate brands’ customer support.

Musk, who threatened a court battle when the EC released its preliminary findings on the investigation last year, confirmed that X deactivated the EC’s advertising account in retaliation, but also called for the abolition of the EU.

This isn’t the social media company’s first tussle with regulators. In May 2022, before Musk bought it, Twitter settled with the FTC and DoJ for $150 million over allegations that it used peoples’ non-public security numbers for targeted advertising.

There are also other ongoing DSA-related investigations into X. The EU is probing its recommendation system. Ireland is looking into its handling of customer complaints about online content.

What comes next

X has 60 working days to address the checkmark violations and 90 days for advertising and researcher access, although given Musk’s previous commentary we wouldn’t be surprised to see him take the EU to court.

Failure to comply would trigger additional periodic penalties. The DSA allows fines up to 6% of global revenue.

Meanwhile, the core problem persists: anyone can still buy a ‘verified’ checkmark from X with extremely weak verification. So if anyone with a blue checkmark contacts you on the platform, don’t take their authenticity for granted.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Deepfakes, AI resumes, and the growing threat of fake applicants

Recruiters expect the odd exaggerated resume, but many companies, including us here at Malwarebytes, are now dealing with something far more serious: job applicants who aren’t real people at all.

From fabricated identities to AI-generated resumes and outsourced impostor interviews, hiring pipelines have become a new way for attackers to sneak into organizations.

Fake applicants aren’t just a minor HR inconvenience anymore but a genuine security risk. So, what’s the purpose behind it, and what should you look out for?

How these fake applicants operate

These applicants don’t just fire off a sketchy resume and hope for the best. Many use polished, coordinated tactics designed to slip through screening.

AI-generated resumes

AI-generated resumes are now one of the most common signs of a fake applicant. Language models can produce polished, keyword-heavy resumes in seconds, and scammers often generate dozens of variations to see which one gets past an Applicant Tracking System. In some cases, entire profiles are generated at the same time.

These resumes often look flawless on paper but fall apart when you ask about specific projects, timelines, or achievements. Hiring teams have reported waves of nearly identical resumes for unrelated positions, or applicants whose written materials are far more detailed than anything they can explain in conversation. Some have even received multiple resumes with the same formatting quirks, phrasing, or project descriptions.

Fake or borrowed identities

Impersonation is common. Scammers use AI-generated or stolen profile photos, fake addresses, and VoIP phone numbers to look legitimate. LinkedIn activity is usually sparse, or you’ll find several nearly identical profiles using the same name with slightly different skills.

At Malwarebytes, as in this Register article, we’ve noticed that the details applicants provide don’t always match what we see during the interview. In some cases, the same name and phone number have appeared across multiple applications, each supported by a freshly tailored resume. Further discrepancies occur in many instances where the applicant claims to be located in one country, but calls from another country entirely, usually in Asia.

Outsourced, scripted, and deepfake interviews

Fraudulent interviews tend to follow a familiar pattern. Introductions are short and vague, and answers arrive after long, noticeable pauses, as if the person is being coached off-screen. Many try to keep the camera off, or ask to complete tests offline instead of live.

In more advanced cases, you might see the telltale signs of real-time filters or deepfake tools, like mismatched lip-sync, unnatural blinking, or distorted edges. Most scammers still rely on simpler tricks like camera avoidance or off-screen coaching, but there have been reports of attackers using deepfake video or voice clones in interviews. It’s still rare, but it shows how quickly these tools are evolving.

Why they’re doing it

Scammers have a range of motives, from fraud to full system access.

Financial gain

For some groups, the goal is simple: money. They target remote, well-paid roles and then subcontract the work to cheaper labor behind the scenes. The fraudulent applicant keeps the salary while someone else quietly does the job at a fraction of the cost. It’s a volume game, and the more applications they get through, the more income they can generate.

Identity or documentation fraud

Others are trying to build a paper trail. A “successful hire” can provide employment verification, payroll history, and official contract letters. These documents can later support visa applications, bank loans, or other kinds of identity or financial fraud. In these cases, the scammer may never even intend to start work. They just need the paperwork that makes them look legitimate.

Algorithm testing and data harvesting

Some operations use job applications as a way to probe and learn. They send out thousands of resumes to test how screening software responds, to reverse-engineer what gets past filters, and to capture recruiter email patterns for future campaigns. By doing this at scale, they train automation that can mimic real applicants more convincingly over time.

System access for cybercrime

This is where the stakes get higher. Landing a remote role can give scammers access to internal systems, company data, and intellectual property—anything the job legitimately touches.

Even when the scammer isn’t hired, simply entering your hiring pipeline exposes internal details: how your team communicates, who makes what decisions, which roles have which tools. That information can be enough to craft a convincing impersonation later. At that point, the hiring process becomes an unguarded door into the organization.

The wider risk (not just to recruiters)

Recruiters aren’t the only ones affected. Everyday people on LinkedIn or job sites can get caught in the fallout too.

Fake applicant networks rely on scraping public profiles to build believable identities. LinkedIn added anti-bot checks in 2023, but fake profiles still get through, which means your name, photo, or job history could be copied and reused without your knowledge.

They also send out fake connection requests that lead to phishing messages, malicious job offers, or attempts to collect personal information. Recent research from the University of Portsmouth found that fake social media profiles are more common than many people realise:

80% of respondents said they’d encountered suspicious accounts, and 77% had received link requests from strangers.

It’s a reminder that anyone on LinkedIn can be targeted, not just recruiters, and that these profiles often work by building trust first and slipping in malicious links or requests later.

How recruiters can protect themselves

You can tighten screening without discriminating or adding friction by following these steps:

Verify identity earlier

Start with a camera-on video call whenever you can. Look for the subtle giveaways of filters or deepfakes: unnatural blinking, lip-sync that’s slightly off, or edges of the face that seem to warp or lag. If something feels odd, a simple request like “Please adjust your glasses” or “touch your cheek for a moment” can quickly show whether you’re speaking to a real person.

Cross-check details

Make sure the basics line up. The applicant’s face should match their documents, and their time zone should match where they say they live. Work history should hold up when you check references. A quick search can reveal duplicate resumes, recycled profiles, or LinkedIn accounts with only a few months of activity.

Watch for classic red flags

Most fake applicants slip when the questions get personal or specific. A resume that’s polished but hollow, a communication style that changes between messages, or hesitation when discussing timelines or past roles can all signal coaching. Long pauses before answers often hint that someone off-screen may be feeding responses.

Secure onboarding

If someone does pass the process, treat early access carefully. Limit what new hires can reach, require multi-factor authentication from day one, and make sure their device has been checked before it touches your network. Bringing in your security team early helps ensure that recruitment fraud doesn’t become an accidental entry point.


Final thoughts

Recruiting used to be about finding the best talent. Today, it often includes identity verification and security awareness.

As remote work becomes the norm, scammers are getting smarter. Fake applicants might show up as a nuisance, but the risks range from compliance issues to data loss—or even full-scale breaches.

Spotting the signs early, and building stronger screening processes, protects not just your hiring pipeline, but your organization as a whole.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

How phishers hide banking scams behind free Cloudflare Pages

During a recent investigation, we uncovered a phishing operation that combines free hosting on developer platforms with compromised legitimate websites to build convincing banking and insurance login portals. These fake pages don’t just grab a username and password–they also ask for answers to secret questions and other “backup” data that attackers can use to bypass multi-factor authentication and account recovery protections.

Instead of sending stolen data to a traditional command-and-control server, the kit forwards every submission to a Telegram bot. That gives the attackers a live feed of fresh logins they can use right away. It also sidesteps many domain-based blocking strategies and makes swapping infrastructure very easy.​

Phishing groups increasingly use services like Cloudflare Pages (*.pages.dev) to host their fake portals, sometimes copying a real login screen almost pixel for pixel. In this case, the actors spun up subdomains impersonating financial and healthcare providers. The first one we found was impersonating Heartland bank Arvest.

fake Arvest log in page
Fake Arvest login page

On closer look, the phishing site shows visitors two “failed login” screens, prompts for security questions, and then sends all credentials and answers to a Telegram bot.

Comparing their infrastructure with other sites, we found one impersonating a much more widely known brand: United Healthcare.

HealthSafe ID overpayment refund
HealthSafe ID overpayment refund

In this case, the phishers abused a compromised website as a redirector. Attackers took over a legitimate-looking domain like biancalentinidesigns[.]com and saddle it with long, obscure paths for phishing or redirection. Emails link to the real domain first, which then forwards the victim to the active Cloudflare pages phishing site. Messages containing a familiar or benign-looking domain are more likely to slip past spam filters than links that go straight to an obviously new cloud-hosted subdomain.​

Cloud-based hosting also makes takedowns harder. If one *.pages.dev hostname gets reported and removed, attackers can quickly deploy the same kit under another random subdomain and resume operations.​

The phishing kit at the heart of this campaign follows a multi-step pattern designed to look like a normal sign-in flow while extracting as much sensitive data as possible.​

Instead of using a regular form submission to a visible backend, JavaScript harvests the fields and bundles them into a message sent straight to the Telegram API.. That message can include the victim’s IP address, user agent, and all captured fields, giving criminals a tidy snapshot they can use to bypass defenses or sign in from a similar environment.​

The exfiltration mechanism is one of the most worrying parts. Rather than pushing credentials to a single hosted panel, the kit posts them into one or more Telegram chats using bot tokens and chat IDs hardcoded in the JavaScript. As soon as a victim submits a form, the operator receives a message in their Telegram client with the details, ready for immediate use or resale.​

This approach offers several advantages for the attackers: they can change bots and chat IDs frequently, they do not need to maintain their own server, and many security controls pay less attention to traffic that looks like a normal connection to a well-known messaging platform. Cycling multiple bots and chats gives them redundancy if one token is reported and revoked.​

What an attack might look like

Putting all the pieces together, a victim’s experience in this kind of campaign often looks like this:​

  • They receive a phishing email about banking or health benefits: “Your online banking access is restricted,” or “Urgent: United Health benefits update.”
  • The link points to a legitimate but compromised site, using a long or strange path that does not raise instant suspicion.​
  • That hacked site redirects, silently or after a brief delay, to a *.pages.dev phishing site that looks almost identical to the impersonated brand.​
  • After entering their username and password, the victim sees an error or extra verification step and is asked to provide answers to secret questions or more personal and financial information.​
  • Behind the scenes, each submitted field is captured in JavaScript and sent to a Telegram bot, where the attacker can use or sell it immediately.​

From the victim’s point of view, nothing seems unusual beyond an odd-looking link and a failed sign-in. For the attackers, the mix of free hosting, compromised redirectors, and Telegram-based exfiltration gives them speed, scale, and resilience.

The bigger trend behind this campaign is clear: by leaning on free web hosting and mainstream messaging platforms, phishing actors avoid many of the choke points defenders used to rely on, like single malicious IPs or obviously shady domains. Spinning up new infrastructure is cheap, fast, and largely invisible to victims.

How to stay safe

Education and a healthy dose of skepticism are key components to staying safe. A few habits can help you avoid these portals:​

  • Always check the full domain name, not just the logo or page design. Banks and health insurers don’t host sign-in pages on generic developer domains like *.pages.dev*.netlify.app, or on strange paths on unrelated sites.​
  • Don’t click sign-in or benefits links in unsolicited emails or texts. Instead, go to the institution’s site via a bookmark or by typing the address yourself.​
  • Treat surprise “extra security” prompts after a failed login with caution, especially if they ask for answers to security questions, card numbers, or email passwords.​
  • If anything about the link, timing, or requested information feels wrong, stop and contact the provider using trusted contact information from their official site.
  • Use an up-to-date anti-malware solution with a web protection component.

Pro tip: Malwarebytes free Browser Guard extension blocked these websites.

Browser Guard Phishing block

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Scammers harvesting Facebook photos to stage fake kidnappings, warns FBI

The FBI has warned about a new type of scam where your Facebook pictures are harvested to act as “proof-of-life” pictures in a virtual kidnapping.

The scammers pretend they have kidnapped somebody and contact friends and next of kin to demand a ransom for their release. While the alleged victim is really just going about their normal day, criminals show the family real Facebook photos to “prove” that person is still alive but in their custody.

This attack resembles Facebook cloning but with a darker twist. Instead of just impersonating you to scam your friends, attackers weaponize your pictures to stage fake proof‑of‑life evidence.

Both scams feed on oversharing. Public posts give criminals more than enough information to impersonate you, copy your life, and convince your loved ones something is wrong.

This alert focuses on criminals scraping photos from social media (usually Facebook, but also LinkedIn, X, or any public profile) then manipulating those images with AI or simple editing to use during extortion attempts. If you know what to look for, you might spot inconsistencies like missing tattoos, unusual lighting, or proportions that don’t quite match.

Scammers rely on panic. They push tight deadlines, threaten violence, and try to force split-second decisions. That emotional pressure is part of their playbook.

In recent years, the FBI has also warned about synthetic media and deepfakes, like explicit images generated from benign photos and then used for sextortion, which is a closely related pattern of abuse of user‑posted pictures. Together, these warnings point to a trend: ordinary profile photos, holiday snaps, and professional headshots are increasingly weaponized for extortion rather than classic account hacking.

What you can do

To make it harder for criminals to use these tactics, be mindful of what information you share on social media. Share pictures of yourself, or your children, only with actual friends and not for the whole world to find. And when you’re travelling, post the beautiful pictures you have taken when you’re back, not while you’re away from home.

Facebook’s built-in privacy tool lets you quickly adjust:

  • Who can see your posts.
  • Who can see your profile information.
  • App and website permissions.

If you’re on the receiving end of a virtual kidnapping attempt:

  • Establish a code word only you and your loved ones know that you can use to prove it’s really you.
  • Always attempt to contact the alleged victim before considering paying any ransom demand.
  • Keep records of every communication with the scammers. They can be helpful in a police investigation.
  • Report the incident to the FBI’s Internet Crime Complaint Center at www.ic3.gov.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (December 1 – December 7)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Leaks show Intellexa burning zero-days to keep Predator spyware running

Intellexa is a well-known commercial spyware vendor, servicing governments and large corporations. Its main product is the Predator spyware.

An investigation by several independent parties describes Intellexa as one of the most notorious mercenary spyware vendors, still operating its Predator platform and hitting new targets even after being placed on US sanctions lists and being under active investigation in Greece.

The investigation draws on highly sensitive documents and other materials leaked from the company, including internal records, sales and marketing material, and training videos. Amnesty International researchers reviewed the material to verify the evidence.

To me, the most interesting part is Intellexa’s continuous use of zero-days against mobile browsers. Google’s Threat Analysis Group (TAG) posted a blog about that, including a list of 15 unique zero-days.

Intellexa can afford to buy and burn zero-day vulnerabilities. They buy them from hackers and use them until the bugs are discovered and patched–at which point they are “burned” because they no longer work against updated systems.

The price for such vulnerabilities depends on the targeted device or application and the impact of exploitation. For example, you can expect to pay in the range of $100,000 to $300,000 for a robust, weaponized Remote Code Excecution (RCE) exploit against Chrome with sandbox bypass suitable for reliable, at‑scale deployment in a mercenary spyware platform. And in 2019, zero-day exploit broker Zerodium offered millions for zero-click full chain exploits with persistence against Android and iPhones.

Which is why only governments and well-resourced organizations can afford to hire Intellexa to spy on the people they’re interested in.

The Google TAG blog states:

“Partnering with our colleagues at CitizenLab in 2023, we captured a full iOS zero-day exploit chain used in the wild against targets in Egypt. Developed by Intellexa, this exploit chain was used to install spyware publicly known as Predator surreptitiously onto a device.”

To slow down the “burn” rate of its exploits, Intellexa delivers one-time links directly to targets through end-to-end encrypted messaging apps. This is a common method: last year we reported how the NSO Group was ordered to hand over the code for Pegasus and other spyware products that were used to spy on WhatsApp users.

The fewer people who see an exploit link, the harder it is for researchers to capture and analyze it. Intellexa also uses malicious ads on third-party platforms to fingerprint visitors and redirect those who match its target profiles to its exploit delivery servers.

This zero-click infection mechanism, dubbed “Aladdin,” is believed to still be operational and actively developed. It leverages the commercial mobile advertising system to deliver malware. That means a malicious ad could appear on any website that serves ads, such as a trusted news website or mobile app, and look completely ordinary. If you’re not in the target group, nothing happens. If you are, simply viewing the ad is enough to trigger the infection on your device, no need to click.

zero click infection chain
Zero-click infection chain
Image courtesy of Amnesty International

How to stay safe

While most of us will probably never have to worry about being in the target group, there are still practical steps you can take:

  • Use an ad blocker. Malwarebytes Browser Guard is a good start. Did I mention it’s a free browser extension that works on Chrome, Firefox, Edge, and Safari? And it should work on most other Chromium based browsers (I even use it on Comet).
  • Keep your software updated. When it comes to zero-days, updating your software only helps after researchers discover the vulnerabilities. However, once the flaws become public, less sophisticated cybercriminals often start exploiting them, so patching remains essential to block these more common attacks.
  • Use a real-time anti-malware solution on your devices.
  • Don’t open unsolicited messages from unknown senders. Opening them could be enough to start a compromise of your device.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Canadian police trialing facial recognition bodycams

A municipal police force in Canada is now using facial recognition bodycams, it was revealed this week. The police service in the prairie city of Edmonton is trialing technology from US-based Axon, which makes products for the military and law enforcement.

Up to 50 officers are taking part in the trial this month, according to reports. Officers won’t turn the cameras on in the field until they’re actively investigating or enforcing, representatives from Axon said.

When the cameras are activated, the recognition software will run in the background, not reporting anything to the wearer. The camera captures images of anyone within roughly four feet of the officer and sends them to a cloud service, where it will be compared against 6,341 people already flagged in the police system. According to police and Axon, images that don’t match the list will be deleted, and the database is entirely owned by the Police Service, meaning that Axon doesn’t get to see it.

This represents a turnaround for Axon. In 2019, its first ethics board report said that facial recognition wasn’t reliable enough for body cameras.

CEO Rick Smith said at the time:

“Current face matching technology raises serious ethical concerns. In addition, there are technological limitations to using this technology on body cameras. Consistent with the board’s recommendation, Axon will not be commercializing face matching products on our body cameras at this time.”

Two years later, nine of the board’s members resigned after the company reportedly went against their recommendations by pursuing plans for taser-equipped drones. Axon subsequently put the drone project on hold.

Gideon Christian, an associated law professor at the University of Calgary (in Alberta, the same province as Edmonton), told Yahoo News that the Edmonton Police Service’s move would transform bodycams from a tool making police officers accountable to a tool of mass surveillance:

“This tool is basically now being thrown from a tool for police accountability and transparency to a tool for mass surveillance of members of the public.”

Policy spaghetti in the US and further afield

This wouldn’t be the first time that police have tried facial recognition, often with lamentable results. The American Civil Liberties Union identified at least seven wrongful arrests in the US thanks to inaccurate facial recognition results, and that was in April 2024. Most if not all of those incidents involved black people, it said. Facial recognition datasets have been found to be racially biased.

In June 2024, police in Detroit agreed not to make arrests based purely on facial recognition as part of a settlement for the wrongful arrest of Robin Williams. Williams, a person of color, was arrested for theft in front of his wife and daughter after detectives relied heavily on an inaccurate facial recognition match.

More broadly in the US, 15 states had limited police use of facial recognition as of January this year, although some jurisdictions are reversing course. New Orleans reinstated its use in 2022 after a spike in homicides. Police have also been known to request searches from law enforcement in neighboring cities if they are banned from using the technology in their own municipality.

Across the Atlantic, things are equally mixed. The EU AI Act bans live facial recognition in public spaces for law enforcement, with narrow exceptions. The UK, meanwhile, which hasn’t been a part of Europe since 2018, doesn’t have any dedicated facial recognition legislation. It has already deployed the technology for some police forces, which are often used to track children. UK prime minister Keir Starmer announced plans to use facial recognition tech more widely last year, prompting rebuke from privacy advocates.

The Edmonton Police Force will review the results of the trial and decide whether to move forward with broader use of the technology in 2026.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

How scammers use fake insurance texts to steal your identity

Sometimes it’s hard to understand how some scams work or why criminals would even try them on you.

In this case it may have been a matter of timing. One of my co-workers received this one:

text message insurance scam

“Insurance estimates for certain age ranges:

20-30 ~ 200 – 300/mo
31-40 ~ 270 – 450/mo
41-64 ~ 350 – 500/mo

Please respond with your age and gender for a tailored pricing.”

A few red flags:

  • No company name
  • Unsolicited message from an unknown number
  • They ask for personal information (age, gender)

First off, don’t respond to this kind of message, not even to tell them to get lost. A reply tells the scammer that the number is “responsive,” which only encourages more texts.

And if you provide the sender with the personal details they ask for, those can be used later for social engineering, identity theft, or building a profile for future scams.

How these insurance scams work

Insurance scams fall into two broad groups: scams targeting consumers (to steal money or data) and fraud against insurers (fake or inflated claims). Both ultimately raise premiums and can expose victims to identity theft or legal trouble. Criminals like insurance-themed lures because policies are complex, interactions are infrequent, and high-value payouts make fraud profitable.

Here, we’re looking at the consumer-focused attacks.

Different criminal groups have their own goals and attack methods, but broadly speaking they’re after one of three goals: sell your data to other criminals, scam you out of money, or steal your identity.

Any reply with your details usually leads to bigger asks, like more texts, or a link to a form that wants even more information. For example, the scammer will promise “too good to be true” premiums and all you have to do is fill out this form with your financial details and upload a copy of your ID to prove who you are. That’s everything needed for identity theft.

Scammers also time these attacks around open enrollment periods. During health insurance enrollment windows, it’s common for criminals to pose as licensed agents to sell fake policies or harvest personal and financial information.

How to stay safe from insurance scams

The first thing to remember is not to respond. But if you feel you have to look into it, do some research first. Some good questions to ask yourself before you proceed:

  • Does the sender’s number belong to a trusted organization?
  • Are they offering something sensible or is it really too good to be true?
  • When sent to a website, does the URL in the address bar belong to the organization you expected to visit?
  • Is the information they’re asking for actually required?

You can protect yourself further by:

  • Keeping your browser and other important apps up to date.
  • Use a real-time anti-malware solution with a web protection component.
  • Consult with friends or family to check whether you’re doing the right thing.

After engaging with a suspicious sender, use STOP, our simple scam response framework to help protect against scams. 

  • Slow down: Don’t let urgency or pressure push you into action. Take a breath before responding. Legitimate businesses, like your bank or credit card provider, don’t push immediate action.  
  • Test them: If you’re on a call and feel pressured, ask a question only the real person would know, preferably something that can’t easily be found online. 
  • Opt out: If something feels wrong, hang up or end the conversation. You can always say the connection dropped. 
  • Prove it: Confirm the person is who they say they are by reaching out yourself through a trusted number, website, or method you have used before. 

Pro tip: You can upload suspicious messages of any kind to Malwarebytes Scam Guard. It will tell you whether it’s likely to be a scam and advise you what to do.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Update Chrome now: Google fixes 13 security issues affecting billions

Google has released an update for its Chrome browser that includes 13 security fixes, four of which are classified as high severity. One of these was found in Chrome’s Digital Credentials feature–a tool that lets you share verified information from your digital wallet with websites so you can prove who you are across devices.

Chrome is by far the world’s most popular browser, with an estimated 3.4 billion users. That scale means when Chrome has a security flaw, billions of users are potentially exposed until they update.

That’s why it’s important to install these patches promptly. Staying unpatched means you could be at risk just by browsing the web, and attackers often exploit these kinds of flaws before most users have a chance to update. Always let your browser update itself, and don’t delay restarting the browser as updates usually fix exactly this kind of risk.

How to update Chrome

The latest version number is 143.0.7499.40/.41 for Windows and macOS, and 143.0.7499.40 for Linux. So, if your Chrome is on version 143.0.7499.40 or later, it’s protected from these vulnerabilities.

The easiest way to update is to allow Chrome to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To update manually, click the More menu (three dots), then go to Settings > About Chrome. If an update is available, Chrome will start downloading it. Restart Chrome to complete the update, and you’ll be protected against these vulnerabilities.

You can also find step-by-step instructions in our guide to how to update Chrome on every operating system.

Chrome is up to date

Technical details

One of the vulnerabilities was found in the Digital Credentials feature and is tracked as CVE-2025-13633. As usual Google is keeping the details sparse until most users have updated. The description says:

Use after free in Digital Credentials in Google Chrome prior to 143.0.7499.41 allowed a remote attacker who had compromised the renderer process to potentially exploit heap corruption via a crafted HTML page.

That sounds complicated so let’s break it down.

Use after free (UAF) is a specific type of software vulnerability where a program attempts to access a memory location after it has been freed. That can lead to crashes or, in some cases, let an attackers run their own code.

The renderer process is the part of modern browsers like Chrome that turns HTML, CSS, and JavaScript into the visible webpage you see in a tab. It’s sandboxed for safety, separate from the browser’s main “browser process” that manages tabs, URLs, and network requests. So, for HTML pages, this is essentially the browser’s webpage display engine.

The heap is an area of memory made available for use by the program. The program can request blocks of memory for its use within the heap. In order to allocate a block of some size, the program makes an explicit request by calling the heap allocation operation.

A “remote attacker who had compromised the renderer” means the attacker would already need a foothold (for example, via a malicious browser extension) and then lure you to a site containing specially crafted HTML code.

So, my guess is that this vulnerability could be abused by a malicious extension to steal the information handled through Digital Credentials. The attacker could access information normally requiring a passkey, making it a tempting target for anyone trying to steal sensitive information.

Some of the fixes also apply to other Chromium browsers, so if you use Brave, Edge, or Opera, for example, you should keep an eye out for updates there too.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.