IT NEWS

Insurance company accused of using secret software to illegally collect and sell location data on millions of Americans

Insurance company Allstate and its subsidiary Arity unlawfully collected, used, and sold data about the location and movement of Texans’ cell phones through secretly embedded software in mobile apps, according to Texas Attorney General Ken Paxton.

Attorney General Paxton says the companies didn’t give consumers notice or get their consent, which violates Texas’ new Data Privacy and Security Act.

Arity paid app developers to incorporate software that tracks consumers’ driving data into their apps. When consumers installed these apps, they unwittingly downloaded that software, which allowed Arity to monitor their location and movement in real time.

Using this method, the company collected trillions of miles worth of location data from over 45 million people across the US, and used the data to create the “world’s largest driving behavior database.”

Allstate then used the covertly obtained data to justify raising insurance rates, according to Attorney General Paxton. Allstate is accused of not just using the data for its own business, but also for selling it on to third parties, including other car insurance carriers.

Location and movement data is valuable to insurance companies when they prepare a quote. With insight into a driver’s behavior, they can set a rate that better reflects the risk.

Car manufacturers are known to sell similar data to insurance companies. Last year, Attorney General Paxton sued General Motors (GM) for the unlawful collection and sale of over 1.5 million Texans’ private driving data to insurance companies, also without their knowledge or consent.

Privacy violation aside, these companies don’t always keep the data safe. Just last week we spoke about a breach at data broker Gravy Analytics, which is said to have led to the loss of millions of people’s sensitive location data.

Back to the Allstate case, the Texas Data Privacy and Security Act (TDPSA) requires clear notice and informed consent regarding how a company will use Texans’ sensitive data. That is something which Allstate allegedly failed to do.

In the press release, Paxton states:

“Our investigation revealed that Allstate and Arity paid mobile apps millions of dollars to install Allstate’s tracking software. The personal data of millions of Americans was sold to insurance companies without their knowledge or consent in violation of the law. Texans deserve better and we will hold all these companies accountable.”

Protect your location data

Sometimes apps ask permission to use your location data and you find yourself wondering, why does this app need to know where my phone is?

This is one possible reason.

Whenever you are asked to share your location data with an app and there’s no clear reason why it would need it, deny the app that permission.

If you have to share your location—for example, when using a map app—choose the “Allow only while using the app” option, so that it will be unable to continuously track your location and movement.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

The new rules for AI and encrypted messaging, with Mallory Knodel (Lock and Code S06E01)

This week on the Lock and Code podcast…

The era of artificial intelligence everything is here, and with it come everyday surprises about exactly where the next AI tools might pop up.

Major corporations are pushing customer support functions onto AI chatbots, Big Tech platforms offer AI image generation for social media posts, and Google now includes AI-powered overviews in everyday searches by default.

The next gold rush, it seems, is in AI, and for a group of technical and legal researchers at New York University and Cornell University, that could be a major problem.

But to understand their concerns, there’s some explanation needed first, and it starts with Apple’s own plans for AI.

Last October, Apple unveiled a service it is calling Apple Intelligence (“AI,” get it?), which provides the latest iPhones, iPads, and Mac computers with AI-powered writing tools, image generators, proof-reading, and more.

One notable feature in Apple Intelligence is Apple’s “notification summaries.” With Apple Intelligence, users can receive summarized versions of a day’s worth of notifications from their apps. That could be useful for an onslaught of breaking news notifications, or for an old college group thread that won’t shut up.

The summaries themselves are hit-or-miss with users—one iPhone customer learned of his own breakup from an Apple Intelligence summary that said: “No longer in a relationship; wants belongings from the apartment.”

What’s more interesting about the summaries, though, is how they interact with Apple’s messaging and text app, Messages.

Messages is what is called an “end-to-end encrypted” messaging app. That means that only a message’s sender and its recipient can read the message itself. Even Apple, which moves the message along from one iPhone to another, cannot read the message.

But if Apple cannot read the messages sent on its own Messages app, then how is Apple Intelligence able to summarize them for users?

That’s one of the questions that Mallory Knodel and her team at New York University and Cornell University tried to answer with a new paper on the compatibility between AI tools and end-to-end encrypted messaging apps.

Make no mistake, this research isn’t into whether AI is “breaking” encryption by doing impressive computations at never-before-observed speeds. Instead, it’s about whether or not the promise of end-to-end encryption—of confidentiality—can be upheld when the messages sent through that promise can be analyzed by separate AI tools.

And while the question may sound abstract, it’s far from being so. Already, AI bots can enter digital Zoom meetings to take notes. What happens if Zoom permits those same AI chatbots to enter meetings that users have chosen to be end-to-end encrypted? Is the chatbot another party to that conversation, and if so, what is the impact?

Today, on the Lock and Code podcast with host David Ruiz, we speak with lead author and encryption expert Mallory Knodel on whether AI assistants can be compatible with end-to-end encrypted messaging apps, what motivations could sway current privacy champions into chasing AI development instead, and why these two technologies cannot co-exist in certain implementations.

“An encrypted messaging app, at its essence is encryption, and you can’t trade that away—the privacy or the confidentiality guarantees—for something else like AI if it’s fundamentally incompatible with those features.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

iMessage text gets recipient to disable phishing protection so they can be phished

A smishing (SMS phishing) campaign is targeting iMessage users, attempting to socially engineer them into bypassing Apple’s built-in phishing protection.

For months, iMessage users have been posting examples online of how phishers are trying to get around this protection. And, now, the campaign is gaining traction, according to our friends at BleepingComputer.

It works like this: Under normal circumstances, iMessage will disable all links in messages from unknown senders to protect the user against clicking them by accident. However, if a user replies to a message or adds the sender to their contact list, the links are enabled, allowing the person to click on the link.

The text of the messages comes in all the variations that phishers love to use:

But they all end in a similar way to this:

smishing instructions

“(Please reply Y, then exit the SMS, re-open the SMS activation link, or copy the link to open in Safari)”

Replying with Y (or actually anything) will enable the links and turn off iMessage’s built-in phishing protection. Clicking the link will then lead the recipient to whatever malicious website the phisher had in mind. Even if the user just replies with “Y” and then decides not to follow the link—because it looks slightly off—the phishers will know that they have found a likely target for more attacks.

It’s also important to know that there are similar instructions for the Chrome browser:

Chrome instructions

“Reply with 1, exit the SMS message, and reopen the SMS activation link, or copy the link to Google Chrome to open it.”

How to avoid smishing scams

  • Never reply to suspicious messages, even if it’s only a “Y” or “1.” It will tell the phishers they have a live number and they will bombard you with more attempts.
  • Never add a number you don’t know to your Contacts as that will disable the iMessage protection as well.
  • Don’t assume any message is the real deal. If you’re being asked to do something, contact the company directly via a known method you trust. If it turns out to be a fake, you should be able to report it to them, there and then.
  • If you live somewhere with a Do Not Call list or spam reporting service, make full use of it. Report bogus messages and numbers.
  • Your mobile device may already have some form of “safe” message ID enabled without you knowing. It’s tricky to give specific advice here because of the sheer variety of options available across phone models, but the Options / Safety / Security / Privacy menus are a good place to start.
  • Check the link before you click it or copy it into your browser. Is it exactly what you would expect it to be? Scammers often use typosquatting techniques (for example, evri[.]top instead of the legitimate evri[.]com), or they fabricate a link that uses a subdomain to make it look legitimate (for example, usps.com-track.infoam[.]xyz). If it doesn’t look real, don’t click on it.
  • If a message sounds too good (or bad) to be true, it probably is.
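The subdomain trick described in the list above can be illustrated with a short Python sketch. This is a deliberately naive heuristic (it guesses the registrable domain from the last two hostname labels, which fails on suffixes like .co.uk); production code should consult the Public Suffix List, for example via the `tldextract` package:

```python
from urllib.parse import urlsplit

def registrable_domain(url: str) -> str:
    """Naive guess at the registrable domain: the last two labels
    of the hostname. Real code should use the Public Suffix List."""
    host = urlsplit(url).hostname or ""
    labels = host.split(".")
    return ".".join(labels[-2:]) if len(labels) >= 2 else host

# "usps.com" here is only part of a subdomain; the site actually
# belongs to the lookalike domain at the end of the hostname.
print(registrable_domain("https://usps.com-track.infoam.xyz/parcel"))  # infoam.xyz
print(registrable_domain("https://www.usps.com/tracking"))             # usps.com
```

The point: what matters is the domain at the *end* of the hostname, not the familiar brand name at the start.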

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (January 6 – January 12)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

BayMark Health Services sends breach notifications after ransomware attack

BayMark Health Services, Inc. (BayMark) notified an unknown number of patients that attackers stole their personal and health information.

BayMark profiles itself as North America’s largest provider of medication-assisted treatment (MAT) for substance use disorders, helping tens of thousands of individuals with recovery.

In a breach notification, the company disclosed that on October 11, 2024, it learned about an incident that disrupted the operations of some of its IT systems. The incident involved an unauthorized party accessing some of the files on BayMark’s systems between September 24 and October 14 of last year.

An investigation showed that the exposed files contained information that varied per patient but could have included the patient’s name and one or more of the following:

  • Social Security number (SSN)
  • Driver’s license number
  • Date of birth
  • The services received and the dates of service
  • Insurance information
  • Treating provider
  • Treatment and/or diagnostic information

While BayMark did not provide any information about the number of victims or the nature of the incident, it has been separately reported that the RansomHub ransomware group has BayMark listed on its leak site.

The RansomHub ransomware group claims to have exfiltrated an enormous 1.5 terabytes of sensitive data from BayMark Health Services.

BayMark’s listing on RansomHub leak site

The date on the dark web site matches the date published in the breach notification. Further, the fact that the data are listed as “published” suggests that BayMark did not pay the ransom, which the cybercriminals confirm when you click through on the company’s tile.

BayMark’s expanded listing on RansomHub leak site

Here, the ransomware group lays the blame on the company itself. This isn’t rare for a ransomware group, as the tactics and vernacular are often based around shame, guilt, and a pre-teen-like arrogance. As claimed on the dark web site:

One of the few companies from Texas that does not value its data. For a nominal fee, they could have not worried about anything, improved their network and protected themselves. But they chose the path of destroying their reputation, publishing sensitive data and publicizing it in the media.

{names}

These people decided to do other things than their company. BayMark Health Services is dedicated to providing treatment tailored to meet each person regardless of where they are in their recovery journey. BayMark provides a full continuum of care, integrating evidence-based practices, clinical counseling, recovery support, and medical services.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Google Chrome AI extensions deliver info-stealing malware in broad attack

Small businesses and boutique organizations should use caution when leaning on browser-friendly artificial intelligence (AI) tools to generate ideas, content, and marketing copy, as a set of Google Chrome extensions were recently compromised to deliver info-stealing malware disguised as legitimate updates.

Analyzed by researchers at Extension Total, the cybercriminal campaign has managed to take over the accounts of at least 36 Google Chrome extensions that provide AI and VPN services. The compromised extensions include “Bard AI Chat,” “ChatGPT for Google Meet,” “ChatGPT App,” “ChatGPT Quick Access,” “VPNCity,” “Internxt VPN,” and more, which are used by an estimated total of 2.6 million people.

Though these browser extensions borrow the names of the most popular AI tools available today, they are third-party tools that are not developed by OpenAI—the company behind ChatGPT—or Google.

In response to the attack, many of the compromised browser extensions removed their tools from the Google Chrome web store to protect users. However, other extensions remain available and in the control of cybercriminals, making them dangerous to download.

There isn’t a startup, small business, or solo practitioner today who can run their operations without a web browser, and the most popular web browser in the world—by far—is Google Chrome.

But this cybercriminal campaign has not compromised Google Chrome itself.

Instead, it has compromised a series of extensions for Google Chrome that could prove attractive to many small businesses looking to harness AI, whether to write email newsletters, edit blogs, or even get ideas for marketing strategies in the new year. These third-party browser extensions, when they were still available, allowed users to directly ask questions to AI tools without needing to navigate away from a current web page.

But with the new attack, those same browser extensions are now delivering fraudulent updates that carry malicious code that can steal an employee’s data.

According to an investigation published by one of the compromised browser extension companies, the malware used in this attack sought data for Facebook Ads accounts. That may sound like a narrow goal, but considering that so many businesses rely on promotion and visibility through Facebook Ads, it isn’t uncommon that this information might be stored on an employee’s computer.

For a full list of compromised extensions, visit here.

Until fixes are released for every compromised extension, warn your employees about which browser extensions are safe to use, and consider creating a policy about only trusting first-party browser extensions for work.

For all other threats, try Malwarebytes Teams, which provides always-on protection against malware, ransomware, spyware, and more, along with 24/7 dedicated, human support.

Massive breach at location data seller: “Millions” of users affected

Like many other data brokers, Gravy is a company you may never have heard of, but it almost certainly knows a lot about you if you’re a US citizen.

Data brokers come in different shapes and sizes. What they have in common is that they gather personally identifiable data from various sources—from publicly available data to stolen datasets—and then sell the gathered data on. Gravy Analytics specializes in location intelligence, meaning it collects sensitive phone location and behavior data.

One of the buyers is the US government, which increasingly circumvents the need for a warrant by simply buying what it wants to know from a data broker. Ironic, given that the FTC sued Gravy Analytics, saying it routinely collects sensitive phone location and behavior data without consumers’ consent.

In the complaint last month, the FTC claimed:

“Respondents [Gravy Analytics and Venntel, a wholly owned subsidiary of Gravy Analytics] have bought, obtained, and collected precise consumer location data and offered for sale, sold, and distributed products and services created from or based on the consumer location data.”

Data brokers have drawn attention this year by leaking several large databases, with the worst being the National Public Data leak. The data breach made international headlines because it affected hundreds of millions of people, and it included Social Security Numbers.

And now, apparently, it’s Gravy Analytics’ turn to be breached. According to 404 Media, cybercriminals breached Gravy Analytics and stole a massive amount of data, including customer lists, information on the broader industry, and location data harvested from smartphones that shows people’s precise movements.

The cybercriminals claim to have stolen 17TB of data and are threatening to publish the data. Considering the sensitivity of location data for some groups, this breach could potentially be just as significant as the National Public Data leak.

To prove their possession of the data, the cybercriminals have shared three samples on a Russian forum, exposing millions of location points across the US, Russia, and Europe.

Gravy Analytics location data

The researcher who posted this map extracted the names of 3,455 apps that leaked this information. Many of these apps are games, but we also noted Tinder and a host of apps that are promoted as TikTok video downloaders.

list of apps that provided location data

404 Media reports that the personal data of millions of users is affected.

The Gravy Analytics website is down at the time of writing, and nobody at the company has responded to queries with an official reaction.

The whole ordeal, whether the data will be published or not, proves once again why data brokers should stop trading health and location data.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

GroupGreeting e-card site attacked in “zqxq” campaign 

This article was researched and written by Stefan Dasic, manager, research and response for ThreatDown, powered by Malwarebytes

Malwarebytes recently uncovered a widespread cyberattack—referred to here as the “zqxq” campaign as it closely mirrors NDSW/NDSX-style malware behavior—that compromised GroupGreeting[.]com, a popular platform used by major enterprises to send digital greeting cards.  

Upon learning of the attack, GroupGreeting quickly responded and resolved the threat.

This attack is part of a broader malicious campaign that takes advantage of trusted websites with high traffic, especially those that could experience a spike in visitors during busy seasons like the winter holidays. That includes greeting card websites, like GroupGreeting[.]com, that allow users to send group e-cards for birthdays, retirements, weddings, and, of course, holidays like Christmas and New Year’s.  

According to public data, over 2,800 websites have been hit with similar malicious code. The seasonal increase in user interactions with greeting card sites provides ample opportunities for cybercriminals to quietly inject malware and target unsuspecting visitors. 

Explaining the “zqxq” malware

Understanding this cybercriminal campaign requires a little background on how the web works. Nearly every modern webpage uses a programming language called JavaScript. JavaScript allows developers to make interactive webpages, but it can also be a vector for attack, as cybercriminals can “inject” pieces of JavaScript into a website that are not approved by the site’s developers.

At the core of this breach is an obfuscated JavaScript snippet designed to blend in with legitimate site files. Hidden within themes, plugins, or other critical scripts, the malicious code uses scrambled variables (e.g., zqxq) and custom functions (HttpClient, rand, token) to evade detection and hamper analysis. 


Despite its complexity, the malware performs some very typical functions seen in large-scale JavaScript injection campaigns: 

  • Token generation and redirection. Generates random tokens (rand() + rand()) for queries or URLs, a technique often used in Traffic Direction Systems (TDS) to disguise malicious links. 
  • Conditional checks and evasion. References properties in navigator, document, window, or screen to determine if the user has visited before, or to avoid re-infecting the same machine. This helps keep the campaign under the radar by reducing repeated alerts. 
  • Remote payload retrieval. Uses an XMLHttpRequest (labeled as HttpClient in the code) to silently fetch further malicious scripts or to redirect visitors to exploit kits, phishing sites, or other malicious destinations. 
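The indicators above can be hunted for with a very simple script. The sketch below is a minimal, hypothetical illustration in Python, using only the marker strings mentioned in this writeup (zqxq, HttpClient, rand() + rand()); a real scanner would need a maintained signature set and proper deobfuscation:

```python
import re

# Indicator patterns drawn from the behaviors described above; the
# exact strings (zqxq, HttpClient, rand) come from this campaign, but
# any real scanner would need a much broader signature set.
INDICATORS = [
    re.compile(r"\bzqxq\b"),                 # scrambled variable name
    re.compile(r"\bHttpClient\b"),           # custom XMLHttpRequest wrapper
    re.compile(r"rand\(\)\s*\+\s*rand\(\)"), # token generation for TDS URLs
]

def suspicious(js_source: str) -> list:
    """Return the indicator patterns that match a JavaScript source."""
    return [p.pattern for p in INDICATORS if p.search(js_source)]

sample = "var zqxq = function(){ return rand() + rand(); };"
print(suspicious(sample))  # two of the three indicators match
```

Pattern matching like this only flags files for human review; it cannot prove a file is malicious, and trivial renaming defeats it.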

Overlap with NDSW/NDSX and TDS Parrot campaigns 

Though Malwarebytes recently discovered the attack on GroupGreeting[.]com, the malware campaign bears similarities to another malware injection campaign that is referred to as both “NDSW/NDSX” and “TDS Parrot.” 

According to security researchers from Sucuri, who label these attacks under the “NDSW/NDSX” moniker, this campaign accounted for 43,106 detections in 2024. Similar research was published by Unit 42, which refers to the campaign as “TDS Parrot.”  

From these analyses, we can identify the following parallels to known NDSW/NDSX or TDS Parrot malware campaigns: 

  • Obfuscated redirect scripts. Much like NDSW/NDSX, the zqxq script deeply obfuscates its variables, methods, and flow. The layering of functions (Q, d, rand, token) and the repeated usage of base64-like decoding are standard indicators of TDS JavaScript-based threats. 
  • Traffic Distribution System behavior. After running checks (e.g., domain name, cookies), these scripts funnel traffic to external pages hosting additional malware payloads or phishing sites. This is precisely how TDS Parrot campaigns divert user traffic across multiple malicious domains to maximize infection rates. 
  • Large-scale website infections. Both NDSW/NDSX and the zqxq campaign have infected thousands of websites, suggesting a systematic approach—possibly automated—that exploits vulnerabilities in popular CMS platforms (like WordPress, Joomla, or Magento) or outdated plugins, similar to documented TDS Parrot behaviors. 

Analysis of the breach and why GroupGreeting was a prime target 

Cybercriminals rarely strike at random. The attack on GroupGreeting was likely coordinated because of its potential for success. Here are a few reasons why:

  • High-profile site. GroupGreeting boasts over 25,000 workplace clients, including major brands like Airbnb, Coca-Cola, and eBay, making it a lucrative target. Visitors are more inclined to trust links from a service they deem reputable. 
  • Seasonal traffic spikes. During holidays and other high-traffic periods, the site sees a surge in e-card use. Cybercriminals exploit this surge to maximize the spread of redirects and malware. 
  • Sophisticated persistence. Malicious code can hide in multiple files or within the database. Deleting one infected file may not remove all traces, allowing reinfection to occur. 
  • Potential consequences. Once the malware activates in a user’s browser, it typically redirects them to external domains that host secondary payloads. These payloads can range from phishing pages—designed to steal credentials—to more devastating forms of malware like info stealers or ransomware. Attackers often generate random or “tokenized” URLs, making it difficult for basic blocklists to keep pace. 

Prevention and remediation 

  • Timely patching and updates. Attacks often succeed by exploiting vulnerabilities in outdated CMS installations or plugins, underscoring the importance of regular updates. 
  • File integrity checks. Automated monitoring systems can detect and flag any unauthorized file changes, prompting swift action. 
  • User training. Educate users on potential risks and signs of compromise—even “safe” or well-known websites can be hijacked. 
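A file integrity check like the one suggested above can be as simple as hashing every file in the web root and diffing against a known-good baseline. This is a minimal sketch, not a production monitoring system:

```python
import hashlib
from pathlib import Path

def hash_tree(root: str) -> dict:
    """Map each file under root to its SHA-256 digest."""
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in Path(root).rglob("*") if p.is_file()
    }

def diff_baseline(baseline: dict, current: dict) -> dict:
    """Report files that were added, removed, or modified since baseline."""
    return {
        "added":    sorted(current.keys() - baseline.keys()),
        "removed":  sorted(baseline.keys() - current.keys()),
        "modified": sorted(k for k in baseline.keys() & current.keys()
                           if baseline[k] != current[k]),
    }
```

Run `hash_tree` after a clean deploy, store the result, and re-run it on a schedule; any unexpected entry under "added" or "modified" (a new script, a changed plugin file) is worth a look.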

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

US Cyber Trust Mark logo for smart devices is coming

The White House announced the launch of the US Cyber Trust Mark which aims to help buyers make an informed choice about the purchase of wireless internet-connected devices, such as baby monitors, doorbells, thermostats, and more.

The cybersecurity labeling program for wireless consumer Internet of Things (IoT) products is voluntary, but the participants include several major manufacturers, retailers, and trade associations for popular electronics, appliances, and consumer products. The companies and groups say they are committed to increasing cybersecurity for the products they sell.

Justin Brookman, director of technology policy at the consumer watchdog organization Consumer Reports, lauded the government effort and the companies that have already pledged their participation.

“Consumer Reports is eager to see this program deliver a meaningful U.S. Cyber Trust Mark that lets consumers know their connected devices meet fundamental cybersecurity standards,” Brookman said in a news release. “The mark will also inform consumers whether or not a company plans to stand behind the product with software updates and for how long.”

The Federal Communications Commission (FCC) proposed and created the labeling program and hopes it will raise the bar for cybersecurity across common devices, including smart refrigerators, smart microwaves, smart televisions, smart climate control systems, smart fitness trackers, and more.

The idea is that the Cyber Trust Mark logo will be accompanied by a QR code that consumers can scan for easy-to-understand details about the security of the product, such as the support period for the product and whether software patches and security updates are automatic.

The program is challenging because of the wide variety of consumer IoT products on the market that communicate over wireless networks. These products are built on different technologies, each with their own security pitfalls, so it will be hard to compare them, but at least the consumer will be able to find some basic—but important—information.

Even though participation is voluntary, manufacturers will be incentivized to make their smart devices more secure to keep the business of consumers who will choose only products that carry the Cyber Trust Mark.

As we explained recently, the “Internet of Things” is the now-accepted term to describe countless home products that connect to the internet so that they can be controlled and monitored from a mobile app or from a web browser on your computer. The benefits are obvious for shoppers. Thermostats can be turned off during vacation, home doorbells can be answered while at work, and gaming consoles can download videogames as children sleep.

And in 2024 we saw several mishaps, ranging from privacy risks to downright unacceptable abuse. So, if we can prevent these incidents from happening again, it is surely worth the trouble.

Testing whether a product deserves the Cyber Trust Mark will be done by accredited labs against established cybersecurity criteria from the National Institute of Standards and Technology (NIST).

The Cyber Trust Mark has been under construction for quite a while. It was approved in a unanimous, bipartisan vote last March, and we can expect the first logos to show up this year. Anne Neuberger, deputy national security adviser for cyber, revealed that there are plans to release another executive order stating that, beginning in 2027, the Federal government will only buy devices that carry the Cyber Trust Mark label.

For now, the program does not apply to personal computers, smartphones, and routers.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

AI-supported spear phishing fools more than 50% of targets

One of the first things everyone predicted when artificial intelligence (AI) became more commonplace was that it would assist cybercriminals in making their phishing campaigns more effective.

Now, researchers have conducted a scientific study into the effectiveness of AI-supported spear phishing, and the results line up with everyone’s expectations: AI is making it easier to do crimes.

The study, titled Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects, evaluates the capability of large language models (LLMs) to conduct personalized phishing attacks and compares their performance with human experts and AI models from last year.

To this end, the researchers developed and tested an AI-powered tool to automate spear phishing campaigns. They used AI agents based on GPT-4o and Claude 3.5 Sonnet to search the web for available information on a target and use it to craft highly personalized phishing messages.

With these tools, the researchers achieved a click-through rate (CTR) that marketing departments can only dream of: 54%. The control group received generic phishing emails and achieved a CTR of 12% (roughly 1 in 8 people clicked the link).

Another group was tested against emails generated by human experts, which proved to be just as effective as the fully automated AI emails, also achieving a 54% CTR. But the human experts did this at 30 times the cost of the automated AI tools.

The AI tools with human assistance outperformed both groups, scoring a 56% CTR at four times the cost of the fully automated AI tools. This means some expert human input can improve the CTR, but is it worth the time? Cybercriminals are proverbially lazy, preferring efficiency and minimal effort in their operations, so we don’t expect them to consider the extra 2% worth the investment.

The research also showed a significant improvement in the deceptive capabilities of AI models compared to last year, when studies found that AI models needed human assistance to perform on par with human experts.

The key to the success of a phishing email is the level of personalization that the AI-assisted method can achieve. The basis for that personalization is provided by an AI web-browsing agent that crawls publicly available information about the target.

Example from the paper showing how collected information is used to write a spear phishing email

Based on information found online about the target, they are invited to participate in a project that aligns with their interests and are presented with a link to a site where they can find more details.

The AI-gathered information was accurate and useful in 88% of cases and only produced inaccurate profiles for 4% of the participants.

Other bad news is that the researchers found that the guardrails which are supposed to stop AI models from assisting cybercriminals are not a noteworthy barrier for creating phishing mails with any of the tested models.

The good news is that LLMs are also getting better at recognizing phishing emails. Claude 3.5 Sonnet scored well above 90% with only a few false alarms and detected several emails that passed human detection, although it struggled with some phishing emails that would be clearly suspicious to most humans.

If you’re looking for guidance on how to recognize AI-assisted phishing emails, we’d like you to read: How to recognize AI-generated phishing mails. But the best defense is the general advice: don’t click on any links in unsolicited emails.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.