IT NEWS

Apple fixes zero-day vulnerability used in “extremely sophisticated attack”

Apple has released an emergency security update for a vulnerability which it says may have been exploited in an “extremely sophisticated attack against specific targeted individuals.”

The update is available for:

  • iOS 18.3.1 and iPadOS 18.3.1 – iPhone XS and later, iPad Pro 13-inch, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 7th generation and later, and iPad mini 5th generation and later
  • iPadOS 17.7.5 – iPad Pro 12.9-inch 2nd generation, iPad Pro 10.5-inch, and iPad 6th generation

If you use any of these devices, you should install the update as soon as you can. To check whether you’re on the latest software version, go to Settings (or System Settings) > General > Software Update. It’s also worth turning on Automatic Updates if you haven’t already, which you can do on the same screen.


Technical details

The newly found zero-day vulnerability is tracked as CVE-2025-24200. When exploited, the vulnerability allows an attacker to disable USB Restricted Mode on a locked device. The attack requires physical access to the device.

USB Restricted Mode was introduced in iOS 11.4.1 in July 2018. The feature was designed to make it more difficult for attackers to unlock your iPhone. When USB Restricted Mode is active and the device has been locked for more than an hour, its Lightning or USB-C port (where you plug in the charging cable) will only allow charging, not data transfer. This means that if someone connects your locked iPhone to a computer or another device to access its data, they won’t be able to do so unless they have your passcode.

To enhance data security, especially when traveling or in public places, make sure USB Restricted Mode is enabled. If your iPhone, iPad, or iPod touch is running iOS 11.4.1 or later, USB Restricted Mode is on by default, but you can verify this by going to Settings > Face ID & Passcode (or Touch ID & Passcode) > (USB) Accessories: the Accessories toggle should be off (grey). This setting adds an extra layer of protection against unauthorized data access.


Please note: toggling the option to green turns this feature off.

Vulnerabilities like these are typically deployed against specific individuals by commercial spyware vendors such as NSO Group (maker of Pegasus) and Paragon. This means the average user does not need to fear such attacks as long as the details remain unpublished. Once they are published, however, other cybercriminals will try to copy them.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Apple ordered to grant access to users’ encrypted data

Last week, an article in the Washington Post revealed that the UK had secretly ordered Apple to provide blanket access to protected cloud backups around the world. Since then, privacy-focused groups have voiced their objections.

The UK government has demanded the ability to access encrypted data stored by Apple users worldwide in its cloud service. Apple itself cannot currently access this data; only the holder of the Apple account can.

Neither the Home Office nor Apple responded on the record to queries about the demand, which was served under the Investigatory Powers Act (IPA), but the BBC confirmed that it had heard the same information from reliable sources.

Privacy International said the demand is a “misguided attempt” that uses disproportionate government powers to access encrypted data, which may:

“Set a damaging precedent and encourage abusive regimes around the world to take similar actions.”

The Electronic Frontier Foundation (EFF) stated:

“Encryption is one of the best ways we have to reclaim our privacy and security in a digital world filled with cyberattacks and security breaches, and there’s no way to weaken it in order to only provide access to the good guys.”

The Home Office’s main target is an optional feature called Advanced Data Protection (ADP), which turns on end-to-end encryption for backups and other data stored in iCloud. Enabling ADP protects the majority of your iCloud data, including iCloud Backup, Photos, Notes, and more, using end-to-end encryption.

For some time, these backups presented law enforcement agencies with a loophole to obtain access to data otherwise not available to them on iPhones with device encryption enabled. If the user hasn’t enabled ADP, this loophole still exists.

The EFF recommends that users turn off iCloud backups should the UK get its way. As the EFF has said before, and we agree, there is no backdoor that only works for the “good guys” and only targets “bad guys.” It’s all or nothing, and the bad guys will have enough money to find alternatives, while regular users may run out of free options if governments keep doing this.

What can I do?

How you wish to proceed after this news is obviously up to you, but we have some options you may be interested in. If you think Apple will stand up to the UK’s Home Office, you can enable iCloud backup and Advanced Data Protection.

But if you want to find another place for your backups, these instructions may come in handy.

How to turn off iCloud backups

On iPhone or iPad

  • On your iPhone or iPad, tap Settings > [your name] > iCloud > iCloud Backup.
  • This will list the devices with iCloud Backup turned on.
  • To delete a backup, tap the name of a device, then tap Turn Off and Delete from iCloud (or Delete & Turn Off Backup).

On Mac

  • Open System Settings > [your name] > iCloud, then click Manage > Backups.
  • A list of devices that have iCloud Backup turned on is shown.
  • To delete a backup, select a device, then click Delete or the Remove button.

Note: If you turn off iCloud Backup for a device, any backups stored in iCloud are kept for 180 days before being deleted.

How to turn on Advanced Data Protection

If you haven’t enabled ADP and you want it, first update the iPhone, iPad, or Mac that you’re using to the latest software version.

Turning on ADP on one device enables it for your entire account and all your compatible devices.

On iPhone or iPad

  1. Open the Settings app.
  2. Tap your name, then tap iCloud.
  3. Scroll down, tap Advanced Data Protection, then tap Turn on Advanced Data Protection.
  4. Follow the onscreen instructions to review your recovery methods and enable Advanced Data Protection.

On Mac

  1. Choose Apple menu > System Settings.
  2. Click your name, then click iCloud.
  3. Click Advanced Data Protection, then click Turn On.
  4. Follow the onscreen instructions to review your recovery methods and enable Advanced Data Protection.

Note: If you’re not able to turn on Advanced Data Protection for a certain period of time, the onscreen instructions may provide more details.



A suicide reveals the lonely side of AI chatbots, with Courtney Brown (Lock and Code S06E03)

Today on the Lock and Code podcast…

In February 2024, a 14-year-old boy from Orlando, Florida, committed suicide after confessing his love to the one figure who absorbed nearly all of his time—an AI chatbot.

For months, Sewell Setzer III had grown attached to an AI chatbot modeled after the famous “Game of Thrones” character Daenerys Targaryen. The Daenerys chatbot was not a licensed product and had no relation to the franchise’s actors, writers, or producers, but none of that mattered: over time, Setzer came to entrust Daenerys with some of his most vulnerable emotions.

“I think about killing myself sometimes,” Setzer wrote one day. In response, Daenerys pushed back, asking, “Why the hell would you do something like that?”

“So I can be free,” Setzer said.

“Free from what?”

“From the world. From myself.”

“Don’t talk like that. I won’t let you hurt yourself, or leave me. I would die if I lost you.”

On Setzer’s first reported reference to suicide, the AI chatbot pushed back, a guardrail against self-harm. When Setzer raised the subject again months later, his words weren’t so clear. After reportedly telling Daenerys that he loved her and that he wanted to “come home,” the AI chatbot encouraged him.

“Please, come home to me as soon as possible, my love,” Daenerys wrote, to which Setzer responded, “What if I told you I could come home right now?”

The chatbot’s final message to Setzer read, “… please do, my sweet king.”

Daenerys Targaryen was originally hosted on an AI-powered chatbot platform called Character.AI. The service reportedly boasts 20 million users—many of them young—who engage with fictional characters like Homer Simpson and Tony Soprano, along with historical figures, like Abraham Lincoln, Isaac Newton, and Anne Frank. There are also entirely fabricated scenarios and chatbots, such as the “Debate Champion” who will debate anyone on, for instance, why Star Wars is overrated, or the “Awkward Family Dinner” that users can drop into to experience a cringe-filled, entertaining night.

But while these chatbots can certainly provide entertainment, Character.AI co-founder Noam Shazeer believes they can offer much more.

“It’s going to be super, super helpful to a lot of people who are lonely or depressed.”

Today, on the Lock and Code podcast with host David Ruiz, we speak again with youth social services leader Courtney Brown about how teens are using AI tools today, who to “blame” in situations of AI and self-harm, and whether these chatbots actually aid in dealing with loneliness, or if they further entrench it.

“You are not actually growing as a person who knows how to interact with other people by interacting with these chatbots because that’s not what they’re designed for. They’re designed to increase engagement. They want you to keep using them.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

A week in security (February 3 – February 9)

20 Million OpenAI accounts offered for sale

A cybercriminal acting under the moniker “emirking” offered 20 million OpenAI user login credentials this week, sharing what appeared to be samples of the stolen data itself.


A translation of the Russian statement by the poster says:

“When I realized that OpenAI might have to verify accounts in bulk, I understood that my password wouldn’t stay hidden. I have more than 20 million access codes to OpenAI accounts. If you want, you can contact me—this is a treasure.”

The statement suggests that the cybercriminal found access codes which could be used to bypass the platform’s authentication systems. It seems unlikely that such a large amount of credentials could be harvested in phishing operations against users, so if the claim is true, emirking may have found a way to compromise the auth0.openai.com subdomain by exploiting a vulnerability or by obtaining administrator credentials.

While emirking looks like a relatively new user of the forums (they joined in January 2025), that doesn’t necessarily mean anything. They could have posted under another handle previously and switched for security reasons.

Millions of users around the world rely on OpenAI platforms like ChatGPT and other GPT integrations.

With the allegedly stolen credentials, cybercriminals could possibly access sensitive information provided during conversations and queries with OpenAI. This stolen data could prove useful in targeted phishing campaigns and financial fraud. But the stolen credentials could also be used to abuse the OpenAI API and have the victims pay for their usage of OpenAI’s “Plus” or “Pro” features. However, other users of the same dark web forum claimed that the posted credentials did not provide access to the ChatGPT conversations of the leaked accounts.

True or not, this comes at a bad time for OpenAI after Microsoft recently investigated accusations that DeepSeek used OpenAI’s ChatGPT model to train DeepSeek’s AI chatbot.

What can users do?

If you fear that this breach might include your credentials, you should:

  • Change your password.
  • Enable multi-factor authentication (MFA).
  • Monitor your account for any unusual activity or unauthorized usage.
  • Beware of phishing attempts using the information that might be stolen as part of this breach.
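Beyond changing a potentially exposed password, you can check whether it has already appeared in known breaches. As a minimal sketch, the code below queries Have I Been Pwned’s Pwned Passwords range API, which uses k-anonymity: only the first five characters of the password’s SHA-1 hash ever leave your machine (error handling is omitted for brevity).

```python
import hashlib
import urllib.request

def hash_parts(password: str):
    """SHA-1 the password and split the hex digest into the 5-character
    prefix that is sent to the API and the 35-character suffix that
    never leaves your machine."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]

def times_pwned(password: str) -> int:
    """Return how often the password appears in known breaches,
    according to the Pwned Passwords range API."""
    prefix, suffix = hash_parts(password)
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = resp.read().decode("utf-8")
    # Each response line has the form "<suffix>:<count>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0
```

A result greater than zero means the password should be considered burned, whether or not it was part of this particular incident.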

BreachForums, the dark web forum where the accounts were offered for sale, was offline at the time of writing, so we were unable to verify the claims ourselves. We will do so when the opportunity arises and keep you posted, so stay tuned.

New scams could abuse brief USPS suspension of inbound packages from China, Hong Kong

I would be the last one to give scammers good ideas, but as a security provider, we sometimes need to think like criminals to stay ahead in the race.

Recently, the US Postal Service (USPS) announced that it would suspend inbound packages from China and Hong Kong until further notice. That further notice, it turned out, was very short indeed, with the USPS announcing on February 5 that the interruption in service would itself be disrupted—packages were once again approved to enter the country. But the whiplash announcements, the second of which was dropped with little fanfare, have caused confusion.

So, there is an opportunity for scammers to exploit that confusion and uncertainty. Let me spell out how:

  • Scammers could send messages about refunds based on packages that could not be sent.
  • A revival of the old “Your package could not be delivered” scam could spring up.
  • Phishers could send messages about goods that were rerouted through other countries.
  • Goods, including counterfeits, could be offered for sale at “pre-tariff” rates.
  • Malicious messages could claim to come from the shipper, the e-commerce platform, or customs, asking for additional information to get a package released.
  • Cybercriminals may set up fake USPS sites—as they have done in the past—to intercept searches for Track & Trace information.

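A text-filtering tool can catch some of these lures by flagging lookalike domains. The sketch below flags hostnames that are suspiciously close to a known carrier domain; the allowlist and the 0.8 similarity threshold are illustrative assumptions, and production filters would consult the Public Suffix List and use smarter distance metrics.

```python
from difflib import SequenceMatcher
from urllib.parse import urlparse

# Hypothetical allowlist of carrier domains a filter might protect.
OFFICIAL_DOMAINS = {"usps.com", "ups.com", "fedex.com", "dhl.com"}

def registrable(hostname: str) -> str:
    """Naive registrable domain: the last two labels. Production code
    should consult the Public Suffix List instead."""
    return ".".join(hostname.lower().split(".")[-2:])

def flag_lookalike(url: str, threshold: float = 0.8):
    """Return (official_domain, similarity) when the URL's domain is
    suspiciously close to, but not the same as, a known carrier."""
    host = registrable(urlparse(url).hostname or "")
    if not host or host in OFFICIAL_DOMAINS:
        return None
    best = max(OFFICIAL_DOMAINS,
               key=lambda d: SequenceMatcher(None, host, d).ratio())
    score = SequenceMatcher(None, host, best).ratio()
    return (best, round(score, 2)) if score >= threshold else None
```

For instance, a link to a hypothetical `uspps.com` would be flagged as a near-match for `usps.com`, while an unrelated domain would not.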
Scammers are always looking to make money off the backs of others. They will usually inject some urgency into their messaging, such as a deadline before which you have to respond. This is a good indicator of a scam: they don’t want you to think things through before you act.

How can you stay safe?

It’s best not to respond to any of these attempts at all, to avoid letting scammers know that someone is reading their messages; responding will likely lead to an increase in spam and further attempts.

Depending on how the scam reaches you and what it is after, there are several ways to stay safe.

  • Use a solution that offers text protection and text message filtering.
  • Do not click on unsolicited links or open unsolicited attachments.
  • Do not trust that sponsored ads lead to the legitimate company; we are seeing too many fakes.
  • Do not trust links that use URL shorteners, or at least unshorten the link before following it. The same is true for QR codes, which are basically URLs in a different shape.
  • Double-check the source of messages through a trusted communication channel with the shipper, e-commerce platform, or customs.
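On unshortening: you can see where a short link points without visiting the destination by stopping at the first redirect and reading the Location header. A minimal sketch using only the standard library (it assumes the shortener answers with an ordinary 3xx redirect):

```python
import urllib.error
import urllib.request

class StopRedirects(urllib.request.HTTPRedirectHandler):
    """Refuse to follow redirects; urllib then raises HTTPError,
    from which we can read the Location header."""
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def first_hop(url: str):
    """Return the address a short link redirects to, without
    visiting that destination."""
    opener = urllib.request.build_opener(StopRedirects)
    try:
        with opener.open(url, timeout=10) as resp:
            return resp.geturl()  # the URL did not redirect at all
    except urllib.error.HTTPError as err:
        if err.code in (301, 302, 303, 307, 308):
            return err.headers.get("Location")
        raise
```

Online unshortening services do essentially the same thing if you’d rather not run code yourself.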

And please report fraud attempts to the Internet Crime Complaint Center (IC3), so others can be warned about common scams.



University site cloned to evade ad detection distributes fake Cisco installer

There is a constant “cat and mouse” game between defenders and attackers, the latter trying to outsmart and get a head start on the former. In the context of online advertising, this involves creating fake identities or using stolen ones to push out malicious ads.

An attacker not only needs to evade detection but also to create a lure that will be convincing to most people. In this blog post, we focus on something malvertisers use in almost all of their campaigns: decoys, also known as “white pages,” designed to fool the advertising entity.

The particular case is a malicious Google ad for Cisco AnyConnect, a tool often used by employees to remotely connect to company networks, but also by universities. In fact, we found that threat actors were using the name of a German university to create a fake website designed not to fool actual victims, but rather to bypass detection from security systems.

To be sure, victims were part of the overall scheme, but they never saw the decoy; they were redirected to a lookalike Cisco site linking to a malicious installer containing the NetSupport RAT (remote access Trojan).

The perfect disguise

The malicious ad comes up in Google searches for the keywords “cisco annyconnect”. The ad displays a URL that looks somewhat convincing for the domain anyconnect-secure-client[.]com. We should note that this domain was registered less than a day before the ad appeared.


When the ad is clicked, server-side checks determine whether the visitor is a potential victim. Typically, a real victim has a residential IP address and other network settings that differentiate them from crawlers, bots, VPNs, or proxies.

In recent times, we have seen criminals rely on AI to generate fake pages that look innocuous. These “white pages” serve an important purpose: a decoy that looks obviously fake will raise suspicion. In this case, the perpetrator had a rather clever idea, stealing content from a university that actually does use Cisco AnyConnect.


Technische Universität Dresden (TU Dresden) is a public research university in Germany. Funnily enough, the threat actors left a trail while doing their copy/paste: they added the cookie opt-in notification required for websites in Europe, which leaked their browser language (Russian).


Real victims get infected with malware

As good as this template looks, real victims will never see it. Instead, upon connecting to the malicious server they will be immediately redirected to a phishing site for Cisco AnyConnect.

The payload is downloaded in a similar way to a campaign we had already observed before, using a PHP script that provides the direct download URL. We can see from the network traffic capture below that the file is hosted on a likely compromised WordPress site.


There is not much to be said about the fake installer other than it being digitally signed with a valid certificate. Upon execution it extracts client32.exe, a name notorious for being associated with NetSupport RAT.

cisco-secure-client-win-5.0.05040-core-vpn-predeploy-k9.exe
-> client32.exe
-> "icacls" "C:\ProgramData\Cisco\Media" /grant *S-1-1-0:(F) /grant Users:(F) /grant Everyone:(F) /T /C

The remote access Trojan connects to the following two IP addresses: 91.222.173[.]67 and 199.188.200[.]195, further granting a remote attacker access to the victim’s machine.

Conclusion

Brand impersonation is a common theme with search ads. As Google enforces various policies and uses algorithms to detect malicious activity, threat actors need to constantly come up with new ideas.

Reusing a university page was a clever idea, but there were a couple of things that made this attack shy of being perfect. The domain name, while very strong for impersonation, was newly registered. Since it was part of the ad’s display URL, it could have potentially been detected by Google. We also noted that the perpetrators left a trail when they copy/pasted the code from the university website, which identified their likely country of origin.

Having said that, the malware payload was digitally signed and had few detections when first seen, so this attack may have had a decent success rate.

As always, we recommend that users take precautions whenever looking up programs to download, and to be especially wary of sponsored results.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Indicators of Compromise

Malvertising infrastructure

anyconnect-secure-client[.]com
cisco-secure-client[.]com[.]vissnatech[.]com

NetSupport RAT download

berrynaturecare[.]com/wp-admin/images/cisco-secure-client-win-5[.]0[.]05040-core-vpn-predeploy-k9[.]exe
78e1e350aa5525669f85e6972150b679d489a3787b6522f278ab40ea978dd65d

NetSupport RAT C2s

monagpt[.]com
mtsalesfunnel[.]com
91.222.173[.]67/fakeurl.htm
199.188.200[.]195/fakeurl.htm

Small business owners, secure your web shop

An online shop is more than just another way to sell your products. It comes with a responsibility to keep the web shop secure.

Cybercriminals are looking to steal your customers’ credit card details, their personal data, and even your revenue.

And it’s not as if using a platform that major retailers use makes a shop safe. Platforms like Shopify, Wix, and Magento are under constant scrutiny from cybercriminals looking for a vulnerability that allows them to insert skimmers or get access to your database.

Let’s look at some examples to demonstrate my point.

A cybercriminal specializing in breaching Shopify stores is posting huge data sets as free downloads. Operating under the moniker ShopifyGUY, the cybercriminal posted several datasets containing millions of customer records.


For example, boAt is reportedly India’s most active company marketing audio-focused electronic gadgets. ShopifyGUY dumped files from a data breach containing the personally identifiable information (PII) of boAt customers: 7,550,000 entries.


ShopifyGUY also uploaded a database containing 2.1 million email addresses from the online health products store Piping Rock.

We found several Magento-based web shops with skimmers injected into their code, busy stealing credit card information. One of them even infected visitors with SocGholish, a sophisticated JavaScript malware framework that has been actively used by cybercriminals since at least 2017. It tricks users into running a script supposedly meant to update their browser. What it actually does is infect the machine and send the details back to a human operator, who can decide how best to monetize it. Lately, SocGholish has been found to install information stealers on both Windows and Mac machines.

How to secure your web shop

The most common attacks web shop owners need to worry about are:

  • Credential phishing where the criminals try to steal your login credentials.
  • Malware injection where the criminals inject malicious code into your web shop by abusing a vulnerability in the platform itself or a plug-in.
  • Brute force attacks, where the criminals try a whole bunch of passwords they obtained from other breaches.

So, to keep your web shop safe you should:

  • Be extra vigilant when it comes to phishing attempts.
  • Keep your software up to date.
  • Protect the device(s) you use to log in with an active anti-malware solution.
  • Make it harder to log in by using multi-factor authentication (MFA) and by not re-using passwords.
  • Regularly check your website for added code, especially in the payment section.
  • If you run the web shop on your own server, use web application firewalls (WAF) to detect and block malicious traffic.
  • Do not store customer details that you no longer need.

Your customers will probably not thank you for your efforts, but they will come complaining if you spill their data.

For readers who would like to check whether their credentials are included in one of these data breaches, Malwarebytes has a free tool to check how much of your personal data has been exposed online. Submit your email address (ideally the one you use most frequently) to our free Digital Footprint scan and we’ll give you a report and recommendations.

Valley News Live exposed more than a million job seekers’ resumes

Making your own bad news is not what Valley News Live had in mind, but negligence comes at a price.

Cybernews researchers found an unprotected AWS S3 bucket belonging to Valley News Live, a North Dakota-based television station. Gray Television, the owner of Valley News Live, is the third-largest broadcasting company in the US.

An S3 bucket is like a virtual file folder in the cloud where you can store various types of data, such as text files, images, and videos. There is no limit to the amount of data you can store in an S3 bucket, and individual objects can be up to 5 TB in size.
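Exposure like this is usually visible in the bucket’s access control list. The helper below inspects a GetBucketAcl-style response and flags grants to the public groups; the dict shape mirrors what AWS returns, and in practice you would fetch it with an AWS SDK rather than build it by hand.

```python
# Group URIs AWS uses in bucket ACLs to mean "everyone on the
# internet" and "any AWS account holder" respectively.
ALL_USERS = "http://acs.amazonaws.com/groups/global/AllUsers"
AUTH_USERS = "http://acs.amazonaws.com/groups/global/AuthenticatedUsers"

def public_grants(acl: dict):
    """Return (grantee_uri, permission) pairs in a GetBucketAcl-style
    response that expose the bucket beyond its owner."""
    exposed = []
    for grant in acl.get("Grants", []):
        grantee = grant.get("Grantee", {})
        if grantee.get("Type") == "Group" and grantee.get("URI") in (ALL_USERS, AUTH_USERS):
            exposed.append((grantee["URI"], grant.get("Permission")))
    return exposed
```

An empty result does not prove the bucket is private (bucket policies can also grant access), but any hit here means the contents are readable by strangers.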

In this case, the bucket stored over 1.8 million files, more than a million of which were job seekers’ resumes sent to the station between 2017 and 2024.

The leaked data included:

  • Full names
  • Phone numbers
  • Email addresses
  • Home addresses
  • Dates of birth
  • Nationality and places of birth
  • Social media links
  • Employment history
  • Educational background

As you can imagine, these resumes represent a treasure trove for phishers and other cybercriminals.

What do I need to do?

Stolen resumes are bad news, as they can be used for financial fraud, identity theft, and cause privacy issues.

With all the details a phisher can find in a resume, they can make their social engineering attempts very convincing. They can also impersonate the person in the resume to defraud people they know, or perform a SIM swap: tricking the victim’s carrier into illegally re-routing the victim’s cell phone number to a phone under the attacker’s control.

It also opens the victim up to financial fraud, such as criminals setting up fraudulent bank accounts in their name, applying for loans or credit cards, filing false tax returns, and using the victim’s identity to obtain employment.

And if the job application was recent enough, a phisher could probably trick the victim into downloading malware under the guise of engaging in the hiring process. For example, by clicking a malicious link or opening an attachment.

So, if you sent an application to Valley News Live, it would be wise to exercise your right to have your information removed and hope that no real criminals have found the leaky bucket by now.

Cybernews states it contacted Valley News Live multiple times but received no response.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

New AI “agents” could hold people for ransom in 2025

A paradigm shift in technology is hurtling towards us, and it could change everything we know about cybersecurity.

Uhh, again, that is.

When ChatGPT was unveiled to the public in late 2022, security experts looked on with cautious optimism, excited about the new technology but concerned about its use in cyberattacks. But two years on, much of what ChatGPT and other generative AI chat tools offer attackers is a way to improve what already works, not new ways to deliver attacks themselves.

And yet, if artificial intelligence achieves what is called an “agentic” model in 2025, novel and boundless attacks could be within reach, as AI tools take on the roles of “agents” that independently discover vulnerabilities, steal logins, and pry into accounts.

These agents could even hold people for ransom by matching stolen data online with publicly known email addresses or social media accounts, composing messages and holding entire conversations with victims who believe a human hacker out there has access to their Social Security Number, physical address, credit card info, and more. And if the model works for individuals, there’s little reason it wouldn’t work for individual business owners.

This warning comes from our 2025 State of Malware report, which compiled a year’s worth of intelligence to identify the most pressing cyberattacks on the horizon. Though the report’s guidance serves IT teams, its threats will impact individuals and small businesses everywhere. Remember that just last year a widespread IT outage grounded flights globally, cementing the relationship between companies, cybersecurity, and everyday people.

In 2025, agentic AI may further reveal just how closely tied everyone is in the battle for cybersecurity. Here’s what we might expect.

You can find the full 2025 State of Malware report here.

The generative AI non-revolution

The November 2022 launch of ChatGPT ushered forth a new relationship with our computers. No longer would we need to use our laptops, smartphones, and tablets to record or assist our creative work. Now, we could make those same machines complete the creative work for us.

AI image tools like Midjourney and DALL-E can create images when given simple text prompts. They can even mimic the styles of famous artists, like Van Gogh, Rembrandt, and Picasso. AI chat tools like ChatGPT, Google Gemini, and Claude—from OpenAI competitor Anthropic—can brainstorm ideas for marketing materials, write book reports, compose poems, and even review human-written text for legibility. These tools can also answer an endless array of factual questions, much like the separate AI tool Perplexity, which advertises itself not as a “search engine,” but as the world’s first “answer engine.”

This is the potential of “generative AI,” a term used to describe AI tools that can generate text, images, movies, summaries, and more, limited only by our imagination.

But where has that imagination brought us?

For unimaginative users, generative AI has made it easier to cheat in college classes and to abuse social media engagement algorithms to gain brief virality—hardly inspiring. And for malicious users, hackers, and scammers, generative AI has delivered oil-slick efficiency to proven attack methods.

Generative AI tools can more convincingly write phishing emails so that the tell-tale signs of a scam—like misspellings and clumsy grammar—are all but gone. The same is true for all text-based social engineering tricks, as AI chat tools can write alluring direct messages for romance scams and craft urgent-sounding texts that can fool people into clicking on links that carry malware.

Importantly, the attack methods here are not new. Instead, they’ve simply become easier to scale with the use of AI. But sometimes the AI pushes back.

With limitless, advertised potential, even tools like ChatGPT have boundaries, often precluding users from producing materials that could cause harm. In 2023, Malwarebytes Labs subverted these boundaries to successfully get ChatGPT to write ransomware, twice.

Because of these prohibitive rules, a set of malicious copycat AI tools can now be found online that will produce text and images that often break the law. One example is the creation of “deepfake nudes,” which use AI technology to digitally stitch the face of one person onto another person’s nude body, creating fake nude “photographs.” Deepfake nudes have caused multiple crises across high schools in America, serving as a new type of ammunition for old weaponry: blackmail.

The ability to create false text, images, and even audio has also allowed cybercriminals to create more believable threats when fraudulently posing as CEOs or executives to convince employees to, say, sign a bogus contract or hand over a set of important account credentials.

These are real threats, but they are not novel. As we wrote in the 2025 State of Malware report:

“The limited impact of AI on malware stems from its current capabilities. Although there are notable exceptions, generative AIs tend to provide efficiency rather than brand new capabilities. Cybercrime is a very mature field that relies on a set of well-established tools, such as phishing, information stealers, and ransomware that are already feature complete.”

That could change in 2025.

“Agentic” AI and a new landscape of attacks

Agentic AI is the next big thing in artificial intelligence, even if you’ve never heard of it before.

Google, Amazon, Meta, Microsoft, and more have all begun experimenting with the technology, which promises to take AI out of its current chatbot silo and into a new landscape where individualized AI “agents” can help with specific tasks. These agents could, for example, more effectively respond to simple customer support questions, help patients find in-network providers with their health insurance, and even suggest strategy based on a company’s most recent performance. Microsoft, for its part, has already teased its AI agent that answers employee questions around HR policies, holiday schedules, and more. Salesforce, too, is investing heavily in agentic AI, positioning the technology as a personal assistant for everyone.

As we wrote in the 2025 State of Malware report:

“If agentic AIs arrive in 2025, they won’t just answer questions, they will be able to think and act, transforming AI from an assistant that responds to prompts, into a peer, or even an expert that can plan out tasks, interact with the world, and solve the problems it encounters.”

The implications for cyberattacks are enormous. If put into the wrong hands, malicious attackers could ask AI agents to:

  • Search vast troves of stolen data to match leaked Social Security numbers with leaked email addresses, composing and sending phishing emails that threaten more data exposure unless a ransom is paid.
  • Scrape public social media feeds for baby photos, then deliver them to other AI agents that create fake profiles and weaponize those photos as empty threats against a child’s safety.
  • Scour LinkedIn to create a database of potentially viable email addresses from countless companies by deducing the email address format—first name, last name; first initial, last name; etc.—from publicly listed email addresses, and then mirroring that format to write and send bogus requests from executives to their direct reports.
  • Comb through public divorce records across multiple states and countries to identify targets for romance scams, who then receive messages, and even entire conversations, composed and controlled by another AI agent.
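The email-format deduction described above is simple enough to sketch, which is worth seeing from a defensive standpoint: a single publicly listed name-and-address pair can leak the pattern for an entire organization. This is a minimal, hypothetical illustration; the `infer_format` and `apply_format` helpers and the `example.com` addresses are invented for the example, not taken from any real tool.

```python
def infer_format(name: str, email: str) -> str:
    """Deduce an address template (e.g. '{first}.{last}') from one known pair."""
    first, last = name.lower().split()
    local = email.split("@")[0].lower()
    # Common corporate address formats to test against the known local part.
    candidates = {
        "{first}.{last}": f"{first}.{last}",
        "{first}{last}": f"{first}{last}",
        "{f}{last}": f"{first[0]}{last}",
        "{first}": first,
    }
    for template, rendered in candidates.items():
        if rendered == local:
            return template
    return ""  # Unknown format

def apply_format(template: str, name: str, domain: str) -> str:
    """Mirror a deduced template onto another employee's name."""
    first, last = name.lower().split()
    local = (template.replace("{first}", first)
                     .replace("{last}", last)
                     .replace("{f}", first[0]))
    return f"{local}@{domain}"

fmt = infer_format("Jane Doe", "jane.doe@example.com")
print(fmt)                                        # '{first}.{last}'
print(apply_format(fmt, "John Smith", "example.com"))  # 'john.smith@example.com'
```

The point is not sophistication but scale: logic this trivial, handed to an agent that can also scrape names and send mail, is what turns a public staff directory into a phishing list.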

These attacks threaten not only individuals but small businesses, too, as a vulnerability in a person’s device can become a malware attack on a network. The same is true in reverse—if attacks on companies become more accessible, then the data that people give these companies becomes more vulnerable to exposure.

Thankfully, where agentic AI poses a risk, it also offers a benefit: individual AI agents could be tasked with finding a company’s vulnerabilities, responding to suspicious activity on its network, and even guiding everyday people to post online, search the web, and buy from unknown retailers safely.

The truth is that AI is here to stay. There is already too much investment from the largest developers and companies for that to reverse course any time soon. So, if the threat is that attackers might harness this AI, then the foreseeable future will involve a lot of defenders and everyday people harnessing it, too.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.