IT NEWS

Claude AI chatbot abused to launch “cybercrime spree”

Anthropic—the company behind the widely renowned coding chatbot, Claude—says it uncovered a large-scale extortion operation in which cybercriminals abused Claude to automate and orchestrate sophisticated attacks.

The company issued a Threat Intelligence report in which it describes several instances of Claude abuse. In the report it states that:

“Cyber threat actors leverage AI—using coding agents to actively execute operations on victim networks, known as vibe hacking.”

This means that cybercriminals found ways to exploit vibe coding by using AI to design and launch attacks. Vibe coding is a way of creating software using AI, where someone simply describes what they want an app or program to do in plain language, and the AI writes the actual code to make it happen.

The process is much less technical than traditional programming, making it easy and fast to build applications, even for those who aren’t expert coders. For cybercriminals this lowers the bar for the technical knowledge needed to launch attacks, and helps the criminals to do it faster and at a larger scale.

Anthropic provides several examples of Claude’s abuse by cybercriminals. One of them was a large-scale operation which potentially affected at least 17 distinct organizations in just the last month across government, healthcare, emergency services, and religious institutions.

The people behind these attacks integrated the use of open source intelligence tools with an “unprecedented integration of artificial intelligence throughout their attack lifecycle.”

This systematic approach resulted in the compromise of personal records, including healthcare data, financial information, government credentials, and other sensitive information.

The primary goal of the cybercriminals is the extortion of the compromised organizations. The attacker created ransom notes to compromised systems demanding payments ranging from $75,000 to $500,000 in Bitcoin. But if the targets refuse to pay, the stolen personal records are bound to be published or sold to other cybercriminals.

Other campaigns stopped by Anthropic involved North Korean IT worker schemes, Ransomware-as-a-Service operations, credit card fraud, information stealer log analysis, a romance scam bot, and a Russian-speaking developer using Claude to create malware with advanced evasion capabilities.

But the case in which Anthropic found cybercriminals attack at least 17 organizations represents an entirely new phenomenon where the attacker used AI throughout the entire operation. From gaining access to the target’s systems to writing the ransomware notes—for every step Claude was used to automate this cybercrime spree.

Anthropic deploys a Threat Intelligence team to investigate real world abuse of their AI agents and works with other teams to find and improve defenses against this type of abuse. They also share key findings of the indicators with partners to help prevent similar abuse across the ecosystem.

Anthropic did not name any of the 17 organizations, but it stands to reason we’ll learn who they are sooner or later. One by one, when they report data breaches, or as a whole if the cybercriminals decide to publish a list.

Check your digital footprint

Data breaches of organizations that we’ve given our data to happen all the time, and that stolen information is often published online. Malwarebytes has a free tool for you to check how much of your personal data has been exposed—just submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scanner and we’ll give you a report and recommendations.

Developer verification: a promised lift for Android security

To reduce the number of harmful apps targeting Android users, Google has announced that certified Android devices will require all apps to be registered by verified developers in order to be installed.

But this new measure is not just about malware that’s found on the Google Play Store, it’s mainly about sideloaded apps (apps downloaded from outside the official Google Play Store).

Since August 31, 2023, apps on the Play Store already were subject to a D-U-N-S (Data Universal Numbering System) number requirement. Google says this has helped reduce the number of cybercriminals exploiting anonymity to distribute malware, commit financial fraud, and steal sensitive data.

To broaden this success, Google intends to start sending out invitations gradually starting October 2025, before opening it up to all developers in March 2026. In September 2026, the requirements go into effect in Brazil, Indonesia, Singapore, and Thailand. At this point, any app installed on a certified Android device in these regions must be registered by a verified developer. The requirements will then be rolled out globally.

This initiative, branded as ‘Developer verification,’ aims to combat the widespread problem of malware from sideloaded apps. Google says its research shows that 50 times more malware comes from sideloaded sources than from Google Play itself.

So, the new rules extend to everyone distributing Android apps, including those hosting them on third-party app stores or offering APK downloads directly. For developers who distribute their apps solely through the Google Play Store there will not be much of a change.

Yet, while legitimate developers will tell you how hard it is to get their apps accepted into the Google Play Store, cybercriminals manage to sneak in their malicious apps anyway.

For a full understanding of the new requirement, we’ll need to explain what “certified Android devices” are.

A definition for a certified Android device is: an Android product—such as a smartphone, tablet, smart TV, or streaming box—that has passed a rigorous series of Google security, compatibility, and performance tests, and is officially approved by Google. Certified devices run an official version of Android and have access to Google apps and the Play Store. Uncertified devices often lack these and may not receive updates or proper security support.

This is important to know because not all Android malware is limited to phones. Take for example, the BadBox botnet which also affects devices like TV streaming boxes, tablets, and smart TVs.

In practice, a certified device encompasses all mainstream devices from Samsung, Xiaomi, Motorola, OnePlus, Oppo, Vivo, and the Google Pixel line.

Reportedly, non-certified devices are those from Huawei, Amazon Fire tablets, and a set of Chinese TV boxes and smartphones that use heavily modified OS images.

Google encourages all developers to sign up for early access as the best way to prepare and stay informed.

 “Early participants will also get:

  • An invitation to an exclusive community discussion forum.
  • Priority support for these new requirements.
  • The chance to provide feedback and help us shape the experience.”

Whether these controls will be effective largely depends on enforcement and public awareness, but Google feels it marks real progress toward a safer mobile ecosystem. Let us know how you feel about this in the comments.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

More vulnerable stalkerware victims’ data exposed in new TheTruthSpy flaw

TheTruthSpy is at it again. A security researcher has discovered a flaw in the Android-based stalkerware that allows anyone to compromise any record in the system.

TheTruthSpy stalkerware is designed to be installed surreptitiously on a victim’s Android phone. It then monitors that phone’s activities and sends the information it gathers back to a central server. On Monday, TechCrunch revealed—not for the first time—that the servers are vulnerable to attack. It found that anyone can reset the password of any account on the app, meaning they could hijack anyone’s data.

The security researcher, Swarang Wade, demonstrated the vulnerability to TechCrunch by changing the passwords on several tests accounts. The publication isn’t revealing exactly how it was done, to prevent anyone from abusing the flaw.

TheTruthSpy gathers a lot of data about its victims. It provides the person who installed it with information about what calls or texts were made or received on the victim’s phone, and its location (harvested from the GPS), along with activities associated with messaging apps and files.

This isn’t the first time that TheTruthSpy has suffered from security issues:

This would all be very bad if people using the app knew that they were doing so, and that their personal usage data was stored online. But many of them are oblivious to the fact.

TheTruthSpy’s vendor, Vietnam-based 1Byte Software, warns that people must obtain consent before installing the app on someone else’s phone. However it also specifically advertises ‘stealth mode’, which makes it “completely invisible to users on phones/tablets where it’s installed.”

The software’s website touts its ability to spy on phone users as a way for parents to monitor and protect their children. That raises its own ethical questions, especially given the multiple data leaks. But that isn’t its only use. Abusers will use apps like these to monitor their current or ex-partners, or other stalking targets.

Once a victim has this installed on their phone without their knowledge, the installer can monitor their photos, social media interactions, emails, and internet browsing history. It will also record audio and log keystrokes without them being aware.

Van (Vardy) Thieu, owner of 1Byte Software, told TechCrunch that its source code was lost. He claimed to be building a new version from scratch, although TechCrunch’s reporters found that it was using the same vulnerable software library as the older version.

The software’s multiple bugs demonstrate just how dangerous it is to put this – or indeed any stalkerware app – on someone’s phone. The operators of these apps are often difficult to track down and hold accountable for their security issues.

How to check if you have stalkerware on your phone

What can you do if you suspect your phone might be infected with stalkerware? We think TechCrunch’s guide deserves a mention here, as does The Coalition Against Stalkerware, of which Malwarebytes is a founding member. The latter includes per-country links to organizations that help victims of domestic violence.

It is good to keep in mind however that by removing any stalkerware-type app, you will alert the person spying on you that you know the app is there.

Because the apps install under a different name and hide themselves from the user, it can be hard to find and remove them. That is where Malwarebytes for Android can help you.

  1. Open Malwarebytes on your Android.
  2. Open the app’s dashboard
  3. Tap Scan now
  4. It may take a few minutes to scan your device.

 If malware is detected you can act on it in the following ways:

  • Uninstall. The threat will be deleted from your device.
  • Ignore Always. The file detection will be added to the Allow List, and excluded from future scans. Legitimate files are sometimes detected as malware. We recommend reviewing scan results and adding files to Ignore Always that you know are safe and want to keep.
  • Ignore Once: A file has been detected as a threat, but you are not sure whether to add it to your Allow List or delete. This option will ignore the detection this time only. It will be detected as malware on your next scan.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

77 malicious apps removed from Google Play Store

Google has removed 77 malicious apps from the Google Play Store. Before they were removed, researchers at ThreatLabz discovered the apps had been installed over 19 million times.

One of the malware families discovered by the researchers is a banking Trojan known as Anatsa or TeaBot. This banking Trojan is a highly sophisticated Android malware, which focuses on stealing banking and cryptocurrency credentials.

Anatsa is a classic case of mobile malware rapidly adapting to security research progress. Its stealth tactics, exploitation of accessibility permissions, and ability to shift between hundreds of financial targets make it an ongoing threat for Android users worldwide.

Also found by the researchers were several types of adware. However, the largest chunk of malicious apps belonged to the Joker malware family, which is notorious for its stealthy behavior. It steals SMS messages, contacts, device info, and enrolls victims in unwanted premium services, which can result in financial losses.

The malware is installed like this:

  • It gets added to the Play Store as a benign app with useful and sought-after functionality (e.g. document readers, health trackers, keyboards, and photo apps).
  • Once installed, the app acts as a “dropper” which connects to a remote server for instructions and additional payloads, which often ends in the installation of information stealers.
  • Anatsa—specifically—uses several methods to avoid detection, such as a well-known Android APK ZIP obfuscator, and downloading each new chunk of code with a separate DES key.

Google says it picked up on the flaws and protected against these malware infections before the researchers published their report.

As a consequence, Google Play Protect may send users of the removed apps a push notification, giving them the option to remove the app from their device.

But don’t let that be your only line of defense. We found that Android users are more careful than iPhone users. Let’s keep that up!

How to protect your Android from malicious apps

Just because something is in the Google Play Store, there is no guarantee that it will remain a non-malicious app. So here are a few extra measures you can take:

  • Always check what permissions an app is requesting, and don’t just trust an app because it’s in the official Play Store. Ask questions such as: Do the permissions make sense for what the app is supposed to do? Why did necessary permissions change after an update? Do these changes make sense?
  • Occasionally go over your installed apps and remove any you no longer need.
  • Make sure you have the latest available updates for your device, and all your important apps (banking, security, etc.)
  • Protect your Android with security software. Your phone needs it just as much as your computer.

Malwarebytes for Android detects Anatsa as Trojan.Banker.CPL.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

AI browsers could leave users penniless: A prompt injection warning

Artificial Intelligence (AI) browsers are gaining traction, which means we may need to start worrying about the potential dangers of something called “prompt injection.”

Large language models (LLMs)—like the ones that power AI chatbots including ChatGPT, Claude, and Gemini—are designed to follow “prompts,” which are the instructions and questions that people provide when looking up info or getting help with a topic. In a chatbot, the questions you ask the AI are the “prompts.” But AI models aren’t great at telling apart the types of commands that are meant for their eyes only (for example, hidden background rules that come directly from developers, like “don’t write ransomware“) from the types of requests that come from users.

To showcase the risks here, the web browser developer Brave—which has its own AI assistant called Leo—recently tested whether it could trick an AI browser into reading dangerous prompts that harm users. And what the company found caused alarm, as they wrote in a blog this week:

“As users grow comfortable with AI browsers and begin trusting them with sensitive data in logged in sessions—such as banking, healthcare, and other critical websites—the risks multiply. What if the model hallucinates and performs actions you didn’t request? Or worse, what if a benign-looking website or a comment left on a social media site could steal your login credentials or other sensitive data by adding invisible instructions for the AI assistant?”

Prompt injection, then, is basically a trick where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.

What sets prompt injection apart from old-school hacking is that the weapon here is language, not code. Attackers don’t need to break into servers or look for traditional software bugs, they just need to be clever with words.

For an AI browser, part of the input is the content of the sites it visits. So, it’s possible to hide indirect prompt injections inside web pages by embedding malicious instructions in content that appears harmless or invisible to human users but is processed by AI browsers as part of their command context.

Now we need to define the difference between an AI browser and an agentic browser. An AI browser is any browser that uses artificial intelligence to assist users. This might mean answering questions, summarizing articles, making recommendations, or helping with searches. These tools support the user but usually need some manual guidance and still rely on the user to approve or complete tasks.

But, more recently, we are seeing the rise of agentic browsers, which are a new type of web browser powered by artificial intelligence, designed to do much more than just display websites. These browsers are designed to actually take over entire workflows, executing complex multi-step tasks with little or no user intervention, meaning they can actually use and interact with sites to carry out tasks for the user, almost like having an online assistant. Instead of waiting for clicks and manual instructions, agentic browsers can navigate web pages, fill out forms, make purchases, or book appointments on their own, based on what the user wants to accomplish.

For example, when you tell your agentic browser, “Find the cheapest flight to Paris next month and book it,” the browser will do all the research, compare prices, fill out passenger details, and complete the booking without any extra steps or manual effort—provided it has all the necessary details of course, which are part of the prompts the user feeds the agentic browser.

Are you seeing the potential dangers of prompt injections here?

What if my agentic browser gets new details while visiting a website? I can imagine criminals setting up a website with extremely competitive pricing just to attract visitors, but the real goal is to extract the payment information which the agentic browser needs to make purchases on your behalf. You could end up paying for someone else’s vacation to France.

During their research, Brave found that Perplexity’s Comet has some vulnerabilities which “underline the security challenges faced by agentic AI implementations in browsers.”

The vulnerabilities allow an attack based on indirect prompt injection, which means the malicious instructions are embedded in external content (like a website, or a PDF) that the browser AI assistant processes as part of fulfilling the user’s request. There are various ways of hiding that malicious content from a casual inspection. Brave uses the example of white text on a white background which AI browsers have no problem reading and a human would not see without closer inspection.

To quote a user on X:

“You can literally get prompt injected and your bank account drained by doomscrolling on reddit”

To prevent this type of prompt injection, it is imperative that agentic browsers understand the difference between user-provided instructions and web content processed to fulfill the instructions and treat them accordingly.

Perplexity has attempted twice to fix the vulnerability reported by Brave, but it still hasn’t fully mitigated this kind of attack as of the time of this reporting.

Safe use of agentic browsers

While it’s always tempting to use the latest gadgets this comes with a certain amount of risk. To limit those risks when using agentic browsers you should:

  • Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agentic browser can access and limit permissions where possible.
  • Verify sources before trusting links or commands: Avoid letting the browser automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects or unexpected input requests.
  • Keep software updated: Ensure the agentic browser and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
  • Use strong authentication and monitoring: Protect accounts connected to agentic browsers with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
  • Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Being aware is the first step to preventing exploitation.
  • Limit sensitive operations automation: Avoid fully automating high-stakes transactions or actions without manual review. Agentic browsers should assist, but critical decisions benefit from human oversight. For example: limit the amount of money it can spend without your explicit permission or always let it ask you to authorize payments.
  • Report suspicious behavior: If an agentic browser acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (August 18 – August 24)

How a scam hunter got scammed (Lock and Code S06E17)

This week on the Lock and Code podcast…

If there’s one thing that scam hunter Julie-Anne Kearns wants everyone to know, it is that no one is immune from a scam. And she would know—she fell for one last year.

For years now, Kearns has made a name for herself on TikTok as a scam awareness and education expert. Popular under the name @staysafewithmjules, Kearns makes videos about scam identification and defense. She has posted countless profile pictures that are used and repeated by online scammers across different accounts. She has flagged active scam accounts on Instagram and detailed their strategies. And, perhaps most importantly, she answers people’s questions.

In fielding everyday comments and concerns from her followers and from strangers online, Kearns serves as a sort of gut-check for the internet at large. And by doing it day in, day out, Kearns is able to hone her scam “radar,” which helps guide people to safety.

But last year, Kearns fell for a scam, disguised initially as a letter from HM Revenue & Customs, or HMRC, the tax authority for the United Kingdom.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Kearns about the scam she fell for and what she’s lost, the worldwide problem of victim blaming, and the biggest warning signs she sees for a variety of scams online.

“A lot of the time you think that it’s somebody who’s silly—who’s just messing about. It’s not. You are dealing with criminals.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Clickjack attack steals password managers’ secrets

Sometimes it can seem as though everything’s toxic online, and the latest good thing turned bad is here: Browser pop-ups that look like they’re trying to help or authenticate you could be programmed to steal data from your password manager. To make matters worse, most browser extension-based password managers are still vulnerable to the attack.

This issue affects password managers like 1Password, LastPass, NordPass, and Enpass. They’re online services that store all your access credentials in an encrypted vault, and they use browser extensions to automatically fill in those passwords on web forms when you need them. Because they use extensions, you have to install them separately in your browser.

These extension-based password managers are more secure than those built natively into your web browser in some ways. Browser-based password managers tend to encrypt information using your browser access credentials. Malicious infostealer software can steal the files and decrypt them easily when you’re already logged in.

Browser extension-based password managers store encrypted vaults in memory or in other locations on your computer. They auto-lock after activity and instead of using operating system-level encryption, they use a separate master password. But while they have their benefits, nothing’s ever completely safe.

Clickjacking’s back

At the DEFCON security conference this month, cybersecurity researcher Marek Tóth presented an attack that works on most browser extension-based password managers. It uses malicious code to manipulate the structure of the site in the browser, changing the way it looks and behaves.

Tóth, who was just demonstrating the attack to highlight the vulnerability, used this capability for a new version of an old attack called clickjacking. It persuades a victim to click on one thing on a web page but then uses that action to click something else.

Messing with the structure of the site enabled him to make certain things invisible. One of these is a drop-down selector that extension-based password managers use to select and fill in account login credentials.

He used this trick to put an invisible overlay on top of a seemingly legitimate clickable element on the screen. When the user clicks it, they’re actually clicking on the overlay—which is their password manager’s dropdown selector.

The result: the password manager gives up the victim’s secrets without their knowledge.

Think twice about what you click

What would a decoy popup look like? These days, thanks to regulations from the EU, web sites often throw up permission banners that ask you if you’re OK with them using cookies. Most of us just click ‘yes’, but no matter what you click, an attack like this could put you at risk. Or an attacker could use an authentication button, or a “This content is sensitive, click yes if you really want to see it” button. Or, given the recent push for age verification, an “Are you really 18?” button.

This attack can steal more than your login credentials. It can also pilfer other information stored in password managers, including credit card information, personal data like your name and phone number, passkeys (digital certificates which your computer can use instead of passwords), and time-based one-time passwords (TOTP). The latter are the login tokens your computer gets after you use authentication apps like Google Authenticator.

Tóth didn’t just release this out of the blue. He disclosed it to password manager companies ahead of time, but many addressed it only partly, and some not at all.

As of earlier this week, Dashlane, Keeper, NordPass, ProtonPass, and RoboForm had fixed the issue, according to Tóth. Bitwarden, Enpass, and Apple (which uses an iCloud password manager) were in the progress of fixing it. 1Password had classified it as ‘informative’ but hadn’t fixed it yet. LastPass had fixed the vulnerability for personal and credit card data, but hadn’t yet fixed the vulnerability for login credentials, passkeys, or TOTP data. LogMeOnce hadn’t replied at all.

Protect yourself

So, what can you do about this threat? Tóth provides the usual warnings about enabling automatic updates and ensuring you’re using the latest versions of the password manager products. The most secure protection is disabling the autofill feature that allows password managers to fill in web form fields without user intervention. Instead, you’d have to copy and paste your details manually.

Another more convenient option is to control autofill so that it only operates when you specifically click on the browser extension in your toolbar. On Chromium browsers like Edge and Google Chrome, that means going into your extension settings, selecting “site access,” and then selecting the “on click” option. Selecting this would stop malicious code stealing your credentials in the way Tóth describes.

And as always, think twice about what you’re clicking when you’re on any website, especially any less trustworthy ones.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Grok chats show up in Google searches

I’m starting to feel like a broken record, but I feel you should know that yet another AI has been found sharing private conversations so that Google was able to index them, and now they can be found in search results.

It’s déjà vu in the world of AI: another day, another exposé about chatbot conversations being leaked, indexed, or made public. We have written about the share option in ChatGPT that was swiftly removed because users seemed oblivious to the consequences, and about Meta AI first making conversations discoverable via search engines and later exposing them due to a bug. In another leak we looked at an AI bot used by McDonalds to process job applications. And, not to forget, the AI girlfriend fiasco where a hacker was able to steal a massive database of users’ interactions with their sexual partner chatbots.

In some of these cases the developers thought it was clear to the users that by using a “Share” option, their conversations were publicly accessible, but in reality, the users were just as surprised as the people that found their conversations.

This same thing must have happened at Grok, the AI chatbot developed by xAI and launched in November 2023 by Elon Musk. When Grok users press a button to share a transcript of their conversation, this also made those conversations searchable, and, according to Forbes, this was sometimes done without users’ knowledge or permission.

For example, when a Grok user wants to share their conversation with another person, they can use the “Share” button to create a unique URL which they can then send to that person. But without many users being aware, pressing that “Share” button also made the conversation available to search engines, like Google, Bing, and DuckDuckGo. And that made them available for anyone to find.

Even though the account details may be hidden in the shared chatbot transcripts, the prompts—the instructions written by the user–may still contain personal or sensitive information about someone.

Forbes reported that it was able to view “conversations where users asked intimate questions about medicine and psychology.” And in one example seen by the BBC, the chatbot provided detailed instructions on how to make a Class A drug in a lab.

I have said this before, and I’ll probably have to say it again until privacy is baked deeply into the DNA of AI tools, rather than patched on as an afterthought: We have to be careful about what we share with chatbots.

How to safely use AI

While we continue to argue that the developments in AI are going too fast for security and privacy to be baked into the tech, there are some things to keep in mind to make sure your private information remains safe:

  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you are not logged in on that social media platform. Your conversations could be tied to your social media account which might contain a lot of personal information.
  • When using AI, make sure you understand how to keep your conversations private. Many AI tools have an “Incognito Mode.” Do not “share” your conversations unless needed. But always keep in mind that there could be leaks, bugs, and data breaches revealing even those conversations you set to private.
  • Do not feed any AI your private information.
  • Familiarize yourself with privacy policies. If they’re too long, feel free to use an AI to extract the main concerns.
  • Never share personally identifiable information (PII).

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

All Apple users should update after company patches zero-day vulnerability in all platforms

Apple has released security updates for iPhones, iPads and Macs to fix a zero-day vulnerability (a vulnerability which Apple was previously unaware of) that is reportedly being used in targeted attacks.

The updates cover:

Apple has acknowledged reports that attackers may have already used this flaw in a highly sophisticated operation aimed at specific, high‑value targets.

But history teaches us that once a patch goes out, attackers waste little time recycling the same vulnerability into broader, more opportunistic campaigns. What starts as a highly targeted campaign often trickles down into mass exploitation against everyday users.

That’s why it’s important that everyone takes the time to update now.

How to update your iPhone or iPad

For iOS and iPadOS users, you can check if you’re using the latest software version, go to Settings > General > Software Update. You want to be on iOS 18.6.2 or iPadOS 18.6.2 (or 17.7.10 for older models), so update now if you’re not. It’s also worth turning on Automatic Updates if you haven’t already. You can do that on the same screen.

iPadOS screenshot update now

How to update your Mac

For Mac users, click on the Apple menu in the top-left corner of your screen and open System Settings. From there, scroll down until you find General, then select Software Update. Your Mac will automatically check for new updates. If an update is available, you’ll see the option to download and install it. Depending on the size of the update, this process might take anywhere from a few minutes to an hour, and your machine will need to restart to complete the installation.

As always, it’s a good idea to make sure you’ve saved your work before using the Restart Now button. Updates can sometimes require more than one reboot, so allow some downtime. After you install the update, your system gains stronger protection, and you can use your Mac without the constant worry of this vulnerability hanging over you.

Technical details

The flaw is tracked as CVE-2025-43300 and lies in the Image I/O framework, the part of macOS that does the heavy lifting whenever an app needs to open or save a picture. The problem came from an out-of-bounds write. Apple stepped in and tightened the rules with better bounds checking, closing off the hole so attackers can no longer use it.

An out-of-bounds write vulnerability means that the attacker can manipulate parts of the device’s memory that should be out of their reach. Such a flaw in a program allows it to read or write outside the bounds the program sets, enabling attackers to manipulate other parts of the memory allocated to more critical functions. Attackers can write code to a part of the memory where the system executes it with permissions that the program and user should not have.

In this case, an attacker could construct an image to exploit the vulnerability.  Processing such a malicious image file would result in memory corruption. Memory corruption issues can be manipulated to crash a process or run attacker’s code.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.