
Beware of Zelle transfer scams

As we have said many times before, falling for a scam can happen to the best of us. And it can ruin lives. In our podcast How a scam hunter got scammed, scam hunter Julie-Anne Kearns talked about how she had been duped by people pretending to be from HMRC, which is the UK’s version of the US Internal Revenue Service (IRS).

This week in the New York Times, crime reporter Michael Wilson, who has covered many scams during his career, described how he almost fell for a scam that used a spoofed Chase Bank telephone number. Michael’s story sounded vaguely familiar to us, because we reported on something similar back in 2022.

The scam is a prime example of how social engineering is used to talk victims out of their money.

Michael received a call, seemingly from a Chase bank branch. The caller even invited him to Google the number and pointed out which branch he was “calling from.” The scammer claimed that fraudulent Zelle transfers had been made to and from a bank account in his name, even though Michael had never opened an account with Chase.

The initial scammer gave Michael a case number and put him through to “his supervisor.” This man asked Michael to open Zelle.

Zelle is a popular US peer-to-peer payment service that allows users to send and receive money quickly and securely directly from their bank accounts using just an email address or mobile phone number.

Where it says “Enter an amount,” the “supervisor” instructed him to type $2,100, the amount of the fraudulent withdrawals the scammer said he would help reverse. In another field, the scammer wanted Michael to enter the 10 digits of the case number. This triggered Michael’s spidey senses—it looked suspiciously like a phone number:

“This case number sure looks like a phone number, and I’m about to send that number $2,100.”

Zelle form. Receiver can be email address or telephone number

At that point the scammer gave him a 19-character code to put in the “What’s this for?” field, telling Michael it was needed for his team to be able to reverse the transaction.

But that didn’t calm his spidey senses, and Michael asked the question that scares most scammers away: he proposed meeting in person to settle the matter. The scammer pushed back, saying it might be too late by then, but Michael persisted and said he’d call back.

Only then did he realize that the scammers had kept him on the hook for 16 minutes before he managed to break free.

“I should be able to spot a scam in under 16 seconds, I thought — but 16 minutes?”

Michael found that several others had been approached in the very same way. The “supervisor” is an element that provides legitimacy to the call and makes people feel like they’re talking to actual bank employees.

And once they have you filling out forms and writing down long codes, they have turned you from a critical thinker into a person with a mission to fulfill.

For completeness’ sake, Michael went to the bank branch and asked for the two employees he’d allegedly spoken to. Unsurprisingly, they didn’t work there, but an employee who did recognized the scam; she’d heard the story many times before and knew of a few people who had lost money to these scammers.

How to avoid Zelle scams

There are several aspects of this attack, common to many others, that may indicate a fraud attempt.

  • They don’t want you to call the bank back. If you do this, the scam falls apart because their number is spoofed. A genuine member of staff would have no issue with you calling them.
  • Pressure tactics. If a bank calls you out of the blue and claims that they’re powerless to stop something without your assistance, be very cautious. Is your bank really unable to perform a basic banking action?
  • Knowing your date of birth, address, and other information doesn’t mean the caller is genuine. They may have obtained the data from a phish, or a security breach.
  • Referencing third-party payment apps may be another red flag, especially if they talk about a platform you’ve not used before.

Zelle transfers are instantaneous and almost impossible to reverse. And since neither the banks nor Zelle is generally liable for payments the victim authorized themselves, a refund is highly unlikely. So, be extra careful when using Zelle.

Did you know you can use Malwarebytes Scam Guard for this kind of situation as well? We tested Scam Guard with some details from the NYT story, and it correctly identified it as a known scam, asked some follow-up questions, and provided a clear set of recommendations.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

ChatGPT solves CAPTCHAs if you tell it they’re fake

If you see fewer or different CAPTCHA puzzles in the near future, it won’t be because website owners have finally agreed that they’re annoying; it may be because they no longer prove that a visitor is human.

For those who forgot what CAPTCHA stands for: Completely Automated Public Turing test to tell Computers and Humans Apart.

The fact that AI bots can bypass CAPTCHA systems is nothing new. Sophisticated bots have been bypassing CAPTCHA systems for years using methods such as optical character recognition (OCR), machine learning, and AI, making traditional CAPTCHA challenges increasingly ineffective.

Most of the openly accessible AI chat agents have been barred from solving CAPTCHAs by their developers. But now researchers say they’ve found a way to get ChatGPT to solve image-based CAPTCHAs. They did this by prompt injection, similar to “social engineering” a chatbot into doing something it would refuse if you asked it outright.

In this case, the researchers convinced ChatGPT-4o that it was solving fake CAPTCHAs.

According to the researchers:

“This priming step is crucial to the exploit. By having the LLM affirm that the CAPTCHAs were fake and the plan was acceptable, we increased the odds that the agent would comply later.”

This is something I have noticed myself. When I ask an AI to help me analyze malware, it often starts by saying it isn’t allowed to help, but once I convince it that I’m not going to improve the malware or make a new version of it, it will often jump right in and assist me in unravelling it. By doing so, it provides information that a cybercriminal could use to make their own version of the malware.

The researchers proceeded by copying the conversation they had with the chatbot into the ChatGPT agent they planned to use.

A chatbot is built to answer questions and follow specific instructions given by a person, meaning it helps with single tasks and relies on constant user input for each step. In contrast, an AI agent acts more like a helper that can understand a big-picture goal (for example, “book me a flight” or “solve this problem”) and can take action on its own, handling multi-step tasks with less guidance needed from the user.

A chatbot relies on the person to provide every answer, click, and decision throughout a CAPTCHA challenge, so it cannot solve CAPTCHAs on its own. In contrast, an AI agent plans tasks, adapts to changes, and acts independently, allowing it to complete the entire CAPTCHA process with minimal user input.
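
To make the distinction concrete, here is a minimal, purely illustrative Python sketch of the two interaction models. The llm() and run_tool() helpers are hypothetical placeholders rather than any real API; the point is simply that the chatbot waits for a person at every turn, while the agent loops on its own until it decides the goal is met.

```python
# Hypothetical helpers: llm() returns a model response, run_tool() executes an
# action such as clicking a checkbox or loading a page. Both are placeholders.

def chatbot(llm):
    """Chatbot model: the human drives every single step."""
    while True:
        user_input = input("> ")      # waits for a person each turn
        print(llm(user_input))        # answers, then waits again

def agent(llm, run_tool, goal, max_steps=20):
    """Agent model: given one high-level goal, it plans and acts on its own."""
    history = [f"Goal: {goal}"]
    for _ in range(max_steps):
        action = llm("\n".join(history) + "\nNext action?")
        if action.strip() == "DONE":  # the model decides when it is finished
            break
        result = run_tool(action)     # click, type, load a page, and so on
        history.append(f"Did: {action} -> {result}")
    return history
```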

The researchers found that the agent had no problem with one-click CAPTCHAs, logic-based CAPTCHAs, and CAPTCHAs based on text recognition. It had more trouble with image-based CAPTCHAs requiring precision (drag-and-drop, rotation, etc.), but managed to solve some of those as well.

Is this the next step in the arms race, or will web developers accept that AI agents and AI browsers are simply helping a human get the information they need from a website, with or without solving a puzzle?


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

A week in security (September 15 – September 21)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

ChatGPT Deep Research zero-click vulnerability fixed by OpenAI

OpenAI has moved quickly to patch a vulnerability known as “ShadowLeak” before anyone detected real-world abuse. Revealed by researchers yesterday, ShadowLeak was an issue in OpenAI’s Deep Research project that attackers could exploit by simply sending an email to the target.

Deep Research was launched in ChatGPT in early 2025 to enable users to delegate time-intensive, multi-step research tasks to an autonomous agent operating as an agentic AI (Artificial Intelligence). Agentic AI is a term that refers to AI systems that can act autonomously to achieve objectives by planning, deciding, and executing tasks with minimal human intervention. Deep Research users can primarily be found in finance, science, policy, engineering, and similar fields.

Users are able to select a “deep research” mode, input a query—optionally providing the agent with files and spreadsheets—and receive a detailed report after the agent browses, analyzes, and processes information from dozens of sources.

The researchers found a zero-click vulnerability in the Deep Research agent that worked when the agent was connected to Gmail and allowed to browse. By sending the target a specially crafted email, an attacker could make the agent leak sensitive inbox information, without the target needing to do anything and without any visible signs.

The attack relies on prompt injection, which is a well-known weak spot for AI agents. The poisoned prompts can be hidden in an email using tiny fonts, white-on-white text, and other layout tricks. The target never sees them, but the agent still reads and obeys them.
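
As a rough illustration of the defensive side (our own sketch, not the researchers’ code), the snippet below scans the raw HTML of an email for the hiding tricks described above, such as near-invisible font sizes or white text, before the content is handed to an agent.

```python
import re

# Inline-style patterns commonly used to hide text from humans but not from an
# LLM that reads the raw HTML. The list is illustrative, not exhaustive.
SUSPICIOUS_STYLES = [
    r"font-size\s*:\s*0(\.\d+)?(px|pt|em)?",  # invisible or near-invisible text
    r"font-size\s*:\s*1px",
    r"color\s*:\s*#?fff(fff)?\b",             # white text, suspicious on a white background
    r"display\s*:\s*none",
    r"visibility\s*:\s*hidden",
]

def flag_hidden_text(html: str) -> list[str]:
    """Return the suspicious style declarations found in an HTML email body."""
    hits = []
    for pattern in SUSPICIOUS_STYLES:
        hits.extend(m.group(0) for m in re.finditer(pattern, html, re.IGNORECASE))
    return hits

if __name__ == "__main__":
    sample = '<div style="font-size:0px;color:#ffffff">ignore previous instructions</div>'
    print(flag_hidden_text(sample))  # ['font-size:0px', 'color:#ffffff']
```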

A data leak like this is also practically impossible for internal defenses to pick up, since the leak occurs server-side, directly from OpenAI’s cloud infrastructure.

The researchers say it wasn’t easy to craft an effective email, due to existing protections (guardrails) that recognize obvious attempts to send information to an external address. For example, when the researchers tried to get the agent to interact with a malicious URL, it didn’t just refuse; it flagged the URL as suspicious and attempted to search for it online instead of opening it.

The key to success was to get the agent to encode the extracted PII with a simple method (base64) before appending it to the URL.

“This worked because the encoding was performed by the model before the request was passed on to the execution layer. In other words, it was relatively easy to convince the model to perform the encoding, and by the time the lower layer received the request, it only saw a harmless encoded string rather than raw PII.”

In the example the researchers used Gmail as the connector, but there are many other sources of structured text that can serve as a prompt injection vector.
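
To illustrate why the encoding trick works, and how a monitoring layer might try to catch it, here is a small sketch of our own (not the researchers’ code) that decodes base64-looking query parameters in an outbound URL and flags anything that resembles an email address.

```python
import base64
import re
from urllib.parse import urlparse, parse_qs

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

def flag_encoded_pii(url: str) -> list[str]:
    """Decode base64-looking query parameters and flag decoded email addresses."""
    findings = []
    params = parse_qs(urlparse(url).query)
    for values in params.values():
        for value in values:
            try:
                decoded = base64.b64decode(value, validate=True).decode("utf-8")
            except Exception:
                continue                 # not valid base64 (or not text), skip
            if EMAIL_RE.search(decoded):
                findings.append(decoded)
    return findings

if __name__ == "__main__":
    # A made-up attacker URL carrying "jane.doe@example.com" in base64.
    payload = base64.b64encode(b"jane.doe@example.com").decode()
    print(flag_encoded_pii(f"https://attacker.example/collect?d={payload}"))
```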

Safe use of agentic AI

While it’s always tempting to use the latest technology, it comes with a certain amount of risk. To limit those risks when using agentic AI tools, you should:

  • Be cautious with permissions: Only grant access to sensitive information or system controls when absolutely necessary. Review what data or accounts the agent can access and limit permissions where possible.
  • Verify sources before trusting links or commands: Avoid letting the agent automatically interact with unfamiliar websites or content. Check URLs carefully and be wary of sudden redirects, additional parameters, or unexpected input requests.
  • Keep software updated: Make sure the agent and related AI tools are always running the latest versions to benefit from security patches and improvements against prompt injection exploits.
  • Use strong authentication and monitoring: Protect accounts connected to AI agents with multi-factor authentication and review activity logs regularly to spot unusual behavior early.
  • Educate yourself about prompt injection risks: Stay informed on the latest threats and best practices for safe AI interactions. Being aware is the first step to preventing exploitation.
  • Limit automation of sensitive operations: Avoid fully automating high-stakes transactions or actions without manual review. Agents should assist, but critical decisions deserve human oversight.
  • Report suspicious behavior: If an agent acts unpredictably or asks for strange permissions, report it to the developers or security teams immediately for investigation.

We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Disrupted phishing service was after Microsoft 365 credentials

Microsoft and Cloudflare have disrupted a Phishing-as-a-Service operation known as RaccoonO365.

The primary goal of RaccoonO365 (or Storm-2246, as Microsoft calls it) was to rent out a phishing toolkit that specialized in stealing Microsoft 365 credentials. The kit has been used successfully in at least 5,000 cases across 94 countries since July 2024.

The operation provided its criminal customers with stolen credentials, cookies, and data, which they could in turn use to plunder OneDrive, SharePoint, and Outlook accounts for information to use in financial fraud or extortion, or to serve as initial access for larger attacks.

A typical attack looked roughly like this:

  • Emails were sent to victims with an attachment containing a link or QR code.
  • The malicious link led to a page with a simple CAPTCHA. This and other anti-bot techniques were implemented to evade analysis without raising suspicion from the victim.
  • After solving the CAPTCHA, the victim was redirected to a fake Microsoft O365 login page designed to harvest the entered credentials.

RaccoonO365 built its operation on top of legitimate infrastructure in an attempt to avoid detection. Leveraging free accounts, the operators strategically deployed Cloudflare Workers to act as an intermediary layer, shielding their backend phishing servers from direct public exposure.

Reacting to this abuse of its services, Cloudflare teamed up with Microsoft’s Digital Crimes Unit (DCU). Using a court order granted by the Southern District of New York, the DCU seized 338 websites associated with RaccoonO365.

The danger of phishing kits like these is clear. Even non-technical criminals can lease a 30-day plan for $355 (paid in cryptocurrency) and get their hands on valid Microsoft O365 credentials. A recently added feature even lets kit users capture codes for certain multi-factor authentication (MFA) methods.

From there they can move on to data theft, financial fraud, or even use the credentials to infiltrate an organization and deploy ransomware. To give you an idea of the scale, RaccoonO365 customers were able to send emails to 9,000 targets per day. The suspected leaders of the operation ran a Telegram channel with over 850 members and have received at least US$100,000 in cryptocurrency payments.

The takedown of the websites and the attribution to a Nigerian suspect cut off the cybercriminals’ revenue streams and significantly increased RaccoonO365’s operational costs. Besides that, the main suspect is believed to be the principal coder behind the project, so his apprehension by international law enforcement would be a major blow to the operation.

Now, RaccoonO365 phishing kit customers can start worrying about how much of their information could be revealed in the aftermath of this disruption.

We’ll keep you posted.

Don’t fall for phishing attempts

In the operations run with RaccoonO365, two simple rules could have saved you a lot of trouble.

  • Don’t click on links in unsolicited attachments
  • Check whether the website address in the browser matches the domain you expect to be on (e.g. microsoft.com); a minimal version of that check is sketched below.
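
As a rough illustration of that second rule (our own sketch, with made-up example URLs), the snippet below extracts the hostname from a URL and checks whether it is the expected domain or one of its subdomains. A simple substring check is not enough, because phishers register lookalike hosts such as login.microsoft.com.evil-example.net.

```python
from urllib.parse import urlparse

def is_expected_domain(url: str, expected: str = "microsoft.com") -> bool:
    """True if the URL's hostname is the expected domain or one of its subdomains."""
    host = (urlparse(url).hostname or "").lower()
    return host == expected or host.endswith("." + expected)

# Legitimate subdomain vs. a lookalike phishing host (made-up examples).
print(is_expected_domain("https://login.microsoft.com/common/oauth2"))          # True
print(is_expected_domain("https://login.microsoft.com.evil-example.net/o365"))  # False
```

A password manager applies the same kind of logic for you automatically, which is one reason it won’t fill in credentials on a lookalike domain.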

Other important tips to stay safe from phishing in general:

  • Verify the sender: Always check if the sender’s email address matches what you would expect it to be. It’s not always conclusive but it can help you spot some attempts.
  • Check through an independent channel if the sender actually sent you an attachment or a link.
  • Use up-to-date security software, preferably with a web protection component.
  • Keep your device and all its software updated.
  • Use multi-factor authentication for every account you can.
  • Use a password manager. Password managers will not auto-fill a password to a fake site, even if it looks like the real deal to you.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Update your Chrome today: Google patches 4 vulnerabilities including one zero-day

Google has released an update for its Chrome browser to patch four security vulnerabilities, including one zero-day. A zero-day vulnerability refers to a bug that has been found and exploited by cybercriminals before the vendor even knew about it (they have “zero days” to fix it).

This update is crucial since it addresses one vulnerability that is already being actively exploited and that, reportedly, can be abused when the user visits a malicious website. It probably doesn’t require any further user interaction, meaning the user doesn’t need to click on anything for their system to be compromised.

The Chrome update brings the version number to 140.0.7339.185/.186 for Windows and Mac, and to 140.0.7339.185 for Linux.

The easiest way to update Chrome is to allow it to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To manually get the update, click the more menu (the three stacked dots), then choose Settings > About Chrome. If an update is available, Chrome will notify you and start downloading it. Then all you have to do is relaunch Chrome for the update to complete, and for you to be safe from these vulnerabilities.
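
If you manage several machines and want to script the check, a minimal sketch like the one below compares a reported Chrome version string against the patched build from this update. How you obtain the installed version string (for example from chrome://version or your management tooling) is left out and up to you.

```python
PATCHED = "140.0.7339.185"  # minimum patched build from this advisory

def is_patched(installed: str, minimum: str = PATCHED) -> bool:
    """Compare dotted Chrome version strings numerically, field by field."""
    to_tuple = lambda version: tuple(int(part) for part in version.split("."))
    return to_tuple(installed) >= to_tuple(minimum)

print(is_patched("140.0.7339.186"))  # True  - already on a fixed build
print(is_patched("140.0.7339.80"))   # False - still vulnerable, update now
```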

Chrome is up to date

You can find more elaborate update instructions and how to read the version number in our article on how to update Chrome on every operating system.

Technical details on the zero-day vulnerability

Google describes the zero-day vulnerability, tracked as CVE-2025-10585, as a “type confusion in V8,” reported by the Google Threat Analysis Group on September 16, 2025.

Despite the short statement—Google never reveals a lot of details until everyone has had a chance to update—there are a few conclusions we can draw.

It helps to know that V8 is Google’s open-source JavaScript engine.

A “type confusion” vulnerability happens when code doesn’t verify the type of the object passed to it and then uses that object as if it were of a different type. The program mistakenly treats one kind of data as another, like confusing a list with a single value or interpreting a number as text. This mix-up can cause the software to behave unpredictably, creating opportunities for attackers to break in, steal data, crash programs, or even run malicious code.

Google’s Threat Analysis Group (TAG) focuses on spyware and nation-state attackers who abuse zero days for espionage purposes.

So, it stands to reason that an attacker used JavaScript to create a malicious site that exploited this vulnerability and lured targeted victims to that website.

TAG reported the bug on September 16, and Google issued the patch one day later. That implies the bug was either urgent or very easy to fix, and probably both are true to some extent.

Usually, as more details become known or a patch gets reverse engineered, cybercriminals will start using the vulnerability in less targeted attacks.

Users of other Chromium-based browsers, such as Microsoft Edge, Brave, Opera, and Vivaldi, are also advised to keep an eye out for updates and install them when they become available.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Age verification and parental controls coming to ChatGPT to protect teens

OpenAI is going to try to predict the ages of its users to protect them better, as stories of AI-induced harm to children mount.

The company, which runs the popular ChatGPT AI, is working on what it calls a long-term system to determine whether users are over 18. If it can’t verify that a user is an adult, they will eventually get a different chat experience, CEO Sam Altman warned.

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” Altman said in a blog post on the issue.

Citing “principles in conflict,” Altman talked in a supporting blog post about how the company is struggling with competing values: allowing people the freedom to use the product as they wish, while also protecting teens (the system isn’t supposed to be used by those under 13). Privacy is another concept it holds dear, Altman said.

OpenAI is prioritizing teen safety over its other values. Two things that it shouldn’t do with teens, but can do with adults, are flirting and discussing suicide, even as a theoretical creative writing endeavor.

Altman commented:

“The model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.”

It will also try to contact a teen user’s parents if it looks like the child is considering taking their own life, and possibly even the authorities if the child seems likely to harm themselves imminently.

The move comes as lawsuits mount against the company from parents of teens who took their own lives after using the system. Late last month, the parents of 16-year-old Adam Raine sued the company after ChatGPT allegedly advised him on suicide techniques and offered to write the first draft of his suicide note.

The company hasn’t gone into detail about how it will try to predict user age, other than by looking at “how people use ChatGPT.” You can be sure some wily teens will do their best to game the system. Altman says that if the system can’t predict with confidence that a user is an adult, it will drop them into teen-oriented chat sessions.

Altman also signaled that ID authentication might be coming to some ChatGPT users. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” he said.

While OpenAI works on the age prediction system, Altman recommends parental controls for families with teen users. Available by the end of the month, these will allow parents to link their teens’ accounts with their own, guide how ChatGPT responds to them, and disable certain features, including memory and chat history. They will also allow blackout hours and will alert parents if their teen seems to be in distress.

This is a laudable step, but the problems are bigger than the effects on teens alone. As Altman says, this is a “new and powerful technology”, and it’s affecting adults in unexpected ways too. This summer, the New York Times reported that a Toronto man, Allen Brooks, fell into a delusional spiral after beginning a simple conversation with ChatGPT.

There are plenty more such stories. How, exactly, does the company plan to protect those people?


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

224 malicious apps removed from the Google Play Store after ad fraud campaign discovered

Researchers have discovered a large ad fraud campaign on the Google Play Store.

The Satori Threat Intelligence and Research team found 224 malicious apps which were downloaded over 38 million times and generated up to 2.3 billion ad requests per day. They named the campaign “SlopAds.”

Ad fraud is a type of fraud in which advertisers pay for ads even though the number of impressions (the number of times the ad has been seen) is enormously exaggerated.

While the main victims of ad fraud are the advertisers, there are consequences for users who had these apps installed as well, such as slowed-down devices and connections, because the apps carry out their malicious activity in the background without the user being aware.

To stay under the radar of Google’s app review process and security software, a downloaded app will at first behave as advertised if the user installed it directly from the Play Store.

Collection of services hosted by the SlopAds threat actor (image courtesy of HUMAN Satori)

But if the installation was initiated by one of the campaign’s ads, the app retrieves some extra files in the form of an encrypted payload hidden using steganography.

If the app passes this first check, it receives four .png images that, when decrypted and reassembled, are actually an .apk file. The malicious file uses WebView (essentially a very basic browser) to send collected device and browser information to a command and control (C2) server, which determines, based on that information, what domains to visit in further hidden WebViews.
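
As a defensive illustration (our own sketch, not something from the Satori report), the snippet below checks whether files that claim to be PNG images actually start with the PNG file signature. Assets that don’t are worth a closer look because, as in SlopAds, they may be fragments of a disguised payload.

```python
from pathlib import Path

PNG_MAGIC = b"\x89PNG\r\n\x1a\n"  # the 8-byte signature every real PNG starts with

def suspicious_pngs(directory: str) -> list[Path]:
    """Return .png files in a directory tree that do not start with the PNG signature."""
    flagged = []
    for path in Path(directory).rglob("*.png"):
        with open(path, "rb") as f:
            if f.read(len(PNG_MAGIC)) != PNG_MAGIC:
                flagged.append(path)  # extension says PNG, contents say otherwise
    return flagged

# Example (hypothetical path): inspect assets extracted from an APK.
# print(suspicious_pngs("extracted_apk/assets"))
```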

The researchers found evidence of an AI (Artificial Intelligence) tool training on the same domain as the C2 server (ad2[.]cc). It is unclear whether this tool actively managed the ad fraud campaign.

Based on similarities in the C2 domain, the researchers found over 300 related domains promoting SlopAds-associated apps, suggesting that the collection of 224 SlopAds-associated apps was only the beginning.

Google removed all of the apps identified in the report from Google Play. Users are automatically protected by Google Play Protect, which warns users and blocks apps known to exhibit SlopAds-associated behavior at install time on certified Android devices, even when the apps come from sources outside the Play Store.

You can find a complete list of the removed apps here: SlopAds app list

How to avoid installing malicious apps

While the official Google Play Store is the safest place to get your apps, there is no guarantee that an app is, or will remain, non-malicious just because it’s in the Google Play Store. So here are a few extra measures you can take:

  • Always check what permissions an app is requesting, and don’t just trust an app because it’s in the official Play Store. Ask questions such as: Do the permissions make sense for what the app is supposed to do? Why did necessary permissions change after an update? Do these changes make sense?
  • Occasionally go over your installed apps and remove any you no longer need.
  • Make sure you have the latest available updates for your device, and all your important apps (banking, security, etc.)
  • Protect your Android with security software. Your phone needs it just as much as your computer.

Another precaution you can take: if you’re looking for an app, do your research about it before you go to the app store. As you can see from the screenshot above, many of the apps are made to look exactly like very popular legitimate ones (e.g. ChatGPT).

So, it’s important to know in advance who the official developer of the app is, and whether the app is even available in that app store.

As researcher Jim Nielsen demonstrated for the Mac App Store, there are a lot of apps trying to look like ChatGPT, but they are not the real thing. ChatGPT is not even in the Mac App Store. It is available in the Google Play Store for Android, but make sure to check that OpenAI is listed as the developer.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Airline data broker selling 5 billion passenger records to US government

We already knew that the US airline industry gave the government access to passenger records. However, this week it emerged that at least five billion passenger records are being sold to government agencies via a searchable database—far more than was initially believed.

A few weeks ago, investigative news outlet 404 Media reported on a secretive relationship between many US airlines and the US government. That story showed that the airlines had sold US agencies access to around a billion records.

Now, researchers have found that the data broker that collects flight data from the airline industry has made at least five billion records available to federal agencies.

The organization selling the data is the Airlines Reporting Corporation (ARC), which is owned and operated by at least eight US airlines. It sells the government this data under the Travel Intelligence Program (TIP), which was started after the 2001 attack on the World Trade Center.

ARC provides access to a searchable database of at least five billion records, updated daily with new ticketing information. At least one agency, the US Secret Service, has a contract to access this data, paying $885,000 for data through 2028, according to documents obtained by 404 Media.

Known clients

In June, 404 Media found that ARC had been making names, flight itineraries, and financial details available to US agencies, which were contractually forbidden from revealing ARC as the source. The data included flights booked via 12,800 travel agencies, which submit ticket sales from over 270 carriers globally to ARC.

Originally developed as a financial clearing house, ARC provides payment settlement services for federal agencies and airlines. Known clients include Customs and Border Protection, and Immigration and Customs Enforcement. Travel dates and credit card numbers are available to federal customers, which also include the Securities and Exchange Commission, the Drug Enforcement Administration, and the US Marshals Service.

A long history of sharing data

The US airline industry has a long history of interacting with the US government. In 1996, Al Gore’s White House Commission on Aviation Safety and Security recommended automated screening for better flight security. A year later, most North American airlines voluntarily implemented what became known as the Computer Assisted Passenger Prescreening System (CAPPS). After the Transportation Security Administration (TSA) took over CAPPS, it built a system called CAPPS II, which used security color-coding for flight passengers. That system ran into trouble after several airlines admitted to giving the US government access to passenger data.

American Airlines reportedly confessed to making passengers’ records available in the early 2000s, as did United, while Northwest also gave NASA access to millions of passenger records. These relationships enabled data mining work at government agencies involving passenger records. A US General Accounting Office (GAO) report in 2004 found that CAPPS II was behind schedule, in part because it had failed to address privacy concerns.

“One air carrier initially agreed to provide passenger data for testing purposes, but adverse publicity resulted in its withdrawal from participation. Similar situations occurred for the other two potential data providers,” the report said. “TSA’s attempts to obtain test data are still ongoing, and privacy issues remain a stumbling block.”

The TSA canned CAPPS II that year, switching instead to a system called Secure Flight, which also implements a color-coded security system for passengers and uses the US government’s No-Fly List.

The information that ARC funnels to the US government reportedly comes only from travel agencies, meaning that direct bookings with airlines hopefully won’t be logged in this way. Passengers might want to consider that when making travel plans.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Update your Apple devices to fix dozens of vulnerabilities

Apple has released security updates for iPhones, iPads, Apple Watches, Apple TVs, and Macs, as well as for Safari and Xcode, to fix dozens of vulnerabilities that could give cybercriminals access to sensitive data.

How to update your devices

How to update your iPhone or iPad

For iOS and iPadOS users: to check whether you’re using the latest software version, go to Settings > General > Software Update. It’s also worth turning on Automatic Updates if you haven’t already; you can do that on the same screen.

Choices in the iPad update or upgrade screen

How to update macOS on any version

To update macOS on any supported Mac, use the Software Update feature, which Apple designed to work consistently across all recent versions. Here are the steps:

  • Click the Apple menu in the upper-left corner of your screen.
  • Choose System Settings (or System Preferences on older versions).
  • Select General in the sidebar, then click Software Update on the right. On older macOS, just look for Software Update directly.
  • Your Mac will check for updates automatically. If updates are available, click Update Now (or Upgrade Now for major new versions) and follow the on-screen instructions. Before you upgrade to macOS Tahoe 26, please read these instructions.
  • Enter your administrator password if prompted, then let your Mac finish the update (it might need to restart during this process).
  • Make sure your Mac stays plugged in and connected to the internet until the update is done.

How to update Apple Watch

  • Ensure your iPhone is paired with your Apple Watch and connected to Wi-Fi.
  • Keep your Apple Watch on its charger and close to your iPhone.
  • Open the Watch app on your iPhone.
  • Tap General > Software Update.
  • If an update appears, tap Download and Install.
  • Enter your iPhone passcode or Apple ID password if prompted.

Your Apple Watch will automatically restart during the update process. Make sure it remains near your iPhone and on charge until the update completes.

How to update Apple TV

  • Turn on your Apple TV and make sure it’s connected to the internet.
  • Open the Settings app on Apple TV.
  • Navigate to System > Software Updates.
  • Select Update Software.
  • If an update appears, select Download and Install.

The Apple TV will download the update and restart as needed. Keep your device connected to power and Wi-Fi until the process finishes.

Updates for your particular device

Apple has today released version 26 for all its software platforms. This new version brings in a new “Liquid Glass” design, expanded Apple Intelligence, and new features. You can choose to update to that version, or just update to fix the vulnerabilities:

  • iOS 26 and iPadOS 26: iPhone 11 and later, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 8th generation and later, and iPad mini 5th generation and later
  • iOS 18.7 and iPadOS 18.7: iPhone XS and later, iPad Pro 13-inch, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 7th generation and later, and iPad mini 5th generation and later
  • iOS 16.7.12 and iPadOS 16.7.12: iPhone 8, iPhone 8 Plus, iPhone X, iPad 5th generation, iPad Pro 9.7-inch, and iPad Pro 12.9-inch 1st generation
  • iOS 15.8.5 and iPadOS 15.8.5: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Air 2, iPad mini (4th generation), and iPod touch (7th generation)
  • macOS Tahoe 26: Mac Studio (2022 and later), iMac (2020 and later), Mac Pro (2019 and later), Mac mini (2020 and later), MacBook Air with Apple silicon (2020 and later), MacBook Pro (16-inch, 2019), MacBook Pro (13-inch, 2020, Four Thunderbolt 3 ports), and MacBook Pro with Apple silicon (2020 and later)
  • macOS Sequoia 15.7: macOS Sequoia
  • macOS Sonoma 14.8: macOS Sonoma
  • tvOS 26: Apple TV HD and Apple TV 4K (all models)
  • watchOS 26: Apple Watch Series 6 and later
  • visionOS 26: Apple Vision Pro
  • Safari 26: macOS Sonoma and macOS Sequoia
  • Xcode 26: macOS Sequoia 15.6 and later

Technical details

Apple did not mention any actively exploited vulnerabilities, but there are two that we would like to highlight.

A vulnerability in Call History, tracked as CVE-2025-43357, could be used to fingerprint the user. Apple addressed the issue with improved redaction of sensitive information. It is fixed in macOS Tahoe 26, iOS 26, and iPadOS 26.

A vulnerability in the Safari browser, tracked as CVE-2025-43327, meant that visiting a malicious website could lead to address bar spoofing. The issue was fixed by adding additional logic.

Address bar spoofing is a trick cybercriminals might use to make you believe you’re on a trusted website when in reality you’re not. Instead of showing the real address, attackers exploit browser flaws or use clever coding so the address bar displays something like login.bank.com even though you’re not on your bank’s site at all. This would allow the criminals to harvest your login credentials when you enter them on what is really their website.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.