IT NEWS

Age verification and parental controls coming to ChatGPT to protect teens

OpenAI is going to try and predict the ages of its users to protect them better, as stories of AI-induced harms in children mount.

The company, which runs the popular ChatGPT AI, is working on what it calls a long-term system to determine whether users are over 18. If it can’t verify that a user is an adult, they will eventually get a different chat experience, CEO Sam Altman warned.

“The way ChatGPT responds to a 15-year-old should look different than the way it responds to an adult,” Altman said in a blog post on the issue.

Citing “principles in conflict,” Altman talked in a supporting blog post about how the company is struggling with competing values: allowing people the freedom to use the product as they wish, while also protecting teens (the system isn’t supposed to be used by those under 13). Privacy is another concept it holds dear, Altman said.

OpenAI is prioritizing teen safety over its other values. Two things that it shouldn’t do with teens, but can do with adults, are flirting and discussing suicide, even as a theoretical creative writing endeavor.

Altman commented:

“The model by default should not provide instructions about how to commit suicide, but if an adult user is asking for help writing a fictional story that depicts a suicide, the model should help with that request.”

It will also try to contact a teen user’s parents if it looks like the child is considering taking their own life, and possibly even the authorities if the child seems likely to harm themselves imminently.

The move comes as lawsuits mount against the company from parents of teens who took their own lives after using the system. Late last month, the parents of 16-year-old Adam Raine sued the company after ChatGPT allegedly advised him on suicide techniques and offered to write the first draft of his suicide note.

The company hasn’t gone into detail about how it will try and predict user age, other than looking at “how people use ChatGPT.” You can be sure some wily teens will do their best to game the system. Altman says that if the system can’t predict with confidence that a user is an adult, it will drop them into teen-oriented chat sessions.

Altman also signaled that ID authentication might be coming to some ChatGPT users. “In some cases or countries we may also ask for an ID; we know this is a privacy compromise for adults but believe it is a worthy tradeoff,” he said.

While OpenAI works on the age prediction system, Altman recommends parental controls for families with teen users. Available by the month’s end, it will allow parents to link their teens’ accounts with their own, guide how ChatGPT responds to them, and disable certain features including memory and chat history. It will also allow blackout hours, and will alert parents if their teen seems in distress.

This is a laudable step, but the problems are bigger than the effects on teens alone. As Altman says, this is a “new and powerful technology”, and it’s affecting adults in unexpected ways too. This summer, the New York Times reported that a Toronto man, Allen Brooks, fell into a delusional spiral after beginning a simple conversation with ChatGPT.

There are plenty more such stories. How, exactly, does the company plan to protect those people?


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

224 malicious apps removed from the Google Play Store after ad fraud campaign discovered

Researchers have discovered a large ad fraud campaign on Google Play Store.

The Satori Threat Intelligence and Research team found 224 malicious apps which were downloaded over 38 million times and generated up to 2.3 billion ad requests per day. They named the campaign “SlopAds.”

Ad fraud is a type of fraud that lets advertisers pay for ads even though the number of impressions (the times that the ad has been seen) is enormously exaggerated.

While the main victims of ad fraud are the advertisers, there are consequences for the users that had these apps installed as well, such as slowed-down devices and connections due to the apps executing their malicious activity in the background without the user even being aware.

At first, to stay under the radar of Google’s app review process and security software, the downloaded app will behave as advertised, if a user has installed it directly from the Play Store.

collection of services hosted by the SlopAds threat actor
Image courtesy of HUMAN Satori

But if the installation has been initiated by one of the campaign’s ads, the user will receive some extra files in the form of a steganographically encrypted payload.

If the app passes the first check it will receive four .png images that, when decrypted and reassembled, are actually an .apk file. The malicious file uses WebView (essentially a very basic browser) to send collected device and browser information to a Control & Command (C2) server which determines, based on that information, what domains to visit in further hidden WebViews.

The researchers found evidence of an AI (Artificial Intelligence) tool training on the same domain as the C2 server (ad2[.]cc). It is unclear whether this tool actively managed the ad fraud campaign.

Based on similarities in the C2 domain, the researchers found over 300 related domains promoting SlopAds-associated apps, suggesting that the collection of 224 SlopAds-associated apps was only the beginning.

Google removed all of the identified apps listed in this report from Google Play. Users are automatically protected by Google Play Protect, which warns users and blocks apps known to exhibit SlopAds associated behavior at install time on certified Android devices, even when apps come from sources outside of the Play Store.

You can find a complete list of the removed apps here: SlopAds app list

How to avoid installing malicious apps

While the official Google Play Store is the safest place to get your apps from, there is no guarantee that it will remain a non-malicious app just because it is in the Google Play Store. So here are a few extra measures you can take:

  • Always check what permissions an app is requesting, and don’t just trust an app because it’s in the official Play Store. Ask questions such as: Do the permissions make sense for what the app is supposed to do? Why did necessary permissions change after an update? Do these changes make sense?
  • Occasionally go over your installed apps and remove any you no longer need.
  • Make sure you have the latest available updates for your device, and all your important apps (banking, security, etc.)
  • Protect your Android with security software. Your phone needs it just as much as your computer.

Another precaution you can take if you’re looking for an app, do your research about the app before you go to the app store. As you can see from the screenshot above, many of the apps are made to look exactly the same as very popular legitimate ones (e.g. ChatGPT).

So, it’s important to know in advance who the official developer is of the app you want and if it’s even available from the app store.

As researcher Jim Nielsen demonstrated for the Mac App Store, there are a lot of apps trying to look like ChatGPT, but they are not the real thing. ChatGPT is not even in the Mac App Store, it is available in the Google Play Store for Android, but make sure to check that OpenAI is listed as the developer.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Airline data broker selling 5 billion passenger records to US government

We already knew that the US airline industry gave the government access to passenger records. However, this week it emerged that at least five billion passenger records are being sold to government agencies via a searchable database—far more than was initially believed.

A few weeks ago, investigative research team 404 Media reported on a secretive relationship between many US airlines and the US government. That story showed that the airlines had sold US agencies access to around a billion records.

Now, researchers have found the data broker that collects flight data from the airline industry has made at least five billion records available to federal agencies.

The organization selling the data is the Airlines Reporting Corporation (ARC), which is owned and operated by at least eight US airlines. It sells the government this data under the Travel Intelligence Program (TIP), which was started after the 2001 attack on the World Trade Center.

ARC provides access to a searchable database of at least five billion records, updated daily with new ticketing information. At least one agency, the US Secret Service, has a contract to access this data, paying $885,000 for data through 2028, according to documents obtained by 404 Media.

Known clients

In June, 404 Media found that ARC had been making names, flight itineraries, and financial details available to US agencies, which were forbidden from revealing it as the source, under contract. The data included flights booked via 12,800 travel agencies, which submit ticket sales from over 270 carriers globally to ARC.

Originally developed as a financial clearing house, ARC provides payment settlement services for federal agencies and airlines. Known clients include Customs and Border Protection, and Immigration and Customs Enforcement. Travel dates and credit card numbers are available to federal customers, which also include the Securities and Exchange Commission, the Drug Enforcement Administration, and the US Marshals Service.

A long history of sharing data

The US airline industry has a long history of interacting with the US government. In 1996, Al Gore’s White House Commission on Aviation Safety and Security recommended automated screening for better flight security. A year later, most North American airlines voluntarily implemented what became known as the Computer Assisted Passenger Prescreening System (CAPPS). After the Transportation Security Administration (TSA) took over CAPPS, it built a system called CAPPS II, which used security color-coding for flight passengers. That system ran into trouble after several airlines admitted to giving the US government access to passenger data.

American Airlines reportedly confessed to making passengers’ records available in the early 2000s, as did United, while Northwest also gave NASA access to millions of passenger records. These relationships enabled data mining work at government agencies involving passenger records. A US General Accounting Office (GAO) report in 2004 found that CAPPS II was behind schedule, in part because it had failed to address privacy concerns.

“One air carrier initially agreed to provide passenger data for testing purposes, but adverse publicity resulted in its withdrawal from participation. Similar situations occurred for the other two potential data providers,” the report said. “TSA’s attempts to obtain test data are still ongoing, and privacy issues remain a stumbling block.”

The TSA canned CAPPS II that year, switching instead to a system called Secure Flight. This also implemented a color-coded security system for passengers and uses the US government’s No-Fly list.

The information that ARC funnels to the US government reportedly comes only from travel agencies, meaning that direct bookings with airlines hopefully won’t be logged in this way. Passengers might want to consider that when making travel plans.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Update your Apple devices to fix dozens of vulnerabilities

Apple has released security updates for iPhones, iPads, Apple Watches, Apple TVs, and Macs as well as for Safari, and Xcode to fix dozens of vulnerabilities which could give cybercriminals access to sensitive data.

How to update your devices

How to update your iPhone or iPad

For iOS and iPadOS users, you can check if you’re using the latest software version, go to Settings > General > Software Update. It’s also worth turning on Automatic Updates if you haven’t already. You can do that on the same screen.

 choices in the iPad update or upgrade screen

How to update macOS on any version

To update macOS on any supported Mac, use the Software Update feature, which Apple designed to work consistently across all recent versions. Here are the steps:

  • Click the Apple menu in the upper-left corner of your screen.
  • Choose System Settings (or System Preferences on older versions).
  • Select General in the sidebar, then click Software Update on the right. On older macOS, just look for Software Update directly.
  • Your Mac will check for updates automatically. If updates are available, click Update Now (or Upgrade Now for major new versions) and follow the on-screen instructions. Before you upgrade to macOS Tahoe 26, please read these instructions.
  • Enter your administrator password if prompted, then let your Mac finish the update (it might need to restart during this process).
  • Make sure your Mac stays plugged in and connected to the internet until the update is done.

How to update Apple Watch

  • Ensure your iPhone is paired with your Apple Watch and connected to Wi-Fi.
  • Keep your Apple Watch on its charger and close to your iPhone.
  • Open the Watch app on your iPhone.
  • Tap General > Software Update.
  • If an update appears, tap Download and Install.
  • Enter your iPhone passcode or Apple ID password if prompted.

Your Apple Watch will automatically restart during the update process. Make sure it remains near your iPhone and on charge until the update completes.

How to update Apple TV

  • Turn on your Apple TV and make sure it’s connected to the internet.
  • Open the Settings app on Apple TV.
  • Navigate to System > Software Updates.
  • Select Update Software.
  • If an update appears, select Download and Install.

The Apple TV will download the update and restart as needed. Keep your device connected to power and Wi-Fi until the process finishes.

Updates for your particular device

Apple has today released version 26 for all its software platforms. This new version brings in a new “Liquid Glass” design, expanded Apple Intelligence, and new features. You can choose to update to that version, or just update to fix the vulnerabilities:

iOS 26 and iPadOS 26 iPhone 11 and later, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 8th generation and later, and iPad mini 5th generation and later
iOS 18.7 and iPadOS 18.7 iPhone XS and later, iPad Pro 13-inch, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 7th generation and later, and iPad mini 5th generation and later
iOS 16.7.12 and iPadOS 16.7.12 iPhone 8, iPhone 8 Plus, iPhone X, iPad 5th generation, iPad Pro 9.7-inch, and iPad Pro 12.9-inch 1st generation
iOS 15.8.5 and iPadOS 15.8.5 iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Air 2, iPad mini (4th generation), and iPod touch (7th generation)
macOS Tahoe 26 Mac Studio (2022 and later), iMac (2020 and later), Mac Pro (2019 and later), Mac mini (2020 and later), MacBook Air with Apple silicon (2020 and later), MacBook Pro (16-inch, 2019), MacBook Pro (13-inch, 2020, Four Thunderbolt 3 ports), and MacBook Pro with Apple silicon (2020 and later)
macOS Sequoia 15.7 macOS Sequoia
macOS Sonoma 14.8 macOS Sonoma
tvOS 26 Apple TV HD and Apple TV 4K (all models)
watchOS 26 Apple Watch Series 6 and later
visionOS 26 Apple Vision Pro
Safari 26 macOS Sonoma and macOS Sequoia
Xcode 26 macOS Sequoia 15.6 and later

Technical details

Apple did not mention any actively exploited vulnerabilities, but there are two that we would like to highlight.

A vulnerability tracked as CVE-2025-43357 in Call History was found that could be used to fingerprint the user. Apple addressed this issue with improved redaction of sensitive information. This issue is fixed in macOS Tahoe 26, iOS 26, and iPadOS 26.

A vulnerability in the Safari browser tracked as CVE-2025-43327 where visiting a malicious website could lead to address bar spoofing. The issue was fixed by adding additional logic.

Address bar spoofing is a trick cybercriminals might use to make you believe you’re on a trusted website when in reality you’re not. Instead of showing the real address, attackers exploit browser flaws or use clever coding so the address bar displays something like login.bank.com even though you’re not on your bank’s site at all. This would allow the criminals to harvest your login credentials when you enter them on what is really their website.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

“A dare, a challenge, a bit of fun:” Children are hacking their own schools’ systems, says study

As if ransomware wasn’t enough of a security problem for the sector, educational institutions also need to worry about their own students, a recent study shows.

Last week, the UK Information Commissioner’s Office (ICO) published a report about the “insider threat of students”. Here are a few key points:

  • Over half of school insider cyberattacks were caused by students.
  • Almost a third of insider attack incidents caused by students involved guessing weak passwords or finding them jotted down on bits of paper.
  • Teen hackers are not breaking in, they are logging in.

The conclusion of the ICO is that:

“Children are hacking into their schools’ computer systems – and it may set them up for a life of cyber crime.”

The ICO examined a total of 215 personal data breach reports caused by insider attacks from the education sector between January 2022 and August 2024. They found that students were responsible for 57% of them, and that students covered 97% of the incidents that were caused by stolen login details.

The British National Crime Agency (NCA) reported about a survey of children aged 10-16 which showed that 20% engage in behaviors that violate the Computer Misuse Act, which criminalizes unauthorized access to computer systems and data. It adds a warning:

“The consequences of committing Computer Misuse Act offences are serious. In addition to being arrested and potentially given a criminal record, those caught can have their phone or computer taken away from them, risk expulsion from school, and face limits on their internet use, career opportunities and international travel.”

The reasons that children provided for hacking included dares, notoriety, financial gain, revenge and rivalries. Security experts also mention cases of students altering grades or using staff credentials.

While the ICO report highlights a troubling trend in the UK, US data shows it faces similar problems. A March 2025 Center for Internet Security survey found 82% of K-12 schools experienced a cyber incident between July 2023 and December 2024, and security analysts say students pose an inside threat to the education sector.

In one high-profile US prosecution, a 19-year-old faced charges in connection with the 2024 PowerSchool compromise that exposed millions of records, student and teacher data. In the end, that incident that led to extortion attempts against districts and caused major operational disruption.

While seemingly less harmless, the consequences of student hacking can be just as serious as something like a ransomware attack, ending up spilling the personal data from students and teachers.

As Heather Toomey, Principal Cyber Specialist at the ICO put it:

“What starts out as a dare, a challenge, a bit of fun in a school setting can ultimately lead to children taking part in damaging attacks on organisations or critical infrastructure.”

Parents and schools need to warn children about the possible implications, no matter how innocent it may start. And more strict authorization of school staff and teachers could prevent a lot of these incidents, given that 30% of incidents were caused by stolen login details.

Protecting yourself or your children after a data breach

There are some actions you can take if you are, or suspect you or your children may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online and helps you recover after.

Grok, ChatGPT, other AIs happy to help phish senior citizens

If you are under the impression that cybercriminals need to get their hands on compromised AI chatbots to help them do their dirty work, think again.

Some AI chatbots are just so user friendly that they can help the user craft phishing text, and even malicious HTML and Javascript code.

A few weeks ago we published an article about the actions Anthropic was taking to stop its Claude AI from helping cybercriminals launch a cybercrime spree.

A recent investigation by Reuters journalists showed that Grok was more than happy to help them craft and perfect a phishing email targeting senior citizens. Grok is the AI marketed by Elon Musk’s xAI. Reuters reported:

“Grok generated the deception after being asked by Reuters to create a phishing email targeting the elderly. Without prodding, the bot also suggested fine-tuning the pitch to make it more urgent.”

In January 2025, we told you about a report that AI-supported spear phishing mails were equally as effective as phishing emails thought up by experts, and able to fool more than 50% of targets. But since then, the development of AI has grown exponentially and researchers are worrying about how to recognize AI-crafted phishing.

Phishing is the first step in many cybercrime campaigns. It poses an enormous problem with billions of phishing emails sent out every day. AI helps criminals to create more variation which makes pattern detection less effective and it helps them fine tune the messages themselves. And Reuters focused on senior citizens for a reason.

The FBI’s Internet Crime Complaint Center (IC3) 2024 report confirms that Americans aged 60 and older filed 147,127 complaints and lost nearly $4.9 billion to online fraud, representing a 43% increase in losses and a 46% increase in complaints compared to 2023.

Besides Grok, the reporters tested five other popular AI chatbots: ChatGPT, Meta AI, Claude, Gemini, and DeepSeek. Although most of the AI chatbots protested at first and cautioned the user not to use the emails in a real-life scenario, in the end their “will to please” helped overcome these obstacles.

Fred Heiding, a Harvard University researcher and an expert in phishing helped Reuters put the crafted emails to the test. Using a targeted approach to reach those most likely to fall for them, about 11% of the seniors clicked on the emails sent to them.

An investigation by Cybernews showed that Yellow.ai, an agentic AI provider for businesses such as Sony, Logitech, Hyundai, Domino’s, and hundreds of other brands could be persuaded to produce malicious HTML and JavaScript code. It even allowed attackers to bypass checks to inject unauthorized code into the system.

In a separate test by Reuters, Gemini produced a phishing email, saying it was “for educational purposes only,” but helpfully added that “for seniors, a sweet spot is often Monday to Friday, between 9:00 AM and 3:00 PM local time.”

After damaging reports like these are released, AI companies often build in additional guardrails for their chatbots, but that only highlights an ongoing dilemma in the industry. When providers tighten restrictions to protect users, they risk pushing people toward competing models that don’t play by the same rules.

Every time a platform moves to shut down risky prompts or limit generated content, some users will look for alternatives with fewer safety checks or ethical barriers. That tug of war between user demand and responsible restraint will likely fuel the next round of debate among developers, researchers, and policymakers.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Watch out for the “We are hiring” remote online evaluator message scam

Looking at our team’s recent text messages, you’d think that remote online evaluators are in high demand right now.

Several members of our team have received the almost exact same job offer scam texts. The content of the messages is almost identical, but there is a variation in background images.

same job different texts

“We are hiring
 Join our professional team
Job content Job title: remote online evaluator
Salary: $100-$600 per day
Working hours: 1-2 hours per day
Time: freedom can be done at home anytime
Job requirements: Age 25+

If you’re interested in this position, please answer ‘yes’ or ‘interested’ and I’ll send you the details. “

This type of scam has been around for a while, but the ones sending this exact same text have really taken off the last week. All the recipients that reported this scam are in the US and the messages all came from different US numbers.

You can rely on the fact that the only lazy job here is the scammer’s. There are different possible scenarios when the targets reply, but they all have negative consequences.

  • Advance fee scams are the most likely scenario. This is where the prospective employer explains what the job entails and then asks the target to pay for materials, start capital, or other onboarding costs. Typically, these payments are required in cryptocurrencies like USDT or Bitcoin.
  • Identity theft. Under the guise of needing your personal information before you can start working, the scammers will send you to a website to fill out sensitive information (full name, address, date of birth, SSN, banking details).
  • Money mule or laundering. The victim is actually working unknowingly to launder stolen money or cryptocurrency on behalf of the scammers. They will act as the person where the police come knocking first, giving the scammer more time to grab the money and run.
  • Phishing for further exploitation. Even if there is no immediate ask for money, they may direct the victim to click malicious links or install apps that harvest data.

How to stay safe from hiring scams

There are a few simple but very effective guidelines to stay out of the grasp of scammers that reach out to you with unsolicited job offers:

  • Don’t reply. Even if you reply ‘no’, all you’re doing is sending a signal that you’re reading their texts.
  • Never give out your personal information based on an unsolicited message.
  • Treat employers that want you to send them money before you can earn some with a healthy dose of suspicion. The same is true for those that pay a small amount and then ask for more in return.
  • Ignore offers that are too good to be true. They probably ARE too good to be true.
  • Is there a company name on the job ad? If not, question why.
  • If there is a company name mentioned, does the location of the company match the location of the phone number?

If you have already engaged with the scammers, there are some actions that can help you limit any damage:

  • Stop communication immediately.
  • Do not send money.
  • Contact your bank if you’ve shared any financial information.
  • Consider an identity monitoring service.
  • File a police/FTC report.
  • US recipients can forward scam texts to 7726 (SPAM).

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

A week in security (September 8 – September 14)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

AI browsers or agentic browsers: a look at the future of web surfing

Browsers like Chrome, Edge, and Firefox are our traditional gateway to the internet. But lately, we have seen a new generation of browsers emerge. These are AI-powered browsers or “agentic browsers”—which are not to be confused with your regular browsers that have just AI-powered plugins bolted on.

It might be better not to compare them to traditional browsers but look at them as personal assistants that perform online tasks for you. Embedded within the browser with no additional downloads needed, these assistants can download, summarize, automate tasks, or even make decisions on your behalf.

Which AI browsers are out there?

AI browsers are on the way. While I realize that this list will age quickly and probably badly, this is what is popular at the time of writing. These all have their specialties and weaknesses.

  • Dia browser: An AI-first browser where the URL bar doubles as a chat interface with the AI. It summarizes tabs, drafts text in your style, helps with shopping, and automates multi-step tasks without coding. It’s currently in beta and only available for Apple macOS 14+ with M1 chips or later and specifically designed for research, writing, and automation.
  • Fellou: Called the first agentic browser, it automates workflows like deep research, report generation, and multi-step web tasks, acting proactively rather than just reactively helping you browse. It’s very useful for researchers and reporters.
  • Comet: Developed by Perplexity.ai, Comet is a Chromium-based standalone AI browser. Comet treats browsing as a conversation, answering questions about pages, comparing content, and automating tasks like shopping or bookings. It aims to reduce tab overload and supports integration with apps like Gmail and Google Calendar.
  • Sigma browser: Privacy-conscious with end-to-end encryption. It combines AI tools for conversational assistance, summarization, and content generation, with features like ad-blocking and phishing protection.
  • Opera Neon: More experimental or niche, focused on AI-assisted tab management, workflows, and creative file management. Compared to the other browsers on this list, its AI features are limited.

These browsers offer various mixes of AI that can chat with you, automate tasks, summarize content, or organize your workflow better than traditional browsers ever could.

For those interested in a more technical evaluation, you can have a look at Mind2Web, which is a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website.

How are agentic browsers different from regular browsers?

Regular browsers mostly just show you websites. You determine what to search for, where to navigate, what links to click, and maybe choose what extensions to download for added features. AI browsers embed AI agents directly into this experience:

  • Conversational interface: Instead of just searching or typing URLs, you can talk or type natural language commands to the browser. For example, “Summarize these open tabs,” or “Add this product to my cart.”
  • Task automation: They don’t just assist, they act autonomously to execute complex multi-step tasks across sites—booking flights, researching topics, compiling reports, or managing your tabs.
  • Context awareness: AI browsers remember what you’re looking at in tabs or open apps and can synthesize information across them, providing a kind of continuous memory that helps cut through the clutter.
  • Built-in privacy and security features: Some integrate robust encryption, ad blockers, and phishing protection aligned with their AI capabilities.
  • Integrated AI tools: Text generation, summarization, translation, and workflow management are part of the browser, not separate plugins.

This means less manual juggling, fewer tabs, and a more proactive digital assistant built into the browser itself.

Are AI browsers safe to use?

With great AI power comes great responsibility, and risk. So, it’s important to consider the security and privacy implications if you decide to start using an AI browser and when to decide which one.

There are certain security wins. AI browsers tend to integrate anti-phishing tools, malware blocking, and sandboxing, sometimes surpassing traditional browsers in protecting users against web threats. For example, Sigma’s AI browser employs end-to-end encryption and compliance with global data regulations.

However, due to their advanced AI functionality and sometimes early-stage software status, AI browsers can be more complex and still evolving, which may introduce vulnerabilities or bugs. Some are invite-only or in beta, which limits exposure but also reduces maturity.

Privacy is another key concern. Many AI browsers process your data locally or encrypt it to protect user information, but some features may still require cloud-based AI processing. This means your browsing context or personal information could be transmitted to third parties, depending on the browser’s architecture and privacy policy. And, as browsing activity is key to many of the browser’s AI features, a user’s visited web sites—and perhaps even the words displayed on those websites—could be read and processed, even in a limited way, by the browser.

Consumers should carefully review each AI browser’s privacy documentation and look for features like local data encryption, minimal data logging, user consent for data sharing, and transparency about AI data usage.

As a result, choosing AI browsers from trusted developers with transparent privacy policies is crucial, especially if you let them handle sensitive information.

When are AI browsers useful, and when is it better to avoid them?

Given the early stages of development, we would recommend not using AI browsers, unless you understand what you’re doing and the risks involved.

When to use AI browsers:

  • If productivity and automation in browsing are priorities, such as during deep research, writing, or complex workflows.
  • When you want to cut down manual multitasking and tab overload with an AI that can help you summarize, fetch related information, and automate data processing.
  • For creative projects that require AI assistance directly in the browsing environment.
  • When privacy-centric options are selected and trusted.

When to avoid or be cautious:

  • If you handle highly sensitive data—including workplace data—and the browser’s privacy stance is unclear.
  • There will be concerns about early-stage software bugs or untested security.
  • When minimalism, speed, control, and simplicity are preferred over complex AI-driven features.
  • If your choice is limited it may be better to wait. Some AI browsers still focus on macOS or are limited to other platforms.

In essence, AI and agentic browsers are transformative tools meant to augment human browsing with AI intelligence but are best paired with an understanding of their platform maturity and privacy implications.

It is also good to understand that using them will come with a learning curve and that research into their vulnerabilities, although only scratching the surface has uncovered some serious security concerns. Specifically on how it’s possible to deliver prompt injection. Several researchers and security analysts have documented successful prompt injection methods targeting AI browsers and agentic browsing agents. Their reliance on dynamic content, tool execution, and user-provided data exposes AI browsers to a broad attack surface.

AI browsers are poised to redefine how we surf the web, blending browsing with intelligent assistance for a more productive and tailored experience. Like all new tech, choosing the right browser depends on balancing the promise of smart automation with careful security and privacy choices.

For cybersecurity-conscious users, experimenting with AI browsers like Sigma or Comet while keeping a standard browser for your day-to-day is a recommended strategy.

The future of web browsing is here. Browsers built on AI agents that think, act, and assist the user are available. But whether you and the current state of development are ready for it, is a decision only you can make.

Questions? Post them in the comments and I’ll add a FAQ section which answers those we know how.

From Fitbit to financial despair: How one woman lost her life savings and more to a scammer

We hear so often about people falling for scams and losing money. But we often don’t find out the real details of what happened, and how one “like” can turn into a nightmare that controls someone’s life for many years. This is that story.

Not too long ago, a scam victim named Karen reached out to me, asking for help. It’s a story that may seem unbelievable to some, but it happens more often than you think.

Karen tells us about the initial hook:

“My story started on January 1, 2020, when a man called Charles Hillary ‘liked’ something that I shared on the exercise app Fitbit. He kept on reaching out instead of just liking and moving on like most people do.“

It wasn’t long until “Charles” asked Karen if she wanted to move their chats to Google Hangouts. Karen used Google Hangouts at work so it didn’t seem like a strange request.

But moving a conversation to a more secure environment is not something scammers do for convenience. They do it to reduce the chance of anyone listening in on their conversation or finding out their identity.

Karen was slightly suspicious about when she would get messages from Charles, given that he had told her he was from Atlanta, Georgia.

“Every time he messaged me, I would receive it around 2am, so I asked him where he was. He responded and said he was on a contract job with Diamond Offshore Drilling in Ireland. I later found that not to be true.”

As it happens, Ireland is in the same time zone as West Africa Time (WAT), which is used in countries like Gabon, Congo, and Nigeria.

In late January, after Karen and Charles had been talking for almost a month, he asked her for some help.

Charles said he had lent his friend, also in the oil drilling business, a lot of money. His friend had paid him back, he said, but had left it in a box with a security company, Damag Security.

“He said the security company was closing and needed him to get his ‘box.’ He asked me to be the recipient and I asked him lots of questions but ultimately agreed since it would not cost me anything and I could place it in his bank at my local branch of Bank of America.”

Charles showed Karen the documentation:

Picture 1

Once a scammer has found an angle and the victim is invested, the costs will typically grow in number and in size.

“This is when the nightmare began. The box immediately cost me $3900 for shipping.”

After that, Karen was asked by the security for money for various forms. Charles told her all forms should have been secured when the money was placed with the security company.

“He played innocent through it all.”

The forms were expensive and ranged from $25,000 to around $60,000. Karen asked them to reduce the price and they did, so she paid.

Charles gave Karen several separate reasons as to why he wasn’t able to get the money himself:

  • His bank account had been frozen due to money laundering.
  • His ex-wife had taken a lot of his money so he froze his account until he could return in person.
  • He had illegally done oil drilling in Russian waters and made a lot of money—also in the box—and could not let anyone find out about it or he would go to prison.

It all does sound far-fetched, and it’s easy to read this and say you’d never get caught by something like this. However, Karen is a well-educated person who was manipulated into paying large sums of money. Scams can catch anyone out.

Karen realised something wasn’t right and that she was being scammed, so she filed a police report at her local Sheriff’s Office, along with the FBI, TBI, IC3 and the Better Business Bureau.

The local investigator found nothing on Charles Hillary. Worse, the damage was already done: Karen’s credit was bad, her finances in a mess, and nobody except for one friend and a co-worker knew.

“At this point, I owed about $65,000—some was a Discover loan, some were cash advances and some on credit cards…all in my name alone.”

The box scam continued until December 2020 until the scammer decided to change tactics.

Scammer threats, while scary, are typically empty. But how can a victim be sure of that? Karen tells us about the most recent threats the scammer made:

“The most recent threat was to my son’s wedding. He said the Russians had hired hit men in the United States to create a blood bath. He sent me the wedding invite to prove he knew who, where, and when. Nothing happened but he is still emailing me daily.”

The scammer started using a second, more supportive, persona. As an example of how this second persona was used, this bizarre, less aggressive email came after the threats to disrupt the wedding (all sic):

“I woke up with sadness in my spirit due to the recent threats against your children …

I have about $2500 in my wallet and if you can send the balance today that would be great so we can end this immediately instead of waiting for your son wedding to become a disaster or endangering his guess. I am willing to assist with $2500 if you can come up with the balance today and also the board will be in an agreement to prevent any future harm against your children. Get back to me as soon as possible.”

This persona expresses concern and sadness about the threats against the victim’s children and criticizes “Hillary” for continuing the threats. This dual-role tactic is a classic psychological manipulation technique often used in scams:

  • The victim feels fear and urgency from the threat.
  • Then they feel relief and trust from the “helper” who appears to be on their side.
  • This builds rapport and pressure to comply with demands.
  • The combination makes the victim feel psychologically cornered, pushing them to do things which they’d normally consider irrational.

Our investigation

An analysis of the language and style of the emails from the two personae shows it’s very likely the same person or same group of people working from the same script.

Many of the Gmail addresses the scammer used were removed after complaints to Google, but it’s trivial to set up a new one. Google did tell Karen that at least some of the accounts were set up from Nigeria.

Our own analysis of the headers of some recent emails didn’t reveal much useful information, unfortunately.

Email authentication and origin:

  • The email was sent from the Gmail server (mail-sor-f41.google.com) with IP 209.85.220.41, which is a legitimate Google mail server IP.
  • SPF, DKIM, and DMARC authentication all passed successfully for the domain gmail.com and the sending addresses charleshillary****@gmail.com and cortneymalander***@gmail.com. This means the emails were indeed sent through Google’s infrastructure and the sender address was not spoofed at the SMTP level.
  • ARC (Authenticated Received Chain) signatures are present but show no chain validation (cv=none), which is typical for a first hop.
  • The Return-Path and From address match, which is a clear sign that the envelope sender and header sender are consistent.

Conclusion: The sender’s Gmail accounts were likely compromised or set up for this scam, rather than the email being forged or spoofed at the server level. Looking at the list of past email addresses, we are pretty sure that all of them were created specifically for this scam.

We also followed up on some wire transfers that Karen made to pay the scammer, but we found that the receivers were scam victims as well, which the scammers used as money mules. The receivers of the wires were instructed to collect the money and put it in a Bitcoin wallet. Most of Karen’s payments went directly into those wallets.

We’ve advised Karen to ignore the scammers and not even open their emails anymore. At some point they will give up and turn their attention to other victims. Meanwhile, Karen will have to keep working two jobs as she has a remaining $20,000 debt.

Even after a month of not replying, Karen reports that she still receives emails from the scammer. They haven’t given up on extracting more money out of her. Her exhaustion and isolation showed in this reply to me:

“Appreciate your help so much. Wish I had found you a long time ago. Could have saved me money, 2nd jobs and a marriage from nearly going under. The devastation they cause is real.

This is what I daily beat myself up over. I saw the signs of scam. I was told it was scam, but they make it so dang real that I could not wrap my head around it being anything but truth. I looked for any and every sign of them stepping all over each other in their stories but never did until about two or so months ago.”

How to tell if you’re talking to a scammer

A few things that should have warned Karen:

  • The person that contacted you on one platform now wants to move to a different platform. Whether that is WhatsApp, Signal, Telegram, or as in Karen’s case, Google Hangouts. For a scammer this is not a matter of convenience, but of staying under the radar.
  • Time zones don’t match up. Based on their activity, you can make a rough guess about the time zone the person you are communicating with is in. Check if that matches their story. In Karen’s case the scammer picked Ireland which is very likely their actual time zone but given their use of the English language, not their actual location.
  • Dodgy paperwork. The documents Karen received would not have survived any legal or professional scrutiny. But since Karen was too embarrassed to tell anyone what she was involved in, she didn’t get a second opinion on the papers.
  • A second person starts messaging. Granted that the scammers had a decently thought out script, linguistic analysis would have shown Karen that the two separate personas were one and the same person with a very high accuracy.

If you feel like you might be talking to a scammer, STOP and think of the following tips:

  1. Slow down: Don’t let urgency or pressure push you to take action.
  2. Test them: Ask questions they should know the answer to, especially if you think they are posing as someone you know
  3. Opt out – Don’t be afraid to end the conversation.
  4. Prove it – If any companies are involved, confirm the request by contacting the company through a verified, trusted channel like an official website or method you’ve used in the past.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!