IT NEWS

A week in security (September 8 – September 14)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

AI browsers or agentic browsers: a look at the future of web surfing

Browsers like Chrome, Edge, and Firefox are our traditional gateway to the internet. But lately, we have seen a new generation of browsers emerge. These are AI-powered browsers or “agentic browsers”—which are not to be confused with your regular browsers that have just AI-powered plugins bolted on.

It might be better not to compare them to traditional browsers but look at them as personal assistants that perform online tasks for you. Embedded within the browser with no additional downloads needed, these assistants can download, summarize, automate tasks, or even make decisions on your behalf.

Which AI browsers are out there?

AI browsers are on the way. While I realize that this list will age quickly and probably badly, this is what is popular at the time of writing. These all have their specialties and weaknesses.

  • Dia browser: An AI-first browser where the URL bar doubles as a chat interface with the AI. It summarizes tabs, drafts text in your style, helps with shopping, and automates multi-step tasks without coding. It’s currently in beta and only available for Apple macOS 14+ with M1 chips or later and specifically designed for research, writing, and automation.
  • Fellou: Called the first agentic browser, it automates workflows like deep research, report generation, and multi-step web tasks, acting proactively rather than just reactively helping you browse. It’s very useful for researchers and reporters.
  • Comet: Developed by Perplexity.ai, Comet is a Chromium-based standalone AI browser. Comet treats browsing as a conversation, answering questions about pages, comparing content, and automating tasks like shopping or bookings. It aims to reduce tab overload and supports integration with apps like Gmail and Google Calendar.
  • Sigma browser: Privacy-conscious with end-to-end encryption. It combines AI tools for conversational assistance, summarization, and content generation, with features like ad-blocking and phishing protection.
  • Opera Neon: More experimental or niche, focused on AI-assisted tab management, workflows, and creative file management. Compared to the other browsers on this list, its AI features are limited.

These browsers offer various mixes of AI that can chat with you, automate tasks, summarize content, or organize your workflow better than traditional browsers ever could.

For those interested in a more technical evaluation, you can have a look at Mind2Web, which is a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website.

How are agentic browsers different from regular browsers?

Regular browsers mostly just show you websites. You determine what to search for, where to navigate, what links to click, and maybe choose what extensions to download for added features. AI browsers embed AI agents directly into this experience:

  • Conversational interface: Instead of just searching or typing URLs, you can talk or type natural language commands to the browser. For example, “Summarize these open tabs,” or “Add this product to my cart.”
  • Task automation: They don’t just assist, they act autonomously to execute complex multi-step tasks across sites—booking flights, researching topics, compiling reports, or managing your tabs.
  • Context awareness: AI browsers remember what you’re looking at in tabs or open apps and can synthesize information across them, providing a kind of continuous memory that helps cut through the clutter.
  • Built-in privacy and security features: Some integrate robust encryption, ad blockers, and phishing protection aligned with their AI capabilities.
  • Integrated AI tools: Text generation, summarization, translation, and workflow management are part of the browser, not separate plugins.

This means less manual juggling, fewer tabs, and a more proactive digital assistant built into the browser itself.

Are AI browsers safe to use?

With great AI power comes great responsibility, and risk. So, it’s important to consider the security and privacy implications if you decide to start using an AI browser and when to decide which one.

There are certain security wins. AI browsers tend to integrate anti-phishing tools, malware blocking, and sandboxing, sometimes surpassing traditional browsers in protecting users against web threats. For example, Sigma’s AI browser employs end-to-end encryption and compliance with global data regulations.

However, due to their advanced AI functionality and sometimes early-stage software status, AI browsers can be more complex and still evolving, which may introduce vulnerabilities or bugs. Some are invite-only or in beta, which limits exposure but also reduces maturity.

Privacy is another key concern. Many AI browsers process your data locally or encrypt it to protect user information, but some features may still require cloud-based AI processing. This means your browsing context or personal information could be transmitted to third parties, depending on the browser’s architecture and privacy policy. And, as browsing activity is key to many of the browser’s AI features, a user’s visited web sites—and perhaps even the words displayed on those websites—could be read and processed, even in a limited way, by the browser.

Consumers should carefully review each AI browser’s privacy documentation and look for features like local data encryption, minimal data logging, user consent for data sharing, and transparency about AI data usage.

As a result, choosing AI browsers from trusted developers with transparent privacy policies is crucial, especially if you let them handle sensitive information.

When are AI browsers useful, and when is it better to avoid them?

Given the early stages of development, we would recommend not using AI browsers, unless you understand what you’re doing and the risks involved.

When to use AI browsers:

  • If productivity and automation in browsing are priorities, such as during deep research, writing, or complex workflows.
  • When you want to cut down manual multitasking and tab overload with an AI that can help you summarize, fetch related information, and automate data processing.
  • For creative projects that require AI assistance directly in the browsing environment.
  • When privacy-centric options are selected and trusted.

When to avoid or be cautious:

  • If you handle highly sensitive data—including workplace data—and the browser’s privacy stance is unclear.
  • There will be concerns about early-stage software bugs or untested security.
  • When minimalism, speed, control, and simplicity are preferred over complex AI-driven features.
  • If your choice is limited it may be better to wait. Some AI browsers still focus on macOS or are limited to other platforms.

In essence, AI and agentic browsers are transformative tools meant to augment human browsing with AI intelligence but are best paired with an understanding of their platform maturity and privacy implications.

It is also good to understand that using them will come with a learning curve and that research into their vulnerabilities, although only scratching the surface has uncovered some serious security concerns. Specifically on how it’s possible to deliver prompt injection. Several researchers and security analysts have documented successful prompt injection methods targeting AI browsers and agentic browsing agents. Their reliance on dynamic content, tool execution, and user-provided data exposes AI browsers to a broad attack surface.

AI browsers are poised to redefine how we surf the web, blending browsing with intelligent assistance for a more productive and tailored experience. Like all new tech, choosing the right browser depends on balancing the promise of smart automation with careful security and privacy choices.

For cybersecurity-conscious users, experimenting with AI browsers like Sigma or Comet while keeping a standard browser for your day-to-day is a recommended strategy.

The future of web browsing is here. Browsers built on AI agents that think, act, and assist the user are available. But whether you and the current state of development are ready for it, is a decision only you can make.

Questions? Post them in the comments and I’ll add a FAQ section which answers those we know how.

From Fitbit to financial despair: How one woman lost her life savings and more to a scammer

We hear so often about people falling for scams and losing money. But we often don’t find out the real details of what happened, and how one “like” can turn into a nightmare that controls someone’s life for many years. This is that story.

Not too long ago, a scam victim named Karen reached out to me, asking for help. It’s a story that may seem unbelievable to some, but it happens more often than you think.

Karen tells us about the initial hook:

“My story started on January 1, 2020, when a man called Charles Hillary ‘liked’ something that I shared on the exercise app Fitbit. He kept on reaching out instead of just liking and moving on like most people do.“

It wasn’t long until “Charles” asked Karen if she wanted to move their chats to Google Hangouts. Karen used Google Hangouts at work so it didn’t seem like a strange request.

But moving a conversation to a more secure environment is not something scammers do for convenience. They do it to reduce the chance of anyone listening in on their conversation or finding out their identity.

Karen was slightly suspicious about when she would get messages from Charles, given that he had told her he was from Atlanta, Georgia.

“Every time he messaged me, I would receive it around 2am, so I asked him where he was. He responded and said he was on a contract job with Diamond Offshore Drilling in Ireland. I later found that not to be true.”

As it happens, Ireland is in the same time zone as West Africa Time (WAT), which is used in countries like Gabon, Congo, and Nigeria.

In late January, after Karen and Charles had been talking for almost a month, he asked her for some help.

Charles said he had lent his friend, also in the oil drilling business, a lot of money. His friend had paid him back, he said, but had left it in a box with a security company, Damag Security.

“He said the security company was closing and needed him to get his ‘box.’ He asked me to be the recipient and I asked him lots of questions but ultimately agreed since it would not cost me anything and I could place it in his bank at my local branch of Bank of America.”

Charles showed Karen the documentation:

Picture 1

Once a scammer has found an angle and the victim is invested, the costs will typically grow in number and in size.

“This is when the nightmare began. The box immediately cost me $3900 for shipping.”

After that, Karen was asked by the security for money for various forms. Charles told her all forms should have been secured when the money was placed with the security company.

“He played innocent through it all.”

The forms were expensive and ranged from $25,000 to around $60,000. Karen asked them to reduce the price and they did, so she paid.

Charles gave Karen several separate reasons as to why he wasn’t able to get the money himself:

  • His bank account had been frozen due to money laundering.
  • His ex-wife had taken a lot of his money so he froze his account until he could return in person.
  • He had illegally done oil drilling in Russian waters and made a lot of money—also in the box—and could not let anyone find out about it or he would go to prison.

It all does sound far-fetched, and it’s easy to read this and say you’d never get caught by something like this. However, Karen is a well-educated person who was manipulated into paying large sums of money. Scams can catch anyone out.

Karen realised something wasn’t right and that she was being scammed, so she filed a police report at her local Sheriff’s Office, along with the FBI, TBI, IC3 and the Better Business Bureau.

The local investigator found nothing on Charles Hillary. Worse, the damage was already done: Karen’s credit was bad, her finances in a mess, and nobody except for one friend and a co-worker knew.

“At this point, I owed about $65,000—some was a Discover loan, some were cash advances and some on credit cards…all in my name alone.”

The box scam continued until December 2020 until the scammer decided to change tactics.

Scammer threats, while scary, are typically empty. But how can a victim be sure of that? Karen tells us about the most recent threats the scammer made:

“The most recent threat was to my son’s wedding. He said the Russians had hired hit men in the United States to create a blood bath. He sent me the wedding invite to prove he knew who, where, and when. Nothing happened but he is still emailing me daily.”

The scammer started using a second, more supportive, persona. As an example of how this second persona was used, this bizarre, less aggressive email came after the threats to disrupt the wedding (all sic):

“I woke up with sadness in my spirit due to the recent threats against your children …

I have about $2500 in my wallet and if you can send the balance today that would be great so we can end this immediately instead of waiting for your son wedding to become a disaster or endangering his guess. I am willing to assist with $2500 if you can come up with the balance today and also the board will be in an agreement to prevent any future harm against your children. Get back to me as soon as possible.”

This persona expresses concern and sadness about the threats against the victim’s children and criticizes “Hillary” for continuing the threats. This dual-role tactic is a classic psychological manipulation technique often used in scams:

  • The victim feels fear and urgency from the threat.
  • Then they feel relief and trust from the “helper” who appears to be on their side.
  • This builds rapport and pressure to comply with demands.
  • The combination makes the victim feel psychologically cornered, pushing them to do things which they’d normally consider irrational.

Our investigation

An analysis of the language and style of the emails from the two personae shows it’s very likely the same person or same group of people working from the same script.

Many of the Gmail addresses the scammer used were removed after complaints to Google, but it’s trivial to set up a new one. Google did tell Karen that at least some of the accounts were set up from Nigeria.

Our own analysis of the headers of some recent emails didn’t reveal much useful information, unfortunately.

Email authentication and origin:

  • The email was sent from the Gmail server (mail-sor-f41.google.com) with IP 209.85.220.41, which is a legitimate Google mail server IP.
  • SPF, DKIM, and DMARC authentication all passed successfully for the domain gmail.com and the sending addresses charleshillary****@gmail.com and cortneymalander***@gmail.com. This means the emails were indeed sent through Google’s infrastructure and the sender address was not spoofed at the SMTP level.
  • ARC (Authenticated Received Chain) signatures are present but show no chain validation (cv=none), which is typical for a first hop.
  • The Return-Path and From address match, which is a clear sign that the envelope sender and header sender are consistent.

Conclusion: The sender’s Gmail accounts were likely compromised or set up for this scam, rather than the email being forged or spoofed at the server level. Looking at the list of past email addresses, we are pretty sure that all of them were created specifically for this scam.

We also followed up on some wire transfers that Karen made to pay the scammer, but we found that the receivers were scam victims as well, which the scammers used as money mules. The receivers of the wires were instructed to collect the money and put it in a Bitcoin wallet. Most of Karen’s payments went directly into those wallets.

We’ve advised Karen to ignore the scammers and not even open their emails anymore. At some point they will give up and turn their attention to other victims. Meanwhile, Karen will have to keep working two jobs as she has a remaining $20,000 debt.

Even after a month of not replying, Karen reports that she still receives emails from the scammer. They haven’t given up on extracting more money out of her. Her exhaustion and isolation showed in this reply to me:

“Appreciate your help so much. Wish I had found you a long time ago. Could have saved me money, 2nd jobs and a marriage from nearly going under. The devastation they cause is real.

This is what I daily beat myself up over. I saw the signs of scam. I was told it was scam, but they make it so dang real that I could not wrap my head around it being anything but truth. I looked for any and every sign of them stepping all over each other in their stories but never did until about two or so months ago.”

How to tell if you’re talking to a scammer

A few things that should have warned Karen:

  • The person that contacted you on one platform now wants to move to a different platform. Whether that is WhatsApp, Signal, Telegram, or as in Karen’s case, Google Hangouts. For a scammer this is not a matter of convenience, but of staying under the radar.
  • Time zones don’t match up. Based on their activity, you can make a rough guess about the time zone the person you are communicating with is in. Check if that matches their story. In Karen’s case the scammer picked Ireland which is very likely their actual time zone but given their use of the English language, not their actual location.
  • Dodgy paperwork. The documents Karen received would not have survived any legal or professional scrutiny. But since Karen was too embarrassed to tell anyone what she was involved in, she didn’t get a second opinion on the papers.
  • A second person starts messaging. Granted that the scammers had a decently thought out script, linguistic analysis would have shown Karen that the two separate personas were one and the same person with a very high accuracy.

If you feel like you might be talking to a scammer, STOP and think of the following tips:

  1. Slow down: Don’t let urgency or pressure push you to take action.
  2. Test them: Ask questions they should know the answer to, especially if you think they are posing as someone you know
  3. Opt out – Don’t be afraid to end the conversation.
  4. Prove it – If any companies are involved, confirm the request by contacting the company through a verified, trusted channel like an official website or method you’ve used in the past.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Meta ignored child sex abuse in VR, say whistleblowers

Two former employees at Meta testified against the company at a Senate hearing this week, accusing it of downplaying the dangers of child abuse in its virtual reality (VR) environment.

The whistleblowers say they saw incidents where children were asked for sex acts and nude photos in Facebook’s VR world, which it calls the ‘metaverse’. This is a completely immersive world that people enter by wearing a Meta virtual reality headset. There, they are able to use a variety of apps that that surround them in 360-degree visuals. They can interact with the environment, and with other users.

At the hearing, held by the US Senate Judiciary Subcommittee on Privacy, Technology and the Law, the two former employees warned that Meta deliberately turned a blind eye to potential child harms. It restricted the information that researchers could collect about child safety and even altered research designs so that it could preserve plausible deniability, they said, adding that it also made researchers delete data that showed harm was being done to kids in VR.

“We researchers were directed how to write reports to limit risk to Meta,” said Jason Sattizahan, who researched integrity in Meta’s VR initiative during his six-year stint at the company. “Internal work groups were locked down, making it nearly impossible to share data and coordinate between teams to keep users safe. Mark Zuckerberg disparaged whistleblowers, claiming past disclosures were ‘used to construct a false narrative’”.

“When our research uncovered that underage children using Meta VR in Germany were subject to demands for sex acts, nude photos and other acts that no child should ever be exposed to, Meta demanded that we erase any evidence of such dangers that we saw,” continued Sattizahan. The company, which completely controlled his research, demanded that he change his methods to avoid collecting data on emotional and psychological harm, he said.

“Meta is aware that its VR platform is full of underage children,” said Cayce Savage, who led research on youth safety and virtual reality at Meta between 2019 and 2023. She added that recognizing this problem would force the company to kick them off the system, which would harm its engagement numbers. “Meta purposely turns a blind eye to this knowledge, despite it being obvious to anyone using their products.”

The dangers to children in VR are especially severe, Savage added, arguing that real-life physical movements made using the headsets and their controllers are required to affect the VR environment.

“Meta is aware that children are being harmed in VR. I quickly became aware that it is not uncommon for children in VR to experience bullying, sexual assault, to be solicited for nude photographs and sexual acts by pedophiles, and to be regularly exposed to mature content like gambling and violence, and to participate in adult experiences like strip clubs and watching pornography with strangers,” she said, adding that she had seen these things happening herself. “I wish I could tell you the percentage of children in VR experiencing these harms, but Meta would not allow me to conduct this research.”

In one case, abusers coordinated to set up a virtual strip club in the app Roblox and pay underage users the in-game currency, ‘Robux’, to have their avatars strip in the environment. Savage said she told Meta not to allow the app on its VR platform. “You can now download it in their app store,” she added.

This isn’t the first time that Meta has been accused of ignoring harm to children. In November 2023, a former employee warned that the company had ignored sexual dangers for children on Instagram, testifying that his own child had received unsolicited explicit pictures. In 2021, former employee Frances Haugen accused the company of downplaying risks to young users.

Facebook has reportedly referred to the “claims at the heart” of the hearing as “nonsense”.

Senator Marsha Blackburn, who chaired the meeting, has proposed the Kids Online Safety Act to force platforms into responsible design choices that would prevent harm to children.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Fake Bureau of Motor Vehicles texts are after your personal and banking details

Scammers are sending out texts that claim to be from the Bureau of Motor Vehicles (BMV), saying that you have outstanding traffic tickets.

Here’s an example, which was sent to one of our employees.

text message scam BMV

“Ohio (BMV) Final Notice: Enforcement Begins September 10nd.

Our records indicate that as of today, you still have an outstanding traffic ticket. Pursuant to Ohio Administrative Code 15C-16.003, if you fail to pay by September 9, 20025, we will take the following actions:

1. Report to the BMV violation database

2. Suspend your vehicle registration effective September 9st

3. Suspend your driving privileges for 30 days

4. Pay a 35% service fee at toll booths

5. You may be prosecuted, and your credit score will be affected.

Pay Now:

link

Please pay immediately before enforcement begins to avoid license suspension and further legal trouble. (Reply Y and reopen this message, or copy it to your browser.)

The Ohio Department of Public Safety actually warned about this scam a few months ago, and the Bowling Green (OH) Police Division repeated that warning on Facebook this week.

The people in Ohio are not alone. We found similar warnings issued by the Indiana DMV, Colorado DMV, West-Virginia DMV, Hawaii County, Arizona Department of Transportation, and the New Hampshire DMV.

If you click the link in the message, you’ll be taken to a website that mimics that of the department in question. The site contains a form to fill out your personal details and payment information, which can then be used for financial fraud or even identity theft.

The scam messages all look the same except for the domains which are rotated very fast, as is habitual in scams. Because they are all from the same campaign, it’s easy to recognize them though.

Red flags in the scam text:

There are some tell-tale signs in these scams which you can look for to recognize them as such;

  1. Spelling and grammar mistakes: the scammers seem to have problems with formatting dates. For example “September 10nd”, “9st” (instead of 9th or 1st).
  2. Urgency: you only have one or two days to pay. Or else…..
  3. The over-the-top threats: Real agencies won’t say your “credit score will be affected” for an unpaid traffic violation.
  4. Made-up legal codes: “Ohio Administrative Code 15C-16.003” doesn’t match any real Ohio BMV administrative codes. When a code looks fake, it probably is!
  5. Sketchy payment link: Real BMVs don’t send urgent “pay now or else” links by text. If you pay through the link, your wallet—or worse, your identity—is the real victim here.
  6. Vague or missing personalization: Genuine government agencies tend to use your legal name, not a generic scare message sent to many people at the same time.

How to stay safe

Recognizing scams is the most important part of protecting yourself, so always consider these golden rules:

  • Always search phone numbers and email addresses to look for associations with known scams.
  • When in doubt, go directly to the website of the organization that contacted you to see if there are any messages for you.
  • Do not get rushed into decisions without thinking them through.
  • Do not click on links in unsolicited text messages.
  • Do not reply, even if the text message explicitly tells you to do so.

If you have engaged with the scammers’ website:

  • Immediately change your passwords for any accounts that may have been compromised. 
  • Contact your bank or financial institution to report the incident and take any necessary steps to protect your accounts, such as freezing them or monitoring for suspicious activity. 
  • Consider a fraud alert or credit freeze. To start layering protection, you might want to place a fraud alert or credit freeze on your credit file with all three of the primary credit bureaus. This makes it harder for fraudsters to open new accounts in your name.
  • US citizens can report confirmed cases of identity theft to the FTC at identitytheft.gov.

Indicators of Compromise (IOCs)

We found the following domains involved in these scams, but there are probably many, many more. Hopefully it will give you an idea of what type of links the scammers are using:

https://ohio.dtetazt[.]shop/bmv?cdr=Bue4ZZ
https://askasas[.]top/portal
https://dmv.colorado-govw[.]icu/us


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

When AI chatbots leak and how it happens

In a recent article on Cybernews there were two clear signs of how fast the world of AI chatbots is growing. A company I had never even heard of had over 150 million app downloads across its portfolio, and it also had an exposed unprotected Elasticsearch instance.

This needs a bit of an explanation. I had never heard of Vyro AI, a company that probably still doesn’t ring many bells, but its app ImagineArt has over 10 million downloads on Google Play. Vyro AI also markets Chatly, which has over 100,000 downloads, and Chatbotx, a web-based chatbot with about 50,000 monthly visits.

An Elasticsearch instance is a database server running a tool used to quickly store and search lots of data. If it’s unsecured because it lacks passwords, authentication, or network restrictions, it is unprotected against unauthorized visitors. This means it’s freely accessible to access by anyone with internet access that happens to find it. And without any protection like a password or a firewall, anyone who finds the database online can read, copy, change, or even delete all its data.

The researcher that found the database says it covered both production and development environments and stored about 2–7 days’ worth of logs, including 116GB of user logs in real time from the company’s three popular apps.

The information that was accessible included:

  • AI prompts that users typed into the apps. AI prompts are the questions and instructions that users submit to the AI.
  • Bearer authentication tokens, which function similarly to cookies so the user does not have to log in before every session, and allows the user to view their history and enter prompts. An attacker could even hijack an account using these tokens.
  • User agents which are strings of text sent with requests to a server to identify the application, its version, and the device’s operating system. For native mobile apps, developers might include a custom user agent string within the HTTP headers of their requests. This allows developers to identify specific app users, and tailor content and experiences for different app versions or platforms.

The researcher found that the database was first indexed by IoT search engines in mid-February. IoT search engines actively find and list devices or servers that anyone can access on the internet. They help users discover vulnerable devices (such as cameras, printers, and smart home gadgets) and also locate open databases.

This means that attackers have had a chance to “stumble” over this open database for months. And with the information there they could have taken over user accounts, accessed chat histories and generated images, and made fraudulent AI credit purchases.

How does this happen all the time?

Generative AI has found a place in many homes and even more companies, which means there is a lot of money to be made.

But the companies delivering these AI chatbots feel they can only be relevant when they push out new products. So, their engineering efforts are put there where they can control the cash flow. Security and privacy concerns are secondary at best.

Just looking at the last few months, we have reported about:

  • Prompt injection vulnerabilities, where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.
  • An AI chatbot used to launch a cybercrime spree where cybercriminals were found to be using a chatbot to help them defraud people and breach organizations.
  • AI chats showing up in Google search results. These findings concerned Grok, ChatGPT, and Meta AI (twice).
  • An insecure backend application that exposed data about chatbot interactions of job applicants at McDonalds.

As diverse as the causes of the data breaches are—they stem from a combination of human error, platform weaknesses, and architectural flaws—the call to do something about them is starting to get heard.

Hopefully, 2025 will be remembered as a starting point for compliance regulations in the AI chatbots landscape.

The AI Act is a European regulation on artificial intelligence (AI). The Act entered into force on August 1, 2024, and is the first comprehensive regulation on AI by a major regulator anywhere.

The Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. But lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.

Although not completely ironed out, the NIS2 Directive is destined to have significant implications for AI providers, especially those operating in the EU or serving EU customers. Among others, AI model endpoints, APIs, and data pipelines must be protected to prevent breaches and attacks, ensuring secure deployment and operation.

And, although not cybersecurity related, the California State Assembly took a big step toward regulating AI on September 10, 2025, passing SB 243: a bill that aims to regulate AI companion chatbots in order to protect minors and vulnerable users. One of the major requirements is repeated warnings that the user is “talking to” an AI chatbot and not a real person, and that they should take a break.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

‘Astronaut-in-distress’ romance scammer steals money from elderly woman

A Japanese octogenarian from Hokkaido Island lost thousands of dollars after being scammed by someone who described himself as a desperate astronaut in need of help.  

According to Hokkaidō Broadcasting, police in Sapporo say the fraudster contacted the woman on social media in July. After several weeks of exchanging messages, the ‘astronaut’ claimed he was under attack in space and asked her to send money for “life-saving oxygen” through prepaid systems at five different convenience stores in the city.

The money requests escalated as the woman got more romantically attached to the scammer, resulting in a total loss of around 1 million Yen (US$6,700). At that point she told her family and reported the scam to the police.

Romance scammers typically target individuals on social media or online dating platforms, building trust over time, before convincing victims to send money, personal information, or valuable items—sometimes to help the scammer launder funds or goods. 

These scams have grown significantly in recent years, driven by the widespread loneliness epidemic and the increase in online activity. 

Police in Sapporo’s Teine district are now treating the case as a romance scam and have warned residents to be cautious of similar social media encounters. 

.wp-block-kadence-advancedheading.kt-adv-heading309287_2bc2ac-57, .wp-block-kadence-advancedheading.kt-adv-heading309287_2bc2ac-57[data-kb-block=”kb-adv-heading309287_2bc2ac-57″]{font-style:normal;}.wp-block-kadence-advancedheading.kt-adv-heading309287_2bc2ac-57 mark.kt-highlight, .wp-block-kadence-advancedheading.kt-adv-heading309287_2bc2ac-57[data-kb-block=”kb-adv-heading309287_2bc2ac-57″] mark.kt-highlight{font-style:normal;color:#f76a0c;-webkit-box-decoration-break:clone;box-decoration-break:clone;padding-top:0px;padding-right:0px;padding-bottom:0px;padding-left:0px;}

How to stay safe from romance scammers 

It’s very easy to look at a case like this and think “How could they not know they were being scammed?” But anyone can fall for a scam, especially as scammers get more and more sophisticated and their use of AI increases.

Here are some tips to stay safe:

  • Don’t send money or disclose sensitive information to anyone you have never met in person. 
  • Take it slow and read back answers. Scammers usually have a playbook, but sometimes you can spot inconsistencies in their answers. 
  • Cut them off early. As soon as you expect you are dealing with a scammer, stop responding. Don’t fall for sob stories or even physical threats they’ll use to keep the connection alive. 
  • Check their profile picture using an online search. You may find other profiles with the same picture (a huge red flag) or even reports of scammers using that picture.
  • If they ask you to move to another platform to chat, this is another red flag. They are not doing this for privacy reasons, but to stay under the radar of the platform where they first contacted you. 
  • Consult with a financial advisor or investment professional who can provide an objective opinion if you’re offered an investment opportunity. 
  • Share examples (anonymized) to help others. One way to do this is to use Malwarebytes Scam Guard, which also helps you assess if a message is a scam or not. 
  • Don’t do this alone. If you have any doubts, share your concerns with someone in your life that you trust. Their perspective may keep your feet on the ground. 
  • If you encounter something suspicious, report it to the appropriate authorities—such as local law enforcement or the FBI via its Internet Crime Complaint Center. Your actions could prevent others from falling victim.   

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Ransomware attack at blood center: Org tells users their data’s been stolen

A blood center has begun sending data breach notifications to its users after suffering a ransomware attack and theft of personal data.

The New York Blood Center’s (NYBC) suffered the ransomware attack in January, in which an unauthorized party gained access to its network and acquired copies of a subset of files. The security incident was first noticed on January 26, 2025, but this week NYBC has started notifying victims.

NYBC publicly acknowledged the scale but has not issued a precise number of affected people due to ongoing investigations and limitations in contact information for all service recipients. Based on documents that NYBC submitted to regulators in several states, hackers could have stolen information belonging to at least tens of thousands of people.

NYBC ranks among the largest independent community-based blood collection organizations in the US. It serves over 75 million people across more than 17 states and delivers about one million lifesaving blood products annually.

The information varies per affected individual but can include:

  • Name
  • Social Security number
  • Driver’s license or other government identification card number.
  • Financial account information if you participated in direct deposit.

NYBC also provides clinical services, and diagnostic blood testing, for which it needs clinical information from healthcare providers. New York Blood Center Enterprises said some of this information was also accessed by the attackers during the cyber incident.

So far it is unknown which ransomware group might have been behind the attack, and we have seen no threats to publish or sell the acquired data. But this could change quickly once negotiations about the ransom come to an end without the cybercriminals getting paid what they demand.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online and helps you recover after.

Pre-approved GLP-1 prescription scam could be bad for your health

A co-worker received a text which is, unfortunately, becoming more common. The text pretends to come from a doctor and states a weight-loss medication prescription has been approved.

prescription scam text screenshot

“Good morning. This is Dr. Santos. I pre-approved your GLP1 prescription. You may start treatment as of 09/04. {followed by a link}”

Signs it’s a scam

  1. The message claims to be from “Dr. Santos,” a doctor the recipient does not know.
  2. The text references a GLP-1 prescription. GLP-1 drugs (like Ozempic, Wegovy, and Mounjaro) are legitimate prescription medications for diabetes and weight loss, but they should only be prescribed by a health professional after an in-person consultation. No real provider would cold-text a random person about starting such treatment.
  3. The sender’s number appears to be in Texas while our co-worker lives in California. That is one long-distance prescription.
  4. The linked website does not match any real medical or pharmacy provider and is not a site known for drug fulfillment.

what’s more, when we visited the page with a US IP address, we received a Browser Guard warning:

Malwarebytes Browser Guard warning about the tracking site

The site tried to redirect me to a known Phishing domain while sending some information in the URL which might be used to identify which of the targets clicked the link.

savezmeetcomblock chrome

The use of a dedicated tracker subdomain (track.savezmeet[.]com) matches common phishing infrastructure, where user data is collected as soon as the victim clicks and before further redirection occurs.

URL parameters are routinely used in phishing to uniquely identify visitors and record who clicked which phishing SMS. In this case we suspect:

  • {var1} may refer to the vector or campaign type (“txt1” = SMS/text campaign).
  • {var2} is empty, possibly reserved for an additional variable (such as a tracking code or message ID)
  • {var3} is a 10-digit number meeting the format of a US phone number, which may be mapped to the target.

So we visited the URL after replacing the receiving phone number with the sender’s, and lo and behold, we got what we expected.

weight loss scam website

According to our telemetry, we first saw the track.savezmeet[.]com with this format on August 2. Malwarebytes has blocked MyStartHealth.com since March 2025.

What you will get if you decide to buy there is probably not recommended. The website explicitly uses compounded GLP-1 products (not FDA approved), with the disclaimer buried in legalese and clear acknowledgment that these are not branded or FDA-validated versions of Ozempic, Wegovy or any other GLP-1s.

And it’s not just an issue in the US, the EU recently sent out a warning about a sharp rise in illegal medicine sold in the EU.

“In recent months there has been a sharp rise in the number of illegal medicines marketed as GLP-1 receptor agonists such as semaglutide, liraglutide and tirzepatide for weight loss and diabetes. These products, often sold via fraudulent websites and promoted on social media, are not authorised and do not meet necessary standards of quality, safety and efficacy.”

So, besides social media, we can add cold texts as a means of promoting these products in the US.

Avoiding weight-loss scams

Before buying weight-loss products, there are a few pointers you can use:

  • Never follow unsolicited links in social media posts, text messages, or emails.
  • Don’t let anybody rush you into buying anything.
  • Read the fine print. Often this will tell you that you are signing up for a monthly subscription model instead of a one-time payment.
  • Research the name of the product the scammers are selling. In many cases you will find the name associated with scams.
  • If you have bought one of these products, keep an eye on your financial accounts, because some scammers might use your card for other transactions.
  • If you’re not sure if a text message is trustworthy, submit it to Malwarebytes Scam Guard and we will tell you if it’s likely genuine or a scam.
  • Use an active security solution that blocks malicious domains.
Malwarebytes blocks mystarthealth.com

Indicators of compromise (IOCs)

Phone number: +1(682) 416-2557

Domains:

andkovz[.]com

savezmeet[.]com

mystarthealth[.]com


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Plex users: Reset your password!

Media streaming platform Plex has warned customers about a data breach, advising them to reset their password.

Plex said an attacker broke into one of its databases, allowing them to access a “limited subset” of customer data. This included email addresses, usernames, hashed passwords, and authentication data.

“Any account passwords that may have been accessed were securely hashed, in accordance with best practices, meaning they cannot be read by a third party. Out of an abundance of caution, we recommend you take some additional steps to secure your account… Rest assured that we do not store credit card data on our servers, so this information was not compromised in this incident.”

Hashing is a way to protect users’ passwords by transforming them into a scrambled and unreadable format before storing them. Think of it like turning a password into a unique “fingerprint” made of random letters and numbers that doesn’t resemble the original password. This scrambled form is called a hash, and it is created using a special mathematical process called a hash function.

The main point about hashing is that it is a one-way process: once a password is hashed, it cannot be reversed or decrypted back into the original password. When you log in, the system hashes the password you enter and compares that to the stored hash. If they match, you get access. This means companies never store your real, plain text password, which helps keep your credentials safe even if their database is hacked.

The downside is that some systems are vulnerable to pass-the-hash attacks where an attacker can sign in by only knowing the hash. But those are mainly a concern in Windows network environments.

In the case of the Plex breach, pass-the-hash attacks are less of a worry for regular users. Plex uses hashed passwords mainly for user login access to its streaming platform, not for network-level authentication. Plex doesn’t directly enable attackers to authenticate anywhere else without cracking those hashes first.

However, as a precaution, Plex users should still follow the instructions from the company, below.

What Plex asks users to do

If you normally log in using a password: Reset your Plex account password immediately by visiting https://plex.tv/reset. During the reset process you’ll see a checkbox to “Sign out connected devices after password change,” which the company recommends you enable. This will sign you out of all your devices (including any Plex Media Server you own). After the reset you’ll need to sign back in with your new password.

If you normally log in using Single Sign-On: Log out of all active sessions by visiting http://plex.tv/security and clicking the button that says ”Sign out of all devices”. This will sign you out of all your devices (including any Plex Media Server you own) for your security, and you will then need to sign back in as normal.

For further account protection, we also recommend enabling two-factor authentication 2FA on your Plex account if you haven’t already done so.

Look out for any phishing attempts that may try to prey on this incident. Plex has said that no one at Plex will ever reach out to you over email to ask for a password or credit card number for payments.

Check your digital footprint

Malwarebytes has a free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.