IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Update your Apple devices to fix dozens of vulnerabilities

Apple has released security updates for iPhones, iPads, Apple Watches, Apple TVs, and Macs, as well as for Safari and Xcode, to fix dozens of vulnerabilities that could give cybercriminals access to sensitive data.

How to update your devices

How to update your iPhone or iPad

If you use iOS or iPadOS, you can check whether you’re on the latest software version by going to Settings > General > Software Update. It’s also worth turning on Automatic Updates if you haven’t already; you can do that on the same screen.


How to update macOS on any version

To update macOS on any supported Mac, use the Software Update feature, which Apple designed to work consistently across all recent versions. Here are the steps:

  • Click the Apple menu in the upper-left corner of your screen.
  • Choose System Settings (or System Preferences on older versions).
  • Select General in the sidebar, then click Software Update on the right. On older macOS, just look for Software Update directly.
  • Your Mac will check for updates automatically. If updates are available, click Update Now (or Upgrade Now for major new versions) and follow the on-screen instructions. Before you upgrade to macOS Tahoe 26, please read these instructions.
  • Enter your administrator password if prompted, then let your Mac finish the update (it might need to restart during this process).
  • Make sure your Mac stays plugged in and connected to the internet until the update is done.
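If you administer Macs over SSH or with scripts, the same update flow can be driven with Apple’s built-in softwareupdate command-line tool. A minimal sketch (macOS only; installing requires admin rights, and --restart will reboot the machine):

```shell
# List the updates available for this Mac
softwareupdate --list

# Download and install everything available; --restart reboots
# automatically if an update requires it (needs admin rights)
sudo softwareupdate --install --all --restart
```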

How to update Apple Watch

  • Ensure your iPhone is paired with your Apple Watch and connected to Wi-Fi.
  • Keep your Apple Watch on its charger and close to your iPhone.
  • Open the Watch app on your iPhone.
  • Tap General > Software Update.
  • If an update appears, tap Download and Install.
  • Enter your iPhone passcode or Apple ID password if prompted.

Your Apple Watch will automatically restart during the update process. Make sure it remains near your iPhone and on charge until the update completes.

How to update Apple TV

  • Turn on your Apple TV and make sure it’s connected to the internet.
  • Open the Settings app on Apple TV.
  • Navigate to System > Software Updates.
  • Select Update Software.
  • If an update appears, select Download and Install.

The Apple TV will download the update and restart as needed. Keep your device connected to power and Wi-Fi until the process finishes.

Updates for your particular device

Apple today released version 26 across all its software platforms. The new version introduces a “Liquid Glass” design, expanded Apple Intelligence, and other new features. You can choose to upgrade to that version, or just install the update that fixes the vulnerabilities:

  • iOS 26 and iPadOS 26: iPhone 11 and later, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 8th generation and later, and iPad mini 5th generation and later
  • iOS 18.7 and iPadOS 18.7: iPhone XS and later, iPad Pro 13-inch, iPad Pro 12.9-inch 3rd generation and later, iPad Pro 11-inch 1st generation and later, iPad Air 3rd generation and later, iPad 7th generation and later, and iPad mini 5th generation and later
  • iOS 16.7.12 and iPadOS 16.7.12: iPhone 8, iPhone 8 Plus, iPhone X, iPad 5th generation, iPad Pro 9.7-inch, and iPad Pro 12.9-inch 1st generation
  • iOS 15.8.5 and iPadOS 15.8.5: iPhone 6s (all models), iPhone 7 (all models), iPhone SE (1st generation), iPad Air 2, iPad mini (4th generation), and iPod touch (7th generation)
  • macOS Tahoe 26: Mac Studio (2022 and later), iMac (2020 and later), Mac Pro (2019 and later), Mac mini (2020 and later), MacBook Air with Apple silicon (2020 and later), MacBook Pro (16-inch, 2019), MacBook Pro (13-inch, 2020, Four Thunderbolt 3 ports), and MacBook Pro with Apple silicon (2020 and later)
  • macOS Sequoia 15.7: Macs running macOS Sequoia
  • macOS Sonoma 14.8: Macs running macOS Sonoma
  • tvOS 26: Apple TV HD and Apple TV 4K (all models)
  • watchOS 26: Apple Watch Series 6 and later
  • visionOS 26: Apple Vision Pro
  • Safari 26: macOS Sonoma and macOS Sequoia
  • Xcode 26: macOS Sequoia 15.6 and later

Technical details

Apple did not mention any actively exploited vulnerabilities, but there are two that we would like to highlight.

A vulnerability in Call History, tracked as CVE-2025-43357, could be used to fingerprint the user. Apple addressed the issue with improved redaction of sensitive information. It is fixed in macOS Tahoe 26, iOS 26, and iPadOS 26.

A vulnerability in the Safari browser, tracked as CVE-2025-43327, meant that visiting a malicious website could lead to address bar spoofing. Apple fixed the issue by adding additional logic.

Address bar spoofing is a trick cybercriminals might use to make you believe you’re on a trusted website when in reality you’re not. Instead of showing the real address, attackers exploit browser flaws or use clever coding so the address bar displays something like login.bank.com even though you’re not on your bank’s site at all. This would allow the criminals to harvest your login credentials when you enter them on what is really their website.
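A quick way to see why the parsed hostname, not the displayed text, is what matters: Python’s urllib.parse ignores whatever a lookalike address shows and extracts the real host. A minimal sketch (bank.com is a made-up example domain, not a real bank):

```python
from urllib.parse import urlparse

def is_expected_host(url: str, expected_host: str) -> bool:
    """Return True only if the URL's real hostname matches the host we expect.

    A spoofed address bar may *display* login.bank.com, but the underlying
    URL still points at the attacker's host, which is what urlparse sees.
    """
    host = (urlparse(url).hostname or "").lower()
    # Accept the exact host or a subdomain of it, nothing else.
    return host == expected_host or host.endswith("." + expected_host)

# The displayed text can lie; the parsed hostname cannot.
print(is_expected_host("https://login.bank.com.evil.example/login", "bank.com"))  # False
print(is_expected_host("https://login.bank.com/login", "bank.com"))  # True
```

Note that a naive substring check ("bank.com" in url) would wave the first URL through, which is exactly the trick lookalike domains rely on.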


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Grok, ChatGPT, other AIs happy to help phish senior citizens

If you are under the impression that cybercriminals need to get their hands on compromised AI chatbots to help them do their dirty work, think again.

Some AI chatbots are just so user friendly that they will help the user craft phishing text, and even malicious HTML and JavaScript code.

A few weeks ago we published an article about the actions Anthropic was taking to stop its Claude AI from helping cybercriminals launch a cybercrime spree.

A recent investigation by Reuters journalists showed that Grok was more than happy to help them craft and perfect a phishing email targeting senior citizens. Grok is the AI marketed by Elon Musk’s xAI. Reuters reported:

“Grok generated the deception after being asked by Reuters to create a phishing email targeting the elderly. Without prodding, the bot also suggested fine-tuning the pitch to make it more urgent.”

In January 2025, we told you about a report that AI-supported spear phishing emails were just as effective as phishing emails thought up by experts, and able to fool more than 50% of targets. Since then, AI development has accelerated dramatically, and researchers are worrying about how to recognize AI-crafted phishing.

Phishing is the first step in many cybercrime campaigns, and it poses an enormous problem, with billions of phishing emails sent out every day. AI helps criminals create more variation, which makes pattern detection less effective, and it helps them fine-tune the messages themselves. And Reuters focused on senior citizens for a reason.

The FBI’s Internet Crime Complaint Center (IC3) 2024 report confirms that Americans aged 60 and older filed 147,127 complaints and lost nearly $4.9 billion to online fraud, representing a 43% increase in losses and a 46% increase in complaints compared to 2023.

Besides Grok, the reporters tested five other popular AI chatbots: ChatGPT, Meta AI, Claude, Gemini, and DeepSeek. Although most of the AI chatbots protested at first and cautioned the user not to use the emails in a real-life scenario, in the end their “will to please” helped overcome these obstacles.

Fred Heiding, a Harvard University researcher and an expert in phishing, helped Reuters put the crafted emails to the test. Using a targeted approach to reach those most likely to fall for them, about 11% of the seniors clicked on the emails sent to them.

An investigation by Cybernews showed that Yellow.ai, an agentic AI provider for businesses such as Sony, Logitech, Hyundai, Domino’s, and hundreds of other brands, could be persuaded to produce malicious HTML and JavaScript code. It even allowed attackers to bypass checks and inject unauthorized code into the system.

In a separate test by Reuters, Gemini produced a phishing email, saying it was “for educational purposes only,” but helpfully added that “for seniors, a sweet spot is often Monday to Friday, between 9:00 AM and 3:00 PM local time.”

After damaging reports like these are released, AI companies often build in additional guardrails for their chatbots, but that only highlights an ongoing dilemma in the industry. When providers tighten restrictions to protect users, they risk pushing people toward competing models that don’t play by the same rules.

Every time a platform moves to shut down risky prompts or limit generated content, some users will look for alternatives with fewer safety checks or ethical barriers. That tug of war between user demand and responsible restraint will likely fuel the next round of debate among developers, researchers, and policymakers.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

“A dare, a challenge, a bit of fun:” Children are hacking their own schools’ systems, says study

As if ransomware wasn’t enough of a security problem for the sector, educational institutions also need to worry about their own students, a recent study shows.

Last week, the UK Information Commissioner’s Office (ICO) published a report about the “insider threat of students”. Here are a few key points:

  • Over half of school insider cyberattacks were caused by students.
  • Almost a third of insider attack incidents caused by students involved guessing weak passwords or finding them jotted down on bits of paper.
  • Teen hackers are not breaking in; they are logging in.

The conclusion of the ICO is that:

“Children are hacking into their schools’ computer systems – and it may set them up for a life of cyber crime.”

The ICO examined a total of 215 personal data breach reports caused by insider attacks in the education sector between January 2022 and August 2024. It found that students were responsible for 57% of them, and for 97% of the incidents that involved stolen login details.

The British National Crime Agency (NCA) reported on a survey of children aged 10-16 which showed that 20% engage in behaviors that violate the Computer Misuse Act, which criminalizes unauthorized access to computer systems and data. The NCA adds a warning:

“The consequences of committing Computer Misuse Act offences are serious. In addition to being arrested and potentially given a criminal record, those caught can have their phone or computer taken away from them, risk expulsion from school, and face limits on their internet use, career opportunities and international travel.”

The reasons that children provided for hacking included dares, notoriety, financial gain, revenge and rivalries. Security experts also mention cases of students altering grades or using staff credentials.

While the ICO report highlights a troubling trend in the UK, US data shows schools there face similar problems. A March 2025 Center for Internet Security survey found that 82% of K-12 schools experienced a cyber incident between July 2023 and December 2024, and security analysts say students pose an insider threat to the education sector.

In one high-profile US prosecution, a 19-year-old faced charges in connection with the 2024 PowerSchool compromise, which exposed millions of records containing student and teacher data. That incident led to extortion attempts against districts and caused major operational disruption.

While it may seem less harmful, student hacking can have consequences just as serious as a ransomware attack, ending with the personal data of students and teachers spilled.

As Heather Toomey, Principal Cyber Specialist at the ICO put it:

“What starts out as a dare, a challenge, a bit of fun in a school setting can ultimately lead to children taking part in damaging attacks on organisations or critical infrastructure.”

Parents and schools need to warn children about the possible implications, no matter how innocently it may start. And stricter credential practices among school staff and teachers could prevent a lot of these incidents, given that 30% were caused by stolen login details.

Protecting yourself or your children after a data breach

There are some actions you can take if you, or your children, are or may have been the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover afterwards.

Watch out for the “We are hiring” remote online evaluator message scam

Looking at our team’s recent text messages, you’d think that remote online evaluators are in high demand right now.

Several members of our team have received almost exactly the same job offer scam text. The content of the messages is nearly identical, but the background images vary.


“We are hiring
 Join our professional team
Job content Job title: remote online evaluator
Salary: $100-$600 per day
Working hours: 1-2 hours per day
Time: freedom can be done at home anytime
Job requirements: Age 25+

If you’re interested in this position, please answer ‘yes’ or ‘interested’ and I’ll send you the details. “

This type of scam has been around for a while, but campaigns using this exact text have really taken off over the last week. All the recipients who reported this scam are in the US, and the messages all came from different US numbers.

You can rely on the fact that the only lazy job here is the scammer’s. There are different possible scenarios when the targets reply, but they all have negative consequences.

  • Advance fee scams are the most likely scenario. This is where the prospective employer explains what the job entails and then asks the target to pay for materials, start capital, or other onboarding costs. Typically, these payments are required in cryptocurrencies like USDT or Bitcoin.
  • Identity theft. Under the guise of needing your personal information before you can start working, the scammers will send you to a website to fill out sensitive information (full name, address, date of birth, SSN, banking details).
  • Money mule or laundering. The victim unknowingly launders stolen money or cryptocurrency on behalf of the scammers. They will be the first person the police come knocking on, giving the scammer more time to grab the money and run.
  • Phishing for further exploitation. Even if there is no immediate ask for money, they may direct the victim to click malicious links or install apps that harvest data.

How to stay safe from hiring scams

There are a few simple but very effective guidelines to stay out of the grasp of scammers that reach out to you with unsolicited job offers:

  • Don’t reply. Even if you reply ‘no’, all you’re doing is sending a signal that you’re reading their texts.
  • Never give out your personal information based on an unsolicited message.
  • Treat employers that want you to send them money before you can earn some with a healthy dose of suspicion. The same is true for those that pay a small amount and then ask for more in return.
  • Ignore offers that are too good to be true. They probably ARE too good to be true.
  • Is there a company name on the job ad? If not, question why.
  • If there is a company name mentioned, does the location of the company match the location of the phone number?

If you have already engaged with the scammers, there are some actions that can help you limit any damage:

  • Stop communication immediately.
  • Do not send money.
  • Contact your bank if you’ve shared any financial information.
  • Consider an identity monitoring service.
  • File a police/FTC report.
  • US recipients can forward scam texts to 7726 (SPAM).


A week in security (September 8 – September 14)

Last week on Malwarebytes Labs:

Stay safe!



AI browsers or agentic browsers: a look at the future of web surfing

Browsers like Chrome, Edge, and Firefox are our traditional gateway to the internet. But lately, we have seen a new generation of browsers emerge. These are AI-powered browsers or “agentic browsers”—which are not to be confused with your regular browsers that have just AI-powered plugins bolted on.

It might be better not to compare them to traditional browsers but look at them as personal assistants that perform online tasks for you. Embedded within the browser with no additional downloads needed, these assistants can download, summarize, automate tasks, or even make decisions on your behalf.

Which AI browsers are out there?

AI browsers are on the way. While I realize that this list will age quickly and probably badly, this is what is popular at the time of writing. These all have their specialties and weaknesses.

  • Dia browser: An AI-first browser where the URL bar doubles as a chat interface with the AI. It summarizes tabs, drafts text in your style, helps with shopping, and automates multi-step tasks without coding. It’s currently in beta, available only on Apple macOS 14+ with M1 chips or later, and designed specifically for research, writing, and automation.
  • Fellou: Called the first agentic browser, it automates workflows like deep research, report generation, and multi-step web tasks, acting proactively rather than just reactively helping you browse. It’s very useful for researchers and reporters.
  • Comet: Developed by Perplexity.ai, Comet is a Chromium-based standalone AI browser. Comet treats browsing as a conversation, answering questions about pages, comparing content, and automating tasks like shopping or bookings. It aims to reduce tab overload and supports integration with apps like Gmail and Google Calendar.
  • Sigma browser: Privacy-conscious with end-to-end encryption. It combines AI tools for conversational assistance, summarization, and content generation, with features like ad-blocking and phishing protection.
  • Opera Neon: More experimental or niche, focused on AI-assisted tab management, workflows, and creative file management. Compared to the other browsers on this list, its AI features are limited.

These browsers offer various mixes of AI that can chat with you, automate tasks, summarize content, or organize your workflow better than traditional browsers ever could.

For those interested in a more technical evaluation, you can have a look at Mind2Web, which is a dataset for developing and evaluating generalist agents for the web that can follow language instructions to complete complex tasks on any website.

How are agentic browsers different from regular browsers?

Regular browsers mostly just show you websites. You determine what to search for, where to navigate, what links to click, and maybe choose what extensions to download for added features. AI browsers embed AI agents directly into this experience:

  • Conversational interface: Instead of just searching or typing URLs, you can talk or type natural language commands to the browser. For example, “Summarize these open tabs,” or “Add this product to my cart.”
  • Task automation: They don’t just assist, they act autonomously to execute complex multi-step tasks across sites—booking flights, researching topics, compiling reports, or managing your tabs.
  • Context awareness: AI browsers remember what you’re looking at in tabs or open apps and can synthesize information across them, providing a kind of continuous memory that helps cut through the clutter.
  • Built-in privacy and security features: Some integrate robust encryption, ad blockers, and phishing protection aligned with their AI capabilities.
  • Integrated AI tools: Text generation, summarization, translation, and workflow management are part of the browser, not separate plugins.

This means less manual juggling, fewer tabs, and a more proactive digital assistant built into the browser itself.

Are AI browsers safe to use?

With great AI power comes great responsibility, and risk. So it’s important to consider the security and privacy implications if you decide to start using an AI browser, and when deciding which one to use.

There are certain security wins. AI browsers tend to integrate anti-phishing tools, malware blocking, and sandboxing, sometimes surpassing traditional browsers in protecting users against web threats. For example, Sigma’s AI browser employs end-to-end encryption and compliance with global data regulations.

However, due to their advanced AI functionality and sometimes early-stage software status, AI browsers can be more complex and still evolving, which may introduce vulnerabilities or bugs. Some are invite-only or in beta, which limits exposure but also reduces maturity.

Privacy is another key concern. Many AI browsers process your data locally or encrypt it to protect user information, but some features may still require cloud-based AI processing. This means your browsing context or personal information could be transmitted to third parties, depending on the browser’s architecture and privacy policy. And, as browsing activity is key to many of the browser’s AI features, a user’s visited web sites—and perhaps even the words displayed on those websites—could be read and processed, even in a limited way, by the browser.

Consumers should carefully review each AI browser’s privacy documentation and look for features like local data encryption, minimal data logging, user consent for data sharing, and transparency about AI data usage.

As a result, choosing AI browsers from trusted developers with transparent privacy policies is crucial, especially if you let them handle sensitive information.

When are AI browsers useful, and when is it better to avoid them?

Given the early stages of development, we would recommend not using AI browsers, unless you understand what you’re doing and the risks involved.

When to use AI browsers:

  • If productivity and automation in browsing are priorities, such as during deep research, writing, or complex workflows.
  • When you want to cut down manual multitasking and tab overload with an AI that can help you summarize, fetch related information, and automate data processing.
  • For creative projects that require AI assistance directly in the browsing environment.
  • When privacy-centric options are selected and trusted.

When to avoid or be cautious:

  • If you handle highly sensitive data—including workplace data—and the browser’s privacy stance is unclear.
  • If you’re concerned about early-stage software bugs or untested security.
  • When minimalism, speed, control, and simplicity are preferred over complex AI-driven features.
  • If your choice is limited, it may be better to wait. Some AI browsers still focus on macOS or are limited to other platforms.

In essence, AI and agentic browsers are transformative tools meant to augment human browsing with AI intelligence but are best paired with an understanding of their platform maturity and privacy implications.

It is also good to understand that using them comes with a learning curve, and that research into their vulnerabilities, although only scratching the surface, has already uncovered some serious security concerns, specifically around prompt injection. Several researchers and security analysts have documented successful prompt injection methods targeting AI browsers and agentic browsing agents. Their reliance on dynamic content, tool execution, and user-provided data exposes AI browsers to a broad attack surface.

AI browsers are poised to redefine how we surf the web, blending browsing with intelligent assistance for a more productive and tailored experience. Like all new tech, choosing the right browser depends on balancing the promise of smart automation with careful security and privacy choices.

For cybersecurity-conscious users, experimenting with AI browsers like Sigma or Comet while keeping a standard browser for your day-to-day is a recommended strategy.

The future of web browsing is here: browsers built on AI agents that think, act, and assist the user are available. But whether you, and the current state of development, are ready for it is a decision only you can make.

Questions? Post them in the comments and I’ll add a FAQ section answering those we can.

From Fitbit to financial despair: How one woman lost her life savings and more to a scammer

We hear so often about people falling for scams and losing money. But we often don’t find out the real details of what happened, and how one “like” can turn into a nightmare that controls someone’s life for many years. This is that story.

Not too long ago, a scam victim named Karen reached out to me, asking for help. It’s a story that may seem unbelievable to some, but it happens more often than you think.

Karen tells us about the initial hook:

“My story started on January 1, 2020, when a man called Charles Hillary ‘liked’ something that I shared on the exercise app Fitbit. He kept on reaching out instead of just liking and moving on like most people do.“

It wasn’t long until “Charles” asked Karen if she wanted to move their chats to Google Hangouts. Karen used Google Hangouts at work so it didn’t seem like a strange request.

But moving a conversation to a more secure environment is not something scammers do for convenience. They do it to reduce the chance of anyone listening in on their conversation or finding out their identity.

Karen was slightly suspicious about the times she would get messages from Charles, given that he had told her he was from Atlanta, Georgia.

“Every time he messaged me, I would receive it around 2am, so I asked him where he was. He responded and said he was on a contract job with Diamond Offshore Drilling in Ireland. I later found that not to be true.”

As it happens, Ireland is in the same time zone as West Africa Time (WAT), which is used in countries like Gabon, Congo, and Nigeria.

In late January, after Karen and Charles had been talking for almost a month, he asked her for some help.

Charles said he had lent his friend, also in the oil drilling business, a lot of money. His friend had paid him back, he said, but had left it in a box with a security company, Damag Security.

“He said the security company was closing and needed him to get his ‘box.’ He asked me to be the recipient and I asked him lots of questions but ultimately agreed since it would not cost me anything and I could place it in his bank at my local branch of Bank of America.”

Charles showed Karen the documentation:


Once a scammer has found an angle and the victim is invested, the costs will typically grow in number and in size.

“This is when the nightmare began. The box immediately cost me $3900 for shipping.”

After that, the security company asked Karen for money for various forms. Charles told her all the forms should have been secured when the money was placed with the security company.

“He played innocent through it all.”

The forms were expensive and ranged from $25,000 to around $60,000. Karen asked them to reduce the price and they did, so she paid.

Charles gave Karen several separate reasons as to why he wasn’t able to get the money himself:

  • His bank account had been frozen due to money laundering.
  • His ex-wife had taken a lot of his money so he froze his account until he could return in person.
  • He had illegally done oil drilling in Russian waters and made a lot of money—also in the box—and could not let anyone find out about it or he would go to prison.

It all does sound far-fetched, and it’s easy to read this and say you’d never get caught by something like this. However, Karen is a well-educated person who was manipulated into paying large sums of money. Scams can catch anyone out.

Karen realised something wasn’t right and that she was being scammed, so she filed a police report at her local Sheriff’s Office, along with the FBI, TBI, IC3 and the Better Business Bureau.

The local investigator found nothing on Charles Hillary. Worse, the damage was already done: Karen’s credit was bad, her finances in a mess, and nobody except for one friend and a co-worker knew.

“At this point, I owed about $65,000—some was a Discover loan, some were cash advances and some on credit cards…all in my name alone.”

The box scam continued until December 2020, when the scammer decided to change tactics.

Scammer threats, while scary, are typically empty. But how can a victim be sure of that? Karen tells us about the most recent threats the scammer made:

“The most recent threat was to my son’s wedding. He said the Russians had hired hit men in the United States to create a blood bath. He sent me the wedding invite to prove he knew who, where, and when. Nothing happened but he is still emailing me daily.”

The scammer started using a second, more supportive, persona. As an example of how this second persona was used, this bizarre, less aggressive email came after the threats to disrupt the wedding (all sic):

“I woke up with sadness in my spirit due to the recent threats against your children …

I have about $2500 in my wallet and if you can send the balance today that would be great so we can end this immediately instead of waiting for your son wedding to become a disaster or endangering his guess. I am willing to assist with $2500 if you can come up with the balance today and also the board will be in an agreement to prevent any future harm against your children. Get back to me as soon as possible.”

This persona expresses concern and sadness about the threats against the victim’s children and criticizes “Hillary” for continuing the threats. This dual-role tactic is a classic psychological manipulation technique often used in scams:

  • The victim feels fear and urgency from the threat.
  • Then they feel relief and trust from the “helper” who appears to be on their side.
  • This builds rapport and pressure to comply with demands.
  • The combination makes the victim feel psychologically cornered, pushing them to do things which they’d normally consider irrational.

Our investigation

An analysis of the language and style of the emails from the two personae shows it’s very likely the same person or same group of people working from the same script.
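To illustrate the kind of comparison involved, here is a minimal stylometry sketch in Python (standard library only). The sample messages are paraphrased stand-ins, not the actual emails, and real forensic linguistics uses far richer features than word frequencies:

```python
import math
import re
from collections import Counter

def word_profile(texts):
    """Build a relative word-frequency profile from a list of messages."""
    counts = Counter()
    for text in texts:
        counts.update(re.findall(r"[a-z']+", text.lower()))
    total = sum(counts.values())
    return {word: n / total for word, n in counts.items()}

def cosine_similarity(p, q):
    """Cosine similarity between two profiles; values near 1.0 suggest the same writer."""
    dot = sum(p[w] * q[w] for w in set(p) & set(q))
    norm = math.sqrt(sum(v * v for v in p.values())) * math.sqrt(sum(v * v for v in q.values()))
    return dot / norm if norm else 0.0

# Paraphrased stand-ins for messages from the two personas (not the real emails)
hillary = ["I am willing to assist with the balance today, get back to me."]
helper = ["I am willing to assist with $2500 if you can come up with the balance today."]
print(cosine_similarity(word_profile(hillary), word_profile(helper)))
```

A consistently high similarity across many message pairs is what points to a single author or a shared script.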

Many of the Gmail addresses the scammer used were removed after complaints to Google, but it’s trivial to set up a new one. Google did tell Karen that at least some of the accounts were set up from Nigeria.

Our own analysis of the headers of some recent emails didn’t reveal much useful information, unfortunately.

Email authentication and origin:

  • The email was sent from the Gmail server (mail-sor-f41.google.com) with IP 209.85.220.41, which is a legitimate Google mail server IP.
  • SPF, DKIM, and DMARC authentication all passed successfully for the domain gmail.com and the sending addresses charleshillary****@gmail.com and cortneymalander***@gmail.com. This means the emails were indeed sent through Google’s infrastructure and the sender address was not spoofed at the SMTP level.
  • ARC (Authenticated Received Chain) signatures are present but show no chain validation (cv=none), which is typical for a first hop.
  • The Return-Path and From address match, meaning the envelope sender and the header sender are consistent.

Conclusion: The sender’s Gmail accounts were likely compromised or set up for this scam, rather than the email being forged or spoofed at the server level. Looking at the list of past email addresses, we are pretty sure that all of them were created specifically for this scam.
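For readers who want to run the same checks on their own mail, Python’s standard email library can parse these headers. This is a minimal sketch with a made-up message and a hypothetical sender address; real Authentication-Results headers are added by the receiving server and can be considerably more complex:

```python
import email
from email import policy

# A made-up message with a hypothetical sender, mirroring the headers we examined
raw = """\
Return-Path: <sender123@gmail.com>
From: sender123@gmail.com
Authentication-Results: mx.example.net;
 spf=pass smtp.mailfrom=gmail.com;
 dkim=pass header.d=gmail.com;
 dmarc=pass header.from=gmail.com
Subject: hello

body
"""

msg = email.message_from_string(raw, policy=policy.default)

# SPF/DKIM/DMARC results as recorded by the receiving server
auth = msg["Authentication-Results"]
checks = {m: f"{m}=pass" in auth for m in ("spf", "dkim", "dmarc")}

# Envelope sender vs. header sender consistency
return_path = msg["Return-Path"].strip("<>")
from_addr = msg["From"].addresses[0].addr_spec
consistent = return_path == from_addr

print(checks, consistent)
```

Keep in mind that passing SPF, DKIM, and DMARC only proves the mail really came through Google’s servers from that Gmail account; it says nothing about who controls the account.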

We also followed up on some wire transfers that Karen made to pay the scammer, but we found that the receivers were scam victims as well, whom the scammers used as money mules. The receivers of the wires were instructed to collect the money and put it in a Bitcoin wallet. Most of Karen’s payments went directly into those wallets.

We’ve advised Karen to ignore the scammers and not even open their emails anymore. At some point they will give up and turn their attention to other victims. Meanwhile, Karen will have to keep working two jobs as she has a remaining $20,000 debt.

Even after a month of not replying, Karen reports that she still receives emails from the scammer. They haven’t given up on extracting more money out of her. Her exhaustion and isolation showed in this reply to me:

“Appreciate your help so much. Wish I had found you a long time ago. Could have saved me money, 2nd jobs and a marriage from nearly going under. The devastation they cause is real.

This is what I daily beat myself up over. I saw the signs of scam. I was told it was scam, but they make it so dang real that I could not wrap my head around it being anything but truth. I looked for any and every sign of them stepping all over each other in their stories but never did until about two or so months ago.”

How to tell if you’re talking to a scammer

A few things that should have warned Karen:

  • The person who contacted you on one platform now wants to move to a different one, whether that’s WhatsApp, Signal, Telegram, or, as in Karen’s case, Google Hangouts. For a scammer this is not a matter of convenience, but of staying under the radar.
  • Time zones don’t match up. Based on their activity, you can make a rough guess about the time zone the person you are communicating with is in. Check if that matches their story. In Karen’s case the scammer claimed to be in Ireland, which is very likely their actual time zone but, given their use of the English language, not their actual location.
  • Dodgy paperwork. The documents Karen received would not have survived any legal or professional scrutiny. But since Karen was too embarrassed to tell anyone what she was involved in, she didn’t get a second opinion on the papers.
  • A second person starts messaging. Although the scammers had a decently thought-out script, linguistic analysis would have shown Karen with very high accuracy that the two separate personas were one and the same person.
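The time-zone check above can be rough-sketched in code: given the UTC hours at which messages arrive, score each candidate UTC offset by how many messages would fall into normal waking hours at that offset. The send times and the waking-hours window below are hypothetical:

```python
from collections import Counter

# Hypothetical UTC hours at which the scammer's messages arrived
utc_send_hours = [8, 10, 14, 20, 21, 9, 13, 19]

def likely_offsets(hours, active_window=(9, 22)):
    """Score each candidate UTC offset by how many messages would fall
    into normal waking hours (09:00-22:00 local) at that offset."""
    scores = Counter()
    for offset in range(-12, 13):
        for h in hours:
            local = (h + offset) % 24
            if active_window[0] <= local <= active_window[1]:
                scores[offset] += 1
    return scores.most_common(3)

# The best-scoring offsets are the sender's most plausible time zones
print(likely_offsets(utc_send_hours))
```

It’s a crude heuristic, since people do send mail at odd hours, but a story that requires the sender to be awake at 4 a.m. every day deserves suspicion.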

If you feel like you might be talking to a scammer, STOP and think of the following tips:

  1. Slow down: Don’t let urgency or pressure push you into taking action.
  2. Test them: Ask questions they should know the answer to, especially if you think they are posing as someone you know.
  3. Opt out: Don’t be afraid to end the conversation.
  4. Prove it: If any companies are involved, confirm the request by contacting the company through a verified, trusted channel, like an official website or a method you’ve used in the past.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Meta ignored child sex abuse in VR, say whistleblowers

Two former employees at Meta testified against the company at a Senate hearing this week, accusing it of downplaying the dangers of child abuse in its virtual reality (VR) environment.

The whistleblowers say they saw incidents where children were asked for sex acts and nude photos in Facebook’s VR world, which it calls the ‘metaverse’. This is a completely immersive world that people enter by wearing a Meta virtual reality headset. There, they are able to use a variety of apps that surround them in 360-degree visuals. They can interact with the environment, and with other users.

At the hearing, held by the US Senate Judiciary Subcommittee on Privacy, Technology and the Law, the two former employees warned that Meta deliberately turned a blind eye to potential child harms. It restricted the information that researchers could collect about child safety and even altered research designs so that it could preserve plausible deniability, they said, adding that it also made researchers delete data that showed harm was being done to kids in VR.

“We researchers were directed how to write reports to limit risk to Meta,” said Jason Sattizahan, who researched integrity in Meta’s VR initiative during his six-year stint at the company. “Internal work groups were locked down, making it nearly impossible to share data and coordinate between teams to keep users safe. Mark Zuckerberg disparaged whistleblowers, claiming past disclosures were ‘used to construct a false narrative’”.

“When our research uncovered that underage children using Meta VR in Germany were subject to demands for sex acts, nude photos and other acts that no child should ever be exposed to, Meta demanded that we erase any evidence of such dangers that we saw,” continued Sattizahan. The company, which completely controlled his research, demanded that he change his methods to avoid collecting data on emotional and psychological harm, he said.

“Meta is aware that its VR platform is full of underage children,” said Cayce Savage, who led research on youth safety and virtual reality at Meta between 2019 and 2023. She added that recognizing this problem would force the company to kick them off the system, which would harm its engagement numbers. “Meta purposely turns a blind eye to this knowledge, despite it being obvious to anyone using their products.”

The dangers to children in VR are especially severe, Savage added, because affecting the VR environment requires real-life physical movements made with the headsets and their controllers.

“Meta is aware that children are being harmed in VR. I quickly became aware that it is not uncommon for children in VR to experience bullying, sexual assault, to be solicited for nude photographs and sexual acts by pedophiles, and to be regularly exposed to mature content like gambling and violence, and to participate in adult experiences like strip clubs and watching pornography with strangers,” she said, adding that she had seen these things happening herself. “I wish I could tell you the percentage of children in VR experiencing these harms, but Meta would not allow me to conduct this research.”

In one case, abusers coordinated to set up a virtual strip club in the app Roblox and pay underage users the in-game currency, ‘Robux’, to have their avatars strip in the environment. Savage said she told Meta not to allow the app on its VR platform. “You can now download it in their app store,” she added.

This isn’t the first time that Meta has been accused of ignoring harm to children. In November 2023, a former employee warned that the company had ignored sexual dangers for children on Instagram, testifying that his own child had received unsolicited explicit pictures. In 2021, former employee Frances Haugen accused the company of downplaying risks to young users.

Facebook has reportedly referred to the “claims at the heart” of the hearing as “nonsense”.

Senator Marsha Blackburn, who chaired the meeting, has proposed the Kids Online Safety Act to force platforms into responsible design choices that would prevent harm to children.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Fake Bureau of Motor Vehicles texts are after your personal and banking details

Scammers are sending out texts that claim to be from the Bureau of Motor Vehicles (BMV), saying that you have outstanding traffic tickets.

Here’s an example, which was sent to one of our employees.

text message scam BMV

“Ohio (BMV) Final Notice: Enforcement Begins September 10nd.

Our records indicate that as of today, you still have an outstanding traffic ticket. Pursuant to Ohio Administrative Code 15C-16.003, if you fail to pay by September 9, 20025, we will take the following actions:

1. Report to the BMV violation database

2. Suspend your vehicle registration effective September 9st

3. Suspend your driving privileges for 30 days

4. Pay a 35% service fee at toll booths

5. You may be prosecuted, and your credit score will be affected.

Pay Now:

link

Please pay immediately before enforcement begins to avoid license suspension and further legal trouble. (Reply Y and reopen this message, or copy it to your browser.)”

The Ohio Department of Public Safety actually warned about this scam a few months ago, and the Bowling Green (OH) Police Division repeated that warning on Facebook this week.

People in Ohio are not alone. We found similar warnings issued by the Indiana DMV, Colorado DMV, West Virginia DMV, Hawaii County, Arizona Department of Transportation, and the New Hampshire DMV.

If you click the link in the message, you’ll be taken to a website that mimics that of the department in question. The site contains a form to fill out your personal details and payment information, which can then be used for financial fraud or even identity theft.

The scam messages all look the same except for the domains, which are rotated quickly, as is common in scams. Because they all come from the same campaign, though, they’re easy to recognize.

Red flags in the scam text:

There are some tell-tale signs in these scams which you can look for to recognize them as such:

  1. Spelling and grammar mistakes: the scammers seem to have problems formatting dates. For example, “September 10nd” and “9st” (instead of 10th and 9th), and the year “20025”.
  2. Urgency: you only have one or two days to pay. Or else…
  3. The over-the-top threats: Real agencies won’t say your “credit score will be affected” for an unpaid traffic violation.
  4. Made-up legal codes: “Ohio Administrative Code 15C-16.003” doesn’t match any real Ohio BMV administrative codes. When a code looks fake, it probably is!
  5. Sketchy payment link: Real BMVs don’t send urgent “pay now or else” links by text. If you pay through the link, your wallet—or worse, your identity—is the real victim here.
  6. Vague or missing personalization: Genuine government agencies tend to use your legal name, not a generic scare message sent to many people at the same time.
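The first red flag, the broken ordinal suffixes, is simple enough to check mechanically. Here is a hedged sketch in Python; the function name and rules are our own illustration, not part of any real filter:

```python
import re

# Correct ordinal suffixes for day numbers ending in 1, 2, 3 (outside 11-13)
VALID = {1: "st", 2: "nd", 3: "rd"}

def bad_ordinals(text):
    """Return day numbers carrying the wrong ordinal suffix, e.g. '10nd' or '9st'."""
    flagged = []
    for match in re.finditer(r"\b(\d{1,2})(st|nd|rd|th)\b", text):
        day, suffix = int(match.group(1)), match.group(2)
        last_two, last = day % 100, day % 10
        expected = "th" if 11 <= last_two <= 13 else VALID.get(last, "th")
        if suffix != expected:
            flagged.append(match.group(0))
    return flagged

print(bad_ordinals("Enforcement Begins September 10nd. Effective September 9st."))
# → ['10nd', '9st']
```

No single check like this proves a scam, but a government notice that can’t spell its own deadline is worth a second look.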

How to stay safe

Recognizing scams is the most important part of protecting yourself, so always consider these golden rules:

  • Always search phone numbers and email addresses to look for associations with known scams.
  • When in doubt, go directly to the website of the organization that contacted you to see if there are any messages for you.
  • Do not get rushed into decisions without thinking them through.
  • Do not click on links in unsolicited text messages.
  • Do not reply, even if the text message explicitly tells you to do so.

If you have engaged with the scammers’ website:

  • Immediately change your passwords for any accounts that may have been compromised. 
  • Contact your bank or financial institution to report the incident and take any necessary steps to protect your accounts, such as freezing them or monitoring for suspicious activity. 
  • Consider a fraud alert or credit freeze. To start layering protection, you might want to place a fraud alert or credit freeze on your credit file with all three of the primary credit bureaus. This makes it harder for fraudsters to open new accounts in your name.
  • US citizens can report confirmed cases of identity theft to the FTC at identitytheft.gov.

Indicators of Compromise (IOCs)

We found the following domains involved in these scams, but there are probably many, many more. Hopefully this list gives you an idea of what type of links the scammers are using:

https://ohio.dtetazt[.]shop/bmv?cdr=Bue4ZZ
https://askasas[.]top/portal
https://dmv.colorado-govw[.]icu/us


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

When AI chatbots leak and how it happens

A recent article on Cybernews contained two clear signs of how fast the world of AI chatbots is growing: a company I had never even heard of had over 150 million app downloads across its portfolio, and it also had an exposed, unprotected Elasticsearch instance.

This needs a bit of an explanation. I had never heard of Vyro AI, a company that probably still doesn’t ring many bells, but its app ImagineArt has over 10 million downloads on Google Play. Vyro AI also markets Chatly, which has over 100,000 downloads, and Chatbotx, a web-based chatbot with about 50,000 monthly visits.

An Elasticsearch instance is a database server running a tool used to quickly store and search large amounts of data. If it’s unsecured because it lacks passwords, authentication, or network restrictions, it’s freely accessible to anyone on the internet who happens to find it. Without any protection like a password or a firewall, anyone who finds the database online can read, copy, change, or even delete all its data.
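To make this concrete, here is a minimal sketch of how you might audit one of your own Elasticsearch instances for this problem, using only Python’s standard library. The URL is a placeholder; only probe servers you own:

```python
import json
import urllib.error
import urllib.request

def check_elasticsearch(base_url):
    """Report whether an Elasticsearch instance answers without credentials.
    For auditing servers you own; the URL used below is a placeholder."""
    try:
        with urllib.request.urlopen(f"{base_url}/", timeout=5) as resp:
            info = json.load(resp)
            # An open instance returns its cluster info to anyone who asks
            return {"open": True, "cluster": info.get("cluster_name")}
    except urllib.error.HTTPError as err:
        # 401/403 means some authentication layer is in place
        return {"open": False, "status": err.code}
    except OSError:
        # Covers connection refused, DNS failure, and timeouts
        return {"open": False, "status": "unreachable"}

print(check_elasticsearch("http://localhost:9200"))
```

A properly secured instance answers with a 401, or doesn’t answer unauthenticated internet traffic at all.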

The researcher who found the database says it covered both production and development environments and stored about 2–7 days’ worth of logs, including 116GB of real-time user logs from the company’s three popular apps.

The information that was accessible included:

  • AI prompts that users typed into the apps. AI prompts are the questions and instructions that users submit to the AI.
  • Bearer authentication tokens, which function similarly to cookies: they let the user skip logging in before every session, view their history, and enter prompts. An attacker could even hijack an account using these tokens.
  • User agents which are strings of text sent with requests to a server to identify the application, its version, and the device’s operating system. For native mobile apps, developers might include a custom user agent string within the HTTP headers of their requests. This allows developers to identify specific app users, and tailor content and experiences for different app versions or platforms.
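To illustrate the last two items, this is roughly what a bearer token and a custom user agent look like on an outgoing request. All values below are made up for illustration:

```python
import urllib.request

# Made-up values illustrating what the leaked logs contained
token = "eyJhbGciOi.example.token"  # a bearer token proves identity without a fresh login
user_agent = "ImagineArt/3.2.1 (Android 14; Pixel 8)"  # a custom app user agent string

# Build (but don't send) a request the way a mobile app's backend client might
req = urllib.request.Request(
    "https://api.example.com/v1/history",
    headers={
        "Authorization": f"Bearer {token}",  # anyone holding this can act as the user
        "User-Agent": user_agent,            # identifies app version and device OS
    },
)
print(req.get_header("Authorization"), req.get_header("User-agent"))
```

This is why leaked bearer tokens are so dangerous: unlike a password, they work for whoever presents them, with no further proof of identity required.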

The researcher found that the database was first indexed by IoT search engines in mid-February. IoT search engines actively find and list devices or servers that anyone can access on the internet. They help users discover vulnerable devices (such as cameras, printers, and smart home gadgets) and also locate open databases.

This means that attackers have had a chance to “stumble” over this open database for months. And with the information there they could have taken over user accounts, accessed chat histories and generated images, and made fraudulent AI credit purchases.

How does this happen all the time?

Generative AI has found a place in many homes and even more companies, which means there is a lot of money to be made.

But the companies delivering these AI chatbots feel they can only stay relevant by pushing out new products. So their engineering efforts go where they generate cash flow, and security and privacy concerns are secondary at best.

Just looking at the last few months, we have reported about:

  • Prompt injection vulnerabilities, where someone inserts carefully crafted input in the form of an ordinary conversation or data, to nudge or outright force an AI into doing something it wasn’t meant to do.
  • An AI chatbot used to launch a cybercrime spree where cybercriminals were found to be using a chatbot to help them defraud people and breach organizations.
  • AI chats showing up in Google search results. These findings concerned Grok, ChatGPT, and Meta AI (twice).
  • An insecure backend application that exposed data about chatbot interactions of job applicants at McDonald’s.

As diverse as the causes of the data breaches are—they stem from a combination of human error, platform weaknesses, and architectural flaws—the call to do something about them is starting to get heard.

Hopefully, 2025 will be remembered as a starting point for compliance regulations in the AI chatbots landscape.

The AI Act is a European regulation on artificial intelligence (AI). The Act entered into force on August 1, 2024, and is the first comprehensive regulation on AI by a major regulator anywhere.

The Act assigns applications of AI to three risk categories. First, applications and systems that create an unacceptable risk, such as government-run social scoring of the type used in China, are banned. Second, high-risk applications, such as a CV-scanning tool that ranks job applicants, are subject to specific legal requirements. Lastly, applications not explicitly banned or listed as high-risk are largely left unregulated.

Although the details are not completely ironed out, the NIS2 Directive is destined to have significant implications for AI providers, especially those operating in the EU or serving EU customers. Among other requirements, AI model endpoints, APIs, and data pipelines must be protected to prevent breaches and attacks, ensuring secure deployment and operation.

And, although not cybersecurity related, the California State Assembly took a big step toward regulating AI on September 10, 2025, passing SB 243: a bill that aims to regulate AI companion chatbots in order to protect minors and vulnerable users. One of the major requirements is repeated warnings that the user is “talking to” an AI chatbot and not a real person, and that they should take a break.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.