IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security.

Is your phone listening to you? (re-air) (Lock and Code S07E03)

This week on the Lock and Code podcast…

In January, Google settled a lawsuit that pricked up a few ears: It agreed to pay $68 million to a wide array of people who sued the company together, alleging that Google’s voice-activated smart assistant had secretly recorded their conversations, which were then sent to advertisers to target them with promotions.

Google admitted no wrongdoing in the settlement agreement, but the fact stands that one of the largest phone makers in the world chose to forgo a trial over some potentially explosive surveillance allegations. It’s a decision the public has seen before: last year, Apple agreed to pay $95 million to settle similar legal claims against its smart assistant, Siri.

Back-to-back, the stories raise a question that just seems to never go away: Are our phones listening to us?

This week, on the Lock and Code podcast with host David Ruiz, we revisit an episode from last year in which we tried to find the answer. In speaking to Electronic Frontier Foundation Staff Technologist Lena Cohen about mobile tracking overall, it becomes clear that, even if our phones aren’t literally listening to our conversations, the devices are stuffed with so many novel forms of surveillance that we need not say something out loud to be predictably targeted with ads for it.

“Companies are collecting so much information about us and in such covert ways that it really feels like they’re listening to us.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

AI chat app leak exposes 300 million messages tied to 25 million users

An independent security researcher uncovered a major data breach affecting Chat & Ask AI, one of the most popular AI chat apps on Google Play and Apple App Store, with more than 50 million users.

The researcher claims to have accessed 300 million messages from over 25 million users due to an exposed database. These messages reportedly included, among other things, discussions of illegal activities and requests for suicide assistance.

Behind the scenes, Chat & Ask AI is a “wrapper” app that plugs into various large language models (LLMs) from other companies, including OpenAI’s ChatGPT, Anthropic’s Claude, and Google’s Gemini. Users can choose which model they want to interact with.

The exposed data included user files containing entire chat histories, the models used, and other settings. But it also revealed data belonging to users of other apps developed by Codeway, the developer of Chat & Ask AI.

The vulnerability behind this data breach is a well-known and documented Firebase misconfiguration. Firebase is a cloud-based backend-as-a-service (BaaS) platform provided by Google that helps developers build, manage, and scale mobile and web applications.

When security researchers talk about Firebase misconfigurations, they mean a set of preventable errors in how developers set up Google Firebase services, errors that leave backend data, databases, and storage buckets accessible to the public without authentication.

One of the most common Firebase misconfigurations is leaving Security Rules set to public. This allows anyone with the project URL to read, modify, or delete data without authentication.

This prompted the researcher to create a tool that automatically scans apps on Google Play and the Apple App Store for this vulnerability, with astonishing results. The researcher, who goes by Harry, reportedly found that 103 of the 200 iOS apps they scanned had this issue, collectively exposing tens of millions of stored files.
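
As a rough illustration of the kind of probe such a scanner relies on, the sketch below asks a Firebase Realtime Database REST endpoint for data without any credentials; a 200 response with data means the Security Rules allow public reads. The project ID and the use of the requests library are illustrative assumptions, and you should only run checks like this against databases you own or are explicitly authorized to test.

```python
import requests

# Hypothetical project ID, for illustration only.
DB_URL = "https://example-project.firebaseio.com/.json"

def is_publicly_readable(db_url: str) -> bool:
    # "shallow" keeps the response tiny even if the database is huge.
    resp = requests.get(db_url, params={"shallow": "true"}, timeout=10)
    if resp.status_code == 200:
        return True   # readable with no authentication at all
    # 401/403 ("Permission denied") means the Security Rules require auth.
    return False

if __name__ == "__main__":
    print("Public read access:", is_publicly_readable(DB_URL))
```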

To draw attention to the issue, Harry set up a website where users can see the apps affected by the issue. Codeway’s apps are no longer listed there, as Harry removes entries once developers confirm they have fixed the problem. Codeway reportedly resolved the issue across all of its apps within hours of responsible disclosure.

How to stay safe

Besides checking if any apps you use appear in Harry’s Firehound registry, there are a few ways to better protect your privacy when using AI chatbots.

  • Use private chatbots that don’t use your data to train the model.
  • Don’t rely on chatbots for important life decisions. They have no experience or empathy.
  • Don’t use your real identity when discussing sensitive subjects.
  • Keep shared information impersonal. Don’t use real names and don’t upload personal documents.
  • Don’t share your conversations unless you absolutely have to. In some cases, it makes them searchable.
  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you’re not logged in to that social media platform. Your conversations could be linked to your social media account, which might contain a lot of personal information.

Always remember that AI is developing too fast for security and privacy to be reliably baked into the technology, and that even the best AI models still hallucinate.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Fake 7-Zip downloads are turning home PCs into proxy nodes

A convincing lookalike of the popular 7-Zip archiver site has been serving a trojanized installer that silently converts victims’ machines into residential proxy nodes—and it has been hiding in plain sight for some time.

“I’m so sick to my stomach”

A PC builder recently turned to Reddit’s r/pcmasterrace community in a panic after realizing they had downloaded 7‑Zip from the wrong website. Following a YouTube tutorial for a new build, they were instructed to download 7‑Zip from 7zip[.]com, unaware that the legitimate project is hosted exclusively at 7-zip.org.

In their Reddit post, the user described installing the file first on a laptop and later transferring it via USB to a newly built desktop. They encountered repeated 32‑bit versus 64‑bit errors and ultimately abandoned the installer in favor of Windows’ built‑in extraction tools. Nearly two weeks later, Microsoft Defender alerted on the system with a generic detection: Trojan:Win32/Malgent!MSR.

The experience illustrates how a seemingly minor domain mix-up can result in long-lived, unauthorized use of a system when attackers successfully masquerade as trusted software distributors.

A trojanized installer masquerading as legitimate software

This is not a simple case of a malicious download hosted on a random site. The operators behind 7zip[.]com distributed a trojanized installer via a lookalike domain, delivering a functional copy of the 7‑Zip File Manager alongside a concealed malware payload.

The installer is Authenticode‑signed using a now‑revoked certificate issued to Jozeal Network Technology Co., Limited, lending it superficial legitimacy. During installation, a modified build of 7zfm.exe is deployed and functions as expected, reducing user suspicion. In parallel, three additional components are silently dropped:

  • Uphero.exe—a service manager and update loader
  • hero.exe—the primary proxy payload (Go‑compiled)
  • hero.dll—a supporting library

All components are written to C:\Windows\SysWOW64\hero, a privileged directory that is unlikely to be manually inspected.

An independent update channel was also observed at update.7zip[.]com/version/win-service/1.0.0.2/Uphero.exe.zip, indicating that the malware payload can be updated independently of the installer itself.

Abuse of trusted distribution channels

One of the more concerning aspects of this campaign is its reliance on third‑party trust. The Reddit case highlights YouTube tutorials as an inadvertent malware distribution vector, where creators incorrectly reference 7zip[.]com instead of the legitimate domain.

This shows how attackers can exploit small errors in otherwise benign content ecosystems to funnel victims toward malicious infrastructure at scale.

Execution flow: from installer to persistent proxy service

Behavioral analysis shows a rapid and methodical infection chain:

1. File deployment—The payload is installed into SysWOW64, requiring elevated privileges and signaling intent for deep system integration.

2. Persistence via Windows services—Both Uphero.exe and hero.exe are registered as auto‑start Windows services running under System privileges, ensuring execution on every boot.

3. Firewall rule manipulation—The malware invokes netsh to remove existing rules and create new inbound and outbound allow rules for its binaries. This is intended to reduce interference with network traffic and support seamless payload updates.

4. Host profiling—Using WMI and native Windows APIs, the malware enumerates system characteristics including hardware identifiers, memory size, CPU count, disk attributes, and network configuration. The malware communicates with iplogger[.]org via a dedicated reporting endpoint, suggesting it collects and reports device or network metadata as part of its proxy infrastructure.

Functional goal: residential proxy monetization

While initial indicators suggested backdoor‑style capabilities, further analysis revealed that the malware’s primary function is proxyware. The infected host is enrolled as a residential proxy node, allowing third parties to route traffic through the victim’s IP address.

The hero.exe component retrieves configuration data from rotating “smshero”‑themed command‑and‑control domains, then establishes outbound proxy connections on non‑standard ports such as 1000 and 1002. Traffic analysis shows a lightweight XOR‑encoded protocol (key 0x70) used to obscure control messages.
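
Single‑byte XOR is trivial to reproduce; the sketch below shows the idea using the reported key 0x70. The actual message framing beyond that single byte hasn’t been published, so the sample payload here is made up purely to demonstrate the round trip.

```python
def xor_codec(data: bytes, key: int = 0x70) -> bytes:
    """XOR every byte with the reported single-byte key (0x70).

    XOR is symmetric, so the same function both obscures and recovers data.
    """
    return bytes(b ^ key for b in data)

# Made-up example payload, purely to show the round trip.
obscured = xor_codec(b"proxy-hello")
print(obscured)               # unreadable on the wire
print(xor_codec(obscured))    # b'proxy-hello'
```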

This infrastructure is consistent with known residential proxy services, where access to real consumer IP addresses is sold for fraud, scraping, ad abuse, or anonymity laundering.

Shared tooling across multiple fake installers

The 7‑Zip impersonation appears to be part of a broader operation. Related binaries have been identified under names such as upHola.exe, upTiktok, upWhatsapp, and upWire, all sharing identical tactics, techniques, and procedures:

  • Deployment to SysWOW64
  • Windows service persistence
  • Firewall rule manipulation via netsh
  • Encrypted HTTPS C2 traffic

Embedded strings referencing VPN and proxy brands suggest a unified backend supporting multiple distribution fronts.

Rotating infrastructure and encrypted transport

Memory analysis uncovered a large pool of hardcoded command-and-control domains using hero and smshero naming conventions. Active resolution during sandbox execution showed traffic routed through Cloudflare infrastructure with TLS‑encrypted HTTPS sessions.

The malware also uses DNS-over-HTTPS via Google’s resolver, reducing visibility for traditional DNS monitoring and complicating network-based detection.
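
For readers unfamiliar with DNS-over-HTTPS: the lookup is just an HTTPS request, so it blends in with ordinary web traffic. A minimal sketch against Google’s public DoH JSON endpoint (dns.google/resolve) makes the point; the domain queried below is a neutral placeholder, not one of the C2 domains.

```python
import json
import urllib.request

def doh_lookup(name: str, record_type: str = "A") -> list:
    """Resolve a hostname via Google's DNS-over-HTTPS JSON API.

    A network monitor sees only TLS traffic to dns.google, not a classic
    plaintext UDP/53 query, which is what blunts DNS-based detection.
    """
    url = f"https://dns.google/resolve?name={name}&type={record_type}"
    with urllib.request.urlopen(url, timeout=10) as resp:
        answer = json.load(resp)
    return [record["data"] for record in answer.get("Answer", [])]

print(doh_lookup("example.com"))
```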

Evasion and anti‑analysis features

The malware incorporates multiple layers of sandbox and analysis evasion:

  • Virtual machine detection targeting VMware, VirtualBox, QEMU, and Parallels
  • Anti‑debugging checks and suspicious debugger DLL loading
  • Runtime API resolution and PEB inspection
  • Process enumeration, registry probing, and environment inspection

Cryptographic support is extensive, including AES, RC4, Camellia, Chaskey, XOR encoding, and Base64, suggesting encrypted configuration handling and traffic protection.

Defensive guidance

Any system that has executed installers from 7zip[.]com should be considered compromised. While this malware establishes SYSTEM‑level persistence and modifies firewall rules, reputable security software can effectively detect and remove the malicious components. Malwarebytes is capable of fully eradicating known variants of this threat and reversing its persistence mechanisms. In high‑risk or heavily used systems, some users may still choose a full OS reinstall for absolute assurance, but it is not strictly required in all cases.

Users and defenders should:

  • Verify software sources and bookmark official project domains
  • Treat unexpected code‑signing identities with skepticism
  • Monitor for unauthorized Windows services and firewall rule changes (see the hunting sketch after this list)
  • Block known C2 domains and proxy endpoints at the network perimeter
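
As a starting point for the service and firewall checks above, here is a rough triage sketch (Windows only, Python standard library, best run from an elevated prompt). It looks for the dropped hero directory, services whose ImagePath points into it, and firewall rules that mention “hero”. It assumes English-locale netsh output and is not a replacement for a proper anti-malware scan.

```python
import os
import subprocess
import winreg  # Windows-only standard library module

HERO_DIR = r"C:\Windows\SysWOW64\hero"

def services_pointing_at_hero() -> list:
    """Service names whose ImagePath references the dropped 'hero' directory."""
    hits = []
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                        r"SYSTEM\CurrentControlSet\Services") as services:
        for i in range(winreg.QueryInfoKey(services)[0]):
            name = winreg.EnumKey(services, i)
            try:
                with winreg.OpenKey(services, name) as svc:
                    image_path, _ = winreg.QueryValueEx(svc, "ImagePath")
            except OSError:
                continue  # service key without an ImagePath value
            if r"syswow64\hero" in str(image_path).lower():
                hits.append(name)
    return hits

def firewall_rules_mentioning_hero() -> list:
    """Firewall rule names containing 'hero' (assumes English netsh output)."""
    out = subprocess.run(
        ["netsh", "advfirewall", "firewall", "show", "rule", "name=all"],
        capture_output=True, text=True, check=False).stdout
    return [line.split(":", 1)[1].strip()
            for line in out.splitlines()
            if line.startswith("Rule Name:") and "hero" in line.lower()]

if __name__ == "__main__":
    print("Dropped directory present:", os.path.isdir(HERO_DIR))
    print("Suspicious services:", services_pointing_at_hero())
    print("Suspicious firewall rules:", firewall_rules_mentioning_hero())
```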

Researcher attribution and community analysis

This investigation would not have been possible without the work of independent security researchers who went deeper than surface-level indicators and identified the true purpose of this malware family.

  • Luke Acha provided the first comprehensive analysis showing that the Uphero/hero malware functions as residential proxyware rather than a traditional backdoor. His work documented the proxy protocol, traffic patterns, and monetization model, and connected this campaign to a broader operation he dubbed upStage Proxy. Luke’s full write-up is available on his blog.
  • s1dhy expanded on this analysis by reversing and decoding the custom XOR-based communication protocol, validating the proxy behavior through packet captures, and correlating multiple proxy endpoints across victim geolocations. Technical notes and findings were shared publicly on X (Twitter).
  • Andrew Danis contributed additional infrastructure analysis and clustering, helping tie the fake 7-Zip installer to related proxyware campaigns abusing other software brands.

Additional technical validation and dynamic analysis were published by researchers at RaichuLab on Qiita and WizSafe Security on IIJ.

Their collective work highlights the importance of open, community-driven research in uncovering long-running abuse campaigns that rely on trust and misdirection rather than exploits.

Closing thoughts

This campaign demonstrates how effective brand impersonation combined with technically competent malware can operate undetected for extended periods. By abusing user trust rather than exploiting software vulnerabilities, attackers bypass many traditional security assumptions—turning everyday utility downloads into long‑lived monetization infrastructure.

Malwarebytes detects and blocks known variants of this proxyware family and its associated infrastructure.

Indicators of Compromise (IOCs)

File paths

  • C:\Windows\SysWOW64\hero\Uphero.exe
  • C:\Windows\SysWOW64\hero\hero.exe
  • C:\Windows\SysWOW64\hero\hero.dll

File hashes (SHA-256)

  • e7291095de78484039fdc82106d191bf41b7469811c4e31b4228227911d25027 (Uphero.exe)
  • b7a7013b951c3cea178ece3363e3dd06626b9b98ee27ebfd7c161d0bbcfbd894 (hero.exe)
  • 3544ffefb2a38bf4faf6181aa4374f4c186d3c2a7b9b059244b65dce8d5688d9 (hero.dll)

Network indicators

Domains:

  • soc.hero-sms[.]co
  • neo.herosms[.]co
  • flux.smshero[.]co
  • nova.smshero[.]ai
  • apex.herosms[.]ai
  • spark.herosms[.]io
  • zest.hero-sms[.]ai
  • prime.herosms[.]vip
  • vivid.smshero[.]vip
  • mint.smshero[.]com
  • pulse.herosms[.]cc
  • glide.smshero[.]cc
  • svc.ha-teams.office[.]com
  • iplogger[.]org

Observed IPs (Cloudflare-fronted):

  • 104.21.57.71
  • 172.67.160.241

Host-based indicators

  • Windows services with image paths pointing to C:\Windows\SysWOW64\hero
  • Firewall rules named Uphero or hero (inbound and outbound)
  • Mutex: Global\3a886eb8-fe40-4d0a-b78b-9e0bcb683fb7

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

A week in security (February 2 – February 8)

Last week on Malwarebytes Labs:

Stay safe!

Apple Pay phish uses fake support calls to steal payment details

It started with an email that looked boringly familiar: Apple logo, a clean layout, and a subject line designed to make the target’s stomach drop.

The message claimed Apple had stopped a high‑value Apple Pay charge at an Apple Store, complete with a case ID, timestamp, and a warning that the account could be at risk if the target didn’t respond.

In some cases, there was even an “appointment” booked on their behalf to “review fraudulent activity,” plus a phone number they should call immediately if the time didn’t work. Nothing in the email screams amateur. The display name appears to be Apple, the formatting closely matches real receipts, and the language hits all the right anxiety buttons.

This is how most users are lured in by a recent Apple Pay phishing campaign.

The call that feels like real support

The email warns recipients not to use Apple Pay until they’ve spoken to “Apple Billing & Fraud Prevention,” and it provides a phone number to call.

Partial example of the phishing email

After dialing the number, an agent introduces himself as part of Apple’s fraud department and asks for details such as Apple ID verification codes or payment information.

The conversation is carefully scripted to establish trust. The agent explains that criminals attempted to use Apple Pay in a physical Apple Store and that the system “partially blocked” the transaction. To “fully secure” the account, he says, some details need to be verified.

The call starts with harmless‑sounding checks: your name, the last four digits of your phone number, what Apple devices you own, and so on.

Next comes a request to confirm the Apple ID email address. While the victim is looking it up, a real-looking Apple ID verification code arrives by text message.

The agent asks for this code, claiming it’s needed to confirm they’re speaking to the rightful account owner. In reality, the scammer is logging into the account in real time and using the code to bypass two-factor authentication.

Once the account is “confirmed,” the agent walks the victim through checking their bank and Apple Pay cards. They ask questions about bank accounts and suggest “temporarily securing” payment methods so criminals can’t exploit them while the “Apple team” investigates.

The entire support process is designed to steal login codes and payment data. At scale, campaigns like this work because Apple’s brand carries enormous trust, Apple Pay involves real money, and users have been trained to treat fraud alerts as urgent and to cooperate with “support” when they’re scared.

One example submitted to Malwarebytes Scam Guard showed an email claiming an Apple Gift Card purchase for $279.99 and urging the recipient to call a support number (1-812-955-6285).

Another user submitted a screenshot showing a fake “Invoice Receipt – Paid” styled to look like an Apple Store receipt for a 2025 MacBook Air 13-inch laptop with M4 chip priced at $1,157.07 and a phone number (1-805-476-8382) to call about this “unauthorized transaction.”

What you should know

Apple doesn’t set up fraud appointments through email. The company also doesn’t ask users to fix billing problems by calling numbers in unsolicited messages.

Closely inspect the sender’s address. In these cases, the email doesn’t come from an official Apple domain, even if the display name makes it seem legitimate.

Never share two-factor authentication (2FA) codes, SMS codes, or passwords with anyone, even if they claim to be from Apple.

Ignore unsolicited messages urging you to take immediate action. Always think and verify before you engage. Talk to someone you trust if you’re not sure.

Malwarebytes Scam Guard helped several users identify this type of scam. If you don’t have a subscription, you can also use Scam Guard in ChatGPT.

If you’ve already engaged with these Apple Pay scammers, it is important to:

  • Change the Apple ID password immediately from Settings or appleid.apple.com, not from any link provided by email or SMS.
  • Check active sessions, sign out of all devices, then sign back in only on devices you recognize and control.
  • Rotate your Apple ID password again if you see any new login alerts, and confirm 2FA is still enabled. If not, turn it on.
  • In Wallet, check every card for unfamiliar Apple Pay transactions and recent in-store or online charges. Monitor bank and credit card statements closely for the next few weeks and dispute any unknown transactions immediately.
  • Check that the primary email account tied to your Apple ID is still under your control, since whoever controls that email can use it to take over accounts.

We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Open the wrong “PDF” and attackers gain remote access to your PC

Cybercriminals behind a campaign dubbed DEAD#VAX are taking phishing one step further by delivering malware inside virtual hard disks that pretend to be ordinary PDF documents. Open the wrong “invoice” or “purchase order” and you won’t see a document at all. Instead, Windows mounts a virtual drive that quietly installs AsyncRAT, a backdoor Trojan that allows attackers to remotely monitor and control your computer.

As a remote access trojan, AsyncRAT gives attackers hands‑on‑keyboard control, while traditional file‑based defenses see almost nothing suspicious on disk.

From a high-level view, the infection chain is long, but every step looks just legitimate enough on its own to slip past casual checks.

Victims receive phishing emails that look like routine business messages, often referencing purchase orders or invoices and sometimes impersonating real companies. The email doesn’t attach a document directly. Instead, it links to a file hosted on IPFS (InterPlanetary File System), a decentralized storage network increasingly abused in phishing campaigns because content is harder to take down and can be accessed through normal web gateways.

The linked file is named as a PDF and has the PDF icon, but is actually a virtual hard disk (VHD) file. When the user double‑clicks it, Windows mounts it as a new drive (for example, drive E:) instead of opening a document viewer. Mounting VHDs is perfectly legitimate Windows behavior, which makes this step less likely to ring alarm bells.
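
Because Windows decides how to open a file based on its extension, and icons are trivial to spoof, the surest tell is the file’s first and last bytes: a genuine PDF starts with %PDF-, while VHD images carry the ASCII cookie “conectix” in their 512‑byte footer (dynamic VHDs also keep a copy at the start of the file). Here is a rough sketch of that check, using a hypothetical file name:

```python
def sniff_file_type(path: str) -> str:
    """Best-effort check of what a downloaded 'document' really is."""
    with open(path, "rb") as f:
        head = f.read(512)
        f.seek(0, 2)                      # jump to end of file
        size = f.tell()
        f.seek(max(0, size - 512))
        tail = f.read(512)

    if head.startswith(b"%PDF-"):
        return "PDF document"
    if b"conectix" in head or b"conectix" in tail:
        return "VHD virtual disk (not a document)"
    return "unknown"

print(sniff_file_type("invoice.pdf"))     # hypothetical downloaded file
```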

Inside the mounted drive is what appears to be the expected document, but it’s actually a Windows Script File (WSF). When the user opens it, Windows executes the code in the file instead of displaying a PDF.

After some checks to avoid analysis and detection, the script injects the payload—AsyncRAT shellcode—into trusted, Microsoft‑signed processes such as RuntimeBroker.exe, OneDrive.exe, taskhostw.exe, or sihost.exe. The malware never writes an actual executable file to disk. It lives and runs entirely in memory inside these legitimate processes, making detection, and later forensic analysis, much harder. It also avoids sudden spikes in activity or memory usage that could draw attention.

For an individual user, falling for this phishing email can result in:

  • Theft of saved and typed passwords, including for email, banking, and social media.
  • Exposure of confidential documents, photos, or other sensitive files taken straight from the system.
  • Surveillance via periodic screenshots or, where configured, webcam capture.
  • Use of the machine as a foothold to attack other devices on the same home or office network.

How to stay safe

Because detection can be hard, it is crucial that users apply certain checks:

  • Don’t open email attachments until after verifying, with a trusted source, that they are legitimate.
  • Make sure you can see actual file extensions. Unfortunately, Windows hides extensions for known file types by default, so a file that is really called invoice.pdf.vhd may show up as just invoice.pdf. See below for how to change this.
  • Use an up-to-date, real-time anti-malware solution that can detect malware hiding in memory.

Showing file extensions on Windows 10 and 11

To show file extensions in Windows 10 and 11:

  • Open Explorer (Windows key + E)
  • In Windows 10, select View and check the box for File name extensions.
  • In Windows 11, this is found under View > Show > File name extensions.

Alternatively, search for File Explorer Options to uncheck Hide extensions for known file types.

For older versions of Windows, refer to this article.
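
If you prefer to confirm the setting programmatically, Explorer stores it in the registry under HKEY_CURRENT_USER. A quick read-only sketch in Python (the registry path and value are the standard Explorer setting; the behavior of anything beyond reading it is left to the GUI steps above):

```python
import winreg  # Windows-only standard library module

# Explorer stores "Hide extensions for known file types" here; 1 = hidden, 0 = shown.
KEY_PATH = r"Software\Microsoft\Windows\CurrentVersion\Explorer\Advanced"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH) as key:
    hide_ext, _ = winreg.QueryValueEx(key, "HideFileExt")

if hide_ext == 1:
    print("Extensions hidden: invoice.pdf.vhd would show as invoice.pdf")
else:
    print("Extensions are visible")
```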


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Flock cameras shared license plate data without permission

Mountain View, California, pulled the plug on its entire license plate reader camera network this week. It discovered that Flock Safety, which ran the system, had been sharing city data with hundreds of law enforcement agencies, including federal ones, without permission.

Flock Safety runs an automated license plate recognition (ALPR) system that uses AI to identify vehicles’ number plates on the road. Mountain View Police Department (MVPD) police chief Mike Canfield ordered all 30 of the city’s Flock cameras disabled on February 3.

Two incidents of unauthorized sharing came to light. The first was a “national lookup” setting that was toggled on for one camera at the intersection of the city’s Charleston and San Antonio roads. Flock allegedly switched it on without telling the city.

That setting could violate California’s 2015 statute SB 34, which bars state and local agencies from sharing license plate reader data with out-of-state or federal entities. The law states:

“A public agency shall not sell, share, or transfer ALPR information, except to another public agency, and only as otherwise permitted by law.”

The statute defines a public agency as the state, or any city or county within it, covering state and local law enforcement agencies.

Last October, the state Attorney General sued the Californian city of El Cajon for knowingly violating that law by sharing license plate data with agencies in more than two dozen states.

However, MVPD said that Flock kept no records from the national lookup period, so nobody can determine what information actually left the system.

Mountain View says it never chose to share, which makes the violation different in kind. For the people whose plates were scanned, the distinction is academic.

A separate “statewide lookup” feature had also been active on 29 of the city’s 30 cameras since the initial installation, running for 17 straight months until Mountain View found and disabled it on January 5. Through that tool, more than 250 agencies that had never signed any data agreement with Mountain View ran an estimated 600,000 searches over a single year, according to local paper the Mountain View Voice, which first uncovered the issue after filing a public records request.

Over the past year, more than two dozen municipalities across the country have ended contracts with Flock, many citing the same worry that data collected for local crime-fighting could be used for federal immigration enforcement. Santa Cruz became the first in California to terminate its contract last month.

Flock’s own CEO reportedly acknowledged last August that the company had been running previously undisclosed pilot programs with Customs and Border Protection and Homeland Security Investigations.

The cameras will remain offline until the City Council meets on February 24. Canfield says that he still supports license plate reader technology, just not this vendor.

This goes beyond one city’s vendor dispute. If strict internal policies weren’t enough to prevent unauthorized sharing, it raises a harder question: whether policy alone is an adequate safeguard when surveillance systems are operated by third parties.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Grok continues producing sexualized images after promised fixes

Journalists decided to test whether the Grok chatbot still generates non‑consensual sexualized images, even after xAI, Elon Musk’s artificial intelligence company, and X, the social media platform formerly known as Twitter, promised tighter safeguards.

Unsurprisingly, it does.

After scrutiny from regulators all over the world—triggered by reports that Grok could generate sexualized images of minors—xAI framed it as an “isolated” lapse and said it was urgently fixing “lapses in safeguards.”

A Reuters retest suggests the core abuse pattern remains. Reuters had nine reporters run dozens of controlled prompts through Grok after X announced new limits on sexualized content and image editing. In the first round, Grok produced sexualized imagery in response to 45 of 55 prompts. In 31 of those 45, the reporters explicitly said the subject was vulnerable or would be humiliated by the pictures.

A second round, five days later, still yielded sexualized images in 29 of 43 prompts, even when reporters said the subjects had not consented.

Competing systems from OpenAI, Google, and Meta refused identical prompts and instead warned users against generating non‑consensual content.

The prompts were deliberately framed as real‑world abuse scenarios. Reporters told Grok the photos were of friends, co-workers, or strangers who were body‑conscious, timid, or survivors of abuse, and that they had not agreed to editing. Despite that, Grok often complied—for example, turning a “friend” into a woman in a revealing purple two‑piece or putting a male acquaintance into a small gray bikini, oiled up and posed suggestively. In only seven cases did Grok explicitly reject requests as inappropriate; in others it failed silently, returning generic errors or generating different people instead.

The result is a system illustrating the same lesson its creators say they’re trying to learn: if you ship powerful visual models without exhaustive abuse testing and robust guardrails, people will use them to sexualize and humiliate others, including children. Grok’s record so far suggests that lesson still hasn’t sunk in.

Grok limited AI image editing to paid users after the backlash. But paywalling image tools—and adding new curbs—looks more like damage control than a fundamental safety reset. Grok still accepts prompts that describe non‑consensual use, still sexualizes vulnerable subjects, and still behaves more permissively than rival systems when asked to generate abusive imagery. For victims, the distinction between “public” and private generations is meaningless if their photos can be weaponized in DMs or closed groups at scale.

Sharing images

If you’ve ever wondered why some parents post images of their children with a smiley emoji across their face, this is part of the reason.

Don’t make it easy for strangers to copy, reuse, or manipulate your photos.

This is another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

And treat everything you see online—images, voices, text—as potentially AI-generated unless it can be independently verified. AI-generated content is not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Firefox is giving users the AI off switch

Some software providers have decided to lead by example and offer users a choice about the Artificial Intelligence (AI) features built into their products.

The latest example is Mozilla, which now offers users a one-click option to disable generative AI features in the Firefox browser.

Audiences are divided about the use of AI, or as Mozilla put it on their blog:

“AI is changing the web, and people want very different things from it. We’ve heard from many who want nothing to do with AI. We’ve also heard from others who want AI tools that are genuinely useful. Listening to our community, alongside our ongoing commitment to offer choice, led us to build AI controls.”

Mozilla is adding an AI Controls area to Firefox settings that centralizes the management of all generative AI features. This consists mainly of a master switch, “Block AI enhancements,” which lets users effectively run Firefox “without AI.” It blocks existing and future generative AI features and hides pop‑ups or prompts advertising them.

Once you set your AI preferences in Firefox, they stay in place across updates. You can also change them whenever you want.

Starting with Firefox 148, which rolls out on February 24, you’ll find a new AI controls section within the desktop browser settings.

Firefox AI choices
Image courtesy of Mozilla

You can turn everything off with one click or take a more granular approach. At launch, these features can be controlled individually:

  • Translations, which help you browse the web in your preferred language.
  • Alt text in PDFs, which adds accessibility descriptions to images in PDF pages.
  • AI-enhanced tab grouping, which suggests related tabs and group names.
  • Link previews, which show key points before you open a link.
  • An AI chatbot in the sidebar, which lets you use your chosen chatbot as you browse, including options like Anthropic Claude, ChatGPT, Microsoft Copilot, Google Gemini and Le Chat Mistral.

We applaud this move to give more control to the users. Other companies have done the same, including Mozilla’s competitor DuckDuckGo, which made AI optional after putting the decision to a user vote. Earlier, browser developer Vivaldi took a stand against incorporating AI altogether.

Open-source email service Tuta also decided not to integrate AI features. After only 3% of Tuta users requested them, Tuta removed an AI copilot from its development roadmap.

Even Microsoft seems to have recoiled from pushing AI to everyone, although so far it has focused on walking back defaults and tightening per‑feature controls rather than offering a single, global off switch.

Choices

Many people are happy to use AI features, and as long as you’re aware of the risks and the pitfalls, that’s fine. But pushing these features on users who don’t want them is likely to backfire on software publishers.

Which is only right. After all, you’re paying the bill, so you should have a choice. Before installing a new browser, inform yourself not only about its privacy policy, but also about what control you’ll have over AI features.

Looking at recent voting results, I think it’s safe to say that in the AI gold rush, the real premium feature isn’t a chatbot button—it’s the off switch.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

AT&T breach data resurfaces with new risks for customers

When data resurfaces, it never comes back weaker. A newly shared dataset tied to AT&T shows just how much more dangerous an “old” breach can become once criminals have enough of the right details to work with.

The dataset, privately circulated since February 2, 2026, is described as AT&T customer data likely gathered over the years. It doesn’t just contain a few scraps of contact information. It reportedly includes roughly 176 million records, with…

  • Up to 148 million Social Security numbers (full SSNs and last four digits)
  • More than 133 million full names and street addresses
  • More than 132 million phone numbers
  • Dates of birth for around 75 million people
  • More than 131 million email addresses

Taken together, that’s the kind of rich, structured data set that makes a criminal’s life much easier.

On their own, any one of these data points would be inconvenient but manageable. An email address fuels spam and basic phishing. A phone number enables smishing and robocalls. An address helps attackers guess which services you might use. But when attackers can look up a single person and see name, full address, phone, email, complete or partial SSN, and date of birth in one place, the risk shifts from “annoying” to high‑impact.

That combination is exactly what many financial institutions and mobile carriers still rely on for identity checks. For cybercriminals, this sort of dataset is a Swiss Army knife.

It can be used to craft convincing AT&T‑themed phishing emails and texts, complete with correct names and partial SSNs to “prove” legitimacy. It can power large‑scale SIM‑swap attempts and account takeovers, where criminals call carriers and banks pretending to be you, armed with the answers those call centers expect to hear. It can also enable long‑term identity theft, with SSNs and dates of birth abused to open new lines of credit or file fraudulent tax returns.

The uncomfortable part is that a fresh hack isn’t always required to end up here. Breach data tends to linger, then get merged, cleaned up, and expanded over time. What’s different in this case is the breadth and quality of the profiles. They include more email addresses, more SSNs, more complete records per person. That makes the data more attractive, more searchable, and more actionable for criminals.

For potential victims, the lesson is simple but important. If you have ever been an AT&T customer, treat this as a reminder that your data may already be circulating in a form that is genuinely useful to attackers. Be cautious of any AT&T‑related email or text, enable multi‑factor authentication wherever possible, lock down your mobile account with extra passcodes, and consider monitoring your credit. You can’t pull your data back out of a criminal dataset—but you can make sure it’s much harder to use against you.

What to do when your data is involved in a breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but it increases risk if a retailer suffers a breach.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover afterward.

Use Malwarebytes’ free Digital Footprint scan to see whether your personal information has been exposed online.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.