IT NEWS

pcTattletale founder pleads guilty as US cracks down on stalkerware

Reportedly, pcTattletale founder Bryan Fleming has pleaded guilty in US federal court to computer hacking, unlawfully selling and advertising spyware, and conspiracy.

This is good news not just because we despise stalkerware like pcTattletale, but because it is only the second US federal stalkerware prosecution in a decade. It could open the door to further cases against people who develop, sell, or promote similar tools.

In 2021, we reported that “employee and child-monitoring” software vendor pcTattletale had not been very careful about securing the screenshots it secretly captured from victims’ phones. A security researcher testing a trial version discovered that the app uploaded screenshots to an unsecured online database, meaning anyone could view them without authentication, such as a username and password.

In 2024, we revisited the app after researchers found it was once again leaking a database containing victim screenshots. One researcher discovered that pcTattletale’s Application Programming Interface (API) allowed anyone to access the most recent screen capture recorded from any device on which the spyware was installed. Another researcher uncovered a separate vulnerability that granted full access to the app’s backend infrastructure. That access allowed them to deface the website and steal AWS credentials, which turned out to be shared across all devices. As a result, the researcher obtained data about both victims and the customers who were doing the tracking.
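
To make that first flaw concrete: it was a textbook insecure direct object reference (IDOR), where knowing or guessing a device identifier was enough to pull that device’s latest screenshot, no login required. Here’s a hypothetical Python sketch of the pattern; the endpoint and parameter names are invented for illustration and are not pcTattletale’s actual API.

```python
import requests

# Hypothetical illustration of the IDOR pattern described above. The URL and
# parameter are invented; the point is that no authentication is required
# and device IDs are guessable.
BASE = "https://api.stalkerware.example/latest_screenshot"

for device_id in range(1000, 1005):  # predictable, enumerable IDs
    resp = requests.get(BASE, params={"device": device_id})
    if resp.ok:
        print(device_id, "returned", len(resp.content), "bytes")
```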

This is no longer possible. Not because the developers fixed the problems, but because Amazon locked pcTattletale’s entire AWS infrastructure. Fleming later abandoned the product and deleted the contents of its servers.

However, Homeland Security Investigations had already started investigating pcTattletale in June 2021 and did not stop. A few things made Fleming stand out among other stalkerware operators. While many hide behind overseas shell companies, Fleming appeared to be proud of his work. And while others market their products as parental control or employee monitoring tools, pcTattletale explicitly promoted spying on romantic partners and spouses, using phrases such as “catch a cheater” and “surreptitiously spying on spouses and partners.” This made it clear the software was designed for non-consensual surveillance of adults.

Fleming is expected to be sentenced later this year.

Removing stalkerware

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware, makes it a priority to detect and remove stalkerware-type apps from your device.

It is important to keep in mind, however, that removing stalkerware may alert the person spying on you that the app has been discovered. The Coalition Against Stalkerware outlines additional steps and considerations to help you decide the safest next move.

Because the apps often install under different names and hide themselves from users, they can be difficult to find and remove. That is where Malwarebytes can help you.

To scan your device:

  1. Open your Malwarebytes dashboard
  2. Start a Scan

The scan may take a few minutes.

If malware is detected, you can choose one of the following actions:

  • Uninstall. The threat will be deleted from your device.
  • Ignore Always. The detection will be added to the Allow List and excluded from future scans. Legitimate files are sometimes detected as malware, so review your scan results and only add files you know are safe and want to keep.
  • Ignore Once. The detection is ignored for this scan only. It will be flagged again during your next scan.

Malwarebytes detects pcTattletale as PUP.Optional.PCTattletale.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Are we ready for ChatGPT Health?

How comfortable are you with sharing your medical history with an AI?

I’m certainly not.

OpenAI’s announcement about its new ChatGPT Health program prompted discussions about data privacy and how the company plans to keep the information users submit safe.

ChatGPT Health is a dedicated “health space” inside ChatGPT that lets users connect their medical records and wellness apps so the model can answer health and wellness questions in a more personalized way.


OpenAI promises additional, layered protections designed specifically for health, “to keep health conversations protected and compartmentalized.”

First off, it’s important to understand that this is not a diagnostic or treatment system. It’s framed as a support tool to help understand health information and prepare for care.

But this is the part that raised questions and concerns:

“You can securely connect medical records and wellness apps to ground conversations in your own health information, so responses are more relevant and useful to you.”

In other words, ChatGPT Health lets you link medical records and apps such as Apple Health, MyFitnessPal, and others so the system can explain lab results, track trends (e.g., cholesterol), and help you prepare questions for clinicians or compare insurance options based on your health data.

Given our reservations about the state of AI security in general and chatbots in particular, this is a line that I don’t dare cross. For now, however, I don’t even have the option, since only users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the United Kingdom can sign up for the waitlist.

OpenAI says it only admits partners and apps to ChatGPT Health that meet its privacy and security requirements, a vetting model that, by design, concentrates a great deal of trust in OpenAI itself.

Users should also realize that health information is among the most sensitive data there is. As Sara Geoghegan, senior counsel at the Electronic Privacy Information Center, told The Record, by sharing their electronic medical records with ChatGPT Health, users in the US could effectively strip those records of their HIPAA protections, a serious consideration for anyone sharing medical data.

She added:

“ChatGPT is only bound by its own disclosures and promises, so without any meaningful limitation on that, like regulation or a law, ChatGPT can change the terms of its service at any time.”

OpenAI claims 230 million users already ask ChatGPT health and wellness questions each week. Should you decide to try the new feature, proceed with caution and take the advice to enable 2FA for ChatGPT to heart. I’d encourage those 230 million existing users to do the same.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

CISA warns of active attacks on HPE OneView and legacy PowerPoint

The US Cybersecurity and Infrastructure Security Agency (CISA) added both a newly discovered flaw and a much older one to its catalog of Known Exploited Vulnerabilities (KEV).

The KEV catalog gives Federal Civilian Executive Branch (FCEB) agencies a list of vulnerabilities that are known to be exploited in the wild, along with deadlines for when they must be patched. In both of these cases, the due date is January 28, 2026.

But CISA alerts are not just for government agencies. They also provide guidance to businesses and end users about which vulnerabilities should be patched first, based on real-world exploitation.

A critical flaw in HPE OneView

The recently found vulnerability, tracked as CVE-2025-37164, carries a CVSS score of 10 out of 10 and allows remote code execution. The flaw affects HPE OneView, a platform used to manage IT infrastructure, and a patch was released on December 17, 2025.

This critical vulnerability allows a remote, unauthenticated attacker to execute code and potentially gain large-scale control over servers, firmware, and lifecycle management. Management platforms like HPE OneView are often deployed deep inside enterprise networks, where they have extensive privileges and limited monitoring because they are trusted.

Proof of Concept (PoC) code, in the form of a Metasploit module, was made public just one day after the patch was released.

A PowerPoint vulnerability from 2009 resurfaces

The cybersecurity dinosaur here is a vulnerability in Microsoft PowerPoint, tracked as CVE-2009-0556, that dates back more than 15 years. It affects:

  • Microsoft Office PowerPoint 2000 SP3
  • PowerPoint 2002 SP3
  • PowerPoint 2003 SP3
  • PowerPoint in Microsoft Office 2004 for Mac

The flaw allows remote attackers to execute arbitrary code by tricking a victim into opening a specially crafted PowerPoint file that triggers memory corruption.

In the past, this vulnerability was exploited by malware known as Apptom. CISA rarely adds vulnerabilities to the KEV catalog based on ancient exploits, so the “sudden” re‑emergence of the 2009 PowerPoint vulnerability suggests attackers are targeting still‑deployed legacy Office installs.

Successful exploitation can allow attackers to run arbitrary code, deploy malware, and establish a foothold for lateral movement inside a network. Unlike the HPE OneView flaw, this attack requires user interaction—the target must open the malicious PowerPoint file.

Stay safe

When it comes to managing vulnerabilities, prioritizing which patches to apply is an important part of staying safe. So, to make sure you don’t fall victim to exploitation of known vulnerabilities:

  • Keep an eye on the CISA KEV catalog as a guide to what’s currently under active exploitation (see the sketch after this list for one way to automate that).
  • Update as fast as you can without disrupting your daily routine.
  • Use a real-time, up-to-date anti-malware solution to intercept exploits and malware attacks.
  • Don’t open unsolicited attachments without verifying them with the sender through a trusted channel.
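
If you want to automate that first tip, CISA publishes the KEV catalog as a machine-readable JSON feed. Below is a minimal Python sketch that checks a watchlist of CVEs against it; the feed URL and field names reflect the catalog’s schema at the time of writing.

```python
import requests

# CISA's KEV catalog, published as a JSON feed.
KEV_URL = ("https://www.cisa.gov/sites/default/files/feeds/"
           "known_exploited_vulnerabilities.json")

# CVEs from this article; swap in whatever runs in your environment.
WATCHLIST = {"CVE-2025-37164", "CVE-2009-0556"}

catalog = requests.get(KEV_URL, timeout=30).json()

for vuln in catalog["vulnerabilities"]:
    if vuln["cveID"] in WATCHLIST:
        print(f"{vuln['cveID']}: {vuln['vulnerabilityName']} "
              f"(patch due {vuln['dueDate']})")
```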

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Lego’s Smart Bricks explained: what they do, and what they don’t

Lego just made what it claims is its most important product release since it introduced minifigures in 1978. No, it’s not yet another brand franchise. It’s a computer in a brick.

Called the Smart Brick, it’s part of a broader system called Smart Play that Lego hopes will revolutionize your child’s interaction with Lego.

These aren’t your grandma’s Lego bricks. The 2×4 techno-brick houses a custom ASIC chip that Lego says is smaller than a single Lego stud, measuring about 4.1mm. Inside are accelerometers, light and sound sensors, an LED array, and a miniature speaker with an onboard synthesizer that generates sound effects in real time, rather than just playing pre-recorded clips.

How the pieces talk to each other

The bricks charge wirelessly on a dedicated pad and contain batteries that Lego says can last for years. They also communicate with each other to trigger actions, such as interactive sound effects.

This is where the other Smart Play components come in: Smart Tags and Smart Minifigures. The 2×2 stud-less Smart Tags contain unique digital IDs that tell bricks how to behave. A helicopter tag, for example, might trigger propeller sounds.

There’s also a Neighbor Position Measurement system that detects brick proximity and orientation. So a brick might do different things as it gets closer to a Smart Tag or Smart Minifigure, for example.
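
Lego hasn’t published BrickNet’s internals, so purely as an illustrative toy model (every ID, name, and threshold below is invented), the tag-plus-proximity behavior might boil down to something like this:

```python
# Toy model of ID-plus-proximity behavior. All IDs, names, and values are
# invented; Lego has not published how BrickNet actually works.
SOUND_FOR_TAG = {
    0x01: "helicopter_propeller",
    0x02: "engine_rumble",
}

def react(tag_id: int, distance_cm: float) -> str | None:
    """Pick a sound effect when a known Smart Tag comes close enough."""
    if tag_id in SOUND_FOR_TAG and distance_cm < 10.0:
        return SOUND_FOR_TAG[tag_id]
    return None

print(react(0x01, 4.2))  # -> "helicopter_propeller"
```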

The privacy implications of Smart Bricks

Any time parents hear about toys communicating with other devices, they’re right to be nervous. They’ve had to contend with toys that give up kids’ sensitive personal data and allegedly have the potential to become listening devices for surveillance.

However, Lego says its proprietary Bluetooth-based protocol, called BrickNet, comes with encryption and built-in privacy controls.

One clear upside is that the system doesn’t need an internet connection for these devices to work, and there are no screens or companion apps involved either. For parents weary of reading about children’s apps quietly harvesting data, that alone will come as a relief.

Lego also makes specific privacy assurances. Yes, there’s a microphone in the Smart Brick, but no, it doesn’t record sound (it’s just a sensor), the company says. There are no cameras either.

Perhaps the biggest relief of all, though, is that there’s no AI in this brick.

At a time when “AI-powered” is being sprinkled over everything from washing machines to toilets, skipping AI may be the smartest design decision here. AI-driven toys come with their own risks, especially when children don’t get a meaningful choice about how that technology behaves once it’s out of the box.

In the past, children have been subjected to sexual content from AI-powered teddy bears. Against that backdrop, Lego’s restraint feels deliberate, and welcome.

Are these the bricks you’re looking for?

Will the world take to Smart Bricks? Probably.

Should it? The best response comes from my seven-year-old, scoffing,

“Kids can make enough annoying noises themselves.”

We won’t have long to wait to find out. Lego announced Lucasfilm as its first Smart Play partner when it unveiled the system at CES 2026 in Las Vegas this week, and pre-orders open on January 9. The initial lineup includes three kits: TIE fighters, X-wings, and A-wings, complete with associated scenery.

Expect lots of engine, laser, and lightsaber sounds from those rigs—and perhaps a lack of adorable sound effects from your kids when the blocks start doing the work. That makes us a little sad.

More optimistically, perhaps there are opportunities for creative play, such as devices that spin, flip, and light up based on their communications with other bricks. That could turn this into more of an experiment in basic circuitry and interaction than a simple noise-making device. One of the best things about watching kids play is how far outside the box they think.

Whatever your view on Lego’s latest development, it doesn’t seem like it’ll let people tailor advertising to your kids, whisper atrocities at them from afar, or hack your home network. That, at the very least, is a win.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Fake WinRAR downloads hide malware behind a real installer

A member of our web research team pointed me to a fake WinRAR installer that was linked from various Chinese websites. When these links start to show up, that’s usually a good indicator of a new campaign.

So, I downloaded the file and started an analysis, which turned out to be something of a Matryoshka doll. Layer after layer, after layer.

WinRAR is a popular utility that’s often downloaded from “unofficial” sites, which gives campaigns offering fake downloads a bigger chance of being effective.

Often, these payloads contain self-extracting or multi-stage components that can download further malware, establish persistence, exfiltrate data, or open backdoors, all depending on an initial system analysis. So it was no surprise that one of the first actions this malware took was to access sensitive Windows data in the form of Windows Profiles information.

This, along with other findings from our analysis (see below), indicates that the file selects the “best-fit” malware for the affected system before further compromising or infecting it.

How to stay safe

Mistakes are easily made when you’re looking for software to solve a problem, especially when you want that solution fast. A few simple tips can help keep you safe in situations like this.

  • Only download software from official and trusted sources. Avoid clicking links that promise to deliver that software on social media, in emails, or on other unfamiliar websites.
  • Use a real-time, up-to-date anti-malware solution to block threats before they can run.

Analysis

The original file was called winrar-x64-713scp.zip and the initial analysis with Detect It Easy (DIE) already hinted at several layers.

Detect It Easy first analysis: 7-Zip, UPX, SFX — anything else?

Unzipping the file produced winrar-x64-713scp.exe, which turned out to be a UPX-packed file that required the --force option to unpack due to deliberate PE anomalies. UPX normally aborts if it finds unexpected values or unknown data in the executable header fields, as that data may be required for the program to run correctly. The --force option tells UPX to ignore these anomalies and proceed with decompression anyway.
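
For anyone following along at home, the unpacking step looks something like this (a sketch; it assumes upx is on your PATH and the sample sits in the working directory):

```python
import subprocess

# "-d" decompresses; "--force" makes UPX ignore the deliberate PE anomalies
# that would otherwise abort the unpack.
subprocess.run(
    ["upx", "-d", "--force", "winrar-x64-713scp.exe"],
    check=True,
)
```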

Looking at the unpacked file, DIE showed yet another layer: (Heur)Packer: Compressed or packed data[SFX]. Looking at the strings inside the file, I noticed two RunProgram instances:

RunProgram="nowait:"1winrar-x64-713scp1.exe" "

RunProgram="nowait:"youhua163

These commands tell the SFX archive to run the embedded programs immediately after extraction, without waiting for each one to exit (nowait).
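
Because SFX setup scripts are stored as readable text inside the binary, you can triage a sample for directives like these without ever running it. A minimal sketch:

```python
# Scan the raw bytes for SFX RunProgram directives; no execution needed.
with open("winrar-x64-713scp.exe", "rb") as f:
    data = f.read()

for line in data.splitlines():
    if b"RunProgram" in line:
        print(line.decode("utf-8", errors="replace"))
```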

Using PeaZip, I extracted both embedded files.

Analysis 2

The Chinese characters “安装” complicated the string analysis, but they translate as “install,” which further piqued my interest. The file 1winrar-x64-713scp1.exe turned out to be the actual WinRAR installer, likely included to allay suspicion for anyone running the malware.

After removing another layer, the other file turned out to be a password-protected zip file named setup.hta. The obfuscation used here led me to switch to dynamic analysis. Running the file on a virtual machine showed that setup.hta is unpacked at runtime directly into memory. The memory dump revealed another interesting string: nimasila360.exe.
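
Searching a memory dump for printable strings is a standard way to recover payload names like this one. A rough sketch of that search step (the dump filename is a placeholder for whatever your tooling produces):

```python
import re

# Find printable-ASCII runs ending in ".exe" inside a raw memory dump.
with open("memory.dmp", "rb") as f:  # placeholder filename
    dump = f.read()

hits = {m.group() for m in re.finditer(rb"[ -~]{4,}\.exe", dump)}
for hit in sorted(hits):
    print(hit.decode("ascii"))
```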

This is a file often created by fake installers and associated with the Winzipper malware, a known Chinese-language malicious program that pretends to be a harmless file archiver so it can sneak onto a victim’s computer, often through links or attachments. Once opened and installed, it quietly deploys a hidden backdoor that lets attackers remotely control the machine, steal data, and install additional malware, all while the victim believes they’ve simply installed legitimate software.

Indicators of Compromise (IOCs)

Domains:

winrar-tw[.]com

winrar-x64[.]com

winrar-zip[.]com

Filenames:

winrar-x64-713scp.zip

youhua163安装.exe

setup.hta (dropped in C:\Users\{username}\AppData\Local\Temp)

Malwarebytes’ web protection component blocks all domains hosting the malicious file and installer.

Malwarebytes blocks winrar-tw[.]com

One million customers on alert as extortion group claims massive Brightspeed data haul

US fiber broadband company Brightspeed is investigating claims by the Crimson Collective extortion group that it stole sensitive data belonging to more than 1 million residential customers, including extensive personally identifiable information (PII), as well as account and billing details.

Brightspeed is one of the largest fiber broadband providers in the US and serves customers across 20 states.

On January 4, the Crimson Collective posted this message on its Telegram channel:

Telegram post Crimson Collective about Brightspeed

“If anyone has someone working at BrightSpeed, tell them to read their mails fast!

We have in our hands over 1m+ residential user PII’s, which contains the following:

  • Customer/account master records containing full PII such as names, emails, phone numbers, billing and service addresses, account status, network type, consent flags, billing system, service instance, network assignment, and site IDs.
  • Address qualification responses with address IDs, full postal addresses, latitude and longitude coordinates, qualification status (fiber/copper/4G), maximum bandwidth, drop length, wire center, marketing profile codes, and eligibility flags.
  • User-level account details keyed by session/user IDs, overlapping with PII including names, emails, phones, service addresses, account numbers, status, communication preferences, and suspend reasons.
  • Payment history per account, featuring payment IDs, dates, amounts, invoice numbers, card types and masked card numbers (last 4 digits), gateways, and status; some entries indicate null or empty histories.
  • Payment methods per account, including default payment method IDs, gateways, masked credit card numbers, expiry dates, BINs, holder names and addresses, status flags (Active/Declined), and created/updated timestamps.
  • Appointment/order records per billing account, with customer PII such as names, emails, phones, addresses, order numbers, status, appointment windows, dispatch and technician information, and install types.

Sample will be dropped on monday night time, letting them some time first to answer to us. (UTC+9, Japan is quite fun for new years while dumping company data)”

The promised sample was later made available and contains 50 entries from each of the following database tables:

  • [get-account-details]
  • [getAddressQualification]
  • [getUserAccountDetails]
  • [listPaymentHistory]
  • [listPaymentMethods]
  • [user-appointments]

In a separate Telegram message, the group also claimed it had disconnected a large number of Brightspeed customers. However, this allegation appears only in the group’s own messaging and has not been corroborated by any public reporting.

While there are some customer complaints circulating on social media, it remains unclear whether these issues are actually caused by any actions taken by the Crimson Collective.

StatusISDown update about Brightspeed

Brightspeed told BleepingComputer:

“We take the security of our networks and protection of our customers’ and employees’ information seriously and are rigorous in securing our networks and monitoring threats. We are currently investigating reports of a cybersecurity event. As we learn more, we will keep our customers, employees and authorities informed.”

Protecting yourself after a data breach

If you think you have been affected by a data breach, here are steps you can take to protect yourself:

  • Check the company’s advice. Every breach is different, so check with the company to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop, or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for impersonators. The thieves may contact you posing as the breached platform. Check the official website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to let sites remember your card details, but we highly recommend not storing that information on websites.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Phishing campaign abuses Google Cloud services to steal Microsoft 365 logins

Attackers are sending very convincing fake “Google” emails that slip past spam filters, route victims through several trusted Google-owned services, and ultimately lead to a look-alike Microsoft 365 sign-in page designed to harvest usernames and passwords.

Researchers found that cybercriminals used Google Cloud Application Integration’s Send Email feature to send phishing emails from a legitimate Google address: noreply-application-integration@google[.]com.

Google Cloud Application Integration allows users to automate business processes by connecting any application with point-and-click configurations. New customers currently receive free credits, which lowers the barrier to entry and may attract some cybercriminals.

The initial email arrives from what looks like a real Google address and references something routine and familiar, such as a voicemail notification, a task to complete, or permissions to access a document. The email includes a link that points to a genuine Google Cloud Storage URL, so the web address appears to belong to Google and doesn’t look like an obvious fake.

After the first click, you are redirected to another Google-related domain (googleusercontent[.]com) showing a CAPTCHA or image check. Once you pass the “I’m not a robot” check, you land on what looks like a normal Microsoft 365 sign-in page, but on close inspection, the web address is not an official Microsoft domain.

Any credentials provided on this site will be captured by the attackers.

The use of Google infrastructure provides the phishers with a higher level of trust from both email filters and the receiving users. This is not a vulnerability, just an abuse of cloud-based services that Google provides.

Google’s response

Google said it has taken action against the activity:

“We have blocked several phishing campaigns involving the misuse of an email notification feature within Google Cloud Application Integration. Importantly, this activity stemmed from the abuse of a workflow automation tool, not a compromise of Google’s infrastructure. While we have implemented protections to defend users against this specific attack, we encourage continued caution as malicious actors frequently attempt to spoof trusted brands. We are taking additional steps to prevent further misuse.”

We’ve seen several phishing campaigns that abuse trusted workflows from companies like Google, PayPal, DocuSign, and other cloud-based service providers to lend credibility to phishing emails and redirect targets to their credential-harvesting websites.

How to stay safe

Campaigns like these show that some responsibility for spotting phishing emails still rests with the recipient. Besides staying informed, here are some other tips you can follow to stay safe.

  • Always check the actual web address of any login page; if it’s not a genuine Microsoft domain, do not enter credentials (see the sketch after these tips). A password manager helps here because it won’t auto-fill your details on look-alike websites.
  • Be cautious of “urgent” emails about voicemails, document shares, or permissions, even if they appear to come from Google or Microsoft. Creating urgency is a common tactic of scammers and phishers.
  • Go directly to the service whenever possible. Instead of clicking links in emails, open OneDrive, Teams, or Outlook using your normal bookmark or app.
  • Use multi-factor authentication (MFA) so that stolen passwords alone are not enough, and regularly review which apps have access to your account and remove anything you don’t recognize.
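
To make the first tip concrete, here’s a rough Python illustration of checking a login page’s hostname against an allowlist. The domain list below is illustrative, not exhaustive; Microsoft sign-in legitimately uses a small set of well-known domains such as login.microsoftonline.com.

```python
from urllib.parse import urlsplit

# Illustrative allowlist; extend deliberately, never from an email link.
MICROSOFT_LOGIN_DOMAINS = {"login.microsoftonline.com", "login.live.com"}

def looks_like_microsoft_login(url: str) -> bool:
    host = (urlsplit(url).hostname or "").lower()
    return host in MICROSOFT_LOGIN_DOMAINS

print(looks_like_microsoft_login("https://login.microsoftonline.com/common"))   # True
print(looks_like_microsoft_login("https://storage.googleapis.com/fake-login"))  # False
```

This is also exactly why password managers help: they match on the real domain, not on what the page looks like.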

Pro tip: Malwarebytes Scam Guard can recognize emails like this as scams. You can upload suspicious text, emails, attachments, and other files and ask for its opinion. It’s very good at recognizing scams.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Disney fined $10m for mislabeling kids’ YouTube videos and violating privacy law

Disney will pay a $10m settlement over allegations that it violated kids’ privacy rights, the Federal Trade Commission (FTC) said this week.

The agreement, first proposed in September 2025, resolves a dispute over Disney’s labeling of child-targeted content on YouTube. Because Disney targets thousands of YouTube videos at kids, it is subject to a US law called the Children’s Online Privacy Protection Act (COPPA). Enacted in 1998, COPPA is designed to protect children under the age of 13 from having their data collected and used online.

That protection matters because children are far less able to understand data collection, advertising, or profiling, and cannot meaningfully consent to it. When COPPA safeguards fail, children may be tracked across videos, served targeted ads, or profiled based on viewing habits, all without parental knowledge or approval.

In 2019, YouTube introduced a policy to help creators comply with COPPA by labeling their content as made for kids (MFK) or not made for kids (NMFK). Content labeled MFK is automatically restricted. For example, it can’t autoplay into related content, appear in the miniplayer, or be added to playlists.

This policy came about after YouTube’s own painful COPPA-related experience in 2019, when it settled with the FTC for $170m after failing to properly label content directed at children. That still ranks as the biggest COPPA settlement ever, by far.

Perhaps the two most important restrictions for videos labeled MFK are these: MFK videos should only autoplay into other kid-appropriate content, preventing (at least in theory) kids from seeing inappropriate content. And advertisers are prohibited from collecting personal data from children watching those videos.

A chastened YouTube warned content creators, including Disney, that they could violate COPPA if they failed to label content correctly. Labeling could be done in two ways: creators could label entire channels (Disney has about 1,250 of these for its different content brands) or individual videos. So a channel marked NMFK could still host MFK videos, but those individual videos needed to be labeled correctly.

According to the FTC, Disney’s efforts fell short and plenty of child-targeted videos were incorrectly labeled.

The court complaint stated that Disney applied blanket NMFK labels to entire YouTube channels instead of reviewing videos individually. As a result, some child-targeted videos were incorrectly labeled, allowing data collection and ad targeting that COPPA is meant to prevent. For example, the Pixar channel was labeled NMFK, but showed “very similar” videos from the Pixar Cars channel, which was labeled MFK.

The FTC said YouTube warned Disney in June 2020 that it had reclassified more than 300 of its videos as child-directed across channels including Pixar, Disney Movies, and Walt Disney Animation Studios.

This is not Disney’s first privacy rodeo

Disney has a history of tussles with child privacy laws. In 2011, its Playdom subsidiary paid $3 million (at that point the largest COPPA penalty ever) for collecting data from more than 1.2 million children across 20 virtual world websites. In 2021, Disney also settled a lawsuit that accused it and others of collecting and selling kids’ information via child-focused mobile apps.

In the current case, the FTC voted 3-0 to refer the matter to the Department of Justice, with Commissioners Ferguson, Holyoak, and Meador citing what they described as

“Disney’s abuse of parents’ trust.”

Under the settlement, Disney must do more than pay up. It also has to notify parents before collecting personal information from children under 13 and obtain parents’ consent to use it. Disney must also review whether individual videos should be labeled as made for kids. However, the FTC provides a get-out clause: Disney won’t have to do this if YouTube implements age assurance technologies that determine a viewer’s age (or age category).

Age assurance is clearly something the FTC is pursuing, saying:

“This forward-looking provision reflects and anticipates the growing use of age assurance technologies to protect kids online.”


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

ALPRs are recording your daily drive (Lock and Code S06E26)

This week on the Lock and Code podcast…

There’s an entire surveillance network popping up across the United States that has likely already captured your information, all for the entirely unsuspicious act of driving a car.

Automated License Plate Readers, or ALPRs, are AI-powered cameras that scan and store an image of every single vehicle that passes their view. They are mounted onto street lights, installed under bridges, disguised in water barrels, and affixed onto telephone poles, lampposts, parking signs, and even cop cars.

Once installed, these cameras capture a vehicle’s license plate number, along with its make, model, and color, and any identifying features, such as a bumper sticker, damage, or even sport trim options. Because nearly every ALPR camera has an associated location, these devices can reveal where a car was headed and at what time, and by linking data from multiple ALPRs, it’s easy to reconstruct a car’s daylong route and, by proxy, its owner’s daily routine.

This deeply sensitive information has already been exposed more than once in recent years.

In 2024, the US Cybersecurity and Infrastructure Security Agency discovered seven vulnerabilities in cameras made by Motorola Solutions, and at the start of 2025, the outlet Wired reported that more than 150 ALPR cameras were leaking their live streams.

But there’s another concern with ALPRs besides data security and potential vulnerability exploits, and that’s with what they store and how they’re accessed.

ALPRs are almost uniformly purchased and used by law enforcement. These devices have been used to help solve crime, but their databases can be accessed by police who do not live in your city, or county, or even state, and who do not need a warrant before making a search.

In fact, when police access the databases managed by one major ALPR manufacturer, named Flock, one of the few guardrails those police encounter is needing to type a single word in a basic text box. When the Electronic Frontier Foundation analyzed 12 million searches made by police in Flock’s systems, it found that police sometimes filled that text box with the word “protest,” meaning that police were potentially investigating activity protected by the First Amendment.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Will Freeman, founder of the ALPR-tracking project DeFlock, about this growing tide of neighborhood surveillance and the flimsy protections afforded to everyday people.

“License plate readers are a hundred percent used to circumvent the Fourth Amendment because [police] don’t have to see a judge. They don’t have to find probable cause. According to the policies of most police departments, they don’t even have to have reasonable suspicion.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

Grok apologizes for creating image of young girls in “sexualized attire”

Another AI system designed to be powerful and engaging ends up illustrating how guardrails routinely fail when development speed and feature races outrun safety controls.

In a post on X, AI chatbot Grok confirmed that it generated an image of young girls in “sexualized attire.”

Apologizing post by Grok

The potential violation of US laws regarding child sexual abuse material (CSAM) demonstrates the AI chatbot’s apparent lack of guardrails. Or, at least, the guardrails are far from as effective as we’d like them to be.

xAI, the company behind Musk’s chatbot, is reviewing the incident “to prevent future issues,” and the user responsible for the prompt reportedly had their account suspended. Reportedly, in a separate post on X, Grok described the incident as an isolated case and said that urgent fixes were being issued after “lapses in safeguards” were identified.

During the holiday period, we discussed how risks increase when AI developments and features are rushed out the door without adequate safety testing. We keep pushing the limits of what AI can do faster than we can make it safe. Visual models that can sexualize minors are precisely the kind of deployment that should never go live without rigorous abuse testing.

So, while lawmakers impose geo-blocking through national and state content restrictions, the AI linked to one of the most popular social media platforms failed to block content that many would consider far more serious than anything those laws target. In effect, centralized age-verification databases become breach targets while still failing to prevent AI tools from generating abusive material.

Women have also reported being targeted by Grok’s image-generation features. One X user tweeted:

“Literally woke up to so many comments asking Grok to put me in a thong / bikini and the results having so many bookmarks. Even worse I went onto the Grok page and saw slimy disgusting lowlifes doing that to pictures of CHILDREN. Genuinely disgusting.”

We can only imagine the devastating results if cybercriminals were to abuse this type of weakness to defraud or extort parents with fabricated explicit content of their children. Tools for inserting real faces into AI-generated content are already widely available, and current safeguards appear unable to reliably prevent abuse.

Tips

This incident is yet another compelling reason to reduce your digital footprint. Think carefully before posting photos of yourself, your children, or other sensitive information on public social media accounts.

Treat everything you see online—images, voices, text—as potentially AI-generated unless it can be independently verified. AI-generated content is not only used to sway opinions, but also to solicit money, extract personal information, or create abusive material.


We don’t just report on threats—we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.