IT NEWS

What the Flock is happening with license plate readers?

You’re driving home after another marathon day of work and kid-shuttling, nursing a lukewarm coffee in a mug that’s trying too hard. As you turn onto your street, something new catches your eye. It’s a tall pole with a small, boxy device perched on top. But it’s not a birdhouse, and there’s no sign. There is, however, a camera pointed straight at your car.

It feels reassuring at first. After all, a neighbor was burglarized a few weeks ago. But then, dropping your kids at school the next morning, you pass another, and you start to wonder: Is my daily life being recorded, and who is watching it?

That’s what happened to me. After a break-in on our street, a neighborhood camera caught an unfamiliar truck. It provided the clue police needed to track down the suspects. The same technology has shown up in major investigations, including the “Coroner Affair” murder case on ABC’s 20/20. These cameras aren’t just passive hardware. They’re everywhere now, as common as mailboxes, quietly logging where we go.

So if they’re everywhere, what do they collect? Who’s behind them? And what should the rest of us know before we get too comfortable or too uneasy?

A mounting mountain of surveillance

ALPRs aren’t hikers in the Alps. They’re Automatic License Plate Readers. Think of them as smart cameras that can “read” license plates. They snap a photo, use software to convert the plate into text, and store it. Kind of like how your phone scans handwriting and turns it into digital notes.
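
If you’re curious how the “reading” part works, here’s a minimal sketch of the same idea using the open-source Tesseract OCR engine (via pytesseract) and OpenCV. Real ALPR products use specialized plate-detection models, but the basic pipeline of capture, clean up, and convert to text is similar:

    import cv2
    import pytesseract

    def read_plate(image_path: str) -> str:
        img = cv2.imread(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # drop color information
        # Otsu thresholding boosts the contrast between characters and plate
        _, plate = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
        # --psm 7 tells Tesseract to expect a single line of text
        text = pytesseract.image_to_string(plate, config="--psm 7")
        return "".join(ch for ch in text if ch.isalnum()).upper()

    print(read_plate("plate.jpg"))  # e.g. "ABC1234"

That’s the whole trick: once the plate is text, it can be stored, indexed, and searched like any other database record.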

People like them because they make things quick and hands-free, whether you’re rolling through a toll or entering a gated neighborhood. But the “A” in ALPR (automatic) is where the privacy questions start. These cameras don’t just record problem cars. They record every car they see, wherever they’re pointed.

What exactly is Flock?

Flock Safety is a company that makes specialized ALPR systems, designed to scan and photograph every plate that passes, 24/7. Unlike gated-community or private driveway cameras, Flock systems stream footage to off-site servers, where it’s processed, analyzed, and added to a growing cloud database.

At the time of writing, there are probably well over 100,000 Flock cameras installed in the United States, and the number is rising rapidly. To put that in perspective, that’s roughly one Flock camera for every 3,400 US residents. And on average, each camera logs twice that many vehicles, with no set limit.

Think of it like a digital neighborhood watch that never blinks. The cameras snap high-resolution images, tag timestamps, and note vehicle details like color and distinguishing features. All of it becomes part of a searchable log for authorized users, and that log grows by the second.

Adoption has exploded. Flock said in early 2024 that its cameras were used in more than 4,000 US cities. That growth has been driven by word of mouth (“our HOA said break-ins dropped after installing them”) and, in some cases, early-adopter discounts offered to communities.

A positive perspective

Credit where it’s due: these cameras can help. Flock cameras make many neighborhoods feel safer. When crime ticks up or a break-in happens nearby, putting a camera at the entrance feels like a concrete way to regain control. And unlike basic security cameras, Flock systems can flag unfamiliar vehicles and spot patterns, which is useful for police when every second counts.

In my community, Flock footage has helped recover stolen cars and given police leads that would’ve otherwise gone cold. After our neighborhood burglary, the moms’ group chat calmed down a little knowing there was a digital “witness” watching the entrance.

In one Texas community, a spree of car break-ins stopped after a Flock camera caught a repeat offender’s plate, leading to an arrest within days. And in the “Coroner Affair” murder case, Flock data helped investigators map vehicle movements, leading to crucial evidence.

Regulated surveillance can also help fight fake videos. Skilled AI and CGI artists sometimes create fake surveillance footage that looks real, showing someone or their car doing something illegal or being somewhere suspicious. That’s a serious problem, especially if used in court. If surveillance is carefully managed and trusted, it can help prove what really happened and expose fabricated videos for what they are, protecting people from false accusations.

The security vs overreach tradeoff

Like any powerful tool, ALPRs come with pros and cons. On the plus side, they can help solve crimes by giving police crucial evidence—something that genuinely reassures residents who like having an extra set of “digital eyes” on the neighborhood. Some people also believe the cameras deter would-be burglars, though research on that is mixed.

But there are real concerns too. ALPRs collect sensitive data, often stored by third-party companies, which creates risk if that information is misused or hacked. And then there’s “surveillance creep,” which is the slow expansion of monitoring until it feels like everyone is being watched all the time.

So while there are clear benefits, it’s important to think about how the technology could affect your privacy and the community as a whole.

What’s being recorded and who gets to see it

Here’s the other side of the coin: What else do these cameras capture, who can see it, and how long is it kept?

Flock’s system is laser-focused on license plates and cars, not faces. The company says they don’t track what you’re wearing or who’s sitting beside you. Still, in a world where privacy feels more fragile every year, people (myself included) wonder how much these systems quietly log.

  • What’s recorded: license plate numbers, vehicle color/make/model, time, and location. Some cameras can capture broader footage; some are strictly plate readers.
  • How long it’s kept: Flock’s standard is 30 days, after which data is automatically deleted (unless flagged in an active investigation).
  • Who has access: This is where things get dicey:
    • In Flock’s cloud, only “authorized users” can view footage; these can include community leaders and law enforcement, ideally with proper permissions or warrants. Residents can submit requests, but whoever administers the system decides who gets access.
    • Flock claims it doesn’t sell data, but the data is stored off-site, raising the stakes of a breach. The bigger the database, the more appealing it is to attackers.
    • Unlike a home security camera that you can control, these systems by design track everyone who comes and goes… not just the “bad guys.”

And while these cameras don’t capture people, they do capture patterns, like vehicles entering or leaving a neighborhood. That can reveal routines, habits, and movement over time. A neighbor was surprised to learn the system had logged every one of her daily trips, including gym runs, carpool, and errands. Not harmful on its own, but enough to make you realize how detailed a picture these systems build of ordinary life.
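
To make that concrete, here’s a toy illustration (invented data and a hypothetical plate, not Flock’s actual schema) of how bare plate-and-timestamp logs turn into a routine:

    from collections import Counter

    sightings = [
        ("ABC1234", "2025-11-03 07:58"), ("ABC1234", "2025-11-04 08:01"),
        ("ABC1234", "2025-11-05 07:59"), ("ABC1234", "2025-11-05 17:32"),
    ]
    # Group one plate's sightings by hour of day
    by_hour = Counter(ts[11:13] for plate, ts in sightings if plate == "ABC1234")
    print(by_hour.most_common())  # [('07', 2), ('08', 1), ('17', 1)]: the morning school run stands out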

The place for ALPRs… and where they don’t belong

If you’re feeling unsettled, you’re not alone. ALPRs are being installed at light speed, often faster than the laws meant to govern them can keep up. Will massive investment shape how future rules are written?

Surveillance and data collection laws

  • Federal: There’s no nationwide ban on license plate readers; law enforcement has used them for years. (We’ve also reported on police using drones to read license plates, raising similar concerns about oversight.) However, courts in the US increasingly grapple with how this data impacts Fourth Amendment “reasonable expectation of privacy” standards.
  • Local: Some states and cities have rules about where cameras can be placed on public and private roadways, and some have set limits on how long footage can be kept. Check your local ordinances or ask your community board for its policy.

A good example is Oakland, where the City Council limited ALPR data retention to six months unless tied to an active investigation. Only certain authorized personnel can access the footage, every lookup is logged and auditable, and the city must publish annual transparency reports showing usage, access, and data-sharing. The policy also bans tracking anyone based on race, religion, or political views. It’s a practical attempt to balance public safety with privacy rights.

Are your neighbors allowed to record your car?

If your neighborhood is private property, usually yes. HOAs and community boards can install cameras at entrances and exits, much like a private parking lot. They still have to follow state law and, ideally, notify residents, so always read the fine print in those community updates.

What if the footage is misused or hacked?

This is the big one. If footage leaves your neighborhood, whether handed to police, shared too widely, or leaked online, it can create liability issues. Flock says its system is encrypted and tightly controlled, but no technology is foolproof. If you think footage was misused, you can request an audit or raise it with your HOA or local law enforcement.

Meet your advocates

Image courtesy of Deflock.me. This is just a snapshot-in-time of their map showing the locations of ALPR cameras.

For surveillance

One thing stands out in this debate: the strongest supporters of ALPRs are the groups that use or sell them, i.e. law enforcement and the companies that profit from the technology. It is difficult to find community organizations or privacy watchdogs speaking up in support. Instead, many everyday people and civil liberties groups are raising concerns. It’s worth asking why the push for ALPRs comes primarily from those who benefit directly, rather than from the wider public who are most affected by increased surveillance.

For privacy

As neighborhood ALPRs like Flock cameras become more common, a growing set of advocacy and educational sites has stepped in to help people understand the technology, and to push back when needed:

Deflock.me is one of the most active. It helps residents opt their vehicles out where possible, track Flock deployments, and organize local resistance to unwanted surveillance.

Meanwhile, Have I Been Flocked? takes an almost playful approach to a very real issue: it lets people check whether their car has appeared in Flock databases. That simple search often surprises users and highlights how easily ordinary vehicles are tracked.

For folks seeking a deeper dive, Eyes on Flock and ALPR Watch map where Flock cameras and other ALPRs have been installed, providing detailed databases and reports. By shining a light on their proliferation, the sites empower residents to ask municipal leaders hard questions about the balance between public safety and civil liberties.

If you want to see the broader sweep of surveillance tech in the US, the Atlas of Surveillance is a collaboration between the Electronic Frontier Foundation (EFF) and University of Nevada, Reno. It offers an interactive map of surveillance systems, showing ALPRs like Flock in context of a growing web of automated observation.

Finally, Plate Privacy provides practical tools: advocacy guides, legal resources, and tips for shielding plates from unwanted scanning. It supports anyone who wants to protect the right to move through public space without constant tracking.

Together, these initiatives paint a clear picture: while ALPRs spread rapidly in the name of safety, an equally strong movement is demanding transparency, limits, and respect for privacy. Whether you’re curious, cautious, or concerned, these sites offer practical help and a reminder that you’re not alone in questioning how much surveillance is too much.

How to protect your privacy around ALPRs

This is where I step out of the weeds and offer real-world advice… one neighbor to another.

Talk to your neighborhood or city board

  • Ask about privacy: Who can access footage? How long is it stored? What counts as a “valid” reason to review it?
  • Request transparency: Push for clear, written policies that everyone can see.
  • Ask about opt-outs: Even if your state doesn’t require one, your community may still offer an option.

Key questions to ask about any new camera system

  • Who will have access to the footage?
  • How long will data be stored?
  • What’s the process for police, or anyone else, to request footage?
  • What safeguards are in place if the data is lost, shared, or misused?

Protecting your own privacy

  • Check your community’s camera policies regularly. Homeowners Associations (HOAs) update them more often than you’d think.
  • Consider privacy screens or physical barriers if a camera directly faces your home.
  • Stay updated on your state’s surveillance laws. Rules around data retention and access can change.

Finding the balance

You don’t have to choose between feeling safe and feeling free. With the right policies and a bit of open conversation, communities can use technology without giving up privacy. The goal isn’t to pit safety against rights, but to make sure both can coexist.

What’s your take? Have ALPRs made you feel safer, more anxious, or a bit of both? Share your thoughts in the comments, and let’s keep the conversation welcoming, practical, and focused on building communities we’re proud to live in. Let’s watch out for each other not just with cameras, but with compassion and dialogue, too. You can message me on LinkedIn at https://www.linkedin.com/in/mattburgess/


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Gmail can read your emails and attachments to train its AI, unless you opt out

Under the radar, Google has added features that allow Gmail to access all private messages and attachments for training its AI models.

If you use Gmail, you need to be aware of an important change that’s quietly rolling out. Reportedly, Google has recently started automatically opting users in to allow Gmail to access all private messages and attachments for training its AI models. This means your emails could be analyzed to improve Google’s AI assistants, like Smart Compose or AI-generated replies, unless you decide to take action.

The reason behind this is Google’s push to power new Gmail features with its Gemini AI, helping you write emails faster and manage your inbox more efficiently. To do that, Google is using real email content, including attachments, to train and refine its AI models. Some users are now reporting that these settings are switched on by default instead of asking for explicit opt-in.

Which means that if you don’t manually turn these settings off, your private messages may be used for AI training behind the scenes. Even though Google promises strong privacy measures like anonymization and data security during AI training, for anyone handling sensitive or confidential information, that may not feel reassuring.

Sure, your Gmail experience would get smarter and more personalized; features like predictive text and AI-powered writing assistance rely on exactly this kind of data. But is it worth the risks? The lack of explicit consent feels like a step backward for anyone who wants control over how their personal data is used.

How to opt out

Opting out requires changing settings in two separate places, so I’ve tried to make the steps as easy to follow as possible. Feel free to let me know in the comments if I missed anything.

To fully opt out, you must turn off Gmail’s “Smart features” in both locations. Don’t miss one, or AI training may continue.

Step 1: Turn off Smart Features in Gmail, Chat, and Meet settings

  • Open Gmail on your desktop or mobile app.
  • Click the gear icon → See all settings (desktop) or Menu → Settings (mobile).
  • Find the section called Smart Features in Gmail, Chat, and Meet. You’ll need to scroll down quite a bit.
  • Uncheck this option.
  • Scroll down and hit Save changes if on desktop.

Step 2: Turn off Google Workspace Smart Features

  • Still in Settings, locate Google Workspace smart features.
  • Click on Manage Workspace smart feature settings.
  • You’ll see two options: Smart features in Google Workspace and Smart features in other Google products.

  • Toggle both off.
  • Save again on this screen.

Step 3: Verify that both are off

  • Make sure both toggles remain off.
  • Refresh your Gmail app or sign out and back in to confirm changes.

Why two places?

Google separates “Workspace” smart features (Gmail, Chat, Meet) from smart features used across other Google products. To fully opt out of feeding your data into AI training, both must be disabled.

Note

Your account might not show these settings enabled by default yet (mine didn’t). Google appears to be rolling this out gradually. But if you care about privacy and control, double-check your settings today.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Attackers are using “Sneaky 2FA” to create fake sign-in windows that look real

Attackers have a new trick to steal your username and password: fake browser pop-ups that look exactly like real sign-in windows. These “Browser-in-the-Browser” attacks can fool almost anyone, but a password manager and a few simple habits can keep you safe.


Phishing attacks continue to evolve, and one of the more deceptive tricks in the attacker’s arsenal today is the Browser-in-the-Browser (BitB) attack. At its core, BitB is a social engineering technique that makes users believe they’re interacting with a genuine browser pop-up login window when, in reality, they’re dealing with a convincing fake built right into a web page.

Researchers recently found a Phishing-as-a-Service (PhaaS) kit known as “Sneaky 2FA” that’s making these capabilities available on the criminal marketplace. Customers reportedly receive a licensed, obfuscated version of the source code and can deploy it however they like.

Attackers use this kit to create a fake browser window using HTML and CSS. It’s very deceptive because it includes a perfectly rendered address bar showing the legitimate website’s URL. From a user’s perspective, everything looks normal: the window design, the website address, even the login form. But it’s a carefully crafted illusion designed to steal your username and password the moment you start typing.

Normally we tell people to check whether the URL in the address bar matches their expectations, but in this case that won’t help. The fake URL bar can fool the human eye, but it can’t fool a well-designed password manager. Password managers fill credentials based on the site’s actual domain, not on what the page looks like, so an HTML fake masquerading as a browser window won’t trigger them. This is why using a password manager consistently matters: it not only encourages strong, unique passwords but also helps spot inconsistencies by refusing to autofill on suspicious forms.
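
Here’s a toy illustration of that principle (not any particular password manager’s code): credentials are keyed to the hostname the browser actually reports, not to whatever the page draws on screen.

    from urllib.parse import urlparse

    VAULT = {"accounts.example.com": ("alice", "correct-horse-battery-staple")}

    def autofill(actual_page_url: str):
        # The fake "address bar" is just pixels; the browser still reports
        # the phishing site's real hostname, so the lookup fails.
        return VAULT.get(urlparse(actual_page_url).hostname)

    print(autofill("https://accounts.example.com/login"))      # fills the credentials
    print(autofill("https://phishing-site.example.net/login")) # None: refuses to fill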

Sneaky 2FA uses various tricks to avoid detection and analysis. For example, it keeps security tools away from the phishing pages: unwanted visitors are redirected to harmless sites, and the BitB page is shown only to high-value targets. For those targets, the pop-up window adapts to match each visitor’s operating system and browser.

The domains the campaigns use are also short-lived. Attackers “burn and replace” them to stay ahead of blocklists, which makes it hard to block these campaigns based on domain names.

So, what can we do?

In the arms race against phishing schemes, pairing a password manager with multi-factor authentication (MFA) offers the best protection.

As always, you’re the first line of defense. Don’t click on links in unsolicited messages of any type before verifying and confirming they were sent by someone you trust. Staying informed is important as well, because you know what to expect and what to look for.

And remember: it’s not just about trusting what you see on the screen. Layered security stops attackers before they can get anywhere.

Another effective security layer to defend against BitB attacks is Malwarebytes’ free browser extension, Browser Guard, which detects and blocks these attacks heuristically.


We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Mac users warned about new DigitStealer information stealer

A new infostealer called DigitStealer is going after Mac users. It avoids detection, skips older devices, and steals files, passwords, and browser data. We break down what it does and how to protect your Mac.


Researchers have described a new malware called DigitStealer that steals sensitive information from macOS users.

This variant comes with advanced detection-evasion techniques and a multi-stage attack chain. Most infostealers go after the same types of data and use similar methods to get it, but DigitStealer is different enough to warrant attention.

A few things make it stand out: platform-specific targeting, fileless operation, and anti-analysis techniques. Together, they pose relatively new challenges for Mac users.

The attack starts with a file disguised as a utility app called “DynamicLake,” which is hosted on a fake website rather than the legitimate company’s site. To trick users, it instructs you to drag a file into Terminal, which will initiate the download and installation of DigitStealer.

If your system matches certain regions or is a virtual machine, the malware won’t run. That’s likely meant to hinder analysis by researchers and to avoid infecting people in its home country; in some countries, sparing local victims is enough to stay out of prison. It also limits itself to devices with the newer ARM features introduced with the M2 chip or later, skipping older Apple silicon Macs, Intel-based Macs, and most virtual machines.

The attack chain is largely fileless, so it won’t leave many traces behind on an affected machine. Unlike file-based attacks that execute the payload from the hard drive, fileless attacks execute the payload in Random Access Memory (RAM). Running malicious code directly in memory instead of from the hard drive has several advantages for attackers:

  • Evasion of traditional security measures: Fileless attacks bypass antivirus software and file-signature detection, making them harder to identify using conventional security tools.   
  • Harder to remediate: Since fileless attacks don’t create files, they can be more challenging to remove once detected. This can make it extra tricky for forensics to trace an attack back to the source and restore the system to a secure state.

DigitStealer’s initial payload asks for your password and tries to steal documents, notes, and files. If successful, it uploads them to the attackers’ servers.

The second stage of the attack goes after browser information from Chrome, Brave, Edge, Firefox and others, as well as keychain passwords, crypto wallets, VPN configurations (specifically OpenVPN and Tunnelblick), and Telegram sessions.

How to protect your Mac

DigitStealer shows how Mac malware keeps evolving. It’s different from other infostealers, splitting its attack into stages, targeting new Mac hardware, and leaving barely any trace.

But you can still protect yourself:

  • Always be careful what you run in Terminal. Don’t follow instructions from unsolicited messages.
  • Be careful where you download apps from.
  • Keep your software, especially your operating system and your security defenses, up to date.
  • Turn on multi-factor authentication so a stolen password isn’t enough to break into your accounts.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Sharenting: are you leaving your kids’ digital footprints for scammers to find? 

Let’s be real: the online world is a huge part of our kids’ lives these days. From the time they’re tiny, we share photos, moments, and milestones online—proud parent stuff! Schools, friends, and family all get involved too. Before we know it, our kids have a whole digital history they didn’t even know they were building. Unlike footprints at the beach, this trail never washes away. 

That habit even has a name now: sharenting. It’s when parents share details of their child’s life online, often without realizing how public or permanent those posts can become. 

What exactly is a digital footprint? 

Think of your child’s digital footprint as the trail they (and you) leave across the internet. It includes every photo, post, comment, and account, plus all the data quietly collected behind the scenes. 

There are two sides to it: 

  • Active footprints: what you or your child share directly, such as photos, TikTok videos, usernames, or status updates. Even “private” posts can be screenshotted or reshared. 
  • Passive footprints: what gets collected automatically. Cookies, location data, and app activity quietly build profiles of who your child is and what they do. 

Both add up to a digital version of your child that can stick around for years. 

Why guard your child’s digital footprint like gold? 

For kids and teens, their online presence shapes how the world sees them—friends, teachers, even future employers. But it also creates risks: 

  • Cyberbullying: once something’s online, it can be copied or mocked. 
  • Future opportunities: colleges and jobs may see old posts that no longer reflect who they are. 
  • Safety concerns: oversharing locations or routines can make it easier for strangers to find or trick them. 
  • Identity theft: birthdates, school names, and addresses can help criminals create fake identities. 

Practicing good digital hygiene keeps those risks small. 

Kids leave hidden trails too 

Kids don’t need social media accounts to leave data behind. Gaming platforms, smartwatches, school apps, and even voice assistants collect fragments of personal information. 

That innocent photo from a class project might live in a public gallery. A leaderboard can display a real name or score history. Even nicknames or in-game chat can expose more than intended. 

Help your kids check what’s visible publicly and what isn’t. 

How sharenting can make it worse 

Don’t worry, I’ve done some of these too! We love to share and celebrate our kids, but sometimes we give away more than we mean to: 

  • Posting full names, birthdays, and locations on open social media. 
  • Sharing photos with school logos, house numbers, or nearby landmarks visible. 
  • Leaving geotagging or location data on by accident (it’s scary how precise this can be). 
  • Talking about routines, worries, or personal struggles in public forums. 
  • Forgetting to clean up old posts as our kids get bigger. 

And it’s easy to forget about all those apps we sign up for “just to try it.” They might be collecting info in the background, too. 

Two real-life sharenting stories 

Karen loves her son, Max. She posts his awards, soccer games, and milestones online, sometimes tagging the school or leaving her phone’s location on. 

It’s innocent… until someone strings the details together. A fake gamer profile messages Max: “Hey, don’t you go to Graham Elementary? I saw your soccer pics!” Suddenly, a friendly chat feels personal and real. 

Karen meant well, but her posts created a map for someone else to follow. 

Then there’s the story we covered of a mother in Florida who picked up the phone to hear her daughter sobbing. She’d been in a car accident, hit a pregnant woman, and needed bail money right away. The voice sounded exactly like her child. Terrified, she followed the caller’s instructions and handed over $15,000. Only later did she learn her daughter had been safe at work the whole time. Scammers had used AI to clone her voice from a short online video. It’s a chilling reminder that even something as ordinary as a video or social post can become fuel for manipulation. 

Simple steps parents can take 

  • Be a model: before you post, ask, “Would I be OK with a stranger seeing this?” 
  • Start young: teach privacy basics early and update as they grow. 
  • Lock it down: review privacy settings together on both your accounts. 
  • Use pseudonyms: encourage nicknames for games or public forums. 
  • Agree as a family: set boundaries for what’s OK to share. 
  • Turn off geotags: remove automatic location data from photos (see the sketch below). 
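
On that last point, stripping location data is also easy to script. Here’s a minimal sketch using the Pillow library, assuming a JPEG input and hypothetical filenames; most phones and photo apps offer the same option in their share or export settings:

    from PIL import Image

    def strip_gps(src: str, dst: str) -> None:
        img = Image.open(src)
        exif = img.getexif()
        exif.pop(0x8825, None)  # 0x8825 is the GPSInfo tag that holds coordinates
        img.save(dst, exif=exif)

    strip_gps("holiday.jpg", "holiday_no_gps.jpg")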

Know what to do if something goes wrong 

Everyone messes up online sometimes. It happens to the best of us. We’ve all shared something we wish we hadn’t. The goal isn’t to scare our kids (or ourselves) away from the internet, but to help them feel confident, safe, and smart about it all. 

If your child ever feels uncomfortable or gets into a sticky situation online: 

  • Stay calm and let them know you are safe to talk to. 
  • Keep a record of any sketchy messages or harassment. 
  • Use blocking, reporting, and privacy tools. 
  • Loop in school counselors or other trusted adults if you need backup. 
  • If there’s a real threat or criminal activity, contact the proper authorities. 

You’ve got this! 

The online world is always changing, and honestly, we’re all learning as we go. But by staying curious, keeping the lines open, and setting a good example yourself, you’ll help your kids build a digital life they can be proud of. 

Let’s look out for each other. If you’ve got thoughts or tips about sharenting and online safety, do share them with me. You can message me on LinkedIn at https://www.linkedin.com/in/mattburgess/. We’re all in this together. 


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Chrome zero-day under active attack: visiting the wrong site could hijack your browser

Google has released an update for its Chrome browser that includes two security fixes. Both are classified as high severity, and one is reportedly exploited in the wild. These flaws were found in Chrome’s V8 engine, which is the part of Chrome (and other Chromium-based browsers) that runs JavaScript.

Chrome is by far the world’s most popular browser, used by an estimated 3.4 billion people. That scale means when Chrome has a security flaw, billions of users are potentially exposed until they update.

These vulnerabilities are serious because they affect the code that runs almost every website you visit. Every time you load a page, your browser executes JavaScript from all sorts of sources, whether you notice it or not. Without proper safety checks, attackers can sneak in malicious instructions that your browser then runs—sometimes without you clicking anything. That could lead to stolen data, malware infections, or even a full system compromise.

That’s why it’s important to install these patches promptly. Staying unpatched means you could be open to an attack just by browsing the web, and attackers often exploit these kinds of flaws before most users have a chance to update. Always let your browser update itself, and don’t delay restarting to apply security patches, because updates often fix exactly this kind of risk.

How to update

The Chrome update brings the version number to 142.0.7444.175/.176 for Windows, 142.0.7444.176 for macOS, and 142.0.7444.175 for Linux. So if your Chrome is on version 142.0.7444.175 or later, it’s protected from these vulnerabilities.
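
If you’d rather check programmatically (say, across a household of machines), comparing the dotted version strings field by field is enough. Here’s a quick sketch using the numbers above:

    def is_patched(version: str, minimum: str = "142.0.7444.175") -> bool:
        # Compare dotted versions field by field as integers, not as strings
        return tuple(map(int, version.split("."))) >= tuple(map(int, minimum.split(".")))

    print(is_patched("142.0.7444.176"))  # True: already fixed
    print(is_patched("141.0.7390.122"))  # False: update needed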

The easiest way to update is to allow Chrome to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To update manually, click the “More” menu (three stacked dots), then choose Settings > About Chrome. If there is an update available, Chrome will notify you and start downloading it. Then relaunch Chrome to complete the update, and you’ll be protected against these vulnerabilities.

You can find more detailed update instructions and how to read the version number in our article on how to update Chrome on every operating system.


Technical details

Both vulnerabilities are characterized as “type confusion” flaws in V8.

Type confusion happens when code doesn’t verify the object type it’s handling and then uses it incorrectly. In other words, the software mistakes one type of data for another—like treating a list as a single value or a number as text. This can cause Chrome to behave unpredictably and, in some cases, let attackers manipulate memory and execute code remotely through crafted JavaScript on a malicious or compromised website.
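
As a loose analogy (a toy in Python’s ctypes, not V8’s internals), here’s what “treating one type as another” means at the memory level: the same eight bytes read as a float or as an integer.

    import ctypes

    value = ctypes.c_double(3.14)
    # Reinterpret the float's memory as an unsigned 64-bit integer
    confused = ctypes.cast(ctypes.pointer(value),
                           ctypes.POINTER(ctypes.c_uint64)).contents
    print(hex(confused.value))  # 0x40091eb851eb851f: the raw IEEE-754 bit pattern

In a memory-unsafe engine, a bug that lets attacker-controlled JavaScript trigger this kind of misread (or miswrite) can be escalated into forging pointers and corrupting the heap.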

The actively exploited vulnerability—Google says “an exploit for CVE-2025-13223 exists in the wild”—was discovered by Google’s Threat Analysis Group (TAG). It can allow a remote attacker to exploit heap corruption via a malicious HTML page, which means just visiting the “wrong” website might be enough to compromise your browser.

Google hasn’t shared details yet about who is exploiting the flaw, how they do it in real-world attacks, or who’s being targeted. However, the TAG team typically focuses on spyware and nation-state attackers that abuse zero days for espionage.

The second vulnerability, tracked as CVE-2025-13224, was discovered by Google’s Big Sleep, an AI-driven project to discover vulnerabilities. It has the same potential impact as the other vulnerability, but cybercriminals probably haven’t yet figured out how to use it.

Users of other Chromium-based browsers—like Edge, Opera, and Brave—can expect similar updates in the near future.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Thieves order a tasty takeout of names and addresses from DoorDash

DoorDash is known for delivering takeout food, but last month the company accidentally served up a tasty plate of personal data, too. It disclosed a breach, discovered on October 25, 2025, in which an employee fell for a social engineering attack that allowed attackers to gain account access.

Breaches like these are sadly common, but it’s how DoorDash handled this breach, along with another security issue, that has given some cause for concern.

Information stolen during the breach varied by user, according to DoorDash, which connects gig economy delivery drivers with people wanting food brought to their door. It said that names, phone numbers, email addresses, and physical addresses were stolen.

DoorDash said that as well as telling law enforcement, it has added more employee training and awareness, hired a third party company to help with the investigation, and deployed unspecified improvements to its security systems to help stop similar breaches from happening again. It cooed:

“At DoorDash, we believe in continuous improvement and getting 1% better every day.”

However, it might want to get a little better at disclosing breaches, warn experts. It let almost three weeks pass between discovering the event on October 25 and notifying customers on November 13, angering some customers.

Just as irksome for some was the company’s insistence that “no sensitive information was accessed”. DoorDash classifies sensitive information as Social Security numbers or other government-issued identification numbers, driver’s license information, or bank or payment card information. While that data wasn’t taken, names, addresses, phone numbers, and emails are pretty sensitive too.

One Canadian user on X was angry enough to claim a violation of Canadian breach law, and promised further action:

“I should have been notified immediately (on Oct 25) of the leak and its scope, and told they would investigate to determine if my account was affected—that way I could take the necessary precautions to protect my privacy and security. […] This process violates Canadian data breach law. I’ll be filing a case against DoorDash in provincial small claims court and making a complaint to the Office of the Privacy Commissioner of Canada.”

How soon should breach notifications happen?

How long is too long when it comes to breach notification? From an ethical standpoint, companies should tell customers as quickly as possible to ensure that individuals can protect themselves—but they also need time to understand what has happened. Some of these attacks can be complex, involving bad actors that have been inside networks for months and have established footholds in the system.

In some jurisdictions, privacy law dictates notification within a certain period, while others are vague. Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) simply requires notification as soon as is feasible. In the US, disclosure laws are currently set on a per-state level. For example, California recently passed Senate Bill 446, which mandates reporting breaches to consumers within 30 days as of January 1, 2026. That would still leave DoorDash’s latest breach report in compliance though.

Another disclosure spat

This isn’t the only disclosure controversy currently surrounding DoorDash. Security researcher doublezero7 discovered an email spoofing flaw in DoorDash for Business, its platform for companies to handle meal deliveries.

The flaw allowed anyone to create a free account, add fake employees, and send branded emails from DoorDash servers. Those mails would pass common email security checks and land in inboxes without being flagged as spam, the researcher said.

The researcher filed a report with bug bounty program HackerOne in July 2024, but it was closed as “Informative”. DoorDash didn’t fix it until this month, after the researcher complained.

However, all might not be as it seems. DoorDash has complained that the researcher made financial demands around disclosure timelines that felt extortionate, according to Bleeping Computer.

What actions can you take?

Back to the data breach issue. What can you do to protect yourself against events like these? The Canadian X user explains that they used a fake name and forwarded email address for their account, but that didn’t stop their real phone number and physical address being leaked.

You can’t avoid using your real credit card number, either—although many ecommerce sites will make saving credit card details optional.

Perhaps the best way to stay safe is to use a credit monitoring service, and to watch news sites like this one for information about breaches… whenever companies decide to disclose them.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Why it matters when your online order is drop-shipped

Online shopping has never been easier. A few clicks can get almost anything delivered straight to your door, sometimes at a surprisingly low price. But behind some of those deals lies a fulfillment model called drop-shipping. It’s not inherently fraudulent, but it can leave you disappointed, stranded without support, or tangled in legal and safety issues.

I’m in the process of de-Googling myself, so I’m looking to replace my Fitbit. Since Google bought Fitbit, it’s become more difficult to keep your information from them—but that’s a story for another day.

Of course, Facebook picked up on my searches for replacements and started showing me ads for smartwatches. Some featured amazing specs at very reasonable prices. But I had never heard of the brands, so I did some research and quickly fell into the world of drop-shipping.

What is drop-shipping, and why is it risky?

Drop-shipping means the seller never actually handles the stock they advertise. Instead, they pass your order to another company—often an overseas manufacturer or marketplace vendor—and the product is then shipped directly to you. On the surface, this sounds efficient: less overhead for sellers and more choices for buyers. In reality, the lack of oversight between you and the actual supplier can create serious problems.

One of the biggest concerns is quality control, or the lack of it. Because drop-shippers rely on third parties they may never have met, product descriptions and images can differ wildly from what’s delivered. You might expect a branded electronic device and receive a near-identical counterfeit with dubious safety certifications. With chargers, batteries, and children’s toys, poor quality control isn’t just disappointing, it can be downright dangerous. Goods may not meet local standards and safety protocols, and may contain unhealthy amounts of chemicals.

Buyers might unknowingly receive goods that lack market approval or conformity marks such as CE (Conformité Européenne = European Conformity), the UL (Underwriters Laboratories) mark, or FCC certification for electronic devices. Customs authorities can and do seize noncompliant imports, resulting in long delays or outright confiscation. Some buyers report being asked to provide import documentation for items they assumed were domestic purchases.

Then there’s the issue of consumer rights. Enforcing warranties or returns gets tricky when the product never passed through the seller’s claimed country of origin. Even on platforms like Amazon or eBay that offer buyer protection, disputes can take a while to resolve.

Drop-shipping also raises data privacy concerns. Third-party sellers in other jurisdictions might receive your personal address and phone number directly. With little enforcement across borders, this data could be reused or leaked into marketing lists. In some cases, multiple resellers have access to the same dataset, amplifying the risk.

In the case of the watches, other users said they were pushed to install Chinese-made apps with names that didn’t match the brand of the watch. We’ve talked before about the risks that come with installing unknown apps.

What you can do

A few quick checks can spare you a lot of trouble.

  • Research unfamiliar sellers, especially if the price looks too good to be true.
  • Check where the goods ship from before placing an order.
  • Use payment methods with strong buyer protection.
  • Stick with platforms that verify sellers and offer clear refund policies.
  • Be alert for unexpected shipping fees, extra charges, or requests for more personal information after you buy.

Drop-shipping can be legitimate when done well, but when it isn’t, it shifts nearly all risk to the buyer. And when counterfeits, privacy issues, and surprise fees intersect, the real price of the “deal” is your data, your safety, or your patience.

If you’re unsure about an ad, you can always submit it to Malwarebytes Scam Guard. It’ll help you figure out whether the offer is safe to pursue.

And when buying any kind of smart device that needs you to download an app, it’s worth remembering these actions:

  • Question the permissions an app asks for. Does it serve a purpose for you, the user, or is it just some vendor being nosy?
  • Read the privacy policy—yes, really. Sometimes they’re surprisingly revealing.
  • Don’t hand over personal data manufacturers don’t need. What’s in it for you, and what’s the price you’re going to pay? They may need your name for the warranty, but your gender, age, and (most of the time) your address aren’t needed.

Most importantly, worry about what companies do with the information and how well they protect it from third-party abuse or misuse.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Your coworker is tired of AI “workslop” (Lock and Code S06E23)

This week on the Lock and Code podcast…

Everything’s easier with AI… except having to correct it.

In just the three years since OpenAI released ChatGPT, not only has online life changed at home—it’s also changed at work. Some of the biggest software companies today, like Microsoft and Google, are advancing a vision of an AI-powered future where people don’t write their own emails anymore, or make their own slide decks for presentations, or compile their own reports, or even read their own notifications, because AI will do it for them.

But it turns out that offloading this type of work onto AI has consequences.

In September, a group of researchers from Stanford University and BetterUp Labs published findings from an ongoing study into how AI-produced work impacts the people who receive that work. And it turns out that the people who receive that work aren’t its biggest fans, because it’s not just work that they’re having to read, review, and finalize. It is, as the researchers called it, “workslop.”

Workslop is:

“AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task. It can appear in many different forms, including documents, slide decks, emails, and code. It often looks good, but is overly long, hard to read, fancy, or sounds off.”

Far from an indictment on AI tools in the workplace, the study instead reveals the economic and human costs that come with this new phenomenon of “workslop.” The problem, according to the researchers, is not that people are using technology to help accomplish tasks. The problem is that people are using technology to create ill-fitting work that still requires human input, review, and correction down the line.

“The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work,” the researchers wrote.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Dr. Kristina Rapuano, senior research scientist at BetterUp Labs, about AI tools in the workplace, the potential lost productivity costs that come from “workslop,” and the sometimes dismal opinions that teammates develop about one another when receiving this type of work.

“This person said, ‘Having to read through workslop is demoralizing. It takes away time I could be spending doing my job because someone was too lazy to do theirs.’”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

The price of ChatGPT’s erotic chat? $20/month and your identity

To talk dirty to ChatGPT, you may soon have to show it your driver’s license.

OpenAI announced last month that ChatGPT will soon offer erotica—but only for verified adults. That sounds like a clever guardrail until you realize what “verified” might mean: uploading government identification to a company that already knows your search history, your conversations, and maybe your fantasies.

It’s a surreal moment for technology. The most famous AI tool in the world is turning into a porn gatekeeper. And it’s not happening in a vacuum. California just passed a law requiring age checks for app downloads. Discord’s age-verification partner was hacked this summer, exposing 70,000 government-issued IDs that are now being used for extortion. Twenty-four US states have passed similar laws.

What began as an effort to keep kids off adult sites has quietly evolved into the largest digital ID system ever built. One we never voted for.

The normalization of online ID checkpoints

Age verification started as a moral crusade. Lawmakers wanted to protect minors from explicit material. However, every system that requires an ID online transforms into something else entirely: a surveillance checkpoint. To prove you’re an adult, you hand over the same information criminals and governments dream of having—and to a patchwork of private vendors who store it indefinitely.

We’ve already seen where that leads. In the UK, after age-gating rules took effect under the Online Safety Act, one of the verification companies was breached. In the US, the AU10TIX breach exposed user data from Uber, X, and TikTok. Each time, the same story: people forced to upload passports, driver’s licenses, or selfies, only to watch that data leak.

If hackers wanted to design a dream scenario for mass identity theft, this would be it. Governments legally requiring millions of adults to upload the exact documents criminals need.

The illusion of safety

The irony is that none of this actually protects children. In the UK, VPN sign-ups spiked 1,400% the day the new restrictions went live. We hope that’s from adults balking at handing over personal data, but the point is any teen with a search bar can bypass an age-gate in minutes. The result isn’t a safer internet—it’s an internet that collects more data about adults while pushing kids toward sketchier, unregulated corners of the web.

Parents already have better options for keeping inappropriate content at bay: device-level controls, filtered browsers, phones built for kids. None of those require turning the rest of us into walking ID tokens.

From bars to browsers

Defenders like to compare online verification to showing ID at a bar. But when you flash your license to buy a beer, the cashier doesn’t scan it, store it, and build a permanent record of your drinking habits. Online verification does exactly that. Every log-in becomes another data point linking your identity to what you read, watch, and say.

It’s not hard to imagine how this infrastructure expands. Today it’s porn, violence, and “mature” chatbots. Tomorrow it could be reproductive-health forums, LGBTQ+ resources, or political discussion groups flagged as “sensitive.” Once the pipes exist, someone will always find a new reason to use them.

When innovation starts to feel invasive

Let’s be honest. We could all make money if we just decided to build porn machines, and that’s what this new offering from ChatGPT feels like. It didn’t take long for AI to grab a slice of the OnlyFans market. Except the price of admission isn’t only $20 a month; it’s potentially your identity and a whole lot of heartache.

As Jason Kelley of the Electronic Frontier Foundation explained on my Lock and Code podcast,

“Once you are asked to give certain types of information to a website, there’s no way to know what that company, who’s supposedly verifying your age, is doing with that information.”

The verification process itself becomes a form of surveillance, creating detailed records of legal adult behavior that governments and cybercriminals can exploit.

This is how surveillance gets normalized: one “safety” feature at a time.

ChatGPT’s erotic mode will make ID-upload feel routine—a casual step before chatting with your favorite AI companion. But beneath the surface, those IDs will feed a new class of data brokers and third-party verifiers whose entire business depends on linking your real identity to everything you do online.

We’ve reached the point where governments and corporations don’t need to build a single centralized database; we’re volunteering one piece at a time.

ChatGPT’s latest intentions are a preview of what’s next. The internet has been drifting toward identity for years—from social logins to verified profiles—and AI is simply accelerating that shift. What used to be pockets of anonymity are becoming harder to find, replaced by a web that expects to know exactly who you are.

The future of “safe” online spaces shouldn’t depend on handing over your driver’s license to an AI.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.