IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Chrome zero-day under active attack: visiting the wrong site could hijack your browser

Google has released an update for its Chrome browser that includes two security fixes. Both are classified as high severity, and one is reportedly exploited in the wild. These flaws were found in Chrome’s V8 engine, which is the part of Chrome (and other Chromium-based browsers) that runs JavaScript.

Chrome is by far the world’s most popular browser, used by an estimated 3.4 billion people. That scale means when Chrome has a security flaw, billions of users are potentially exposed until they update.

These vulnerabilities are serious because they affect the code that runs almost every website you visit. Every time you load a page, your browser executes JavaScript from all sorts of sources, whether you notice it or not. Without proper safety checks, attackers can sneak in malicious instructions that your browser then runs—sometimes without you clicking anything. That could lead to stolen data, malware infections, or even a full system compromise.

That’s why it’s important to install these patches promptly. Staying unpatched means you could be open to an attack just by browsing the web, and attackers often exploit these kinds of flaws before most users have a chance to update. Always let your browser update itself, and don’t delay restarting to apply security patches, because updates often fix exactly this kind of risk.

How to update

The Chrome update brings the version number to 142.0.7444.175/.176 for Windows, 142.0.7444.176 for macOS, and 142.0.7444.175 for Linux. So if your Chrome version is 142.0.7444.175 or later, it’s protected from these vulnerabilities.
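Note that dotted version strings like these compare field by field as numbers, not as text. A quick way to check your build against the patched one — a sketch, with the helper name being illustrative:

```python
def version_tuple(v: str) -> tuple:
    """Split a dotted Chrome version string into comparable integers."""
    return tuple(int(part) for part in v.split("."))

patched = version_tuple("142.0.7444.175")          # first fixed build
print(version_tuple("142.0.7444.176") >= patched)  # True: newer, protected
print(version_tuple("142.0.7399.0") >= patched)    # False: older build
```

Comparing as plain strings would get this wrong ("142.0.7399.0" sorts after "142.0.7444.175" alphabetically in some schemes), which is why the numeric split matters.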

The easiest way to update is to allow Chrome to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To update manually, click the “More” menu (three stacked dots), then choose Settings > About Chrome. If there is an update available, Chrome will notify you and start downloading it. Then relaunch Chrome to complete the update, and you’ll be protected against these vulnerabilities.

You can find more detailed update instructions and how to read the version number in our article on how to update Chrome on every operating system.

Chrome is up to date

Technical details

Both vulnerabilities are characterized as “type confusion” flaws in V8.

Type confusion happens when code doesn’t verify the object type it’s handling and then uses it incorrectly. In other words, the software mistakes one type of data for another—like treating a list as a single value or a number as text. This can cause Chrome to behave unpredictably and, in some cases, let attackers manipulate memory and execute code remotely through crafted JavaScript on a malicious or compromised website.
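As a rough illustration of what “treating one type of data as another” means at the memory level, this Python sketch uses ctypes to write the same eight bytes through a floating-point view and read them back through an integer view, much as a type-confused engine misreads an object:

```python
import ctypes

# Eight raw bytes, viewed through two incompatible "types".
raw = ctypes.create_string_buffer(8)
as_double = ctypes.cast(raw, ctypes.POINTER(ctypes.c_double))
as_int = ctypes.cast(raw, ctypes.POINTER(ctypes.c_uint64))

as_double[0] = 1.5       # write the bytes as a floating-point number
print(hex(as_int[0]))    # read them back as an integer: 0x3ff8000000000000
```

In this sandbox the confusion just prints a surprising number; inside a browser engine, the same mix-up can let crafted JavaScript read or corrupt memory it should never touch.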

The actively exploited vulnerability—Google says “an exploit for CVE-2025-13223 exists in the wild”—was discovered by Google’s Threat Analysis Group (TAG). It can allow a remote attacker to exploit heap corruption via a malicious HTML page, which means just visiting the “wrong” website might be enough to compromise your browser.

Google hasn’t shared details yet about who is exploiting the flaw, how they do it in real-world attacks, or who’s being targeted. However, the TAG team typically focuses on spyware and nation-state attackers that abuse zero days for espionage.

The second vulnerability, tracked as CVE-2025-13224, was discovered by Google’s Big Sleep, an AI-driven project to discover vulnerabilities. It has the same potential impact as the other vulnerability, but cybercriminals probably haven’t yet figured out how to use it.

Users of other Chromium-based browsers—like Edge, Opera, and Brave—can expect similar updates in the near future.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Thieves order a tasty takeout of names and addresses from DoorDash

DoorDash is known for delivering takeout food, but last month the company accidentally served up a tasty plate of personal data, too. It disclosed a breach that took place on October 25, 2025, when an employee fell for a social engineering attack that allowed attackers to gain account access.

Breaches like these are sadly common, but it’s how DoorDash handled this breach, along with another security issue, that has given some cause for concern.

Information stolen during the breach varied by user, according to DoorDash, which connects gig economy delivery drivers with people wanting food brought to their door. It said that names, phone numbers, email addresses, and physical addresses were stolen.

DoorDash said that as well as telling law enforcement, it has added more employee training and awareness, hired a third party company to help with the investigation, and deployed unspecified improvements to its security systems to help stop similar breaches from happening again. It cooed:

“At DoorDash, we believe in continuous improvement and getting 1% better every day.”

However, it might want to get a little better at disclosing breaches, experts warn. The company waited almost three weeks between discovering the incident on October 25 and notifying customers on November 13, angering some of them.

Just as irksome for some was the company’s insistence that “no sensitive information was accessed”. DoorDash defines sensitive information as Social Security numbers or other government-issued identification numbers, driver’s license information, or bank or payment card information. While that data wasn’t taken, names, addresses, phone numbers, and emails are pretty sensitive too.

One Canadian user on X was angry enough to claim a violation of Canadian breach law, and promised further action:

“I should have been notified immediately (on Oct 25) of the leak and its scope, and told they would investigate to determine if my account was affected—that way I could take the necessary precautions to protect my privacy and security. […] This process violates Canadian data breach law. I’ll be filing a case against DoorDash in provincial small claims court and making a complaint to the Office of the Privacy Commissioner of Canada.”

How soon should breach notifications happen?

How long is too long when it comes to breach notification? From an ethical standpoint, companies should tell customers as quickly as possible to ensure that individuals can protect themselves—but they also need time to understand what has happened. Some of these attacks can be complex, involving bad actors that have been inside networks for months and have established footholds in the system.

In some jurisdictions, privacy law dictates notification within a certain period, while others are vague. Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA) simply requires notification as soon as is feasible. In the US, disclosure laws are currently set at the state level. For example, California recently passed Senate Bill 446, which mandates reporting breaches to consumers within 30 days as of January 1, 2026. Even that rule would leave DoorDash’s latest breach report in compliance, though.
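For the arithmetic, the gap between DoorDash’s discovery and its notification comes out just inside such a 30-day window:

```python
from datetime import date

discovered = date(2025, 10, 25)  # breach detected
notified = date(2025, 11, 13)    # customers informed

elapsed = (notified - discovered).days
print(elapsed, elapsed <= 30)    # 19 True
```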

Another disclosure spat

This isn’t the only disclosure controversy currently surrounding DoorDash. Security researcher doublezero7 discovered an email spoofing flaw in DoorDash for Business, its platform for companies to handle meal deliveries.

The flaw allowed anyone to create a free account, add fake employees, and send branded emails from DoorDash’s own servers. Those mails would pass standard email authentication checks and land in inboxes without being flagged as spam, the researcher said.

The researcher filed a report with bug bounty program HackerOne in July 2024, but it was closed as “Informative”. DoorDash didn’t fix it until this month, after the researcher complained.

However, all might not be as it seems. DoorDash has complained that the researcher made financial demands around disclosure timelines that felt extortionate, according to Bleeping Computer.

What actions can you take?

Back to the data breach issue. What can you do to protect yourself against events like these? The Canadian X user explains that they used a fake name and a forwarding email address for their account, but that didn’t stop their real phone number and physical address from being leaked.

You can’t avoid using your real credit card number, either—although many ecommerce sites will make saving credit card details optional.

Perhaps the best way to stay safe is to use a credit monitoring service, and to watch news sites like this one for information about breaches… whenever companies decide to disclose them.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Why it matters when your online order is drop-shipped

Online shopping has never been easier. A few clicks can get almost anything delivered straight to your door, sometimes at a surprisingly low price. But behind some of those deals lies a fulfillment model called drop-shipping. It’s not inherently fraudulent, but it can leave you disappointed, stranded without support, or tangled in legal and safety issues.

I’m in the process of de-Googling myself, so I’m looking to replace my Fitbit. Since Google bought Fitbit, it’s become more difficult to keep your information from them—but that’s a story for another day.

Of course, Facebook picked up on my searches for replacements and started showing me ads for smartwatches. Some featured amazing specs at very reasonable prices. But I had never heard of the brands, so I did some research and quickly fell into the world of drop-shipping.

What is drop-shipping, and why is it risky?

Drop-shipping means the seller never actually handles the stock they advertise. Instead, they pass your order to another company—often an overseas manufacturer or marketplace vendor—and the product is then shipped directly to you. On the surface, this sounds efficient: less overhead for sellers and more choices for buyers. In reality, the lack of oversight between you and the actual supplier can create serious problems.

One of the biggest concerns is quality control, or the lack of it. Because drop-shippers rely on third parties they may never have met, product descriptions and images can differ wildly from what’s delivered. You might expect a branded electronic device and receive a near-identical counterfeit with dubious safety certifications. With chargers, batteries, and children’s toys, poor quality control isn’t just disappointing; it can be downright dangerous. Goods may not meet local safety standards and may contain unhealthy amounts of chemicals.

Buyers might unknowingly receive goods that lack market approval or conformity marks such as CE (Conformité Européenne = European Conformity), the UL (Underwriters Laboratories) mark, or FCC certification for electronic devices. Customs authorities can and do seize noncompliant imports, resulting in long delays or outright confiscation. Some buyers report being asked to provide import documentation for items they assumed were domestic purchases.

Then there’s the issue of consumer rights. Enforcing warranties or returns gets tricky when the product never passed through the seller’s claimed country of origin. Even on platforms like Amazon or eBay that offer buyer protection, disputes can take a while to resolve.

Drop-shipping also raises data privacy concerns. Third-party sellers in other jurisdictions might receive your personal address and phone number directly. With little enforcement across borders, this data could be reused or leaked into marketing lists. In some cases, multiple resellers have access to the same dataset, amplifying the risk.

In the case of the watches, other users said they were pushed to install Chinese-made apps with names that didn’t match the watch’s brand. We’ve talked before about the risks that come with installing unknown apps.

What you can do

A few quick checks can spare you a lot of trouble.

  • Research unfamiliar sellers, especially if the price looks too good to be true.
  • Check where the goods ship from before placing an order.
  • Use payment methods with strong buyer protection.
  • Stick with platforms that verify sellers and offer clear refund policies.
  • Be alert for unexpected shipping fees, extra charges, or requests for more personal information after you buy.

Drop-shipping can be legitimate when done well, but when it isn’t, it shifts nearly all risk to the buyer. And when counterfeits, privacy issues, and surprise fees intersect, the real price of the “deal” is your data, your safety, or your patience.

If you’re unsure about an ad, you can always submit it to Malwarebytes Scam Guard. It’ll help you figure out whether the offer is safe to pursue.

And when buying any kind of smart device that needs you to download an app, it’s worth remembering these actions:

  • Question the permissions an app asks for. Does it serve a purpose for you, the user, or is it just some vendor being nosy?
  • Read the privacy policy—yes, really. Sometimes they’re surprisingly revealing.
  • Don’t hand over personal data manufacturers don’t need. What’s in it for you, and what’s the price you’re going to pay? They may need your name for the warranty, but your gender, age, and (most of the time) your address aren’t needed.

Most importantly, worry about what companies do with the information and how well they protect it from third-party abuse or misuse.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Your coworker is tired of AI “workslop” (Lock and Code S06E23)

This week on the Lock and Code podcast…

Everything’s easier with AI… except having to correct it.

In just the three years since OpenAI released ChatGPT, not only has online life changed at home—it’s also changed at work. Some of the biggest software companies today, like Microsoft and Google, are forwarding a vision of an AI-powered future where people don’t write their own emails anymore, or make their own slide decks for presentations, or compile their own reports, or even read their own notifications, because AI will do it for them.

But it turns out that offloading this type of work onto AI has consequences.

In September, a group of researchers from Stanford University and BetterUp Labs published findings from an ongoing study into how AI-produced work impacts the people who receive that work. And it turns out that the people who receive that work aren’t its biggest fans, because it’s not just work that they’re having to read, review, and finalize. It is, as the researchers called it, “workslop.”

Workslop is:

“AI generated work content that masquerades as good work, but lacks the substance to meaningfully advance a given task. It can appear in many different forms, including documents, slide decks, emails, and code. It often looks good, but is overly long, hard to read, fancy, or sounds off.”

Far from an indictment of AI tools in the workplace, the study instead reveals the economic and human costs that come with this new phenomenon of “workslop.” The problem, according to the researchers, is not that people are using technology to help accomplish tasks. The problem is that people are using technology to create ill-fitting work that still requires human input, review, and correction down the line.

“The insidious effect of workslop is that it shifts the burden of the work downstream, requiring the receiver to interpret, correct, or redo the work,” the researchers wrote.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Dr. Kristina Rapuano, senior research scientist at BetterUp Labs, about AI tools in the workplace, the potential lost productivity costs that come from “workslop,” and the sometimes dismal opinions that teammates develop about one another when receiving this type of work.

“This person said, ‘Having to read through workslop is demoralizing. It takes away time I could be spending doing my job because someone was too lazy to do theirs.’”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium Security for Lock and Code listeners.

The price of ChatGPT’s erotic chat? $20/month and your identity

To talk dirty to ChatGPT, you may soon have to show it your driver’s license.

OpenAI announced last month that ChatGPT will soon offer erotica—but only for verified adults. That sounds like a clever guardrail until you realize what “verified” might mean: uploading government identification to a company that already knows your search history, your conversations, and maybe your fantasies.

It’s a surreal moment for technology. The most famous AI tool in the world is turning into a porn gatekeeper. And it’s not happening in a vacuum. California just passed a law requiring age checks for app downloads. Discord’s age-verification partner was hacked this summer, exposing 70,000 government-issued IDs that are now being used for extortion. Twenty-four US states have passed similar laws.

What began as an effort to keep kids off adult sites has quietly evolved into the largest digital ID system ever built. One we never voted for.

The normalization of online ID checkpoints

Age verification started as a moral crusade. Lawmakers wanted to protect minors from explicit material. However, every system that requires an ID online transforms into something else entirely: a surveillance checkpoint. To prove you’re an adult, you hand over the same information criminals and governments dream of having—and to a patchwork of private vendors who store it indefinitely.

We’ve already seen where that leads. In the UK, after age-gating rules took effect under the Online Safety Act, one of the verification companies was breached. In the US, the AU10TIX breach exposed user data from Uber, X, and TikTok. Each time, the same story: people forced to upload passports, driver’s licenses, or selfies, only to watch that data leak.

If hackers wanted to design a dream scenario for mass identity theft, this would be it. Governments legally requiring millions of adults to upload the exact documents criminals need.

The illusion of safety

The irony is that none of this actually protects children. In the UK, VPN sign-ups spiked 1,400% the day the new restrictions went live. We hope that’s from adults balking at handing over personal data, but the point is any teen with a search bar can bypass an age-gate in minutes. The result isn’t a safer internet—it’s an internet that collects more data about adults while pushing kids toward sketchier, unregulated corners of the web.

Parents already have better options for keeping inappropriate content at bay: device-level controls, filtered browsers, phones built for kids. None of those require turning the rest of us into walking ID tokens.

From bars to browsers

Defenders like to compare online verification to showing ID at a bar. But when you flash your license to buy a beer, the cashier doesn’t scan it, store it, and build a permanent record of your drinking habits. Online verification does exactly that. Every log-in becomes another data point linking your identity to what you read, watch, and say.

It’s not hard to imagine how this infrastructure expands. Today it’s porn, violence, and “mature” chatbots. Tomorrow it could be reproductive-health forums, LGBTQ+ resources, or political discussion groups flagged as “sensitive.” Once the pipes exist, someone will always find a new reason to use them.

When innovation starts to feel invasive

Let’s be honest. We could all make money if we just decided to build porn machines, and that’s what this new offering from ChatGPT feels like. It didn’t take long for AI to grab a slice of the OnlyFans market. Except the price of admission isn’t only $20 a month; it’s potentially your identity and a whole lot of heartache.

As Jason Kelley of the Electronic Frontier Foundation explained on my Lock and Code podcast,

“Once you are asked to give certain types of information to a website, there’s no way to know what that company, who’s supposedly verifying your age, is doing with that information.”

The verification process itself becomes a form of surveillance, creating detailed records of legal adult behavior that governments and cybercriminals can exploit.

This is how surveillance gets normalized: one “safety” feature at a time.

ChatGPT’s erotic mode will make ID-upload feel routine—a casual step before chatting with your favorite AI companion. But beneath the surface, those IDs will feed a new class of data brokers and third-party verifiers whose entire business depends on linking your real identity to everything you do online.

We’ve reached the point where governments and corporations don’t need to build a single centralized database; we’re volunteering one piece at a time.

ChatGPT’s latest intentions are a preview of what’s next. The internet has been drifting toward identity for years—from social logins to verified profiles—and AI is simply accelerating that shift. What used to be pockets of anonymity are becoming harder to find, replaced by a web that expects to know exactly who you are.

The future of “safe” online spaces shouldn’t depend on handing over your driver’s license to an AI.



Scammers are sending bogus copyright warnings to steal your X login

One of my favorite Forbes correspondents recently wrote about receiving several fake copyright-infringement notices from X.

Let’s suppose you get an email claiming it’s from X, warning:

“We’ve received a DMCA notice regarding your account.”

Chances are, you’ll be wondering what you did wrong. DMCA (Digital Millennium Copyright Act) notices are legal requests about copyrighted content, so it makes sense that many users would worry they broke the rules and feel eager to read the warning.

email sample
Image courtesy of Forbes

“Some recent activity on your page may not fully meet our community standards. Please take a moment to review the information below and ensure your shared content follow our usage rules.
Notice Date : {day received}”

  • Kindly review the material You’ve shared.
  • If you think this notice was sent in error, you can request a check using the link below.

Review Details {button}

If no update is received within 24 hours, your page visibility may stay temporarily limited until the review is complete.

We thank you for your attention and cooperation in keeping this space respectful and positive for all.”

As usual, the scammers add some extra pressure by claiming your account may be hidden or limited if you don’t act within 24 hours.

But the “Review Details” button doesn’t lead to anything on X. The page it opens looks a lot like the X login page, but it’s fake.

Any username and password typed there go straight to the hackers—which could leave you with a compromised account.

How to keep your X account safe

Having your X account stolen can be a major pain for you, your followers, and your reputation (especially if you’re in the cybersecurity field). So here are some tips to keep it safe:

  • Make sure 2FA is turned on. We wrote an article about how to do this back when it was still called Twitter.
  • When entering a username and password, or any type of sensitive information, check whether the URL in the address bar matches what you expect.
  • Use a password manager. It won’t enter your details on a fake site.
  • Use an up-to-date real-time anti-malware solution with a web protection component.
  • Don’t click on links in unsolicited emails; check with the sender through another channel first.
  • A real DMCA notice from X will include a full copy of the reporter’s complaint, including contact details, plus instructions for filing a counter-notice.
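The address-bar check above boils down to comparing the hostname, not the whole URL, since lookalike links often bury the real domain inside a longer one. A minimal sketch — the expected domain and helper name here are illustrative, not an official check:

```python
from urllib.parse import urlparse

EXPECTED = "x.com"  # the only host where you should ever type your X password

def host_matches(url: str) -> bool:
    """True only if the URL's host is the expected domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == EXPECTED or host.endswith("." + EXPECTED)

print(host_matches("https://x.com/i/flow/login"))               # True
print(host_matches("https://x.com.dmca-review.example/login"))  # False: lookalike
```

The second URL starts with “x.com” but actually belongs to a different domain entirely — exactly the trick phishing pages rely on.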

Pro tip: You can upload suspicious messages of any kind to Malwarebytes Scam Guard. It will tell you whether it’s likely to be a scam and advise you what to do.

If you suspect your account may be compromised:

  • Change your password.
  • Make sure the email account associated with your X account is secure.
  • Revoke connections to third-party applications.
  • Update your password in the third-party applications that you trust.
  • Contact Support if you can’t log in after trying the above.

Here are the full instructions from X for users who believe their accounts have been compromised.



A week in security (November 10 – November 16)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Be careful responding to unexpected job interviews

One of our customers was contacted on LinkedIn about a job offer. The initial message was followed up by an email:

email contact

“Thank you for your interest in the Senior Construction Manager position at {company}. After reviewing your background, we were impressed with your experience and would like to invite you to the next stage of our selection process — a virtual interview.

In this session, we’ll discuss your project management experience, leadership approach, and how your expertise aligns with {company}’s current and upcoming construction initiatives.

A Zoom link will be shared in a follow-up email, which will allow you to select a time that’s most convenient for you.

If you have any questions in the meantime, please don’t hesitate to reach out. I look forward to speaking with you soon.

Warm regards,”

I edited out the company name and the name of the supposed recruiter, but when we Googled the alleged recruiter’s name, we found that he does work at the impersonated company (just not in HR). That’s not unique, though. We’ve heard several variants of very similar stories involving other companies and other names.

Other red flags included the fact that the email came from a Gmail address (not a company domain), and that the company has no openings for a Senior Construction Manager.

When our target replied they were looking forward to the interview, they received the “Meeting invitation” by email:

meeting invitation

“Hi There,

      {recruiter} INVITED YOU TO A ZOOM REMOTE MEETING

Please click the button below to view the invitation within 30 days. By acceptance, you’ll be able to message and call each other.

               View Invitation {button}

To see the list of invited guests, click here.

Thank you.

Zoom”

Both links in this email were shortened t[.]co links that redirected to meetingzs[.]com/bt.

That site is currently unavailable, but users have reported seeing fake Windows update warnings, or notifications about having to install updates for their meeting application (Zoom, Teams—name your favorite). Our logs show that we blocked meetingzs[.]com for phishing and hosting a file called GoToResolveUnattendedUpdater.exe.

Malwarebytes blocks meetingzs[.]com

While this file is not malicious in itself, it can be abused by cybercriminals. It’s associated with LogMeIn Resolve, a remote support tool, which attackers can fake or misuse to execute ransomware payloads once installed.

This tactic is part of a broader trend where attackers pose as recruiters or trusted contacts, inviting targets to meetings and requiring them to install software updates to participate. Those updates, however, can be malware installers or Remote Monitoring and Management (RMM) tools which can give attackers direct access to your device.

This type of attack is a prime example of how social engineering is becoming the primary way for attackers to gain initial access to your or your company’s systems.

How to stay safe

The best way to stay safe is to be able to recognize attacks like these, but there are some other things you can do.

  • Keep your operating system, software, and security tools updated with the latest patches to close vulnerabilities.
  • Use a real-time anti-malware solution with a web protection component.
  • Be extremely cautious with unsolicited communications, especially those inviting you to meetings or requesting software installs or updates; verify the sender and context independently.
  • Avoid clicking on links or downloading attachments from unknown or unexpected sources. Verify their authenticity first.
  • Compare the URL in the browser’s address bar to what you’re expecting.


Your passport, now on your iPhone. Helpful or risky?

Apple has launched Digital ID, a way for users in the US to create and present a government-issued ID in Apple Wallet using their passport information. For now, it works only for identity verification at Transportation Security Administration (TSA) checkpoints in more than 250 airports.

Apple says the reason for the introduction is because users asked for it:

“Since introducing the ability to add a driver’s license or state ID to Apple Wallet in 2022, we’ve seen how much users love having their ID right on their devices. Digital IDs brings this secure and convenient option to even more users across the country, as they can now add an ID to Wallet using information from their U.S. passport.”

What does Apple’s Digital ID mean for users?

You add a Digital ID by scanning your physical passport (photo page and chip) and taking a selfie as part of a verification process. Your ID stays encrypted on the device and isn’t shared with Apple.

To present it, you hold your iPhone or Apple Watch near a reader and confirm with Face ID or Touch ID. You choose which information is shared, and you never have to unlock or hand over your device.

At launch, it’s TSA-only. Apple says wider use at businesses, organizations, and online services will come later. Digital ID does not replace a passport for international travel.

Pros of Apple’s Digital ID:

  • Convenience: Quickly present your ID from your iPhone or Apple Watch at TSA checkpoints and, eventually, to businesses or online services.
  • Security: The ID data is locally encrypted and requires biometric authentication for access.
  • Privacy control: Users review and authorize the information shared, and Apple claims it doesn’t track when you use the ID.
  • Expanded access: It’s helpful for people without a REAL ID-compliant driver’s license who want to fly domestically.
  • No device hand-off: You don’t hand over your device for inspection. You just present your phone or watch to a reader.
  • Scalable: Apple already has the support of states and airports, and plans to expand.

Apple barely touches upon the risks that come with this new feature. We discussed many of them when we asked, should you let Chrome store your driver’s license and passport? Although Apple’s Digital ID looks safer than storing your ID in your browser, there are some additional concerns.

The risks of using Apple’s Digital ID

We had to look at other sources to find some of the more serious downsides.

  • Device dependency: Lose your phone or watch, and you lose access to your Digital ID. That’s not to mention the risks if the device is stolen.
  • Privacy and surveillance: Experts warn Digital ID adoption may lead to more ID checks in places that didn’t require them before, increasing surveillance and data-tracking concerns.
  • Potential for security breaches: Encrypted or not, digital IDs can still be targeted through device exploits, phishing, or social engineering.
  • Biometric spoofing: Face ID or Touch ID can, in some cases, be spoofed or exploited.
  • Platform lock-in: Apple’s system is closed, which means users depend on Apple’s longevity, update policies, and device ecosystem. If you switch platforms, you might find it hard to recover your digital ID.
  • Social risks: Critics worry police or other authorities could pressure users to unlock devices under the guise of ID verification.
  • Data sharing with state authorities: Your photo, video, and limited device analytics may be shared temporarily with issuing authorities for verification.
  • Limited usefulness: Digital ID doesn’t replace your passport for international travel, and it isn’t accepted everywhere yet.

Summary

Apple’s Digital ID aims to make ID checks private, more secure, and convenient for most users. But concerns remain regarding privacy, device loss, ecosystem lock-in, and the potential for expanded surveillance and demands in everyday activities beyond TSA checkpoints.

We still see this option as safer than storing your ID in a browser, where attacks are far more common, but the drawbacks may still outweigh the benefits for many users. As one of our readers put it:

“The inconvenience of having to look through a drawer for my passport is not that big, that I would risk having my identity stolen.”


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Are you paying more than other people? NY cracks down on surveillance pricing

When you search for a product online, you might think you’re getting the same price as everyone else. Think again. Your price might be different based on everything from your location to what you’ve looked at online. Companies often use algorithms to set their prices that rely heavily on customers’ personal data. Now, the state of New York is forcing companies to come clean when they set prices using customer data.

Anyone using algorithms to adjust pricing for people in the state must now reveal when they’re doing it, thanks to legislation that the state began enforcing this week called the Algorithmic Pricing Disclosure Act.

Algorithmic pricing is also known as “surveillance pricing” because it relies on using a person’s personal data to offer them promotional pricing (or potentially higher prices, if the vendor thinks they’ll pay).

How software algorithms affect the prices you see

The Federal Trade Commission (FTC) warned about this in a report that it released in January this year. It had ordered eight companies (Mastercard, Revionics, Bloomreach, JPMorgan Chase, Task Software, PROS, Accenture, and McKinsey) to disclose the services they offer that use algorithms and consumer data to set or recommend individualized prices, as well as the data inputs, customer lists and potential impact on consumer pricing. From the report:

“A tool could be used to collect real-time information about a person’s browsing and transaction history and enable a company to offer—or not offer—promotions based on that consumer’s perceived affinity.”

This data could include where they are, who they are, what they’re doing, and what they’ve done in the past. The report suggests that companies could use a wide variety of customer data to achieve these goals, including everything from their geolocation to what they’ve looked at on a particular website.

For example, lingering over a particular item with their mouse or watching a certain percentage of a video on a website might alert companies that a consumer has a particular interest.

The same data could be used to create “buckets” of customers with similar profiles (called “segments” in marketing) that companies could use to target people with different pricing.
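To make the bucketing idea concrete, here is a minimal hypothetical sketch of segment-based pricing. All signal names, segment labels, and multipliers are invented for illustration; they don’t describe any real vendor’s system.

```python
# Hypothetical illustration of "segment" pricing: customers are bucketed
# by behavioral signals, and each bucket sees a different price for the
# same item. Every signal and multiplier below is made up.

BASE_PRICE = 100.00

# Invented segment rules: segment name -> price multiplier
SEGMENT_MULTIPLIERS = {
    "price_sensitive": 0.90,  # comparison shopper gets a discount
    "high_intent": 1.15,      # urgent or lingering shopper gets a markup
    "default": 1.00,
}

def assign_segment(profile: dict) -> str:
    """Bucket a customer using crude behavioral signals."""
    if profile.get("visited_competitor"):
        return "price_sensitive"
    if profile.get("hover_seconds", 0) > 10 or profile.get("searched_fast_delivery"):
        return "high_intent"
    return "default"

def quoted_price(profile: dict) -> float:
    """Price shown to this customer: base price times their segment multiplier."""
    return round(BASE_PRICE * SEGMENT_MULTIPLIERS[assign_segment(profile)], 2)

# Two visitors, same product, different prices:
bargain_hunter = {"visited_competitor": True}
urgent_parent = {"searched_fast_delivery": True}
print(quoted_price(bargain_hunter))  # 90.0
print(quoted_price(urgent_parent))   # 115.0
```

The point of the sketch is how invisible this is to the shopper: both visitors see an ordinary-looking price with no hint that the other one paid something different.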

The FTC report had to use hypothetical examples, following pushback from the companies involved, but what it revealed was enlightening. A company might jack up prices of baby formula offered to a parent found searching for fast delivery, it said.

In another imagined case, a person visiting a car dealership and using an in-store kiosk to explore vehicles might be segmented as a first-time car buyer, the report said. The store might decide that they’re inexperienced about the financing options available, affecting the rates that they’re offered.

The FTC had issued a Request for Information (RFI) on the report, asking people for their own experiences of surveillance pricing. The public comment period was supposed to run until April 17, but the new FTC chair under the Trump administration, Andrew Ferguson, closed the RFI less than a week after the previous chair, Lina Khan, issued it.

Last week, New York’s Attorney General, Letitia James, effectively re-opened it—at least for New York residents. She issued a consumer alert urging residents to help enforce the law, which threatens a $1,000 penalty each time a company violates it. The alert encouraged people to report companies they believe are using algorithms to determine pricing.

Under the New York law, businesses must display the exact text:

“This price was set by an algorithm using your personal data.”

They must display the text near the price shown, and they can’t use “protected class data” that is legally shielded from discrimination under the law. That includes ethnicity, national origin, disability, age, sex, sexual orientation, or gender identity. There are exceptions: insurance companies and other financial institutions are exempt under the Gramm-Leach-Bliley Act.
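In practice, complying is as simple as attaching the mandated sentence to any price that was personalized. Here is a minimal sketch; the function and field names are invented, but the disclosure string is the exact text the law specifies.

```python
# Minimal sketch of the New York disclosure requirement. The sentence in
# DISCLOSURE is mandated verbatim by the law; everything else here
# (function name, formatting) is a hypothetical illustration.

DISCLOSURE = "This price was set by an algorithm using your personal data."

def render_price(price: float, algorithmic: bool) -> str:
    """Return the price label, appending the mandated notice when the
    price was set by an algorithm using the customer's personal data."""
    label = f"${price:.2f}"
    if algorithmic:
        # The law requires the notice to appear near the displayed price.
        label += f"\n{DISCLOSURE}"
    return label

print(render_price(19.99, algorithmic=True))
```

A non-personalized price renders with no notice, so the disclosure itself becomes the signal that your data shaped what you were charged.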

A long history of algorithmic pricing

Algorithmic pricing has been happening for years. For example, in 2013 Staples was found to be adjusting prices for different people according to their distance from a rival’s store. The retailer reportedly charged higher prices for households with lower incomes, although whether this was intentional or just an unintended by-product of the algorithm isn’t clear. That’s the problem though: algorithms can easily have unexpected results.

More recently, reporters have found people charged more for hotel rooms based on their IP addresses, while one report found Target charging more for goods viewed on its app when they were inside a Target store than when they were outside.

This pushback against surveillance pricing is spreading. California’s AB 325 bill amends the state’s Cartwright Act antitrust law to ban shared pricing algorithms that use competitor data between multiple businesses. Governor Gavin Newsom signed it into law last month, and it will take effect on January 1, 2026. He also signed SB 763, which increases civil and criminal penalties for violations of the Cartwright Act.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.