IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

The hidden costs of illegal streaming and modded Amazon Fire TV Sticks

Ahead of the holiday season, people who have bought cheap Amazon Fire TV Sticks or similar devices online should be aware that some of them could give cybercriminals access to personal data and bank accounts, and even let them steal money.

BeStreamWise, a UK initiative established to counter illegal streaming, says the rise of illicit streaming devices preloaded with software that bypasses licensing and offers “free” films, sports, and TV comes with a risk.

Dodgy stick streaming typically involves preloaded or modified devices, frequently Amazon Fire TV Sticks, sold with unauthorized apps that connect to pirated content streams. These apps unlock premium subscription content like films, sports, and TV shows without proper licensing.

The main risks of using dodgy streaming sticks include:

  • Legal risks: Mostly for sellers, but in some cases for users too
  • Exposure to inappropriate content: Unregulated apps lack parental controls and may expose younger viewers to explicit ads or unsuitable content.
  • Growing countermeasures: Companies like Amazon are actively blocking unauthorized apps and updating firmware to prevent illegal streaming. Your access can disappear overnight because it depends on illegal channels.
  • Malware: These sticks, and the unofficial apps that run on them, often contain malware—commonly in the form of spyware.

BeStreamWise warns specifically about “modded Amazon Fire TV Sticks.” Reporting around the campaign notes that around two in five illegal streamers have fallen prey to fraud, likely linked to compromised hardware or the risky apps and websites that come with illegal streaming.

According to BeStreamWise, citing Dynata research:

“1 in 3 (32%) people who illegally stream in the UK say they, or someone they know, have been a victim of fraud, scams, or identity theft as a result.”

Victims lost an average of almost £1,700 (about $2,230) each. You could pay for a lot of legitimate streaming services with that. But it’s not just money that’s at stake. In January, The Sun warned all Fire TV Stick owners about an app that was allegedly “stealing identities,” showing how easily unsafe apps can end up on modified devices.

And if it’s not the USB device that steals your data or money, then it might be the website you use to access illegal streams. FACT highlights research from Webroot showing that:

“Of 50 illegal streaming sites analysed, every single one contained some form of malicious content – from sophisticated scams to extreme and explicit content.”

So, from all this we can conclude that illegal streaming is not the victimless crime that many assume it is. It creates victims on all sides: media networks lose revenue and illegal users can lose far more than they bargained for.

How to stay safe

The obvious advice here is to stay away from illegal streaming and be careful about the USB devices you plug into your computer or TV. When you think about it, you’re buying something from someone breaking the law, and hoping they’ll treat your data honestly.

There are a few additional precautions you can take though:

If you have already used a USB device or visited a website that you don’t trust:

  • Update your anti-malware solution.
  • Disconnect from the internet to prevent any further data being sent.
  • Run a full system scan for malware.
  • Monitor your accounts for unusual activity.
  • Change passwords and/or enable multifactor authentication (MFA/2FA) on the important ones.

We don’t just report on threats—we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your, and your family’s, personal information by using identity protection.

Matrix Push C2 abuses browser notifications to deliver phishing and malware

Cybercriminals are using browser push notifications to deliver malware and phishing attacks.

Researchers at BlackFog described how a new command-and-control platform, called Matrix Push C2, uses browser push notifications to reach potential victims.

When we warned back in 2019 that browser push notifications were a feature just waiting to be abused, we noted that the Notifications API allows a website or app to send notifications that are displayed outside the page at the system level. This means it lets web apps send information to a user even when they’re idle or running in the background.

Here’s a common example of a browser push notification:

Browser notification with Block and Allow

Because these notifications appear at the system level, it’s harder for users to tell where they come from: the browser is the app that displays them. Attackers obtain permission through the same “notification permission prompt” you see on almost every website.

But malicious prompts aren’t always as straightforward as legitimate ones. As we explained in our earlier post, attackers use deceptive designs, like fake video players that claim you must click “Allow” to continue watching.

Click allow to play video?

In reality, clicking “Allow” gives the site permission to send notifications, and often redirects you to more scam pages.

Granting browser push notifications on the wrong website gives attackers the ability to push out fake error messages or security alerts that look frighteningly real. They can make them look as if they came from the operating system (OS) or a trusted software application, including the titles, layout, and icons. There are pre-formatted notifications available for MetaMask, Netflix, Cloudflare, PayPal, TikTok, and more.

Criminals can adjust settings that make their messages appear trustworthy or cause panic. The Command and Control (C2) panel provides the attacker with granular control over how these push notifications appear.

Matrix C2 panel
Image courtesy of BlackFog

But that’s not all. According to the researchers, this panel provides the attacker with a high level of monitoring:

“One of the most prominent features of Matrix Push C2 is its active clients panel, which gives the attacker detailed information on each victim in real time. As soon as a browser is enlisted (by accepting the push notification subscription), it reports data back to the C2.”

It allows attackers to see which notifications have been shown and which ones victims have interacted with. Overall, this allows them to see which campaigns work best on which users.

Matrix Push C2 also includes shortcut-link management, with a built-in URL shortening service that attackers can use to create custom links for their campaign, leaving users clueless about the true destination. Until they click.

Ultimately, the end goal is often data theft or monetizing access, for example, by draining cryptocurrency wallets, or stealing personal information.

How to find and remove unwanted notification permissions

A general tip that works across most browsers: If a push notification has a gear icon, clicking it will take you to the browser’s notification settings, where you can block the site that sent it. If that doesn’t work or you need more control, check the browser-specific instructions below.

Chrome

To completely turn off notifications, even from extensions:

  • Click the three-dot button in the upper right-hand corner of Chrome and select Settings.
  • Select Privacy and Security.
  • Click Site settings.
  • Select Notifications.
  • By default, the option is set to Sites can ask to send notifications. Change to Don’t allow sites to send notifications if you want to block everything.
Chrome notifications settings

For more granular control, use Customized behaviors.

  • Selecting Remove deletes the item from the list. The site can ask permission to show notifications again the next time you visit.
  • Selecting Block prevents permission prompts from that site entirely, moving it to the block list.
Firefox

Firefox gathers the equivalent controls in one dialog:

  • Go to Settings > Privacy & Security, scroll down to Permissions, and click Settings… next to Notifications.
  • Set listed sites to Allow or Block using the drop-down menu behind each item.
  • Check Block new requests asking to allow notifications at the bottom to stop sites from asking in the future.
Firefox Notifications settings

Opera

Opera’s settings are very similar to Chrome’s:

  • Open the menu by clicking the O in the upper left-hand corner.
  • Go to Settings (on Windows)/Preferences (on Mac).
  • Click Advanced, then Privacy & security.
  • Under Content settings (desktop)/Site settings (Android) select Notifications.
website specific notifications Opera

On desktop, Opera behaves the same as Chrome. On Android, you can remove items individually or in bulk.

Edge

Edge is basically the same as Chrome as well:

  • Open Edge and click the three dots (…) in the top-right corner, then select Settings.
  • In the left-hand menu, click on Privacy, search, and services.
  • Under Sites permissions > All permissions, click on Notifications.
  • Turn on Quiet notification requests to block all new notification requests.
  • Use Customized behaviors for more granular control.

Safari

Safari handles web push notifications slightly differently on Mac and on iPhone or iPad.

For Mac users

  1. Go to Safari > Settings > Websites > Notifications.
  2. Select a site and change its setting to Deny or Remove.
  3. To stop all future prompts, uncheck Allow websites to ask for permission to send notifications.

For iPhone/iPad users

  1. Open Settings.
  2. Tap Notifications.
  3. Scroll to Application Notifications and select Safari.
  4. You’ll see a list of sites with permission.
  5. Toggle any site to off to block its notifications.


Black Friday scammers offer fake gifts from big-name brands to empty bank accounts

Black Friday is supposed to be chaotic, sure, but not this chaotic.

While monitoring malvertising patterns ahead of the holiday rush, I uncovered one of the most widespread and polished Black Friday scam campaigns circulating online right now.

It’s not a niche problem. Our own research shows that 40% of people have been targeted by malvertising, and more than 1 in 10 have fallen victim, a trend that shows up again and again in holiday-season fraud patterns. Read more in our 2025 holiday scam overview.

Through malicious ads hidden on legitimate websites, users are silently redirected into an endless loop of fake “Survey Reward” pages impersonating dozens of major brands.

What looked like a single suspicious redirect quickly turned into something much bigger. One domain led to five more. Five led to twenty. And as the pattern took shape, the scale became impossible to ignore: more than 100 unique domains, all using the same fraud template, each swapping in different branding depending on which company they wanted to impersonate.

This is an industrialized malvertising operation built specifically for the Black Friday window.

The brands being impersonated

The attackers deliberately selected big-name, high-trust brands with strong holiday-season appeal. Across the campaign, I observed impersonations of:

  • Walmart
  • Home Depot
  • Lowe’s
  • Louis Vuitton
  • CVS Pharmacy
  • AARP
  • Coca-Cola
  • UnitedHealth Group
  • Dick’s Sporting Goods
  • YETI
  • LEGO
  • Ulta Beauty
  • Tourneau / Bucherer
  • McCormick
  • Harry & David
  • WORX
  • Northern Tool
  • POP MART
  • Lovehoney
  • Petco
  • Petsmart
  • Uncharted Supply Co.
  • Starlink (especially the trending Starlink Mini Kit)
  • Lululemon / “lalubu”-style athletic apparel imitators

These choices are calculated. If people are shopping for a LEGO Titanic set, a YETI bundle, a Lululemon-style hoodie pack, or the highly hyped Starlink Mini Kit, scammers know exactly what bait will get clicks.

In other words: They weaponize whatever is trending.

How the scam works

1. A malicious ad kicks off an invisible redirect chain

A user clicks a seemingly harmless ad—or in some cases, simply scrolls past it—and is immediately funneled through multiple redirect hops. None of this is visible or obvious. By the time the page settles, the user lands somewhere they never intended to go.
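The mechanics of such a chain can be sketched in a few lines. This is a hypothetical illustration, not code from the campaign: the `fetch_location` callable stands in for an HTTP HEAD request that returns the next `Location` header, and every domain below is invented.

```python
# Hypothetical sketch of tracing a redirect chain hop by hop.
# `fetch_location` stands in for an HTTP HEAD request that returns the next
# Location header, or None once the final landing page is reached.
def trace_redirects(url, fetch_location, max_hops=10):
    chain = [url]
    for _ in range(max_hops):          # cap hops to avoid redirect loops
        next_url = fetch_location(chain[-1])
        if next_url is None:
            break
        chain.append(next_url)
    return chain

# Offline stand-in for the ad network's responses (all domains invented):
hops = {
    "https://ad.example/click": "https://tds.example/route",
    "https://tds.example/route": "https://brand-survey.example/reward",
}
print(trace_redirects("https://ad.example/click", hops.get))
# ['https://ad.example/click', 'https://tds.example/route', 'https://brand-survey.example/reward']
```

In a real investigation, the lookup would issue live requests and log each hop, which is how a single suspicious redirect can unravel into a hundred-domain network.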

2. A polished “Survey About [Brand]” page appears

Every fake site is built on the same template:

  • Brand name and logo at the top
  • A fake timestamp (“Survey – November X, 2025 🇺🇸”)
  • A simple, centered reward box
  • A countdown timer to create urgency
  • A blurred background meant to evoke the brand’s store or product environment

It looks clean, consistent, and surprisingly professional.

3. The reward depends on which brand is being impersonated

Some examples of “rewards” I found in my investigation:

  • Starlink Mini Kit
  • YETI Ultimate Gear Bundle
  • LEGO Falcon Exclusive / Titanic set
  • Lululemon-style athletic packs
  • McCormick 50-piece spice kit
  • Coca-Cola mini-fridge combo
  • Petco / Petsmart “Dog Mystery Box”
  • Louis Vuitton Horizon suitcase
  • Home Depot tool bundles
  • AARP health monitoring kit
  • WORX cordless blower
  • Walmart holiday candy mega-pack

Each reward is desirable, seasonal, realistic, and perfectly aligned with current shopping trends. This is social engineering disguised as a giveaway. I wrote about the psychology behind this sort of scam in my article about Walmart gift card scams.

4. The “survey” primes the victim

The survey questions are generic and identical across all sites. They are there purely to build commitment and make the user feel like they’re earning the reward.

After the survey, the system claims:

  • Only 1 reward left
  • Offer expires in 6 minutes
  • A small processing/shipping fee applies

Scarcity and urgency push fast decisions.

5. The final step: a “shipping fee” checkout

Users are funneled into a credit card form requesting:

  • Full name
  • Address
  • Email
  • Phone
  • Complete credit card details, including CVV

The shipping fees typically range from $6.99 to $11.94. They’re just low enough to feel harmless, and to seem like a small price to pay for a much larger prize.

Some variants add persuasive nudges like:

“Receive $2.41 OFF when paying with Mastercard.”

While it’s a small detail, it mimics many legitimate checkout flows.

Once attackers obtain personal and payment data through these forms, they are free to use it in any way they choose. That might be unauthorized charges, resale, or inclusion in further fraud. The structure and scale of the operation strongly suggest that this data collection is the primary goal.

Why this scam works so well

Several psychological levers converge here:

  • People expect unusually good deals on Black Friday
  • Big brands lower skepticism
  • Timers create urgency
  • “Shipping only” sounds risk-free
  • Products match current hype cycles
  • The templates look modern and legitimate

Unlike the crude, typo-filled phishing of a decade ago, these scams are part of a polished fraud machine built around holiday shopping behavior.

Technical patterns across the scam network

Across investigations, the sites shared:

  • Identical HTML and CSS structure
  • The same JavaScript countdown logic
  • Nearly identical reward descriptions
  • Repeated “Out of stock soon / 1 left” mechanics
  • Swappable brand banners
  • Blurred backgrounds masking reuse
  • High-volume domain rotation
  • Multi-hop redirects originating from malicious ads

It’s clear these domains come from a single organized operation, not a random assortment of lone scammers.
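One way to spot that kind of template reuse at scale is to fingerprint each page’s HTML skeleton while ignoring the swappable text. The sketch below is my own illustration, not code from the investigation, and the HTML snippets are invented: two pages that differ only in branding hash to the same value.

```python
# Illustrative sketch: hash a page's tag structure, ignoring text content,
# so scam pages built from one template cluster together even when the
# brand name changes. The HTML samples are invented for demonstration.
import hashlib
from html.parser import HTMLParser

class StructureExtractor(HTMLParser):
    """Collects the tag skeleton of a page: tag names and attribute
    names (not values), with all text content discarded."""
    def __init__(self):
        super().__init__()
        self.skeleton = []

    def handle_starttag(self, tag, attrs):
        attr_names = ",".join(sorted(name for name, _ in attrs))
        self.skeleton.append(f"{tag}:{attr_names}")

    def handle_endtag(self, tag):
        self.skeleton.append(f"/{tag}")

def template_fingerprint(html: str) -> str:
    parser = StructureExtractor()
    parser.feed(html)
    return hashlib.sha256("|".join(parser.skeleton).encode()).hexdigest()

walmart = '<div class="reward"><h1>Walmart Survey</h1><p>Claim your prize</p></div>'
lego    = '<div class="reward"><h1>LEGO Survey</h1><p>Claim your prize</p></div>'
print(template_fingerprint(walmart) == template_fingerprint(lego))  # True
```

Grouping pages by a fingerprint like this is one plausible way to tie a hundred rotating domains back to a single kit.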

Final thoughts

Black Friday always brings incredible deals, but it also brings incredible opportunities for scammers. This year’s “free gift” campaign stands out not just for its size, but for its timing, polish, and trend-driven bait.

It exploits excitement, brand trust, holiday urgency, and the expectation that “too good to be true” deals have suddenly come true.

Staying cautious and skeptical is the first line of defense against “free reward” scams that only want your shipping details, your identity, and your card information.

And for an added layer of protection against malicious redirects and scam domains like the ones uncovered in this campaign, users can benefit from keeping tools such as Malwarebytes Browser Guard enabled in their browser.

Stay safe out there this holiday season.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

A week in security (November 17 – November 23)

Last week on Malwarebytes Labs:

Stay safe!



AI teddy bear for kids responds with sexual content and advice about weapons

In testing, FoloToy’s AI teddy bear jumped from friendly chat to sexual topics and unsafe household advice. It shows how easily artificial intelligence can cross serious boundaries. It’s a fair moment to ask whether AI-powered stuffed animals are appropriate for children.

It’s easy to get swept up in the excitement of artificial intelligence, especially when it’s packaged as a plush teddy bear promising

“warmth, fun, and a little extra curiosity.”

But the recent controversy surrounding the Kumma bear is a reminder to slow down and ask harder questions about putting AI into toys for kids.

FoloToy, a Singapore-based toy company, marketed the $99 bear as the ultimate “friend for both kids and adults,” leveraging powerful conversational AI to deliver interactive stories and playful banter. The website described Kumma as intelligent and safe. Behind the scenes, the bear used OpenAI’s language model to generate its conversational responses. Unfortunately, reality didn’t match the sales pitch.

folotoy
Image courtesy of CNN, a screenshot taken from FoloToy’s website

According to a report from the US PIRG Education Fund, Kumma quickly veered into wildly inappropriate territory during researcher tests. Conversations escalated from innocent to sexual within minutes. The bear didn’t just respond to explicit prompts, which would have been more or less understandable. Researchers said it introduced graphic sexual concepts on its own, including BDSM-related topics, explained “knots for beginners,” and referenced roleplay scenarios involving children and adults. In some conversations, Kumma also probed for personal details or offered advice involving dangerous objects in the home.

It’s unclear whether the toy’s supposed safeguards against inappropriate content were missing or simply didn’t work. While children are unlikely to introduce BDSM as a topic to their teddy bear, the researchers warned just how low the bar was for Kumma to cross serious boundaries.

The fallout was swift. FoloToy suspended sales of Kumma and other AI-enabled toys, while OpenAI revoked the developer’s access for policy violations. But as PIRG researchers note, that response was reactive. Plenty of AI toys remain unregulated, and the risks aren’t limited to one product.

Which proves our point: AI does not automatically make something better. When companies rush out “smart” features without real safety checks, the risks fall on the people using them—especially children, who can’t recognize dangerous content when they see it.

Tips for staying safe with AI toys and gadgets

You’ll see “AI-powered” on almost everything right now, but there are ways to make safer choices.

  • Always research: Check for third-party safety reviews before buying any AI-enabled product marketed for kids.
  • Test first, supervise always: Interact with the device yourself before giving it to children. Monitor usage for odd or risky responses.
  • Use parental controls: If available, enable all content filters and privacy protections.
  • Report problems: If devices show inappropriate content, report to manufacturers and consumer protection groups.
  • Check communications: Find out what the device collects, who it shares data with, and what it uses the information for.

But above all, remember that not all “smart” is safe. Sometimes, plush, simple, and old-fashioned really is better.

AI may be everywhere, but designers and buyers alike need to put safety, privacy, and common sense ahead of the technological wow-factor.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Fake calendar invites are spreading. Here’s how to remove them and prevent more

We’re seeing a surge in phishing calendar invites that users can’t delete, or that keep coming back because they sync across devices. The good news is you can remove them and block future spam by changing a few settings.

Most of these unwanted calendar entries exist for phishing purposes. They typically warn you about an “impending payment,” but they differ in subject line and in the action they want the target to take.

Sometimes they want you to call a number:

"Call this number" scams

And sometimes they invite you to an actual meeting:

fake Geek Squad billing update meeting

We haven’t followed up on these scams, but when attackers want you to call them or join a meeting, the end goal is almost always financial. They might use a tech support scam approach and ask you to install a Remote Monitoring and Management tool, sell you an overpriced product, or simply ask for your banking details.

These malicious invites are usually distributed as email attachments or as download links in messaging apps.
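Because an invite is just a plain-text .ics file, its key fields can be inspected before anything is added to a calendar. The sketch below is purely illustrative: the field names come from the iCalendar standard (RFC 5545), while the sample invite, phone number, and domain are invented.

```python
# Illustrative sketch: pull the organizer and summary out of a raw .ics
# attachment so an invite can be judged before a calendar app auto-adds it.
# Field names per RFC 5545; the sample invite below is invented.
def inspect_invite(ics_text: str) -> dict:
    fields = {}
    for line in ics_text.splitlines():
        for key in ("ORGANIZER", "SUMMARY", "LOCATION", "DESCRIPTION"):
            if line.upper().startswith(key):
                # Values follow the first colon; ORGANIZER often embeds "mailto:"
                fields[key] = line.split(":", 1)[1].strip()
    return fields

sample = """BEGIN:VCALENDAR
BEGIN:VEVENT
SUMMARY:Geek Squad billing update
ORGANIZER;CN=Support:mailto:billing@suspicious-domain.example
DESCRIPTION:Call +1-555-0100 to cancel your impending payment
END:VEVENT
END:VCALENDAR"""

print(inspect_invite(sample)["SUMMARY"])  # Geek Squad billing update
```

An organizer address on an unfamiliar domain, or urgency language in the description, is a strong hint to delete the invite without responding.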

How to remove fake entries from your calendar

This blog focuses on how to remove these unwanted entries. One of the obstacles is that calendars often sync across devices.

Outlook Calendar

If you use Outlook:

  • Delete without interacting: Avoid clicking any links or opening attachments in the invite. If available, use the “Do not send a response” option when deleting to prevent confirming that your email is active.
  • Block the sender: Right-click the event and select the option to report the sender as junk or spam to help prevent future invites from that email address.
  • Adjust calendar settings: Access your Outlook settings and disable the option to automatically add events from email. This setting matters because even if the invite lands in your spam folder, auto-adding invites will still put the event on your calendar.
    Outlook accept settings
  • Report the invite: Report the spam invitation to Microsoft as phishing or junk.
  • Verify billing issues through official channels: If you have concerns about your account, go directly to the company’s official website or support, not the information in the invite.

Gmail Calendar

To disable automatic calendar additions:

  • Open Google Calendar.
  • Click the gear icon in the upper right part of the screen and select Settings.
    Gmail calendar settings
  • Under Event settings, change Add invitations to my calendar to either Only if the sender is known or When I respond to the invitation email. (The default setting is From everyone, which will add any invite to your calendar.)
  • Uncheck Show events automatically created by Gmail if you want to stop Gmail from adding to your calendar on its own.

Android Calendar

To prevent unknown senders from adding invites:

  • Open the Calendar app.
  • Tap Menu > Settings.
  • Tap General > Adding invitations > Add invitations to my calendar.
  • Select Only if the sender is known.

For help reviewing which apps have access to your Android Calendar, refer to the support page.

Mac Calendars

To control how events get added to your Calendar on a Mac:

  • Go to Apple menu > System Settings > Privacy & Security.
  • Click Calendars.
  • Turn calendar access on or off for each app in the list.
  • If you allow access, click Options to choose whether the app has full access or can only add events.

iPhone and iPad Calendar

The controls are similar to macOS, but you may also want to remove additional calendars:

  • Open Settings.
  • Tap Calendar > Accounts > Subscribed Calendars.
  • Select any unwanted calendars and tap the Delete Account option.

Additional calendars

Which brings me to my next point. Check both the Outlook Calendar and the mobile Calendar app for Additional Calendars or subscribed URLs and Delete/Unsubscribe. This will stop the attacker from being able to add even more events to your Calendar. And looking in both places will be helpful in case of synchronization issues.

Several victims reported that events they removed simply came back. This is almost always due to synchronization. Make sure you remove the unwanted calendar or event everywhere it exists.

Tracking down the source can be tricky, but it may help prevent the next wave of calendar spam.

How to prevent calendar spam

We’ve covered some of this already, but the main precautions are:

  • Turn off auto‑add or auto‑processing so invites stay as emails until you accept them.
  • Restrict calendar permissions so only trusted people and apps can add events.
  • In shared or resource calendars, remove public or anonymous access and limit who can create or edit items.
  • Use an up-to-date real-time anti-malware solution with a web protection component to block known malicious domains.
  • Don’t engage with unsolicited events. Don’t click links, open attachments, or reply to suspicious calendar events such as “investment,” “invoice,” “bonus payout,” “urgent meeting”—just delete the event.
  • Enable multi-factor authentication (MFA) on your accounts so attackers who compromise credentials can’t abuse the account itself to send or auto‑accept invitations.

Pro tip: If you’re not sure whether an event is a scam, you can feed the message to Malwarebytes Scam Guard. It’ll help you decide what to do next.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Budget Samsung phones shipped with unremovable spyware, say researchers

A controversy over data-gathering software secretly installed on Samsung phones has erupted again after a new accusatory post appeared on X last week.

In the post on the social media site, cybersecurity newsletter International Cyber Digest warned about a secretive application called AppCloud that Samsung had allegedly put on its phones. The software was, it said,

“unremovable Israeli spyware.”

This all harks back to May, when digital rights group SMEX published an open letter to Samsung. It accused the company of installing AppCloud on its Galaxy A and M series devices, although stopped short of calling it spyware, opting for the slightly more diplomatic “bloatware”.

The application, apparently installed on phones in West Asia and North Africa, did more than just take up storage space, though. According to SMEX, it collected sensitive information, including biometric data and IP addresses.

SMEX’s analysis says the software, developed by Israeli company ironSource, is deeply integrated into the device’s operating system. You need root access to remove it, and doing so voids the warranty.

Samsung has partnered with ironSource since 2022, carrying its Aura toolkit for telecoms companies and device makers in more than 30 markets, including Europe. The pair expanded the partnership in November 2022—the same month that US company Unity Technologies (maker of the Unity game engine) completed its $4.4bn acquisition of ironSource. That expansion made ironSource

“Samsung’s sole partner on newly released A-series and M-series mobile devices in over 50 markets across MENA – strengthening Aura’s footprint in the region.”

SMEX’s investigation of ironSource’s products points to software called Install Core. It cites our own research of this software, which is touted as an advertising technology platform, but can install other products without the user’s permission.

AppCloud wasn’t listed on the Unity/ironSource website this February, when SMEX wrote its in-depth analysis. It still isn’t. Nor does it appear on the phone’s home screen. It runs quietly in the background, meaning there’s no privacy policy to read and no consent screen to click, says SMEX.

Screenshots shared online suggest AppCloud can access network connections, download files at will, and prevent phones from sleeping. However, this does highlight one important aspect of this software: While you might not be able to start it from your home screen or easily remove it, you can disable it in your application list. Be warned, though; it has a habit of popping up again after system updates, say users.

Not Samsung’s first privacy controversy

This isn’t Samsung’s first controversy around user privacy. Back in 2015, it was criticized for warning users that some smart TVs could listen to conversations and share them with third parties.

Neither is it the first time that budget phone users have had to endure pre-installed software that they might not have wanted. In 2020, we reported on malware that was pre-installed on budget phones made available via the US Lifeline program.

In fact, there have been many cases of pre-installed software on phones that are identifiable as either malware or potentially unwanted programs. In 2019, Maddie Stone, a security researcher for Google’s Project Zero, explained how this software makes its way onto phones before they reach the shelves. Sometimes, phone vendors will put malware onto their devices after being told that it’s legitimate software, she warned. This can result in botnets like Chamois, which was built on pre-installed malware purporting to be from an SDK.

One answer to this problem is to buy a higher-end phone, but you shouldn’t have to pay more to get basic privacy. Budget users should expect the same level of privacy as anyone else. We wrote a guide to removing bloatware—it’s from 2017, but the advice is still relevant.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Holiday scams 2025: These common shopping habits make you the easiest target

Every year, shoppers get faster, savvier, and more mobile. We compare prices on the go, download apps for coupons, and jump on deals before they disappear. But during deal-heavy periods like Black Friday, Cyber Monday, and the December shopping rush, convenience can work against us.

Quick check-outs, unknown websites, and ads promising unbeatable prices make shoppers easy targets.

Shopping scams can steal money or data, but they also steal peace of mind. Victims often describe a mix of frustration, embarrassment, and anger that lasts for a long time. And during the holidays when you’re already stretched thin, the financial and emotional fallout lands harder, spoiling plans, straining trust, and adding anxiety to what should be a joyful and restful time.

The data for deals exchange

Nearly 9 in 10 mobile consumers engage in data for deals.

During the holidays, deal-chasing behavior spikes. Nearly 9 in 10 mobile consumers hand over emails or phone numbers in the name of savings—often without realizing how much personal data they’re sharing.

  • 79% sign up for promotional emails to get offers.
  • 66% download an app for a coupon, discount, or free trial.
  • 58% give their phone number for texts to get a deal.

This constant “data for deals” exchange normalizes risky habits that scammers can easily exploit through fake promotions and reward campaigns.

The Walmart gift card scam

You’ve probably seen it. A bright message claiming you’ve qualified for a $750 or $1,000 Walmart gift card. All you have to do is answer a few questions. It looks harmless enough. But once you click, you find yourself in a maze of surveys, redirects, and “partner offers.”

Congratulations! You could win $1,000 in Walmart vouchers!

The scammers aren’t actually offering a free gift card. It’s a data-harvesting trap. Each form you fill out collects your name, email, phone number, ZIP code, and interests, all used to build a detailed profile that’s resold to advertisers or used for more scams down the line.

These so-called “holiday reward” scams pop up every year, promising gift cards, coupons, or cash-back bonuses, and they work because they play on the same instinct as legitimate deals: the urge to grab a bargain before it disappears.

Social media is the new online mall

Scams show up wherever people shop. As holiday buying moves across social feeds, messaging apps, and mobile alerts, scammers follow the traffic.

Social platforms have become informal online malls: buy/sell groups, influencer offers, and limited-time stories all blur the line between social and shopping.

  • 57% have bought from a buy/sell/trade group
  • 53% have used a platform like Facebook Marketplace or OfferUp
  • 38% have DM’d a company or seller for a discount

It’s a familiar environment, and that’s the problem. Fake listings and ads sit right beside real ones, making it hard to tell them apart when you’re scrolling fast. Half of people (51%) encounter scams on social media every week, and 1 in 4 (27%) see at least one scam a day.

Shopping has become social. It’s quick, conversational, and built on trust. But that same trust leads to some of the most common holiday scams.

A little skepticism when shopping via your social feeds can go a long way, especially when deals and deadlines make everything feel more urgent.

Three scams shoppers should watch out for

Exposure to scams is baked into the modern shopping experience—especially across social platforms and mobile marketplaces. Here are three common types that surge during the holidays.

Marketplace scams. 1 in 10 have fallen victim.

Marketplace scams

Marketplace scams are one of the most common traps during the holidays, precisely because they hide in plain sight. Shoppers tend to feel safe on familiar platforms, whether that’s a buy-and-sell group, a resale page, or a trusted marketplace app. But fake listings, spoofed profiles, and too-good-to-miss deals are everywhere.

Around a third of people (36%) come across a marketplace scam weekly (15% are targeted daily), and roughly 1 in 10 have fallen victim. Younger users are hit hardest: Gen Z and Millennials make up 70% of marketplace scam victims (vs 57% of scam victims overall). They are also more likely to lose money after clicking a fake ad or transferring payment for an item that never arrives. The result is a perfect storm of trust, speed, and urgency: the very ingredients scammers rely on.

Marketplace scams don’t just drain bank accounts; they also take a personal toll.

Many victims describe the experience as financially and emotionally exhausting, with some losing money they can’t recover, others discovering new accounts opened in their name, and some even locked out of their own accounts. For others, the impact spreads further: embarrassment over being tricked, stress at work, and health problems triggered by anxiety or sleepless nights.

Postal tracking scams. 12% have fallen victim.

Postal tracking scams

Postal tracking scams are already mainstream, but the holidays invite particular risk. With shoppers checking delivery updates several times a day, it’s easy to click without thinking.

Around 6 in 10 people have encountered one of these scams (62%), and more than 8 in 10 track packages directly from their phones (83%), making mobile users a prime target. Again, younger shoppers are the most impacted, with 62% of victims being either Gen Z or Millennials (vs 57% of scam victims overall).

The messages look convincing: real courier logos, legitimate-sounding tracking numbers, and language that mirrors official updates.

UPS delivery scam SMS

A single click on what looks like a delivery confirmation can lead to a fake login page, a malicious download, or a request for personal information. It’s one of the simplest, most believable scams out there—and one of the easiest to fall for when you’re juggling gifts, deadlines, and constant delivery alerts.

Ad-related malware. 27% have fallen victim.

The hunt for flash sales, coupon codes, and last-minute deals can make shoppers more exposed to malicious ads and downloads.

More than half of people (58%) have encountered ad-related malware (or “adware”: software that floods your screen with unwanted ads or tracks what you click to profit from your data), and over a quarter (27%) have fallen victim. Gen Z users, who spend the most time online, are the age bracket most susceptible to adware, at nearly 40%.

Other scams involve malvertising, where criminals plant malicious code inside online ads that look completely legitimate, and just loading the page can be enough to start the attack. Malvertising too tends to spike during the holiday rush, when people are scrolling quickly through social feeds or searching for discounts. Forty percent of people have been targeted by malvertising and 11% have fallen victim. Adware targets 45% of people, claiming 20% as victims.

Fake ads are designed to look just like the real thing, complete with familiar branding and countdown timers. One wrong tap can install a malicious “shopping helper” app, redirect to a phishing site, or trigger a background download you never meant to start. It’s a reminder that even the most legitimate-looking ads deserve a second glance before you click.

Why shoppers drop their guard

The holidays bring joy but also a lot of pressure. There’s the financial strain, endless to-do lists, and that feeling that you don’t have enough time to do it all. Scammers know this, and use urgency, stress, and even guilt to make you click before you think. And when people do fall for a scam, the financial impact isn’t the only upsetting thing. Victims of scams are often embarrassed and blame themselves, and then have the stress of picking up the pieces.

Most shoppers worry about being scammed (61%) or losing money (73%), but with constant notifications, flashing ads, and countdown timers competing for attention, even the most careful shoppers can click before they check. Scammers count on that moment of distraction—and they only need one.

Mobile-first shopping has become second nature, and during the holidays it’s faster and more frantic than ever. Fifty-five percent of people get a scam text message weekly, while 27% are targeted daily.

Downloading new apps, checking delivery updates, or tapping limited-time offers all feel routine. Nearly 6 in 10 people say that downloading apps to buy products or engage with companies is now a way of life, and 39% admit they’re more likely to click a link on their phone than on their laptop.

How to shop smarter (and safer) this holiday

Most people don’t have protections that match the pace of holiday shopping, but the good news is, small steps make a big difference.

  • Keep an eye on your accounts. Make it a habit to glance over your bank or credit statements during the holidays. Spotting unexpected activity early is one of the simplest ways to stop fraud before it snowballs.
  • Add strong login protections. Use unique passwords, or a passkey, for your main shopping and payment accounts, and turn on two-factor authentication wherever it’s offered. It takes seconds to set up and can stop someone from breaking in, even if they have your password.
  • Guard against malicious ads and fake apps. Scam sites and pop-ups tend to spike during busy shopping periods, hiding behind flash sales or delivery updates. Malwarebytes Mobile Security and Malwarebytes Browser Guard can block these pages before they load, keeping scam domains, fake coupons, and malvertising out of sight and out of reach.
  • Protect your identity. Be careful about where you share personal details, especially for “free” offers or surveys. If something asks for more information than it needs, it’s probably not worth the risk. Using identity protection tools adds an extra layer of defense if your data ever does end up in the wrong hands.

A few minutes of setup now can save you days of stress later. Shop smart, stay skeptical, and enjoy the season safely.

The research in this article is based on a March 2025 survey prepared by an independent research consultant and distributed via Forsta among n=1,300 survey respondents ages 18 and older in the United States, UK, Austria, Germany and Switzerland. The sample was equally split for gender with a spread of ages, geographical regions and race groups, and weighted to provide a balanced view.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

What the Flock is happening with license plate readers?

You’re driving home after another marathon day of work and kid-shuttling, nursing a lukewarm coffee in a mug that’s trying too hard. As you turn onto your street, something new catches your eye. It’s a tall pole with a small, boxy device perched on top. But it’s not a birdhouse and there’s no sign. There is, however, a camera pointed straight at your car.

It feels reassuring at first. After all, a neighbor was burglarized a few weeks ago. But then, dropping your kids at school the next morning, you pass another, and you start to wonder: Is my daily life being recorded and who is watching it?

That’s what happened to me. After a break-in on our street, a neighborhood camera caught an unfamiliar truck. It provided the clue police needed to track down the suspects. The same technology has shown up in major investigations, including the “Coroner Affair” murder case on ABC’s 20/20. These cameras aren’t just passive hardware. They’re everywhere now, as common as mailboxes, quietly logging where we go.

So if they’re everywhere, what do they collect? Who’s behind them? And what should the rest of us know before we get too comfortable or too uneasy?

A mounting mountain of surveillance

ALPRs aren’t hikers in the Alps. They’re Automatic License Plate Readers. Think of them as smart cameras that can “read” license plates. They snap a photo, use software to convert the plate into text, and store it. Kind of like how your phone scans handwriting and turns it into digital notes.

People like them because they make things quick and hands-free, whether you’re rolling through a toll or entering a gated neighborhood. But the “A” in ALPR (automatic) is where the privacy questions start. These cameras don’t just record problem cars. They record every car they see, wherever they’re pointed.
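To make the "snap, read, store" flow concrete, here is a minimal sketch in Python. It is purely illustrative, not Flock's or any vendor's actual implementation: `read_plate` is a hypothetical stub standing in for the OCR/recognition step, and the field names are assumptions based on the data types described in this article (plate text, vehicle details, timestamp, camera location).

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    """One record produced by an ALPR camera (fields are illustrative)."""
    plate: str          # text extracted from the plate image
    color: str
    make: str
    timestamp: datetime
    location: str       # camera identifier

def read_plate(image_bytes: bytes) -> str:
    """Stand-in for the OCR step: a real system runs a recognition
    model over the photo. This stub just returns a fixed plate."""
    return "ABC1234"

def log_read(log: list, image: bytes, color: str,
             make: str, camera_id: str) -> PlateRead:
    """Convert a captured photo into a searchable record."""
    record = PlateRead(
        plate=read_plate(image),
        color=color,
        make=make,
        timestamp=datetime.now(timezone.utc),
        location=camera_id,
    )
    # Note: every passing car is logged, not just "problem" cars
    log.append(record)
    return record

# Example: search the growing log for a plate of interest
log = []
log_read(log, b"...", "blue", "Ford", "entrance-1")
hits = [r for r in log if r.plate == "ABC1234"]
```

The point of the sketch is the last two lines: once plates are text rather than pixels, the whole history becomes a database query, which is what makes these systems both useful to investigators and worrying to privacy advocates.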

What exactly is Flock?

Flock Safety is a company that makes specialized ALPR systems, designed to scan and photograph every plate that passes, 24/7. Unlike gated-community or private driveway cameras, Flock systems stream footage to off-site servers, where it’s processed, analyzed, and added to a growing cloud database.

At the time of writing, there are probably well over 100,000 Flock cameras installed in the United States, and the number is increasing rapidly. To put this in perspective, that’s roughly one Flock camera for every 4,000 US citizens. And on average, each camera tracks twice that many vehicles, with no set limit.

Think of it like a digital neighborhood watch that never blinks. The cameras snap high-resolution images, tag timestamps, and note vehicle details like color and distinguishing features. All of it becomes part of a searchable log for authorized users, and that log grows by the second.

Adoption has exploded. Flock said in early 2024 that its cameras were used in more than 4,000 US cities. That growth has been driven by word of mouth (“our HOA said break-ins dropped after installing them”) and, in some cases, early-adopter discounts offered to communities.

A positive perspective

Credit where it’s due: these cameras can help. For many neighborhoods, Flock cameras make them feel safer. When crime ticks up or a break-in happens nearby, putting a camera at the entrance feels like a concrete way to regain control. And unlike basic security cameras, Flock systems can flag unfamiliar vehicles and spot patterns, which are useful for police when every second counts.

In my community, Flock footage has helped recover stolen cars and given police leads that would’ve otherwise gone cold. After our neighborhood burglary, the moms’ group chat calmed down a little knowing there was a digital “witness” watching the entrance.

In one Texas community, a spree of car break-ins stopped after a Flock camera caught a repeat offender’s plate, leading to an arrest within days. And in the “Coroner Affair” murder case, Flock data helped investigators map vehicle movements, leading to crucial evidence.

Regulated surveillance can also help fight fake videos. Skilled AI and CGI artists sometimes create fake surveillance footage that looks real, showing someone or their car doing something illegal or being somewhere suspicious. That’s a serious problem, especially if used in court. If surveillance is carefully managed and trusted, it can help prove what really happened and expose fabricated videos for what they are, protecting people from false accusations.

The security vs overreach tradeoff

Like any powerful tool, ALPRs come with pros and cons. On the plus side, they can help solve crimes by giving police crucial evidence—something that genuinely reassures residents who like having an extra set of “digital eyes” on the neighborhood. Some people also believe the cameras deter would-be burglars, though research on that is mixed.

But there are real concerns too. ALPRs collect sensitive data, often stored by third-party companies, which creates risk if that information is misused or hacked. And then there’s “surveillance creep,” which is the slow expansion of monitoring until it feels like everyone is being watched all the time.

So while there are clear benefits, it’s important to think about how the technology could affect your privacy and the community as a whole.

What’s being recorded and who gets to see it

Here’s the other side of the coin: What else do these cameras capture, who can see it, and how long is it kept?

Flock’s system is laser-focused on license plates and cars, not faces. The company says they don’t track what you’re wearing or who’s sitting beside you. Still, in a world where privacy feels more fragile every year, people (myself included) wonder how much these systems quietly log.

  • What’s recorded: License plate numbers, vehicle color/make/model, time, location. Some cameras can capture broader footage; some are strictly plate readers.
  • How long is it kept: Flock’s standard is 30 days, after which data is automatically deleted (unless flagged in an active investigation).
  • Who has access? This is where things get dicey:
    • Using Flock’s cloud, only “authorized users” (which can include community leaders and law enforcement, ideally with proper permissions or warrants) can view footage. Residents can request footage, but it’s up to those administrators to decide who gets access.
    • Flock claims they don’t sell data, but it’s stored off-site, raising the stakes of a breach. The bigger the database, the more appealing it is to attackers.
    • Unlike a home security camera that you can control, these systems by design track everyone who comes and goes…not just the “bad guys.”

And while these cameras don’t capture people, they do capture patterns, like vehicles entering or leaving a neighborhood. That can reveal routines, habits, and movement over time. A neighbor was surprised to learn the system had logged every one of her daily trips, including gym runs, carpool, and errands. Not harmful on its own, but enough to make you realize how detailed a picture these systems build of ordinary life.
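The retention policy described above (delete after 30 days unless flagged for an active investigation) can be sketched in a few lines. This is a toy model under stated assumptions, not Flock's real code: the `flagged` field and the `prune` function are hypothetical names for illustration.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

@dataclass
class PlateRead:
    plate: str
    seen_at: datetime
    flagged: bool = False   # tied to an active investigation

RETENTION = timedelta(days=30)  # the 30-day standard described above

def prune(log: list, now: datetime) -> list:
    """Drop records older than the retention window unless flagged."""
    return [r for r in log
            if r.flagged or now - r.seen_at <= RETENTION]

now = datetime.now(timezone.utc)
log = [
    PlateRead("AAA111", now - timedelta(days=45)),                # expired: deleted
    PlateRead("BBB222", now - timedelta(days=45), flagged=True),  # kept: investigation
    PlateRead("CCC333", now - timedelta(days=5)),                 # within window: kept
]
log = prune(log, now)
```

Even a policy this simple shows why the "unless flagged" clause matters: a single flag keeps a record alive indefinitely, which is exactly the kind of detail worth asking your community board about.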

The place for ALPRs… and where they don’t belong

If you’re feeling unsettled, you’re not alone. ALPRs are being installed at lightspeed, often faster than the laws meant to govern them. Will massive investment shape how future rules are written?

Surveillance and data collection laws

  • Federal: There’s no nationwide ban on license plate readers; law enforcement has used them for years. (We’ve also reported on police using drones to read license plates, raising similar concerns about oversight.) However, courts in the US increasingly grapple with how this data impacts Fourth Amendment “reasonable expectation of privacy” standards.
  • Local: Some states and cities have rules about where cameras can be placed on public and private roadways, and some have also set limits on how long footage can be kept. Check your local ordinances or ask your community board for its policy.

A good example is Oakland, where the City Council limited ALPR data retention to six months unless tied to an active investigation. Only certain authorized personnel can access the footage, every lookup is logged and auditable, and the city must publish annual transparency reports showing usage, access, and data-sharing. The policy also bans tracking anyone based on race, religion, or political views. It’s a practical attempt to balance public safety with privacy rights.

Are your neighbors allowed to record your car?

If your neighborhood is private property, usually yes. HOAs and community boards can install cameras at entrances and exits, much like a private parking lot. They still have to follow state law and, ideally, notify residents, so always read the fine print in those community updates.

What if the footage is misused or hacked?

This is the big one. If footage leaves your neighborhood, such as handed to police, shared too widely, or leaked online, it can create liability issues. Flock says its system is encrypted and tightly controlled, but no technology is foolproof. If you think footage was misused, you can request an audit or raise it with your HOA or local law enforcement.

Meet your advocates

Image courtesy of deflock.me. This is just a snapshot-in-time of their map showing the locations of ALPR cameras.

For surveillance

One thing stands out in this debate: the strongest supporters of ALPRs are the groups that use or sell them, i.e. law enforcement and the companies that profit from the technology. It is difficult to find community organizations or privacy watchdogs speaking up in support. Instead, many everyday people and civil liberties groups are raising concerns. It’s worth asking why the push for ALPRs comes primarily from those who benefit directly, rather than from the wider public who are most affected by increased surveillance.

For privacy

As neighborhood ALPRs like Flock cameras become more common, a growing set of advocacy and educational sites has stepped in to help people understand the technology, and to push back when needed:

Deflock.me is one of the most active. It helps residents opt their vehicles out where possible, track Flock deployments, and organize local resistance to unwanted surveillance.

Meanwhile, Have I Been Flocked? takes an almost playful approach to a very real issue: it lets people check whether their car has appeared in Flock databases. That simple search often surprises users and highlights how easily ordinary vehicles are tracked.

For folks seeking a deeper dive, Eyes on Flock and ALPR Watch map where Flock cameras and other ALPRs have been installed, providing detailed databases and reports. By shining a light on their proliferation, the sites empower residents to ask municipal leaders hard questions about the balance between public safety and civil liberties.

If you want to see the broader sweep of surveillance tech in the US, the Atlas of Surveillance is a collaboration between the Electronic Frontier Foundation (EFF) and University of Nevada, Reno. It offers an interactive map of surveillance systems, showing ALPRs like Flock in context of a growing web of automated observation.

Finally, Plate Privacy provides practical tools: advocacy guides, legal resources, and tips for shielding plates from unwanted scanning. It supports anyone who wants to protect the right to move through public space without constant tracking.

Together, these initiatives paint a clear picture: while ALPRs spread rapidly in the name of safety, an equally strong movement is demanding transparency, limits, and respect for privacy. Whether you’re curious, cautious, or concerned, these sites offer practical help and a reminder that you’re not alone in questioning how much surveillance is too much.

How to protect your privacy around ALPRs

This is where I step out of the weeds and offer real-world advice… one neighbor to another.

Talk to your neighborhood or city board

  • Ask about privacy: Who can access footage? How long is it stored? What counts as a “valid” reason to review it?
  • Request transparency: Push for clear, written policies that everyone can see.
  • Ask about opt-outs: Even if your state doesn’t require one, your community may still offer an option.

Key questions to ask about any new camera system

  • Who will have access to the footage?
  • How long will data be stored?
  • What’s the process for police, or anyone else, to request footage?
  • What safeguards are in place if the data is lost, shared, or misused?

Protecting your own privacy

  • Check your community’s camera policies regularly. Homeowners Associations (HOAs) update them more often than you’d think.
  • Consider privacy screens or physical barriers if a camera directly faces your home.
  • Stay updated on your state’s surveillance laws. Rules around data retention and access can change.

Finding the balance

You don’t have to choose between feeling safe and feeling free. With the right policies and a bit of open conversation, communities can use technology without giving up privacy. The goal isn’t to pit safety against rights, but to make sure both can coexist.

What’s your take? Have ALPRs made you feel safer, more anxious, or a bit of both? Share your thoughts in the comments, and let’s keep the conversation welcoming, practical, and focused on building communities we’re proud to live in. Let’s watch out for each other not just with cameras, but with compassion and dialogue, too. You can message me on LinkedIn at https://www.linkedin.com/in/mattburgess/


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Gmail can read your emails and attachments to train its AI, unless you opt out

Under the radar, Google has added features that allow Gmail to access all private messages and attachments for training its AI models.

If you use Gmail, you need to be aware of an important change that’s quietly rolling out. Reportedly, Google has recently started automatically opting users in to allowing Gmail to access all private messages and attachments for training its AI models. This means your emails could be analyzed to improve Google’s AI assistants, like Smart Compose or AI-generated replies, unless you decide to take action.

The reason behind this is Google’s push to power new Gmail features with its Gemini AI, helping you write emails faster and manage your inbox more efficiently. To do that, Google is using real email content, including attachments, to train and refine its AI models. Some users are now reporting that these settings are switched on by default instead of asking for explicit opt-in.

This means that if you don’t manually turn these settings off, your private messages may be used for AI training behind the scenes. Even though Google promises strong privacy measures like anonymization and data security during AI training, for anyone handling sensitive or confidential information, that may not feel reassuring.

Sure, your Gmail experience would get smarter and more personalized. Features like predictive text and AI-powered writing assistance rely on this kind of data. But is it worth the risk? The lack of explicit consent feels like a step backward for people who want control over how their personal data is used.

How to opt out

Opting out requires you to change settings in two places, so I’ve tried to make it as easy to follow as possible. Feel free to let me know in the comments if I missed anything.

To fully opt out, you must turn off Gmail’s “Smart features” in two separate locations in your settings. Don’t miss one, or AI training may continue.

Step 1: Turn off Smart Features in Gmail, Chat, and Meet settings

  • Open Gmail on your desktop or mobile app.
  • Click the gear icon → See all settings (desktop) or Menu → Settings (mobile).
  • Find the section called Smart Features in Gmail, Chat, and Meet. You’ll need to scroll down quite a bit.
Smart features settings
  • Uncheck this option.
  • Scroll down and hit Save changes if on desktop.

Step 2: Turn off Google Workspace Smart Features

  • Still in Settings, locate Google Workspace smart features.
  • Click on Manage Workspace smart feature settings.
  • You’ll see two options: Smart features in Google Workspace and Smart features in other Google products.
Smart feature settings

  • Toggle both off.
  • Save again in this screen.

Step 3: Verify if both are off

  • Make sure both toggles remain off.
  • Refresh your Gmail app or sign out and back in to confirm changes.

Why two places?

Google separates “Workspace” smart features (email, chat, meet) from smart features used across other Google apps. To fully opt out of feeding your data into AI training, both must be disabled.

Note

Your account might not show these settings enabled by default yet (mine didn’t). Google appears to be rolling this out gradually. But if you care about privacy and control, double-check your settings today.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.