IT NEWS

5 riskiest places to get scammed online

Scammers love your smartphone.

They can text you fraudulent tracking links for packages you never bought. They can profess their empty love to you across your social media apps. They can bombard your email inbox with phishing attempts, impersonate a family member through a phone call, and even trick you into visiting malicious versions of legitimate websites.

But, according to new research from Malwarebytes, while scammers can reach people through just about any modern method of communication, they have at least five favored channels for finding new victims: emails, phone calls and voicemails, malicious websites, social media platforms, and text messages. It’s here that people are most likely to find phishing attempts, romance scams, sextortion threats, and more, and it’s here that everyday people should stay most cautious when receiving messages from unknown senders or when responding to allegedly urgent requests for money or information.

For this research, Malwarebytes surveyed 1,300 people over the age of 18 in the US, UK, Austria, Germany, and Switzerland, asking about the frequency, type, impact, and consequences of any scams they found on their smartphones. Capturing just how aggravating today’s online world is, a full 78% of people said they encountered or received a scam on their smartphone at least once a week.

Here are the top five places that people actually encountered those weekly scams:

  • 65% of people encountered a scam at least once a week through their email
  • 53% encountered a scam at least once a week through phone calls and voicemails
  • 50% encountered a scam at least once a week through text messages (SMS)
  • 49% encountered a scam at least once a week through malicious websites
  • 47% encountered a scam at least once a week through social media platforms

Unfortunately, scam prevention cannot fixate on only these five channels, as scammers change their tactics based on how they’re trying to trick their victims. For instance, though people were least likely to encounter a scam once a week through a buying or selling platform like Facebook Marketplace or Craigslist (36%), such platforms were nevertheless the most likely place for scam victims to have their credit card details and passwords stolen by a scammer masquerading as a legitimate business.

The noise from such daily strife has become deeply confusing, as just 15% of people strongly agreed that they could confidently identify a scam on their phone.

Daily dilemma

While 78% of people encountered a scam on their smartphone at least once a week, a shocking 44% of people encountered a scam at least daily. Similar to the weekly breakdown, here are the top five ways that people encountered scams once a day:

  • 34% of people encountered a scam at least once a day through their email
  • 25% encountered a scam at least once a day through malicious websites
  • 24% encountered a scam at least once a day through phone calls and voicemails
  • 24% encountered a scam at least once a day through social media platforms
  • 22% encountered a scam at least once a day through text messages (SMS)

This list encompasses so much of any person’s daily use of their smartphone. They use it to check emails, browse the internet, make phone calls, scroll through social media, and text family and friends. And yet, it is in these exact places that people have come to expect getting scammed. As if the 44% of people who encounter a scam daily weren’t depressing enough, 28% of people said they encounter scams “multiple times a day.”

But the frequency of scams can only reveal so much. How, exactly, are scammers trying to trick their targets?

Social engineering and extortion

Scams are so difficult to analyze because they vary both in their delivery method and their method of deceit. A message that tries to trick a person into clicking a package tracking link is a simple act of social engineering—relying on false urgency or faked identity to fool a victim. But that message itself can come through a text message or an email, and it can direct a person to a malicious website on the internet. A romance scam, similarly, can start on a social media platform but can move into a messaging service like WhatsApp. And sometimes, a threat to release private information—which can be categorized as “extortion”—can happen through a phone call, a text message, or any combination of other communication channels.

This is why, to understand how people were being harmed by scams, Malwarebytes asked respondents about roughly 20 types of cybercrime that they could encounter and experience.

Broadly, Malwarebytes found that 74% of people had “encountered” or come across a social engineering scam, and that 36% fell victim to such scams. These were the most common social engineering scams that people encountered and that they experienced:

  • Phishing/smishing/vishing: 53% encountered and 19% experienced
  • USPS/FedEx/postal scams: 42% encountered and 12% experienced
  • Impersonation scams: 35% encountered and 10% experienced
  • Marketplace or business scams: 33% encountered and 10% experienced
  • Romance scams: 33% encountered and 10% experienced

For respondents who experienced any type of scam—making them scam victims—Malwarebytes also asked where they had found or encountered that scam. Here, the results show a far more intimate picture of where scams are most likely to harm the public.

For instance, 26% of charity scam victims were originally tricked on social media platforms. 37% of postal notification scam victims were first reached, predictably, through SMS/text messages. And, interestingly, despite how frequently cryptocurrency scams spread through social media, the most likely place for such a scam victim to be contacted was through email (30% for email vs. 13% for social media).

In its research, Malwarebytes also discovered that 17% of people have fallen victim to extortion scams, which includes ransomware scares, virtual kidnapping schemes, and threats to release sexually explicit photos (sextortion) or deepfake images.

Here, scam victims again shared where these scams arrived. The most popular channels for deepfake scammers to victimize people were social media platforms and emails—both at 17%. For sextortion scam victims, the most popular channel was email, at 35%. And 24% of virtual kidnapping scam victims said they were contacted through text messages, making it the most popular way to deliver such a threat.

These numbers may look depressing, but they should instead educate. No, there is no such thing as a perfectly safe communication channel today. But that doesn’t mean there isn’t help.

Check if something is a scam

Malwarebytes Scam Guard is a free, AI-powered digital safety companion that reviews any concerning text, email, phone number, link, image, or online message and provides on-the-spot guidance to help users avert and report scams. Just share a screenshot of any questionable message, like that strange email demanding a password reset or that alarming text flagging a traffic penalty, and Scam Guard will guide you to safety.

Fake bank ads on Instagram scam victims out of money

Ads on Instagram—including deepfake videos—are impersonating trusted financial institutions like Bank of Montreal (BMO) and EQ Bank (Equitable Bank) in order to scam people, according to BleepingComputer.

There are some variations in how the scammers approach this. Some use Artificial Intelligence (AI) to create deepfake videos aimed at gathering personal information, while others link to typosquatted domains that not only look like the impersonated bank’s site but also have very similar domain names.
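That “very similar domain name” part is measurable. As a minimal illustration, here is a short Python sketch that flags lookalike domains by string similarity; the domain names and threshold below are made-up examples, not domains observed in this campaign:

    # Illustrative sketch: flag domains that closely resemble, but do not
    # exactly match, a known legitimate domain. Domains and threshold are
    # hypothetical examples.
    from difflib import SequenceMatcher

    LEGITIMATE_DOMAINS = ["eqbank.ca", "bmo.com"]

    def looks_like_typosquat(domain: str, threshold: float = 0.75) -> bool:
        domain = domain.lower()
        for real in LEGITIMATE_DOMAINS:
            similarity = SequenceMatcher(None, domain, real).ratio()
            if domain != real and similarity >= threshold:
                return True
        return False

    print(looks_like_typosquat("eq-bank.ca"))   # True: near-identical to eqbank.ca
    print(looks_like_typosquat("example.org"))  # False: resembles neither bank

Real-world detection is more involved (homoglyphs, added words like “secure” or “login”), but the principle is the same.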

BleepingComputer shows an example of an advertisement, which claims to be from “Eq Marketing” and closely mimics EQ Bank’s branding and color scheme, while promising a rather optimistic interest yield of “4.5%”.

Advertisement leading to fake website
Image courtesy of BleepingComputer

In this example, using the “Yes, continue with my account” button presents the user with a fraudulent “EQ Bank” login screen, prompting the visitor to provide their banking credentials. From there, it’s likely the scammers will empty the bank account and move on to their next victim.

Another fraudulent ad impersonates Brian Belski, BMO’s Chief Investment Strategist and leader of its Investment Strategy Group. This may lead people to believe they are getting valuable financial advice, for example by luring them to a “private WhatsApp investment group”.

Impersonations of bank employees and authorities are increasing and can often sound very convincing. These scammers demand immediate payment or action to avoid further impacts, which can dupe individuals into inadvertently sending money to a fraudulent account.

It’s not just Instagram where WhatsApp investment groups are used as a lure by scammers. On X we see invites like these several times a week.

WhatsApp investment group invitation

Recommendations to stay safe

As cyberthreats and financial scams become more sophisticated, it is increasingly difficult for individuals to determine if a request coming via social media, email, text, phone call or even video call is authentic.

By staying alert and proactive, you can outsmart even the most convincing deepfake scams. Remember, a healthy dose of skepticism is your best companion in the digital age.

  • Verify before you trust: Always double-check the legitimacy of any ad or message claiming to be from your bank. Visit your bank’s official website or contact them directly using verified contact details before taking any action.
  • Double-check the advertiser account: BleepingComputer found that the advertiser accounts running the fake ads on Instagram only had pages on Facebook, not on Instagram itself.
  • Look for red flags: Be wary of ads that create a sense of urgency, promise unrealistic rewards, or ask for sensitive information like passwords or PINs. Authentic banks will never request such details through social media or ads.
  • Scrutinize visuals and language: Deepfakes can be convincing, but subtle inconsistencies in video quality, unnatural facial movements, or awkward phrasing can be giveaways. Trust your instincts if something feels off.
  • Enable multi-factor authentication (MFA): Strengthen your account security by enabling MFA on your banking and social media accounts. This adds an extra layer of protection even if your credentials are compromised.
  • Report suspicious content: If you encounter a suspicious ad or message, report it to Instagram and notify your bank immediately. Your vigilance can help prevent others from falling victim.
  • Use web protection: This can range from programs that block known malicious sites, to browser extensions that can detect skimmers, to sophisticated assistants that you can ask if something is a scam.
  • Stay informed: Keep up to date with the latest scam tactics and security advice from your bank and reputable cybersecurity sources. Awareness is your best defense.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Scammers hijack websites of Bank of America, Netflix, Microsoft, and more to insert fake phone number

The examples in this post are actual fraud attempts found by Malwarebytes Senior Director of Research, Jérôme Segura.

Cybercriminals frequently use fake search engine listings to take advantage of our trust in popular brands, and then scam us. It often starts, as with so many attacks, with a sponsored search result on Google.

In the latest example of this type of scam, we found tech support scammers hijacking the search results of people looking for 24/7 support for Apple, Bank of America, Facebook, HP, Microsoft, Netflix, and PayPal.

sponsored search result for Netflix

Here’s how it works: Cybercriminals pay for a sponsored ad on Google pretending to be a major brand. Often, this ad leads people to a fake website. However, in the cases we recently found, the visitor is taken to the legitimate site, but with one small difference.

Visitors are taken to the help/support section of the brand’s website, but instead of the genuine phone number, the hijackers display their scammy number instead.

The browser address bar shows the legitimate site’s address, so there’s no reason for suspicion. However, the information the visitor sees is misleading, because the on-site search results have been poisoned to display the scammer’s number prominently in what looks like an official search result.

Once the number is called, the scammers will pose as the brand with the aim of getting their victim to hand over personal data or card details, or even allow remote access to their computer. In the case of Bank of America or PayPal, the scammers want access to their victim’s financial account so they can empty it of money.

A technically more correct name for this type of attack would be a search parameter injection attack, because the scammer has crafted a malicious URL that embeds their own fake phone number into the genuine site’s legitimate search functionality.
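To make that concrete, a crafted link of this kind might look something like the following (a hypothetical URL with a fictional phone number, not one taken from these campaigns):

    https://help.example.com/search?q=Call%20Now%20%2B1-800-555-0123

Everything after the ?q= is attacker-chosen text, which the help page then displays back to the visitor as if it were a search result.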

See the below example on Netflix:

Netflix Help Center with scammer's number

These tactics are very effective because:

  • Users see the legitimate Netflix URL in their address bar
  • The page layout looks authentic (again, because it is the real Netflix site)
  • The fake number appears in what looks like a search result, making it seem official.

This can happen because Netflix’s search functionality blindly reflects whatever users put in the search query parameter, without proper sanitization or validation. This creates a reflected input vulnerability that scammers can exploit.
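As an illustration of the pattern, here is a minimal Python (Flask) sketch of a vulnerable endpoint and one possible hardening step. This is a hypothetical example, not Netflix’s actual code, and the phone-number regex is an illustrative choice:

    import re
    from flask import Flask, request
    from markupsafe import escape

    app = Flask(__name__)

    # Rough pattern for phone-number-like strings; illustrative only.
    PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")

    @app.route("/search")
    def vulnerable_search():
        # BAD: the raw query is echoed straight back into the page, so a
        # crafted link like the hypothetical one shown earlier renders the
        # attacker's phone number as if it were a genuine result heading.
        q = request.args.get("q", "")
        return f"<h1>Results for {q}</h1>"

    @app.route("/safer-search")
    def safer_search():
        # BETTER: HTML-escaping blocks script injection, but the scam text
        # is plain text, so this version also refuses to reflect anything
        # that contains a phone-number-like pattern.
        q = request.args.get("q", "")
        if PHONE_PATTERN.search(q):
            q = ""
        return f"<h1>Results for {escape(q)}</h1>"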

Fortunately, Malwarebytes Browser Guard catches this and shows a “Search Hijacking Detected” warning, explaining that unauthorized changes were made to the search results, with an overlaid phone number.

But Netflix is just one example. As we mentioned earlier, we found other brands, such as PayPal, Apple, Microsoft, Facebook, Bank of America, and HP, being abused in the same way by scammers.

HP Customer Service page with scammer's phone number

The HP example is a bit easier to identify as suspicious, as it displays “4 Results for” in front of the scammer’s text. But even then, if you’re on a genuine website you expect to see a genuine number, right?

Interestingly, Apple is the one where we found the scammer’s number was the hardest to identify as false.

Apple Support page with scammer's phone number

This looks as if the web page is telling the visitor that there are no matches for their search, so they’d better call the number on display. That would drive them straight into the arms of the scammers.

How to stay safe from tech support scams

As demonstrated in these cases, Malwarebytes Browser Guard is a great defense mechanism against this kind of scam, and it is free to use.

There are also some other red flags to keep an eye out for:

  • A phone number in the URL
  • Suspicious search terms like “Call Now” or “Emergency Support” in the address bar of the browser
  • Lots of encoded characters such as %20 (space) and %2B (plus sign) alongside phone numbers (see the sketch after this list)
  • The website showing a search result before you entered one
  • Urgent language (“Call Now”, “Account suspended”, “Emergency support”) displayed on the website
  • An in-browser warning for known scams (don’t ignore this).
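Several of these red flags live in the URL itself and can be checked mechanically. The following is a rough Python sketch of that idea; the regex, keyword list, and example URL are hypothetical choices, not an official detection list:

    import re
    from urllib.parse import urlparse, parse_qs

    PHONE_PATTERN = re.compile(r"\+?\d[\d\s().-]{7,}\d")
    URGENT_TERMS = ("call now", "emergency support", "account suspended")

    def url_red_flags(url: str) -> list[str]:
        # Decode the query string (%20 -> space, %2B -> +) and scan it
        # for phone numbers and urgent wording.
        flags = []
        query = urlparse(url).query
        text = " ".join(
            value for values in parse_qs(query).values() for value in values
        ).lower()
        if PHONE_PATTERN.search(text):
            flags.append("phone number in the URL query")
        for term in URGENT_TERMS:
            if term in text:
                flags.append(f"urgent wording in the URL: {term!r}")
        return flags

    print(url_red_flags(
        "https://help.example.com/search?q=Call%20Now%20%2B1-800-555-0123"
    ))
    # ['phone number in the URL query', "urgent wording in the URL: 'call now'"]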

And before you call any brand’s support number, look up the official number in previous communications you’ve had with the company (such as an email, or on social media) and compare it to the one you found in the search results. If they are different, investigate until you’re sure which one is the legitimate one.

If during the call, you are asked for personal information or banking details that have nothing to do with the matter you’re calling about, hang up.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

WhatsApp to start targeting you with ads

WhatsApp has announced that it will start to show you targeted ads on the app. The ads, it says, will appear under the Updates tab.

WhatsApp launched the Updates tab a year ago, and now 1.5 billion people visit it every day. Updates has historically been a place for users to follow news and updates from their favorite companies, news organizations and celebrities. 

This is different from the Chats tab, where users send and receive messages. Chats remain end-to-end encrypted and, according to Meta’s vice president for product management Nikila Srinivasan, will not display ads.

To determine your interests for ad purposes, WhatsApp says:

“We’ll use limited info like your country or city, language, the Channels you’re following, and how you interact with the ads you see. For people that have chosen to add WhatsApp to Accounts Center, we’ll also use your ad preferences and info from across your Meta accounts.”

That means that anyone who has linked their Facebook or Instagram accounts with their WhatsApp account will now have that data used for ad targeting. This cross-platform integration feels like a significant invasion of privacy, especially for users who expected WhatsApp to remain more private than Facebook or Instagram.

The European privacy group NOYB (None Of Your Business) has already voiced concerns, warning that WhatsApp may soon adopt the same “Pay or OK” model as Facebook and Instagram to obtain the user consent that’s required under EU law.

With Meta’s “Pay or OK” system, users face a choice between two options nobody asked for: either pay a monthly subscription fee to avoid targeted ads and tracking, or accept extensive data collection and personalized advertising in exchange for free access. If you don’t want your data tracked, you must pay. If you don’t pay, you must accept tracking and profiling for ads.

Meta introduced this model in response to strict privacy regulations in Europe, especially the General Data Protection Regulation (GDPR), which requires companies to get clear, “freely given” consent from users before using their data for personalized ads.

In the past, Meta has pointed to a ruling of the Court of Justice of the European Union (CJEU) that, it argued, accepted the subscription model as a valid form of consent for an ad-funded service.

Meta also said its pricing was in line with that of ad-free services such as YouTube Premium and Spotify Premium. However, it conveniently ignored that ad-free services are not the same as services that gather data about you and sell it to the highest bidder to create personalized ads.

WhatsApp built its reputation on privacy, with end-to-end encryption and minimal data collection. And, as privacy advocates feared, bringing it into the Meta “family” moved the platform away from its privacy-first roots.

Even if WhatsApp says it won’t read your messages, it can still use your usage patterns, contacts, and other metadata to build detailed profiles for advertisers. This increases the risk of data leaks, misuse, or surveillance.

What can users do?

A while back I asked whether it was a good idea to move from WhatsApp to Signal. With this new development, the question may be worth reconsidering.

If you’re on iOS 18, you can now allow WhatsApp to access only selected contacts instead of your entire address book. This reduces the amount of data WhatsApp can collect about your network.

On Android, you can technically use WhatsApp without granting access to your contacts, but you’ll need to manually start chats using wa.me links. Or, for convenience, you can use a third-party app that does the work for you.
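For reference, a wa.me “click to chat” link is just https://wa.me/ followed by the recipient’s number in international format, digits only. Here is a tiny, hypothetical Python helper (the number in the example is fictional):

    def wa_me_link(phone: str) -> str:
        # Build a WhatsApp click-to-chat link: keep digits only, dropping
        # any '+', spaces, dashes, brackets, and leading zeros.
        digits = "".join(ch for ch in phone if ch.isdigit())
        return f"https://wa.me/{digits.lstrip('0')}"

    print(wa_me_link("+1 555-013-7799"))  # https://wa.me/15550137799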

WhatsApp frequently adds or changes privacy options, so revisit your settings periodically to maintain control.

If you can, disassociate your WhatsApp account from other Meta accounts you may have. Don’t use the same email address, handle, etc. You can remove your WhatsApp account from the Meta Accounts Center, but it is unclear whether Meta will “remember” the link if it once existed.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Smart air fryers ordered to stop invading our digital privacy

In a confirmation that we’ve gone full Black Mirror, the UK’s privacy czar has wagged a finger at air fryer manufacturers and told them to stop playing with our data.

New draft guidance from the Information Commissioner’s Office (ICO) targets not just air fryer vendors but manufacturers of any smart home products, ranging from smart lighting systems through to internet-connected refrigerators and connected toys.

Collectively known as IoT (Internet of Things) devices, these connected objects have a nasty habit of collecting our data without us really understanding what they’re doing. It’s a problem with many of them, although late last year Which? magazine added air fryers to the list of offenders.

The guidance highlights data that IoT vendors might collect. This includes registration data such as an owner’s name, address, and email. It also covers information gathered directly from the product that reveals how the user interacts with it. A device might simply tell its manufacturer when you used the product and for how long, but sensors embedded in it might monitor anything from temperature to motion.

The ICO is interested in enforcing privacy laws such as the UK’s version of the General Data Protection Regulation (UK GDPR). That allows products to process user information if it’s purely for domestic use, like asking a smart speaker to play Lady Gaga’s all-time greatest hits, say.

But if the IoT vendor uses audio recordings of the person’s interactions with the speaker to improve its own service or even to make inferences about that person from their musical choices, then that isn’t domestic use. That’s processing for the company’s own purposes, and it falls on the wrong side of the law.

The guidance tells vendors to ask for consent when processing this kind of data. That means ensuring that users can easily tell what they’re consenting to, and can make a clear choice not to do so.

Users should be able to find out how the manufacturer is using their information after they sign up for the service, says the ICO. They must also be able to withdraw consent at any time. In practice, that helps people who might click a consent button early on but then think twice about it later and decide to change their permissions.

When vendors do collect information about users they must tell them what they’re collecting, and why they’re using it. They should tell people what decisions they’re making with it, and how it affects their service. People should also be informed about how long the vendor will keep that data.

The company should also process user data fairly. That means only doing what people expect with it, and not using it in ways that harm the user.

This is all good advice, and in keeping with existing privacy laws, but it means vendors will have a fine line to walk. Some of the requirements are nuanced. For example, the guidance asks companies to consider ways of making their privacy information easy for users to follow. That means giving users all the information they need without overloading them. It might require careful user interface design, along with collaboration between designers and privacy or compliance professionals.

Where appropriate, design choices like navigation panels, collapsible lists, large text, and diagrams will go a long way towards satisfying these requirements, the ICO says.

There’s an existing UK law for IoT security

There’s also a section outlining security for IoT devices and the data they collect. This points to an existing UK law, the Product Security and Telecommunications Infrastructure (PSTI) Regulations, which came into effect last year. It calls for specific protections such as the use of unique passwords for devices, encryption of user data, and regular security updates.

The security aspect of IoT is perhaps one of the most important of all. Even companies with the best of intentions can make mistakes and leak customer data gathered by everything from connected chastity devices through to kids’ toys.

This guidance applies not just to smart connected objects but to the apps that vendors often provide with them. Those apps, which give you data about what your smart object is doing and allow you to control it, are great ways for vendors to harvest information about you.

You’re your own best protection

The document is still in draft form and open to consultation. Because it’s UK guidance, it likely won’t protect people outside the UK. As always, the first line of defense is you.

So, when buying a smart home device, consider whether an app for it is necessary. Your smart fryer might have no way of phoning home without an app, but you might be able to just check whether your food is done without needing your phone to tell you.

In some cases, you might want to consider whether you really need a product to be connected at all. Connected devices are a great way for companies to nickel and dime you unexpectedly through subscription programs, or brick your product remotely when they decide it isn’t profitable for them any more.

Sometimes, all you want to do is cook up some hot fries without things getting too complicated, you know?


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Reddit’s new AI-powered tools scan your posts to serve you better ads

Reddit has introduced two Artificial Intelligence (AI) tools which will use Reddit comments, posts, and conversations to help sellers make the most of the community.

Reddit is a social media platform and online forum where users can share and discuss content across a wide range of topics. The platform’s structure divides it into communities known as “subreddits,” each focused on a specific subject or interest (from cars to movies to sports to knitting).

There are also promoted posts, which look like regular Reddit posts but are marked as sponsored. They can include text, images, videos, or carousels and often appear in users’ feeds or within specific subreddits. Due to its size, Reddit has evolved into a major digital platform for both advertising and AI-powered data analysis.

Reddit introduced its “Reddit Community Intelligence” at the Cannes Lions International Festival of Creativity 2025, describing the new addition as drawing on the collective knowledge of billions of conversations to help businesses and organizations make “smarter marketing decisions.”

Last year, Reddit launched AI-powered advertising such as an ads inspiration library, AI copywriter, and image auto-cropper to help small businesses create more effective, platform-specific ads.

The new tools, dubbed “Reddit Insights” and “Conversation Summary Add-ons”, use AI to analyze conversations, summarize sentiment, and surface relevant user-generated content for advertisers.

Last year, the FTC advised Reddit that it would conduct a non-public inquiry focused on Reddit’s sale, licensing, or sharing of user-generated content with third parties to train AI models. This was before Reddit announced a partnership with OpenAI to bring Reddit content to ChatGPT.

When the FTC launched a Request for Information (RFI) to better understand how technology platforms deny or degrade users’ access to services based on the content of their speech or affiliations, and how that conduct may have violated the law, Reddit saw a 9% drop in its stock price, even though the RFI did not mention the company by name.

The new tools will undoubtedly fuel the ongoing debates about the ethics of AI-driven analysis on Reddit, especially regarding user consent and the potential for privacy breaches.

In April, users of the r/ChangeMyView subreddit expressed outrage at the revelation that researchers at the University of Zurich were secretly using the site for an AI-powered experiment in persuasion, prompting the moderation team to explain that the experiment was conducted without authorization.

Careful what you share

Given the open nature of Reddit, it’s important to keep in mind that anything you post can be found by anyone and anything, including AI. So it’s important to hold yourself to the same standards you would use on any other social media platform.

On June 28, 2025, a new Privacy Policy will go into effect. It stands to reason that you should be aware of the current policy and keep up with any changes.

A few general rules to help improve your privacy on the platform:

  • Anonymity: there is no reason to use your real name or any identifying information in your username or profile. Don’t share personal details like your location, workplace, or other identifiers in posts or comments unless they are relevant to the post.
  • Don’t link to other social media profiles in your profile or posts. Also don’t link your account to your Google or Apple account.
  • If you’re active in several Reddit communities, consider creating separate accounts for different interests or sensitive topics.
  • In your Reddit account settings, under Privacy you can turn off “Show up in search results” to prevent your posts and comments from being indexed by search engines or easily browsed by others.
  • You can also disable “Personalize ads on Reddit based on information and activity from our partners.”
  • Protect your account using a unique, complex password and enable two-factor authentication (2FA) for your Reddit account.
  • Regularly check your account activity for unauthorized access and report anything suspicious.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

The data on denying social media for kids (re-air) (Lock and Code S06E12)

This week on the Lock and Code podcast…

Complex problems are often assumed to require complex solutions, but recent observations about increased levels of anxiety and depression, increased reports of loneliness, and lower rates of in-person friendships for teens and children in America today have led some school districts across the country to take direct and simple action: take away access to smartphones in schools.

Not everyone is convinced.

When social psychologist and author Jonathan Haidt proposed five solutions to what he called an “epidemic of mental illness” for young adults in America, many balked at the simplicity.

Writing for the outlet Platformer, reporter Zoe Schiffer spoke with multiple behavioral psychologists who alleged that Haidt’s book cherry-picks survey data, ignores mental health crises amongst adults, and over-simplifies a complex problem with a blunt solution. And in speaking on the podcast Power User, educator Brandon Cardet-Hernandez argued that phone bans in schools would harm the students that need phones the most for things like translation services and coordinating rides back home from parents with varying schedules.

But Haidt isn’t alone in thinking that smartphones have done serious harm to teenagers and kids today, and many schools across America are taking up the mantle to at least remove access to them in their own hallways. In February, Los Angeles Unified School District did just that, and a board member for the school district told the Lock and Code podcast that he believes the change has been for the better.

But for those still in doubt, there’s a good reason now to look back.

Today, on the Lock and Code podcast with host David Ruiz, we revisit a 2024 interview with Dr. Jean Twenge about her research into the differences in America between today’s teens and the many generations that came before. A psychologist and published author, Twenge believes she has found enough data tying increased smartphone use and social media engagement to higher strains on mental health. In today’s re-broadcast episode, Twenge explains whether she believes there is a mental health crisis amongst today’s teens, whether it is unique to their generation, and whether it can all be traced to smartphones and social media.

According to Dr. Twenge, the answer to all those questions is, pretty much, “Yes.” But, she said, there’s still some hope to be found.

“This is where the argument around smartphones and social media being behind the adolescent mental health crisis actually has, kind of paradoxically, some optimism to it. Because if that’s the cause, that means we can do something about it.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

A week in security (June 9 – June 15)

Your Meta AI chats might be public, and it’s not a bug

Conversations that people are having with the Meta AI app are being exposed publicly, often without the users realizing it, revealing a variety of medical, legal, and private matters. The standalone app and the company’s integrations with artificial intelligence (AI) across its platforms—Facebook, Instagram, and WhatsApp—are now facing significant scrutiny for such privacy lapses.

The past two years have seen an explosion in generative AI tools, such as ChatGPT, Anthropic’s Claude, Google Gemini, and more. But with new players entering the market almost daily, not all of them deserve the same level of trust.

With 1 billion active monthly users, Meta AI is one of the contenders aiming for ChatGPT’s crown. To monetize this success, Meta CEO Mark Zuckerberg said “there will be opportunities to either insert paid recommendations” or offer “a subscription service so that people can pay to use more compute.”

Similar to ChatGPT, Meta AI can generate text, answer questions, and help users plan, brainstorm, and strategize on various issues. But the Meta AI app also has a “share” button: after submitting a question, users can press it to see a preview of the post, which they can then publish. And some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world.

Other users can access “shared” conversations by using the app’s Discover feed.

With only one glance at that feed, I found this example of a conversation the user might not have wanted to be publicly visible. A teacher shared an email thread regarding their upcoming arbitration over a job termination that they felt was unjust. The Meta AI app responded:

example of a teacher sharing a story of unfair termination

“Hello {name}, it’s great to hear from you again. I can sense the emotional weight of revisiting this experience, and I appreciate your trust in sharing it with me. It sounds like a really tough situation, and I’m here to listen and support you as you process it. No rush, take your time to gather your thoughts, and we can continue the conversation whenever you’re ready. How’s the preparation for the arbitration going so far?”

The emphasis is mine. If even the AI recognizes how much trust it takes to share that story with anyone, even a bot, imagine how the user must feel on finding out that the whole world can read it. I didn’t even have to log in to find this conversation.

TechCrunch reported about people asking for help with tax evasion and other personal matters, calling Meta AI a “privacy disaster.” Many sources found threads about medical topics, and other conversations about delicate and private issues.

What people may not realize is that if they use the integrated versions on social media, they are logged in and their conversation can be tied to their social media profile.

How to not share your chats with Meta AI

Your best option would be not to use Meta AI at all. If you do, make sure you are not logged in, and double-check that if you are signed in to any of the Meta platforms on the same device. And never, ever use the Share button unless you mean to.

Meta spokesperson Daniel Roberts wrote in an emailed statement to WIRED that users’ chats with Meta AI are private unless users go through a multistep process to share them on the Discover feed.

When asked about these mishaps, the Meta AI chatbot said:

“Some users might unintentionally share sensitive info due to misunderstandings about platform defaults or changes in settings over time.”

However, Meta has not provided detailed, easily accessible in-app guidance at the moment of sharing, which contributes to user confusion. I decided to ask Meta AI.

Question: What do I need to do so you don’t share our conversation with the rest of the world? And is it true that the information I share with you might be used for advertising? Or, at least for “sponsored” recommendations?

Answer: Your conversations with me are used to improve the model and might be reviewed by our team to ensure quality. We may also use your conversations to train our models to be more helpful and accurate. However, we take steps to protect your privacy.

If you have concerns about how your data is used, you can visit Meta’s Privacy Center to learn more about how Meta handles data: https://www.facebook.com/privacy/center/.

Did that actually answer my question? No.

So, here are the steps you may want to take if you decide to engage with Meta AI for all the different platforms where you may encounter it.

Meta AI App

  • To ensure your prompts and conversations are only visible to you:
    • Tap your profile icon in the Meta AI app.
    • Go to Data & Privacy under App settings.
    • Select Manage your information.
    • Set Make all your prompts visible to only you.
  • Avoid using the Share button unless you are certain you want your conversation to appear publicly on the Discover feed.

WhatsApp, Facebook, and Instagram

Note: Conversations with Meta AI on WhatsApp are not protected by end-to-end encryption and may be used for AI training.

To limit Meta’s use of your data for AI training:

  • Go to Settings & Privacy > Privacy Center.
  • Scroll to Privacy Topics and select AI at Meta.
  • Under Submit an objection request, click Your messages with AIs on WhatsApp (or any of the other platforms you’re looking for) and fill out the form to request that Meta not use your data for AI training.

Deleting AI conversation data

Meta has introduced commands to delete information shared in any chat with an AI:

  • For example, type /reset-ai in a conversation on Messenger, Instagram, or WhatsApp to delete your AI messages.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

US airline industry quietly selling flight data to DHS

A data broker owned by some of America’s biggest airlines has been selling access to customer flight data to the US Department of Homeland Security (DHS).

The data, compiled by data broker Airlines Reporting Corporation (ARC), includes names, flight itineraries, and financial details. It also covers flights booked via US travel agencies.

ARC makes this data available to Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE), both of which took over functions of the former US Customs Service in 2003, and both of which are agencies under DHS.

ARC is owned and operated by eight major US airlines and is unique in being the only financial intermediary between the airline industry and US travel agencies, according to the data broker’s contract with ICE. ARC also provides payment settlement services for travel agencies and airlines, which has created a huge database of travel information that the data broker then makes available under its Travel Intelligence Program (TIP).

ARC’s most recently revealed contract, uncovered by tech news outlet 404 Media, is with US Customs and Border Protection. A statement of work with that agency revealed that the TIP pilot program “generated meaningful results to current [redacted] cases and will continue to do so once fully accessible to [redacted] analysts across [redacted] Offices.”

The CBP contract mandates silence from DHS on where it got the data. The statement of work, which began in June 2024 and could optionally run until June 2029, states that the CBP will “not publicly identify vendor, or its employees, individually or collectively, as the source of the Reports unless the Customer is compelled to do so by a valid court order or subpoena and gives ARC immediate notice of same.”

ARC’s contract with ICE, meanwhile, provides a view into the data obligations from travel agencies. As the contract stated:

“Daily, travel agencies must submit ticket sales and funds for over 240 airlines worldwide to ARC. This process enables ARC’s TIP, an essential intelligence tool integrated into HSI INTEL’s investigative mission.”

HSI INTEL stands for the Homeland Security Investigations Office of Intelligence. It investigates criminal networks, and also any “individual or organization that threatens national security or seeks to exploit the customs and immigration laws of the United States,” per the DHS website.

Those with access to the TIP database can search across 39 months of flight booking data. Flight itineraries and passenger name records, along with travel dates, flight dates, and even credit card numbers are available from the database.

Other agencies that have purchased access to the database include the Secret Service, the Securities and Exchange Commission, the Drug Enforcement Administration, and the US Marshals Service, according to 404 Media.

Delta, Southwest, United, Lufthansa, Air France, American Airlines, Air Canada, Alaska Airlines, and JetBlue all have seats on the ARC board. The company also partners with hundreds of airlines and travel agencies around the world.