IT NEWS

Relationship broken up? Here’s how to separate your online accounts

Breaking up is hard to do. The internet has made it harder.

With couples today regularly sharing access to one another’s email accounts, streaming services, social media platforms, online photo albums, and more, the risk of a bad breakup isn’t just heartache. Equipped with unfettered access to sensitive, shared online accounts, a vindictive ex could track someone who is actively using services like DoorDash, Uber, or Airbnb, spy on someone through a Ring doorbell, raise the temperature on a Nest thermostat, or shout obscenities through a baby monitor.

As every relationship is different, there’s no one-size-fits-all solution to safely disentangling your digital life from your ex, but there are a few rules that can make the process easier.

And, because this can be a lot of work, here are a few things that can help you along the way:

  • A password manager that will help you create and store unique passwords for each online account.
  • The use of multifactor/two-factor authentication on every sensitive account that allows it.
  • A friend who can go through these exercises side-by-side with you.
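To make the first item above concrete, here is a minimal sketch of what a password manager does behind the scenes when it generates a credential: a long, random, unique password per account. Python’s standard secrets module is designed for security-sensitive randomness; the account names are just examples.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    # Draw each character from letters, digits, and punctuation using
    # a cryptographically secure random source.
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account -- never reused across sites.
for account in ("email", "banking", "photos"):
    print(account, generate_password())
```

A real password manager also stores and autofills these so you never have to remember them, which is what makes per-account uniqueness practical.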

Further down is a more comprehensive checklist of steps you can take to separate your digital life from your ex’s, but here’s a quick, handy guide:

Digital breakup checklist

It’s important to remember that this work won’t be completed in a day. That’s entirely okay. Instead of trying to accomplish everything in one weekend, prioritize the most sensitive work—cutting off access to email accounts, online banking, shared photo albums, social media, and any services or apps that can reveal your location.

As Malwarebytes recently discovered in research conducted this year, 56% of people in committed relationships in the United States agreed that they “would like to see more guidance on how to handle shared logins, accounts, and apps in a relationship or during a breakup,” and 45% agreed that they “would have a hard time knowing where to begin if I no longer wanted to share location-based apps or services with my partner or in the event of a breakup.”

We hope this digital breakup checklist, which is not comprehensive, can provide some of that guidance.

Here is the Modern Love Digital Breakup Checklist.

1. Review shared devices

  • Log out of personal accounts on shared devices, including laptops, tablets, e-readers, smartphones, smart TVs, and Internet of Things devices. This includes:
    • Email, social media, and online banking accounts on shared tablets, computers, and smartphones.
    • Email, social media, and online banking accounts on the shared devices of children/the entire family.
    • Entertainment accounts (Hulu, Netflix, Disney+, Spotify, etc.) on smart TVs and streaming devices such as Roku, Google Chromecast, Apple TV, etc.
  • Remove your ex’s accounts from any device you share that you will maintain ownership of after the breakup. Here are guides on how to remove someone from:

2. Review shared accounts

  • For shared accounts where you and your ex had one set of login credentials, log out of those shared accounts on your own device.
  • If you want to continue using those services, create a new personal account with a unique password.

3. Review personal accounts

  • Before resetting passwords, check the recovery settings on your personal account to ensure that any attempts to reset your password will be sent to your personal email account and not to an email account owned by your ex.
  • Before resetting passwords, consider using a password manager to help create, store, and remember unique passwords for each account.
  • Reset and create unique passwords for sensitive accounts, including:
    • Email accounts
    • Online banking and financial accounts (Chase, Wells Fargo, Venmo, PayPal, Zelle, Cash App, etc.)
    • Online shopping accounts (Amazon, Etsy, Shein, Temu, etc.)
    • Social media accounts (TikTok, Instagram, Facebook, etc.)
    • Shared cloud accounts for photos (Google Photos, iCloud)
    • Shared cloud accounts for file storage (Dropbox, Box, etc.)
    • Streaming entertainment accounts (Netflix, Disney+, Hulu, Spotify, Apple Music, etc.)
    • Parental monitoring apps (Life360, Bark, Qustodio)
    • Online forums and chat services (Reddit, Discord, etc.)
  • Reset and create unique passwords for accounts that can expose your location to users who are logged into the same account, including:
    • Food and grocery delivery apps (Uber Eats, DoorDash, Postmates, etc.)
    • Ride-sharing apps (Uber, Lyft, etc.)
    • Vacation rental apps (Airbnb, Vrbo, etc.)
    • Health and fitness tracking apps (FitBit, Strava, etc.)
    • Connected apps for modern cars with anti-theft location tracking
  • Enable multifactor authentication on sensitive accounts and accounts that can expose your location, when provided as an option.
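The multifactor codes that authenticator apps produce are typically TOTP values as defined in RFC 6238: an HMAC-SHA1 over a 30-second time counter, truncated to six or eight digits. A minimal sketch with Python’s standard library (for illustration only; use an established authenticator app rather than rolling your own):

```python
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, digits: int = 6, step: int = 30) -> str:
    """Minimal RFC 6238 TOTP: HMAC-SHA1 over the time-step counter."""
    now = time.time() if for_time is None else for_time
    counter = struct.pack(">Q", int(now) // step)
    digest = hmac.new(secret, counter, "sha1").digest()
    offset = digest[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: at time 59, the 8-digit code is 94287082.
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

Because the code changes every 30 seconds and is derived from a shared secret, an ex who knows your password but no longer has your phone cannot log in.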

4. Review/remove your signed-in devices

  • Check your security settings in your online accounts to review what devices are currently logged into the same account. If you see a device that does not belong to you, force that device to be logged out.
    • If you take this step after successfully resetting your password, those devices will be required to use the new password (which only you should know).
    • These settings can often be found in “security” or “privacy and security” tabs in most apps.

5. Review the location settings of your device

6. Review “Find My/Find My Device” settings

  • Modern devices come pre-installed with anti-theft services called “Find My” on iPhones and “Find My Device” on Android phones. These are the same services that many couples use to track one another’s location, and turning these services off will shut off access that other people (including exes, friends, and family) have to your location.

7. Review the location settings of individual apps

  • If you want to keep location sharing on for convenience, you can review individual apps on your device and select how you would like your location to be accessed by those apps.
    • iPhones allow you to choose one of several options for how frequently apps will access and use your location: Never, Ask Next Time Or When I Share, While Using the App, or Always. You can review location sharing settings on iPhone here.
    • Android phones allow you to choose one of several options for how frequently apps will access and use your location: Allowed all the time, Allowed only while in use, and Not allowed.
    • You can review location sharing settings on Android here.

8. Maintain your ongoing security and privacy

  • If you find it safe and necessary, block your ex on certain social media platforms, messaging apps, etc.
  • Review the privacy settings of social media apps to ensure that your posts are not inadvertently shared with an ex.
    • Consider whether your ex could see your posts because you have mutual friends who may reveal your posts to your ex.
  • Review automatic cloud backups for photos you take with your smartphone.
    • If your ex compromises your iCloud or Google Photos account—and your photos are automatically backed up to those accounts—they could retrieve sensitive photos that you want to keep private.
  • When entering a new relationship, have a conversation about consensually and safely sharing your location (or choosing not to).

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

San Francisco’s fight against deepfake porn, with City Attorney David Chiu (Lock and Code S05E20)

This week on the Lock and Code podcast…

On August 15, the city of San Francisco launched an entirely new fight against the world of deepfake porn—it sued the websites that make the abusive material so easy to create.

“Deepfakes,” as they’re often called, are fake images and videos that utilize artificial intelligence to swap the face of one person onto the body of another. The technology went viral in the late 2010s, as independent film editors would swap the actors of one film for another—replacing, say, Michael J. Fox in Back to the Future with Tom Holland.

But very soon into the technology’s debut, it began being used to create pornographic images of actresses, celebrities, and, more recently, everyday high schoolers and college students. Similar to the threat of “revenge porn,” in which abusive exes extort their past partners with the potential release of sexually explicit photos and videos, “deepfake porn” is sometimes used to tarnish someone’s reputation or to embarrass them amongst friends and family.

But deepfake porn is slightly different from the traditional understanding of “revenge porn” in that it can be created without any real relationship to the victim. Entire groups of strangers can take the image of one person and put it onto the body of a sex worker, or an adult film star, or another person who was filmed having sex or posing nude.

The technology to create deepfake porn is more accessible than ever, and it’s led to a global crisis for teenage girls.

In October of 2023, a group of reportedly more than 30 girls at a high school in New Jersey had their likenesses used by classmates to make sexually explicit and pornographic deepfakes. In March of this year, two teenage boys were arrested in Miami, Florida for allegedly creating deepfake nudes of male and female classmates who were between the ages of 12 and 13. And at the start of September, the BBC reported that police in South Korea were investigating deepfake pornography rings at two major universities.

While individual schools and local police departments in the United States are tackling deepfake porn harassment as it arises—with suspensions, expulsions, and arrests—the process is slow and reactive.

Which is partly why San Francisco City Attorney David Chiu and his team took aim not at the individuals who create and spread deepfake porn, but at the websites that make it so easy to do so.

Today, on the Lock and Code podcast with host David Ruiz, we speak with San Francisco City Attorney David Chiu about his team’s lawsuit against 16 deepfake porn websites, the city’s history in protecting Californians, and the severity of abuse that these websites offer as a paid service.

“At least one of these websites specifically promotes the non-consensual nature of this. I’ll just quote: ‘Imagine wasting time taking her out on dates when you can just use website X to get her nudes.’”

David Chiu, San Francisco City Attorney

Tune in today for the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

SpaceX, CNN, and The White House internal data allegedly published online. Is it real?

A cybercriminal has released internal data online that they say has come from leaks at several high-profile sources, including SpaceX, CNN, and the White House.

However, there are some questions around the reliability and usefulness of the released data, so we took a closer look.

When it comes to the SpaceX data set, the poster is apparently not a big fan of Elon Musk.

BreachForums post about SpaceX data

Their post on data leak site BreachForums says:

“Today I present data from Spacex, because F*** you elon musk, thats why LOL

The leak contains, Emails, Hashes, Numbers, Hosts, IP’s”

But looking at the data we spotted some strange looking entries.

For example, by searching for Elon’s email address we found all these:

collection of possible email addresses for Elon Musk at SpaceX
Now I still don’t know where to send the pitch for my brilliant Mars colonization idea.

SpaceX has not acknowledged this data breach, and it doesn’t seem likely that it will.

Moving on to the White House data set, we also noticed something odd about the email addresses. A lot of them seem to be composed of German words followed by the @whitehouse.gov domain name.

Potentially fabricated whitehouse.gov email addresses

Again, the breach claim has not been acknowledged, nor do we expect it to be.
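The kind of eyeballing we did above can be roughly automated. As a toy heuristic (and nothing more; real breach triage needs far stronger signals), you can flag a dump as possibly fabricated when nearly all local parts follow one suspiciously uniform pattern, such as plain lowercase dictionary words. The addresses below are hypothetical examples, not entries from the actual dump.

```python
import re

def uniform_localparts(emails, pattern=r"^[a-z]+$", threshold=0.9):
    """Toy check: True if nearly all local parts match one simple pattern,
    which real organizational address books rarely do."""
    local_parts = [e.split("@", 1)[0] for e in emails if "@" in e]
    if not local_parts:
        return False
    hits = sum(bool(re.match(pattern, lp)) for lp in local_parts)
    return hits / len(local_parts) >= threshold

# Hypothetical examples, not from the actual dump:
fake = ["blume@whitehouse.gov", "haus@whitehouse.gov", "wolke@whitehouse.gov"]
real = ["j.smith42@example.org", "maria.lopez@example.org", "info@example.org"]
print(uniform_localparts(fake))  # True: suspiciously uniform
print(uniform_localparts(real))  # False
```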

The same poster claims to have breached another company, Up North Pride, by impersonating a police officer:

“I sent them a fake data request from a law enforcement email, and they handed over what they had and this is what they handed over”

In this case, looking at the data, the email addresses of the partnering organizations at least look real.

The motive of the cybercriminal for posting the way they did is unclear. Many of these posters are just looking for attention, potentially hoping to sell some of the data by getting their name out there. Or they are trying to annoy some of the people they don’t like.

For now, we wait and see, but it’s probably not worth giving it the time of day.

Check your digital footprint

If you want to find out if your personal data was exposed through a data breach, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you use most frequently to sign up for sites and services) and we’ll send you a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

A week in security (September 16 – September 22)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

“Simply staggering” surveillance conducted by social media and streaming services, FTC finds

The US Federal Trade Commission (FTC) released a report that examines the data collection and use practices of major social media and video streaming services, finding that—and this will not come as a surprise to our regular readers—the companies engaged in vast surveillance of consumers in order to monetize their personal information while failing to adequately protect users online, especially children and teens.

The report, called A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services, is based on responses from nine companies to questions about how the companies collect, track, and use personal and demographic information, how they determine which ads and other content are shown to consumers, whether and how they apply algorithms or data analytics to personal and demographic information, and how their practices impact children and teens.

The companies that were ordered to respond own some of the most recognizable household names in social media and streaming. They are Amazon (Twitch), Meta (Facebook and Instagram), YouTube, X (Twitter), Snap (Snapchat), ByteDance (TikTok), Discord, Reddit, and WhatsApp.

Some of the specific information that the FTC was looking for included:

  • How social media and video streaming services collect, use, track, estimate, or derive personal and demographic information.
  • How they determine which ads and other content are shown to consumers.
  • Whether they apply algorithms or data analytics to personal information.
  • How they measure, promote, and research user engagement.
  • How their practices affect children and teens.

The conclusions seemed to upset the FTC, but we weren’t even mildly surprised:

“The amount of data collected by large tech companies is simply staggering. They track what we read, what websites we visit, whether we are married and have children, our educational level and income bracket, our location, our purchasing habits, our personal interests, and in some cases even our health conditions and religious faith. They track what we do on and off their platforms, often combining their own information with enormous data sets purchased through the largely unregulated consumer data market.”

The FTC also mentions that some of these companies increasingly rely on hidden pixels and other means of tracking visitors, not only on their own websites but also on others’, to track our behavior down to every click.

Some of the responders were even unable to identify all the data points they collected or all of the third parties they shared that data with.

The report comes to the conclusion that self-regulation is not the answer to these problems. We can see all around the news that with the rise of the artificial intelligence platforms that many of these companies are developing, the incentive to use our data for their own purposes is only growing.

“Predicting, shaping, and monetizing human behavior through commercial surveillance is extremely profitable.”

US Federal Trade Commission, “A Look Behind the Screens: Examining the Data Practices of Social Media and Video Streaming Services”

This has created a number of companies that have a huge influence on our economy, our democracy, and our society as a whole. These are companies that, it appears, believe they can dodge the obligation to provide the Commission with complete answers, hiding their collection practices behind limited, incomplete, or unhelpful responses that appear to have been carefully crafted to be self-serving and to avoid revealing key pieces of information.

While their services provide us with the option to connect with the world from the palms of our hands, many of them have been at the forefront of building the infrastructure for mass commercial surveillance. They have access to information about every aspect of our lives and our behavior.

This comes not only at a cost to our privacy; it also harms the competitive landscape and affects the way we communicate and our well-being, especially the well-being of children and teens.

Some of the key findings of the report are:

  • Many of the companies collected and could indefinitely retain troves of data from and about users and non-users, and they did so in ways consumers might not expect.
  • Many of the responding companies relied on selling advertising services to other businesses based largely on using the personal information of their users. The technology powering this ecosystem took place behind the scenes and out of view to consumers, posing significant privacy risks.
  • Algorithms, data analytics, and/or AI were applied to users’ and non-users’ personal information. These technologies controlled everything from content recommendation to search, advertising, and inferring personal details about users, while the users lacked any meaningful control over how personal information was used for AI-fueled systems.
  • The trend among the responding companies was that they failed to adequately protect children, and especially teens, who are not covered by the Children’s Online Privacy Protection Rule (COPPA).

The FTC’s recommendations focus on legislation requiring transparency about data usage, limiting the disclosure of sensitive personal data for advertising purposes, and protecting young users from the information-absorbing tech giants.

For more details and specific answers from each of the companies, you can check the 129-page report.

I want to close this off with a quote from the report that we whole-heartedly agree with:

“Our privacy cannot be the price we pay to accomplish ordinary basic daily activities”

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Cyrus, powered by Malwarebytes.

Tor anonymity compromised by law enforcement. Is it still safe to use?

Despite the Tor network being generally considered an essential tool for anonymous browsing, German law enforcement agencies have managed to de-anonymize Tor users after putting surveillance on Tor servers for months.

Before we go into what the agencies did, let’s take a look at some basics of Tor.

How Tor works

On a daily basis, millions of people use the Tor network to browse privately and visit websites on the dark web. Tor enhances privacy by directing internet traffic through a minimum of three randomly chosen routers, or nodes. During this process, user data is wrapped in multiple layers of encryption, which are peeled away one at a time before the traffic reaches its destination via the exit node, ensuring a user’s activities and IP address remain confidential and secure.

Here’s a closer look at how this mechanism works:

  • Entry node: When you start browsing with Tor, your connection is first directed to an entry node, also known as a guard node. This is where your internet traffic enters the Tor network, with your IP address only visible to this node.
  • Middle nodes: After entering the Tor network, your traffic passes through one or more middle nodes. These nodes are randomly selected, and each one knows only the IP address of the previous relay and the next relay. This prevents any single relay from knowing the complete path of your internet activity.
  • Exit node: The last relay in the chain is the exit node. It decrypts the information from the middle relays and sends it out to the destination. Importantly, the exit node strips away layers of encryption to communicate with the target server but does not know the origin of the traffic, ensuring that your IP address remains hidden.

This layered security model, like peeling an onion, is where Tor gets its name. Tor is an acronym for The Onion Router. Each layer ensures that none of the nodes in the path knows where the traffic came from and where it is going, significantly increasing the user’s anonymity and making it exceedingly difficult for anyone to trace the full path of the data.
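The layering described above can be sketched in a few lines. This is a toy model only: the keyed XOR stream below stands in for real cryptography (Tor uses AES inside TLS), and it exists purely to show how each relay strips exactly one layer.

```python
import hashlib

def xor_layer(data: bytes, key: bytes) -> bytes:
    # Derive a keystream from the key; XOR is its own inverse,
    # so applying the same layer twice removes it. Toy crypto only.
    stream = hashlib.sha256(key).digest() * (len(data) // 32 + 1)
    return bytes(a ^ b for a, b in zip(data, stream))

def build_onion(message: bytes, keys) -> bytes:
    # The client wraps the message once per relay, innermost first,
    # so the exit node's layer goes on first and comes off last.
    for key in reversed(keys):
        message = xor_layer(message, key)
    return message

keys = [b"entry-node", b"middle-node", b"exit-node"]  # one shared key per hop
onion = build_onion(b"GET /index.html", keys)

# Each relay peels exactly one layer; only after the exit node's
# layer is removed does the plaintext request appear.
for key in keys:
    onion = xor_layer(onion, key)
print(onion)  # b'GET /index.html'
```

The point of the construction is that the middle relay sees only ciphertext in and ciphertext out, with no idea of the origin or the destination of the data.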

Although many researchers theoretically considered that de-anonymization was possible, in general it was thought practically unfeasible if a user followed all the necessary security measures.

How did the de-anonymization work?

German news outlet NDR reports that law enforcement agencies got hold of data while performing server surveillance which was processed in such a way that it completely cancelled Tor anonymity. The reporters saw documents that showed four successful measures in just one investigation.

After following up on a post on Reddit and two years of investigation, the reporters came to the conclusion that Tor users can be de-anonymized by correlating the timing patterns of network traffic entering and exiting the Tor network, combined with broad and long-term monitoring of Tor nodes in data centers.

If you can monitor the traffic at both the entry and the exit points of the Tor network, you may be able to correlate the timing of a user’s true IP address to the destination of their traffic. To do this, one typically needs to control or observe both the entry node and the exit node used in a Tor circuit. This does not work when connecting to onion sites however, because the traffic would never leave the Tor network in such a case.

The timing analysis uses the size of the data packets that are exchanged to link them to a user. You can imagine that with access to a middle node, you can tie the incoming and outgoing data packets to one user. While this doesn’t reveal any of the content of the messages, this could help in establishing who’s communicating with who.
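A traffic-correlation attack of the kind described above can be illustrated with a toy score: for each packet seen at the entry side, check whether a packet appears at the exit side roughly one network latency later. All timestamps below are made up, and real attacks need visibility into actual entry and exit nodes; nothing here touches the Tor network.

```python
def correlation_score(entry_times, exit_times, latency=0.5, tolerance=0.2):
    """Fraction of entry-side packets with a matching exit-side packet
    roughly one latency later. A toy stand-in for real traffic analysis."""
    matched = 0
    for t in entry_times:
        if any(abs((e - t) - latency) <= tolerance for e in exit_times):
            matched += 1
    return matched / len(entry_times)

# Entry-side timestamps for a suspect user, and two exit-side flows.
entry = [0.0, 1.1, 2.4, 3.9, 5.0]
same_flow = [t + 0.5 for t in entry]   # the same circuit, shifted by latency
other_flow = [0.9, 1.9, 4.4]           # an unrelated user's traffic

print(correlation_score(entry, same_flow))   # high: likely the same user
print(correlation_score(entry, other_flow))  # low: probably unrelated
```

In practice, attackers also fold in packet sizes and long observation windows, which is exactly why the small number of real-world nodes matters so much.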

Tor is still safe, says Tor

The problem that Tor faces lies in the fact that it was designed with hundreds of thousands of different nodes all over the world in mind. In reality, there are about 7,000 to 8,000 active nodes, and many of them are in data centers. As a consequence, the “minimum of three” often means “only three,” which increases the potential effectiveness of timing attacks.

The Tor Project said:

“The Tor Project has not been granted access to supporting documents and has not been able to independently verify if this claim is true, if the attack took place, how it was carried out, and who was involved.”

Based on the information provided, the Tor Project concluded that one user of the long-retired application Ricochet was de-anonymized through a guard discovery attack. This was possible, at the time, because the user was running a version of the software that had neither Vanguards-lite nor the Vanguards add-on, which were introduced to protect users from this type of attack.

This means they feel confident in claiming that Tor is still safe to use. However, we would like to add that users should be aware that several law enforcement agencies, and cybercriminals, run Tor nodes, which can pose risks.

If you use Tor, here are some basic rules to stay as anonymous as possible:

  • Always download Tor Browser from the official Tor Project website.
  • Keep Tor Browser updated to the latest version for security patches.
  • Use the default Tor Browser settings – don’t install add-ons or change the settings unless you know what you are doing and what the implications are.
  • Enable the “Safest” security level in Tor Browser settings.
  • Only visit HTTPS-encrypted websites.
  • Avoid logging into personal accounts or entering personal information. If you post your personal information somewhere, that undermines the whole idea of staying anonymous.
  • Be extremely cautious about downloading files or clicking links, even more so on the Dark Web.
  • Disable JavaScript if possible, although this may break some sites.
  • Clear cookies and local site data after each browsing session.
  • Use a reputable VPN in addition to Tor for an extra layer of encryption.
  • Run up-to-date antivirus/anti-malware software on your device.

We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Walmart customers scammed via fake shopping lists, threatened with arrest

Shopping online or attempting to get in touch with a store is a little bit like walking through a minefield: you might get lucky, or you might take a wrong step and get scammed.

Case in point, a malicious ad campaign is abusing Walmart Lists, a kind of virtual shopping list customers can share with family and friends, by embedding rogue customer service phone numbers with the appearance and branding of the official Walmart site.

The scam ends in accusations of money laundering, threats of an arrest warrant, and pressure to transfer money into a Bitcoin wallet.

In this blog, we walk through the different parts of this well-executed scheme and provide helpful tips to avoid falling for this scam. We have already reported the malicious Google ads and informed Walmart of the abuse of its customers’ shopping lists.

Malicious Google ads

When searching for Walmart’s phone number, the top result on Google is for an ad (sponsored). Unless you manually checked “My Ad Center”, you would have no idea who the ad belongs to.

More importantly, because the ad snippet shows the https://www.walmart.com address, you might wrongly assume that it is a genuine advert from Walmart.

Figure 1: A Google search for Walmart’s phone number on a mobile device
Figure 2: A Google search for Walmart’s phone number on a desktop computer

Walmart Lists

In previous cases, we have seen malicious advertisers impersonate brands by displaying their official website in the ad URL. However, this is a little bit different as the ad’s final URL actually belongs to Walmart.

On mobile, due to space limitations in the address bar, users will see walmart.com, while on desktop they will see the full URL. In both instances, this is a strong indicator of legitimacy, one which people have been trained to check for years. This is not an impostor website, it is the real one, so one might think that whatever is shown on the page must also be legitimate.
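The reason this check fails users here is subtle: the domain test that careful users (and many security tools) perform genuinely passes. A small illustration with Python’s standard urllib, using made-up URLs for the impostor case:

```python
from urllib.parse import urlparse

def same_site(url: str, expected: str) -> bool:
    """True if the URL's host is the expected domain or a subdomain of it."""
    host = (urlparse(url).hostname or "").lower()
    return host == expected or host.endswith("." + expected)

# A classic impostor domain fails the check (hypothetical URL)...
print(same_site("https://walmart-support.example.com/list", "walmart.com"))

# ...but a shared-list URL really is on walmart.com, so it passes,
# even though the *content* of the list is attacker-controlled.
print(same_site("https://www.walmart.com/lists/shared/abc123", "walmart.com"))
```

In other words, domain checks catch lookalike sites, but they cannot catch attacker-controlled content hosted on the legitimate site itself.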

Figure 3: A fake Walmart shopping list as seen on a phone
Figure 4: A fake Walmart shopping list as seen from a desktop computer

Lists is a feature that registered Walmart customers can use to add items they might be interested in purchasing. To create a list, you first need to register for an account, but it is free and does not require any form of authentication or payment method.

The scammers have created several accounts and fake lists where, instead of items, they can add custom text. Their goal is to trick people into thinking this is a contact page for Walmart customer service, which they achieve by using fake names like “Mr Walmart S.” and entering their own phone number on the page.

Finally, they can use a link to share this list with others, and this is the link they will use for the Google ads. As such, the ad actually does not violate Google’s policy per se since the branded ad does go to the brand’s website. But, as we know, this is all fake.

What happens next?

People who dial any of those supposed customer service phone numbers shown on the Walmart lists will be directed to a call center in Asia. On the other end of the line scammers impersonating Walmart will get their information (name, email address) before reviewing their details.

As it happens, victims will be told that a large purchase was recently made on their account. That’s the scare tactic that will allow scammers to request more personal information related to their banking, and even their Social Security number.

The call center uses several different people, each of whom plays a different role in processing victims:

  • the Walmart customer service representative
  • the higher authority or “supervisor”
  • a fake bank employee
  • a fake FTC investigator

When we called, the scammers claimed that our account had been used to transfer huge amounts of money to narco trafficking countries:

Now, all the banking found which was created using your personal information are transferring huge amounts of money to the narco trafficking countries such as Columbia, Mexico, some Saudi Arabia countries and Columbia.

As a result, we were told that there was an active arrest warrant against us:

Otherwise we have to take you under the custody for [inaudible] purpose, because there is an active arrest warrant also available on your name.

We were threatened several times and warned to go to our bank to withdraw as much money as the bank would allow in order to transfer those funds into a Bitcoin wallet. Oddly enough, the scammer mentioned there wouldn’t be any taxes on the transaction, which really would be the last concern on the mind of someone about to be arrested:

Yes, I know Sir, it’s not a checking account, it’s a Bitcoin wallet. The machines are… is installed by the [inaudible] for the anti money laundering charges. So you don’t, like, get any taxes on it as well as, the transactions done are anti money laundering. So you have to create your own wallet on that machine. How you can create it using your personal information, I will guide you step by step. I will be on the line with you all the time, you don’t need to worry about that. OK?

It’s quite scary to see how anyone can go from wanting to return an item or speak to a Walmart associate, to being falsely accused of crimes and pressured to transfer money. It’s also a reality check that scammers are constantly preying on the vulnerability of innocent people.

How to avoid falling for scams

In a fast paced world where technology can be abused, it is important to keep certain things in mind.

  • Sponsored results, or ads, can be dangerous due to ongoing and relentless malvertising campaigns. Learn to tell a regular search result from an ad, and if possible avoid clicking on ads.
  • Even if you are on an official website, the content you see may not be legitimate. This is a particularly hard one because people will naturally trust that the brand’s own site will be safe. But scammers and spammers can inject content in comments, or custom pages.
  • Scare tactics and pressure to act quickly are almost always malicious. Unfortunately, most brands also have these promotions that expire soon and customers believe they need to buy the product now or they will lose on a deal. Having said that, your local store will never threaten you on the phone with an arrest warrant.
  • Scammers will often tell their victims to keep everything confidential and not discuss it with other family members or bank clerks. This is only in the scammers’ interest to not be exposed; by all means you should ask for clarification and seek help from others.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Snapchat wants to put your AI-generated face in its ads

Snapchat is reserving the right to use your selfie images to power Cameos, Generative AI, and other experiences on Snapchat, including ads, according to our friends at 404 Media.

The Snapchat Support page about its My Selfie feature says:

“You’ll take selfies with your Snap camera or select images from your camera roll. These images will be used to understand what you look like to enable you, Snap and your friends to generate novel images of you. If you’re uploading images from the camera roll, only add images of yourself.”

A Snapchat spokesperson told 404 Media:

“You are correct that our terms do reserve the right, in the future, to offer advertising based on My Selfies in which a Snapchatter can see themselves in a generated image delivered to them…“As explained in the onboarding modal, Snapchatters have full control over this, and can turn this on and off in My Selfie Settings at any time.”

However, according to 404 Media the “See My Selfie in Ads” feature is on by default, so you’d have to know about the feature in the first place in order to turn it off.

We also wonder how Snapchat plans to check whether the user is uploading real selfies and not pictures of someone else.

Once again, we see this assumption by a social media platform that it’s OK to use content posted on their platform for training Artificial Intelligence (AI). It isn’t!

It’s even worse to do it without explicit user consent. Hiding it somewhere deep down in a mountain of legalese called a privacy policy that nobody actually reads is not real consent. This lack of transparency and control over personal data is upsetting. The realization that some individuals may not want their likeness used for commercial purposes or to train systems they don’t support doesn’t seem to bother anyone at these social media giants.

How to change your My Selfie settings

You can change or clear your My Selfie in your Settings:

  1. Tap the gear icon ⚙ in My Profile to open Settings
  2. Tap My Selfie under My Account
  3. Tap Update My Selfie or Clear Selfie

Why AI training on your images is bad

We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.

  • Deepfakes: AI generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or defraud people you know.
  • Metadata: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could be sold to third parties or used in ways the photographer never intended.
  • Intellectual property: Never upload anything you didn’t create or don’t own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
  • Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
  • Facial recognition: Although facial recognition is not the hot topic it once was, it still exists. And actions or statements attributed to images of you (real or not) may be linked to you personally.
  • Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.
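On the metadata point above: JPEG photos carry Exif data (including GPS coordinates) in a so-called APP1 segment, which can be removed before uploading. Here is a minimal, dependency-free sketch of that idea; the segment layout follows the standard JPEG format, but real tools handle many more edge cases:

```python
import struct

def strip_exif_jpeg(data: bytes) -> bytes:
    """Remove APP1 (Exif/XMP) segments from a JPEG byte string.

    A JPEG file is a sequence of segments: 0xFFD8 (start of image),
    then marker segments (0xFFEn holds application data, with Exif
    living in APP1 = 0xFFE1), then 0xFFDA (start of scan) followed by
    the compressed image data. Dropping APP1 removes GPS coordinates
    and other camera metadata while leaving the picture intact.
    """
    if data[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG file")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(data):
        if data[i] != 0xFF:
            raise ValueError("corrupt segment marker")
        marker = data[i + 1]
        if marker == 0xDA:  # start of scan: copy the rest verbatim
            out += data[i:]
            break
        # Segment length is big-endian and includes its own two bytes.
        (length,) = struct.unpack(">H", data[i + 2:i + 4])
        if marker != 0xE1:  # keep everything except APP1 (Exif/XMP)
            out += data[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Dedicated tools such as Pillow or exiftool do this job more robustly; the point is simply that the location data sits in one removable segment of the file.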

If you want to continue using social media platforms, that is obviously your choice, but consider the above when uploading pictures of yourself, your loved ones, or even complete strangers.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

iOS 18 is out. Here are the new privacy and security features

On September 16, 2024, Apple released iOS 18. Besides a lot of exciting new features, iOS 18 comes with some privacy and security enhancements.

One of the most promising new features is the new Passwords app. Built on the foundation of Apple’s password management system Keychain, Passwords makes it easier for users to access stored passwords and get an overview of their credentials.

Passwords App

One thing we often hear when we recommend the use of a password manager is that it’s too complicated. And, admittedly, many of them come with a learning curve. But Apple has made some steps in the right direction here.

Apple will also warn users if their credentials have been caught up in a data breach, so they can change the compromised password. In addition, users with a weak or reused password will be warned to pick a better one. Current users of the AutoFill function should notice that their passwords have automatically been added to the Passwords app.

iOS 18 also provides users with new tools to manage who can see their apps, how their contacts are shared, and how their iPhone connects to accessories. One of those tools allows users to adjust settings so that app notifications and content can’t inadvertently be seen by others. Another new feature is the ability to hide an app, which moves it to a locked, hidden apps folder that only the main user can access. Basic functional apps can’t be hidden, but generally speaking, if an app came from the App Store, it can be hidden.

Hidden apps can be locked and unlocked with Face ID, Touch ID, or the device passcode, although there are a few exceptions. Account holders under age 13 can’t lock or hide an app, so they can’t use it to dodge a parent’s watchful eye. Users between the ages of 13 and 18 can use these functions, but parents can still see which apps were downloaded and how much they are used.

Contact sharing is a lot more configurable, which makes life easier for those of us who use one device for both work and private matters. If, for example, a person uses an app solely for work, they might decide to share only work-related contacts with that app. Access can be updated as desired. Apple users can now see at a glance how many apps have access to data like location services, tracking, calendars, files and folders, contacts, and health information. Tapping a particular category shows a list of which apps have what level of access, such as limited or full.

Location services access overview

iOS 18 also prepares your device for Apple Intelligence, which is expected next month. Apple describes it like this:

“Apple Intelligence, the personal intelligence system that combines the power of generative models with personal context to deliver intelligence that is incredibly useful and relevant while protecting users’ privacy and security.”

Apple Intelligence is an artificial intelligence (AI) platform developed by Apple. Its features include on-device processing, so it’s aware of your personal data but doesn’t require Apple to collect or store it, and a system designed to draw on larger server-based models to handle more complex requests while still protecting user privacy.

I realize this sounds a lot like Microsoft’s Recall feature, which was delayed after privacy and security concerns. We haven’t seen any pushback of that magnitude for Apple Intelligence. The main difference here is the regular “screenshots” that Microsoft wanted Recall to take to help users retrace their steps later.

The privacy protections Apple promises can be important to users who want to have access to AI but are concerned about having their private data used to train models, which is something even AI enthusiasts are worried about to some extent.

To take those worries away, Apple created Private Cloud Compute (PCC), a cloud intelligence system designed specifically for private AI processing, which Apple says extends the privacy and security of Apple devices into the cloud.

A handy change for some users might be the new guest access for the Home app, which makes it easier for other members of your household to use your device to control any accessories connected to the Home app.

One safety feature I’m not that thrilled about is Activation Lock. It is intended to block unauthorized repairs with parts from other iPhones and deter the resale of stolen components. It links key parts like batteries, cameras, and displays to the original owner’s Apple account, making it harder to use or sell stolen parts. I fear this will mainly make it harder for users to get their devices repaired outside of Apple-controlled channels.

More new features of iOS 18 are discussed at length in this Apple newsroom article.

To check if you’re using the latest software version, go to Settings > General > Software Update. You want to be on iOS 18.0 or iPadOS 18.0, so update now if you’re not. It’s also worth turning on Automatic Updates if you haven’t already. You can do that on the same screen.

Available iPadOS 18 update

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

23andMe to pay $30 million in settlement over 2023 data breach

Genetic testing company 23andMe will pay $30 million to settle a class action lawsuit over a 2023 data breach that exposed some customers’ information, including names, birth years, and ancestry details.

In October 2023, we reported on how information belonging to as many as seven million 23andMe customers turned up for sale on criminal forums following a credential stuffing attack against 23andMe.
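Credential stuffing works by replaying username and password pairs leaked from one breach against logins on other sites, which succeeds whenever people reuse passwords. Services like Have I Been Pwned let users check whether a password has appeared in a known breach without ever revealing it, using a k-anonymity range query: only the first five characters of the password’s SHA-1 hash are sent, the service returns all known suffixes for that prefix, and the match happens locally. A minimal sketch of the client-side hashing step (the service and scheme are real; the helper function name is illustrative):

```python
import hashlib

def hibp_range_query_parts(password: str) -> tuple[str, str]:
    """Split a password's SHA-1 digest for a k-anonymity range query.

    Only the 5-character prefix would be sent to the breach-lookup
    service; it answers with every known hash suffix sharing that
    prefix, and the comparison is done locally, so the full password
    hash never leaves the machine.
    """
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    return digest[:5], digest[5:]
```

The design choice matters: with roughly a million possible prefixes, each one maps to many unrelated passwords, so the service learns almost nothing about which password is being checked.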

23andMe said that cybercriminals had stolen profile information that users had shared through its DNA Relatives feature, an optional service that lets customers find and connect with genetic relatives.

In December 2023, 23andMe admitted that some genetic and health data might have been accessed during that breach. To dodge responsibility, the company wrote a letter to legal representatives of those affected by the breach, laying the blame at the feet of victims themselves.

23andMe also neglected to tell customers with Chinese and Ashkenazi Jewish ancestry that the cybercriminal appeared to have specifically targeted them, posting their information for sale on the dark web.

In January 2024, customers filed a class action lawsuit against 23andMe in a San Francisco court, alleging the company failed to protect their privacy. The result of that lawsuit is the settlement.

What immediately jumped out in the settlement is the title of one of the chapters:

“THE SETTLEMENT IS THE RESULT OF ZEALOUS ADVOCACY AND SKILLFUL NEGOTIATION”

What does that mean? Well, the $30 million is apparently all that 23andMe can afford to pay. And that’s only because the expectation is that cyberinsurance will cover $25 million.

The company’s market value has plummeted and its revenue has declined. That decline had already set in before the incident, but the breach certainly didn’t help the situation.

The court has not yet approved the settlement, but it’s expected that 23andMe will pay $30 million into a fund for customers whose data was compromised, as well as provide them with identity and genetic monitoring.

Privacy regulators in other countries, like Canada and the UK, have announced they will undertake a joint investigation into the data breach.

According to Malwarebytes’ data, over 3 million people were affected by the data breach, so none of the victims should expect to get rich because of this settlement.

On the dark web, the data is offered for sale in three separate data sets: a general set of 2,763,569 records, one of users with Ashkenazi heritage (835,708 records), and one allegedly belonging to China-based users of 23andMe (68,541 records).

Check your digital footprint

If you want to find out whether your personal data was exposed in this breach, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you used to register at 23andMe) and we’ll send you a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.