IT NEWS

Upload a video selfie to get your Facebook or Instagram account back

Meta, the company behind Facebook and Instagram, says it’s testing new ways to use facial recognition—both to combat scams and to help restore access to compromised accounts.

The social media giant is testing the use of video selfies and facial recognition to help users get their hijacked accounts back. Social media accounts are often lost when users forget their password, switch devices, or inadvertently (or even willingly) give their credentials to a scammer.

Another reason for Meta to use facial recognition is what it calls “celeb-bait ads.” Scammers often try to use images of public figures to trick people into engaging with ads that lead to fraudulent websites.

Since it’s trivial to set up an account that looks like a celebrity, scammers use this to attract visitors for various reasons, ranging from like-farming (a method to raise the popularity of a site or domain) to imposter scams, where accounts that seem to belong to celebrities reach out to you in order to defraud you.

5 Jon Hamm Facebook accounts with a different selfie
Several accounts that seem to belong to the same actor

Meta’s existing ad review system uses machine learning to review the millions of ads that run across Meta platforms every day. With a new facial recognition addition to that system, Meta can compare faces in an ad to the public figure’s Facebook and Instagram profile pictures, and then block the ad if it turns out to be fake.
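Meta hasn’t published the internals of this comparison, but face-matching systems typically reduce each face to an embedding vector and compare the vectors with a similarity score. A minimal sketch of that idea in Python (the embeddings and the threshold below are made up for illustration, not Meta’s actual values):

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings produced by a face-recognition model.
profile_embedding = [0.12, 0.80, 0.35, 0.44]
ad_face_embedding = [0.10, 0.78, 0.40, 0.41]

THRESHOLD = 0.95  # illustrative cutoff only

if cosine_similarity(profile_embedding, ad_face_embedding) >= THRESHOLD:
    print("faces match: flag ad for review")
```

In a real system the embeddings come from a trained neural network, and the threshold is tuned to balance false positives against missed scam ads.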

According to Meta:

“Early testing with a small group of celebrities and public figures shows promising results in increasing the speed and efficacy with which we can detect and enforce against this type of scam.”

Over the coming weeks, Meta intends to start informing a larger group of celebrities whose images have been used in scam ads that they will be enrolled in the new scheme, and to allow them to opt out if they want.

The problem of celeb-bait ads is a big one and I applaud Meta for trying to do something about it. The account recovery by video selfie, however, is something I’m far less fond of.

The idea of using facial recognition on social media is not new. In 2021, Meta shut down the Face Recognition system on Facebook as part of a company-wide move to limit the use of facial recognition in their products.

In the newly-announced system, the user can upload a video selfie, and Meta will use facial recognition technology to compare the selfie to the profile pictures on the account they’re trying to access. This is similar to identity verification tools you might already use to unlock your phone or access other apps. 

I do have a few questions though:

  • With the current development of deepfakes, how long will it take for this technology to be used for the exact opposite? Stealing your account by showing the platform a deepfake video of your face.
  • Do I want to provide Meta with even more material that might end up getting used to train its Artificial Intelligence (AI) models? Although Meta claims to delete the facial data after comparison, there are concerns about the collection and temporary storage of biometric information.
  • People have a tendency to post their best pictures and not change them as they grow older. Is a comparison always possible?
  • Is normalizing the use of biometrics for something as trivial as social media really necessary? Right now I only use a video selfie to approve bank transfers of over 1000 Euro (US$ 1075).  

There are probably good reasons why Meta is not rolling out this option in the UK or the EU, where it says it needs to “continue conversations with regulators” first. The same is true for Illinois and Texas, likely due to stricter privacy laws in those states.

Surely there are better ways to reclaim a stolen account. What do you think? Let us know in the comments.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

This industry profits from knowing you have cancer, explains Cody Venzke (Lock and Code S05E22)

This week on the Lock and Code podcast

On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer.

This is because of the largely unchecked “data broker” industry.

Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements.

Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you’re driving, your car’s gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another.

This is just a glimpse of what is happening to essentially every single adult who uses the Internet today.

So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large likely knows very little in return.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising.

“We’re seeing data that’s been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Internet Archive attackers email support users: “Your data is now in the hands of some random guy”

Those who hacked the Internet Archive haven’t gone away. Users of the Internet Archive who have submitted helpdesk tickets are reporting replies to the tickets from the hackers themselves.

Internet Archive, best known for its Wayback Machine, is a digital library that allows users to look at website snapshots from the past. It is often used for academic research and data analysis. Earlier in October, the Internet Archive suffered a data breach and DDoS attack.

During that breach the attackers were able to steal a user authentication database containing 31 million records.

While the Wayback Machine is almost fully functional again, in a recent turn of events the attackers have started replying to those users that have opened a support ticket with the Internet Archive.

This is one of the replies a user reported:

“It’s dispiriting to see that even after being made aware of the breach 2 weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets.

As demonstrated by this message, this includes a Zendesk token with perms to access 800K+ support tickets sent to info@archive.org since 2018.

Whether you were trying to ask a general question, or requesting the removal of your site from the Wayback Machine—your data is now in the hands of some random guy. If not me, it’d be someone else.

Here’s hoping that they’ll get their shit together now.”

An Application Programming Interface (API) token is like a special pass that allows a computer program or app to access and use services provided by another program or website. It is used as proof that the user or app has permission to access the service.
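As a rough illustration (the endpoint, token, and header scheme below are hypothetical, not the Internet Archive’s or Zendesk’s actual setup), a program typically presents its API token in a request header. The server checks only the token, not who is sending it, which is why a leaked token is so dangerous:

```python
import urllib.request

# Hypothetical values for illustration only.
API_TOKEN = "example-secret-token"
url = "https://example.zendesk.com/api/v2/tickets.json"

# Whoever holds the token can build this exact request; the service
# sees a valid credential and grants the same access the real app has.
request = urllib.request.Request(
    url,
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)

print(request.get_header("Authorization"))
```

This is why rotating (revoking and reissuing) exposed tokens matters: the old pass stops working for everyone who copied it.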

It appears as if the Internet Archive uses Zendesk to manage its support tickets. Having the Internet Archive’s Zendesk token would certainly explain why the hackers can reply to customer tickets.

Changing a Zendesk API token is not very hard, but it can have unexpected consequences, so it may require some advance planning to minimize potential disruptions. This could be why the Internet Archive may not have gotten round to it yet. But not changing API keys that would grant the attackers access to the organization’s important infrastructure like Zendesk would be a serious omission.

On October 18, 2024, Internet Archive founder Brewster Kahle posted an update stating that the stored data of the Internet Archive is safe and that work on safely resuming services is in progress.

“We’re taking a cautious, deliberate approach to rebuild and strengthen our defenses. Our priority is ensuring the Internet Archive comes online stronger and more secure.”

So far, the Internet Archive has not responded to the new developments, and the motivation for the attacks on the Internet Archive remains unclear. We’ll keep you posted.

A week in security (October 14 – October 20)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Unauthorized data access vulnerability in macOS is detailed by Microsoft

The Microsoft Threat Intelligence team disclosed details about a macOS vulnerability, dubbed “HM Surf,” that could allow an attacker to gain access to the user’s data in Safari. The data the attacker could access without users’ consent includes browsed pages, along with the device’s camera, microphone, and location.

The vulnerability, tracked as CVE-2024-44133, was fixed in the September 16 update for Mac Studio (2022 and later), iMac (2019 and later), Mac Pro (2019 and later), Mac mini (2018 and later), MacBook Air (2020 and later), MacBook Pro (2018 and later), and iMac Pro (2017 and later).

It is important to note that this vulnerability would only impact Mobile Device Management (MDM) managed devices. MDM managed devices are typically subject to centralized management and security policies set by the organization’s IT department.

By exploiting this vulnerability, an attacker could bypass the macOS Transparency, Consent, and Control (TCC) technology and gain unauthorized access to a user’s protected data.

Users may notice Safari’s TCC in action when they browse a website that requires access to the camera or the microphone. They may see a prompt like this one:

Safari TCC prompt
Image courtesy of Microsoft

What Microsoft discovered was that Safari keeps its own separate TCC policy, stored in various local files.

Microsoft then figured out it was possible to modify these sensitive files by swapping the current user’s home directory back and forth. The files are protected by TCC, but by changing the home directory, modifying the files, and then restoring the original home directory, an attacker can get Safari to use the modified files.

The exploit only works on Safari because third-party browsers such as Google Chrome, Mozilla Firefox, or Microsoft Edge do not have the same private entitlements as Apple applications. Therefore, those apps can’t bypass the macOS TCC checks.

Microsoft noted that it observed suspicious activity in the wild associated with the Adload adware that might be exploiting this vulnerability. But it could not be entirely sure whether the exact same exploit was used.

“Since we weren’t able to observe the steps taken leading to the activity, we can’t fully determine if the Adload campaign is exploiting the HM surf vulnerability itself. Attackers using a similar method to deploy a prevalent threat raises the importance of having protection against attacks using this technique.”

We encourage macOS users to apply these security updates as soon as possible if they haven’t already.


Malwarebytes for Mac takes out malware, adware, spyware, and other threats before they can infect your machine and ruin your day. It’ll keep you safe online and your Mac running like it should.

23andMe will retain your genetic information, even if you delete the account

Deleting your personal data from 23andMe is proving to be hard.

There are good reasons for people wanting to delete their data from 23andMe: The DNA testing platform has a lot of problems, so let’s start with a recap.

A little over a year ago, cybercriminals put up information belonging to as many as seven million 23andMe customers for sale on criminal forums following a credential stuffing attack against the genomics company.

In December 2023, we learned that the attacker was able to directly access the accounts of roughly 0.1% of 23andMe’s users—about 14,000 of its 14 million customers. That’s not many people in itself, but with the breached accounts at their disposal, the attacker used 23andMe’s opt-in DNA Relatives (DNAR) feature—which matches users with their genetic relatives—to access information about millions of other users.

For a subset of these accounts, the stolen data contained health-related information based upon the user’s genetics.

In January 2024, 23andMe had the audacity to lay the blame at the feet of victims themselves in a letter to legal representatives of victims. 23andMe reasoned that the customers whose data was directly accessed re-used their passwords, gave permission to share data with other users on 23andMe’s platform, and that the medical information was non-substantive.

And in September 2024, we found out that the company would pay $30 million to settle a class action lawsuit, as that was all that 23andMe could afford to pay. And that’s only because the expectation was that cyberinsurance would cover $25 million.

As a result, the value of 23andMe plummeted. And last month the company said goodbye to all its board members except for CEO Anne Wojcicki who stood by her plans to take the company private.

This uncertainty about the future of the company and, with that, who will be the future holder of all the customer personal information, has caused a surge of users looking to close their accounts and delete their data.

However, it turns out it’s not as easy as just asking for the data to be removed. You can delete your data from 23andMe, but according to its privacy policy, 23andMe will retain some of that data (including genetic information) to comply with the company’s legal obligations.

“23andMe and/or our contracted genotyping laboratory will retain your Genetic Information, date of birth, and sex as required for compliance with applicable legal obligations, including the federal Clinical Laboratory Improvement Amendments of 1988 (CLIA), California Business and Professions Code Section 1265 and College of American Pathologists (CAP) accreditation requirements, even if you chose to delete your account. 23andMe will also retain limited information related to your account and data deletion request, including but not limited to, your email address, account deletion request identifier, communications related to inquiries or complaints and legal agreements for a limited period of time as required by law, contractual obligations, and/or as necessary for the establishment, exercise or defense of legal claims and for audit and compliance purposes.”

In addition, any information you previously provided and consented to be used in 23andMe research projects cannot be removed from ongoing or completed studies, although the company says it will not use it in any future ones.

This is unfortunate, and is yet another reminder about how once you give information away you cannot always get it back. Let’s hope the policy gets changed and customers are allowed to fully delete their data soon.

It’s still worth deleting as much as possible, though. So here’s how to do that.

How to delete (most of) your data from 23andMe

  • Log into your account and navigate to Settings.
  • Under Settings, scroll to the section titled 23andMe data. Select View.
  • It will ask you to enter your date of birth for extra security. 
  • In the next section, you’ll be asked which personal data, if any, you’d like to download from the company (onto a personal, not public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  • You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically, and you’ll immediately lose access to your account. 

When you set up your 23andMe account, you had the option to either have the saliva sample that you sent to them securely destroyed or to have it stored for future testing. If you chose to store your sample but now want to delete your 23andMe account, the company says it will destroy the sample for you as part of the account deletion process.

Check your digital footprint

If you want to find out if your personal data was exposed through the 23andMe breach, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you used to register at 23andMe) and we’ll send you a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

“Nudify” deepfake bots remove clothes from victims in minutes, and millions are using them

Millions of people are turning normal pictures into nude images, and it can be done in minutes.

Journalists at Wired found at least 50 “nudify” bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. Although there is no sure way to find out how many unique users that represents, it’s appalling, and it’s highly likely there are many more bots than the ones Wired found.

The history of nonconsensual intimate image (NCII) abuse—as the use of explicit deepfakes without consent is often called—started near the end of 2017, when Motherboard (now Vice) found an online video in which the face of Gal Gadot had been superimposed on an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The username of the person who claimed responsibility for the video gave us the name “deepfake.”

Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. Now, with the advancement of AI, more sophisticated methods like Generative Adversarial Networks (GANs) are available to the public.

However, most of the uncovered bots don’t use this advanced type of technology. Some of the bots on Telegram are “limited” to removing clothes from existing pictures, an extremely disturbing act for the victim.

These bots have become a lucrative source of income. Using such a Telegram bot usually requires a certain number of “tokens” to create images. Of course, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or that render low-quality images.

Besides being disturbing, the use of AI to generate explicit content is costly, offers no guarantees of privacy (as we saw the other day when AI Girlfriend was breached), and can even leave you infected with malware.

The creation and distribution of explicit nonconsensual deepfakes raises serious ethical issues around consent, privacy, and the objectification of women, to say nothing of the creation of child sexual abuse material. Italian scientists found explicit nonconsensual deepfakes to be a new form of sexual violence, with potential long-term psychological and emotional impacts on victims.

To combat this type of sexual abuse there have been several initiatives:

  • The US has proposed legislation in the form of the Deepfake Accountability Act. Combined with the recent policy change by Telegram to hand over user details to law enforcement in cases where users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.
  • Some platform policies (e.g. Google banned involuntary synthetic pornographic footage from search results).

However, so far these steps have shown no significant impact on the growth of the market for NCIIs.

Keep your children safe

We’re sometimes asked why it’s a problem to post pictures on social media that can be harvested to train AI models.

We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.

  • Deepfakes: AI generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or defraud people you know.
  • Metadata: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer didn’t intend.
  • Intellectual property: Never upload anything you didn’t create or own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
  • Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
  • Facial recognition: Although facial recognition is not the hot topic it once was, it still exists, and actions or statements attributed to your image (real or not) may be linked to your persona.
  • Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.

If you want to continue using social media platforms, that is obviously your choice, but consider the above when uploading pictures of yourself, your loved ones, or even complete strangers.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Tor Browser and Firefox users should update to fix actively exploited vulnerability

Mozilla has announced a security fix for its Firefox browser which also impacts the closely related Tor Browser.

The new version fixes one critical security vulnerability which is reportedly under active exploitation. To address the flaw, both Mozilla and Tor recommend that users update their browsers to the most current versions available.

Firefox users who have automatic updates enabled should get the new version as soon as they open the browser, or shortly after. Once you’re updated, your version number will be 131.0.3 or higher.

Other users can update their browser by following these instructions:

  • Click the menu button (3 horizontal stripes) at the right side of the Firefox toolbar, go to Help, and select About Firefox/Tor Browser. The About Mozilla Firefox/About Tor Browser window will open.
  • Firefox/Tor Browser will check for updates automatically. If an update is available, it will be downloaded.
  • You will be prompted when the download is complete, then click Restart to update Firefox/Tor Browser.

To update the Tor Browser you have to Connect first or it will fail to fetch the update. The latest version of Tor is 13.5.7.

Tor Browser is up to date
Version number should be 13.5.7 or higher

The vulnerability, tracked as CVE-2024-9680, allows attackers to execute malicious code within the browser’s content process, which is the environment where it loads and renders web content.

About the vulnerability, Mozilla said:

“An attacker was able to achieve code execution in the content process by exploiting a use-after-free in Animation timelines. We have had reports of this vulnerability being exploited in the wild.”

Use after free (UAF) is a type of vulnerability that is the result of the incorrect use of dynamic memory during a program’s operation. If, after freeing a memory location, a program does not clear the pointer to that memory, an attacker can use the error to manipulate the program.
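Python manages memory automatically, so it can’t exhibit a real use-after-free, but the pattern can be simulated with a toy allocator that reuses freed slots, roughly the way a heap does. A stale handle kept after free() then points at whatever was allocated next:

```python
# A toy allocator: freed slots go on a free list and are reused,
# mimicking how a real heap recycles memory.
heap = {}
free_list = []
next_slot = 0

def alloc(value):
    global next_slot
    slot = free_list.pop() if free_list else next_slot
    if slot == next_slot:
        next_slot += 1
    heap[slot] = value
    return slot

def free(slot):
    free_list.append(slot)  # slot becomes reusable; data is not cleared

# The program keeps a handle, frees the slot, but never discards the handle.
handle = alloc("animation timeline")
free(handle)

# An attacker-controlled allocation reuses the freed slot...
attacker = alloc("attacker-controlled data")

# ...so dereferencing the stale handle now yields attacker data.
print(heap[handle])
```

In a memory-unsafe language like the C++ that browsers are written in, that stale “handle” is a raw pointer, and the confusion can be escalated into code execution.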

The AnimationTimeline interface of the Web Animations Application Programming Interface (API) represents the timeline of an animation, where the timeline is a source of time values for synchronization purposes.

Exploitation is said to be relatively easy, requires no user interaction, and can be executed over the network.



AI scammers target Gmail accounts, say they have your death certificate

Several reputable sources are warning about a very sophisticated Artificial Intelligence (AI) supported scam that is bound to trick a lot of people into compromising their Gmail accounts.

The most recent warning comes from Y Combinator CEO Garry Tan, who posted on X saying that scammers use AI voices to tell you someone has issued a death certificate for you and is trying to recover your account.

The scammers claim to be checking that you are alive and whether they should disregard a filed death certificate. If you click “Yes, it’s me” on the fake account recovery screen then you’ll likely lose access to your Google account.

In another recent example, Windows expert Sam Mitrovic was targeted by a very similar AI recovery scam.

He explained how the scam unfolds: It starts when he receives a notification of an alleged Gmail account recovery attempt, followed 40 minutes later by a call. The first time Sam misses the call, but when they try the same thing a week later, Sam answers.

In both cases, the notifications come from the US but the calls show “Google Sydney” as the caller. A polite American voice claims there’s been suspicious activity on Sam’s Gmail account and asks whether Sam was travelling.

The caller says there’s been a login attempt from Germany which raises suspicions, given that Sam is at home in the US. The caller says the login has been successful, and that an attacker has had access to Sam’s account for a week and downloaded account data.

Sam remembers the email and missed call from last week, and has the presence of mind to quickly check the caller ID. It looks like a legitimate Google Assistant number.

But knowing how easy it is to spoof a telephone number and pretend to be calling from that number, Sam asks for an email to confirm that the caller actually works for Google. Some typing is heard against the typical background noise of a call center, and soon enough the email arrives.

Confirmation mail sent by the attacker to prove they are working for the Google Account Security Team
Image courtesy of Sam Mitrovic

The email looks convincing. It comes from a Google domain, has a case number, claims to be from the Google Account Security Team, and it confirms the phone number and the name the caller is using.

While Sam reviews the email, the caller repeatedly says “Hello”. From the pronunciation and the spacing Sam realizes it’s an AI voice and hangs up.

Inspecting the email, Sam found that the scammers are using the legitimate Salesforce CRM (customer relationship management) tool, which allows you to set the sender to whatever you like and send over Gmail/Google servers.

Other targets who took the scam a little further were asked to verify their 2FA, so it stands to reason that the scammers are looking to take over your Google account, but this time for real.

The need to confirm an account recovery, or a password reset, is a notorious method used in phishing attacks. The scammers usually try to trick the target into opening a fake login portal and entering their credentials, supposedly to report the request as one they didn’t initiate.

Is it you trying to recover your account?
Prompt asking: Is it you trying to recover your account?

How to stay safe

There are a few signs you can use to identify this type of scam.

The “To” field of the confirmation email Sam received contains an email address cleverly named GoogleMail[@]InternalCaseTracking[.]com, which is a non-Google domain.
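A quick check you can do (or script) yourself is to extract the domain from the address and compare it against domains you know belong to Google. A minimal sketch in Python; the allowlist below is illustrative, not exhaustive:

```python
# Illustrative allowlist; a real check would use a maintained list.
KNOWN_GOOGLE_DOMAINS = {"google.com", "gmail.com", "googlemail.com"}

def sender_domain(address):
    """Return the lowercased domain part of an email address."""
    return address.rsplit("@", 1)[-1].lower()

def looks_like_google(address):
    domain = sender_domain(address)
    # Match the domain exactly or as a subdomain of a known domain.
    return any(
        domain == known or domain.endswith("." + known)
        for known in KNOWN_GOOGLE_DOMAINS
    )

print(looks_like_google("GoogleMail@InternalCaseTracking.com"))  # False
print(looks_like_google("no-reply@accounts.google.com"))         # True
```

Note that the visible sender address can itself be spoofed, so this check catches only the lazier scams; the authentication headers of the email are more reliable.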

Google Assistant calls usually come from an automated system and only in some cases from a human operator. Google Support, on the other hand, will not contact you unsolicited.

To verify if a security alert is from Google, users can check their Recent security activity:

  • Tap your Gmail profile photo in the top right corner
  • Tap Manage your Google Account
  • Select the Security tab
  • You will see something similar to this:
Review security activity
Here you can find the Review Security Activity button

Any messages claiming to be security alerts from Google that are not listed there will not be from Google.

Do not entertain these scammers for longer than necessary. It doesn’t take them very long to fingerprint your voice, which would allow their AI to impersonate you.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Cyrus, powered by Malwarebytes.

Election season raises fears for nearly a third of people who worry their vote could be leaked

As the United States enters full swing into its next presidential election, people are feeling worried, unsafe, and afraid.

And none of that has to do with who wins.

According to new research from Malwarebytes, people see this election season as a particularly risky time for their online privacy and cybersecurity. Political ads could be hiding online scams, many people feel, and the election, they say, will likely fall victim to some type of “cyber interference.” Amidst this broader turbulence, 32% are “concerned about who could learn [their] vote”—be they family, spouses, or cybercriminals.

For this research, Malwarebytes conducted a pulse survey of its newsletter readers between September 5 and 16, 2024, via the Alchemer Survey platform. In total, 1600 people across the globe responded.

Broadly, Malwarebytes found that:

  • 74% of people “consider US election season a risky time for personal information.”
  • Despite a tight presidential race, a shocking 3% of people said they will not vote because of “privacy or security concerns.”
  • Distrust in political ads is broad—62% said they “disagree” or “strongly disagree” that the information they receive in US election-related ads is trustworthy.
  • The fears around election ads are not just about trustworthiness, but about harm. 52% are “very concerned” or “concerned” about “falling prey to a scam when interacting with political messages.” 
  • 57% have responded to these concerns with action, taking several steps to protect their personal information during this election season.

The electoral process is (forgive us) a lot like cybersecurity: It scares people, it’s hopelessly baroque, and, through a lack of participation, it can produce unwanted results.

Here is what Malwarebytes discovered about the intersection of cybersecurity and elections, with additional guidance on how to protect personal information this season.

Open distrust

Getting more than 70% of people to agree on anything is remarkable. And yet, 74% of survey participants said that they “consider US election season a risky time for personal information.” Drilling further into the data, 56% said they were “extremely concerned” or “very concerned” about the security of their personal information during this election season.

The reasons could be obvious. Unlike any other season in America, election season might bring the highest volume of advertisements sent directly to people’s homes, phones, and email accounts—and the accuracy and speed at which they come can feel invasive. The network of data brokers that political campaigns rely on to target voters with ads is enormous, as one Washington Post reporter found in 2020, with “3,000 data points on every voter.”

Escaping this data collection regime has proven difficult for most people. Just 9.6% of survey participants said they “have not received any election-related ads” this year.

Elsewhere, 60% had received election-related ads through emails, 58% through physical mailers, 55% through text messages, 40% through social media, and 29% through phone calls.

Those ads may be falling on deaf ears, though. When asked whether they trust the information they receive from US election-related ads, just a combined 5% said they “agree” or “strongly agree” with the sentiment.

A focus on cybercrime

While people distrust election-related ads, they also revealed another emotion toward them: fear.

That’s because the majority of survey participants said they were worried that these ads and other political messages could be hiding dangerous scams underneath. Most people (52%) said they were “very concerned” or “concerned” about “falling prey to a scam when interacting with political messages.” 

It’s a well-founded concern as, once again during this election season, cybercriminals are trying to lure Americans into online scams with messages about updated voter registrations, campaign donations, and more.

Survey participants also showed widespread fear about whether cybercriminals could reveal who they voted for.

Remember that 32% of participants said they were worried that someone “could learn about [their] vote.” When asked who, specifically, they were worried about, 73% said cybercriminals. A revealing 2% held fears around their votes being exposed to a family member or a spouse.

Finally, though Malwarebytes did not directly tie the concept of “cybercrime” to the election itself, survey participants were asked about “cyber interference.” When rating their own confidence level in whether the election process will be free from cyber interference, a combined 74% said they were “not very confident” or “not confident at all.”

This statistic should not be interpreted to mean that 74% of people believe the election will be “hacked” or that votes will be switched by an adversarial government—a scenario that has never provably occurred in the US. Instead, it may point to how people interpret “cyber interference.” It could include, for example, the pilfering of personal data for political advertisements, or the wanton online distribution of political disinformation to sway voters.

Taking action

With distrust rampant and anxiety widespread, people are refusing to enter this election season without some precautions.

Two-thirds of survey participants (66%) have either taken steps or plan to take steps to secure their personal data during this election season. Malwarebytes asked about several cybersecurity and online privacy measures that, particularly when facing off against online scams, could protect people from having their accounts taken over, their identities stolen, or their personal information exposed for marketing reasons.

Survey participants took the following measures:

  • 77% enabled Two Factor Authentication (2FA) or Multi-Factor Authentication (MFA) across their accounts
  • 47% actively use a password manager
  • 41% purchased identity theft protection services
  • 31% researched the origins of the campaigns they engage with
  • 24% locked down their social media profiles
  • 12% used a data broker removal service

Conversely, Malwarebytes found a small but critical number of people who will refuse to vote during this election “due to privacy or security concerns”—a combined 3% “agreed” or “strongly agreed” with this sentiment.

Staying safe

There’s good reason this election season for Americans to be concerned about their online privacy and security—but that doesn’t mean that Americans have to spend the next month riddled with anxiety. This month, people can take the following advice to secure their personal information, lock down their sensitive accounts, and, overall, stay safe from malicious scammers and cybercriminals.

  • Watch out for fake emails and text messages. Avoid clicking links in, or otherwise engaging with, political communications unless you reached out first. Instead, go directly to the campaign’s website for information or to donate.
  • Be mindful of sharing personal information. As a general rule, don’t engage in surveys that ask for personal information. You can check what information is already available about you on the dark web with our free Digital Footprint scan or take the first step in removing your personal information from the network of data brokers online with our Personal Data Remover scan.  
  • Avoid robocalls and phone scams. Hackers can spoof phone numbers and impersonate official organizations. Be suspicious of unsolicited phone calls. Immediately hang up, don’t share personal information, and report the phone number.