IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Pinterest tracks users without consent, alleges complaint

Pinterest has received a complaint from privacy watchdog noyb (None of your business) over the unsolicited tracking of its users.

Pinterest allows you to pin images to virtual pinboards, useful for interior design, recipe ideas, party inspiration, and much more. It started as a virtual replacement for paper catalogs to share recipes, but has since grown into a visual search and e-commerce platform.

With that growth came advertisers, each with their own goals for the platform. And as we are all undoubtedly aware, targeted and especially personalized advertising is much more effective than regular advertising.

So, like many other social media platforms before it, Pinterest claimed to have a legitimate interest in using personal data without asking for consent.

The “legitimate interest” argument comes from one of the six lawful bases granted in the European Union’s (EU’s) General Data Protection Regulation (GDPR) which states that processing of personal data is allowed if it is:

“…necessary for the purposes of the legitimate interests pursued by the controller or by a third party, except where such interests are overridden by the interests or fundamental rights and freedoms of the data subject.”

Social media platforms have a habit of claiming to need that ability for economic reasons, to improve their service, or to safeguard security of both users and the platform. But in every case I know of, the Court of Justice of the European Union (CJEU) has ruled against platforms using personal data without consent.

Pinterest users are not made aware of the fact that they can turn off “ads personalisation” under the “privacy and data” settings, according to the complaint. This setting is turned on by default, allowing Pinterest to use information from visited websites and from other third parties to show users personalized ads.

When a complainant filed an access request to find out what data Pinterest had about her, she received a copy of her data on the same day, but quickly realized that it didn’t include any information about the recipients of her data.

Two additional requests made her none the wiser about the categories of data that were shared with third parties, which means that Pinterest failed to adequately respond to the access request under Article 15(1)(c) of the GDPR.

Based on this, noyb has filed a complaint with the French data protection authority (CNIL). The grounds of that complaint are that Pinterest violated Article 6(1) GDPR by processing the complainant’s personal data for personalized advertising on the basis of legitimate interest, and violated Article 15(1)(c) GDPR by failing to provide access to the categories of data shared with third parties.

To turn off personalized ads on Pinterest:

  • Log in to your Pinterest account
  • Click the chevron-down icon at the top-right corner to open your menu
  • Click Settings
  • Select Privacy and data
  • Adjust your personalization settings
  • Click Save.

Pinterest reminds users that this setting does not apply to information about purchases you initiate on Pinterest. More information about this setting is available in Pinterest’s Help Center.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

After concerns about taxpayer info being handed to Facebook, four companies found to have improperly shared data

Four tax preparation software companies failed to comply with government rules that require the sharing of tax-related info to be done only with specific disclosures and full taxpayer consent, according to an audit released by the Treasury Inspector General for Tax Administration (TIGTA) in the United States.

“According to Treasury Regulation § 301.7216-3, tax return information may not be used or disclosed except as specifically permitted or when the taxpayer provides consent.”

The Internal Revenue Service (IRS) partners with tax professionals and other entities that assist taxpayers in meeting their tax obligations. Before partnering with these professionals and entities, the IRS conducts suitability checks. But the IRS is not aware of the full scope of information that an online provider routinely collects beyond what is filed with the IRS, or shares with third parties.

Further, the guidance for obtaining taxpayer consent to use or disclose taxpayer information does not specifically address the use of pixels, such as those used by Facebook and Google to track information on a website.

These pixels are basically a piece of code that website owners can place on their website. The pixel collects data that helps businesses track conversions from ads, optimize ads, build target audiences for future ads, and re-market to people who have already taken some kind of action on their website. That's nice for the advertisers, but the combined information of all these pixels potentially provides the recipients with an almost complete portrait of your browsing behavior.
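To make the mechanism concrete, here is a minimal Python sketch of what the server side of a tracking pixel might do. The parameter names (`ev`, `url`, `uid`) are illustrative placeholders, not any vendor's real schema:

```python
import base64
from urllib.parse import urlparse, parse_qsl

# A 1x1 transparent GIF -- the "pixel" the visitor's browser silently fetches.
PIXEL_GIF = base64.b64decode(
    "R0lGODlhAQABAIAAAAAAAP///yH5BAEAAAAALAAAAAABAAEAAAIBRAA7"
)

def record_pixel_hit(request_path: str, headers: dict) -> dict:
    """Turn one pixel request into a tracking event.

    The query parameters are chosen by whoever embeds the pixel;
    the names used here are hypothetical.
    """
    params = dict(parse_qsl(urlparse(request_path).query))
    return {
        "event": params.get("ev", "PageView"),
        "page": params.get("url"),
        "user_id": params.get("uid"),         # ties hits to one visitor
        "referrer": headers.get("Referer"),   # where the visitor came from
        "user_agent": headers.get("User-Agent"),
    }
```

A single hit like this looks unremarkable; the privacy concern comes from correlating the same user identifier across every site that embeds the same pixel.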

The audit was performed after TIGTA received a congressional letter raising concerns about the data sharing practices of online tax filing companies. This letter spoke of data sharing methods that used a pixel to capture an individual's entries on the online tax filing companies' website, which then sent data entered for the preparation of online tax returns to a third party to focus marketing and advertisement efforts on each user.

In other words, information that is highly regulated was collected and shared outside the rules of those regulations, which could have allowed for invasions of privacy.

TIGTA acknowledged that it shared similar concerns and that it was in the process of conducting a separate but related review.

TIGTA did not disclose the names of the four companies that were the subject of these investigations, but in a follow-up letter, three senators and a member of Congress named TaxSlayer, H&R Block, TaxAct, and Ramsey Solutions.

The review found that the audited companies’ consent statements did not comply with the requirements of Treasury Regulation § 301.7216. Specifically, the consent statements did not clearly identify the intended purpose of the disclosure and the specific recipient(s) of the tax return information.

Based on the results, TIGTA advised the IRS to:

  • Update its revenue procedure to include language requiring that consent statements identify the purpose of disclosure and the specific recipient(s).
  • Evaluate whether any updates are needed to the guidance regarding data sharing practices, e.g., the use of pixels.
  • Identify and implement potential solutions that will ensure that online providers comply with the regulatory requirements of taxpayer consent statements.

The IRS has taken actions to address the previously reported deficiencies with the suitability check processes and procedures for tax preparation companies. For example, the IRS:

  • Updated procedures to ensure consistency with initial and continuous suitability checks.
  • Established a consistent adjudication process for applicants with a criminal history.
  • Modified procedures to systemically create cases requiring research and resolution for tax compliance issues.
  • Modified procedures to accept only electronic fingerprint cards.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

LinkedIn bots and spear phishers target job seekers

Microsoft’s social network for professionals, LinkedIn, is an important platform for job recruiters and seekers alike. It’s also a place where criminals go to find new potential victims.

Like other social media platforms, LinkedIn is no stranger to bots attracted to special keywords and hashtags. Think “I was laid off”, “I’m #opentowork” and similar phrases that can wake up a swarm of bots hungry to scam someone new.

Bots are problematic as they not only create a poor user experience but also present real security risks. Perhaps even more insidious are customized phishing attempts, where fraudulent LinkedIn accounts directly reach out to their victim via the premium InMail feature. In this case, it’s all about harvesting personal information from targets of interest.

In this article, we review recent observations and provide tips for job seekers and users of the platform in general.

Hungry bots

Online bots are so common that they transcend every possible industry: advertising, music and concerts, social media, games, and more. There are even companies whose entire business model is to disrupt and contain bots. The impact of bots depends on your point of view: it can range from simple nuisance to opinion swaying, costly fraud, and a lot more in between.

We recently observed fake LinkedIn accounts that prey on those just laid off. Within minutes of a post, dozens of accounts start replying with links or requests to be added as a connection.

image fcdb62

The use of certain hashtags was already known to attract bots, and #opentowork is no different. Ironically, even a recruiter was previously swamped with similar messages and questioned whether they should refrain from ever using the hashtag again:

image 9344be

The battle between HR and the bots was featured in a Brian Krebs article from a couple of years ago. While the accounts we saw in the most recent campaign were not labeled as recruiters directly, they often pointed to other profiles that were. In the majority of cases, scammers used the names and pictures of real people to create new accounts.

image f511d8

It appears their primary goal may be to gain connections by pretending to help a job seeker. This may increase the supposed authenticity of their profile and make it harder to shut them down.

LinkedIn did take action sometime after we witnessed the original spam wave. Many of the accounts indeed disappeared and comments were removed. It’s unclear whether this was a result of user reports, LinkedIn’s own algorithms, or a combination of both.

Fine-tuning anti-fraud algorithms requires constant calibration, and isn't without casualties. Some content creators have been banned due to "false positives", eroding the trust and dedication they had put into the platform.

You’ve got InMail

While bots are annoying, they are usually so predictable and noisy that they can be spotted from miles away, especially when they duplicate their own comments on the same post. More dangerous are personalized requests that come directly into a user’s inbox.

It’s the same idea of a fake recruiter, but the profile looks more credible and the scammers are using paid accounts. In fact, the ability to send a message to a user who’s not in your circle of contacts is one of LinkedIn’s premium features, called InMail.

image 37e768

In the image seen above, an alleged Amazon recruiter going by the name “Kay Poppe” sent a direct message about a unique job opportunity at Amazon Web Services. The so-called recruiter’s profile picture looks to be AI-generated, and the name Kay Poppe vaguely reminds us of “K-pop,” the Korean pop music phenomenon. Perhaps this is a bit of a stretch, but we couldn’t help but think of North Korea’s relentless phishing attempts.

This was not a standard, copy-paste message but rather a carefully crafted one based on the victim’s job profile. The shortened link referenced the victim’s current position and was the hook to get them to visit a fake LinkedIn page showing a number of documents related to that role. None of the links to the documents actually load what they claim to be; instead, they are meant to be a segue to a page hosting a phishing kit.

image 481f0b

In this particular instance, it’s the Rockstar2FA phishing-as-a-service toolkit, used to harvest Google credentials. As more and more people use two-factor authentication, criminals have come up with their own methods to bypass 2FA. While it is recommended to avoid SMS-based verification and instead use a one-time password (OTP) app, users can still be socially engineered into entering the temporary code into a phishing page.
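For context on what these kits are bypassing: the codes an OTP app displays are derived from a shared secret and a counter (HOTP, RFC 4226), with TOTP (RFC 6238) using the current 30-second time step as the counter. A minimal Python sketch:

```python
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """HMAC-based one-time password (RFC 4226)."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation: low nibble picks the offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def totp(secret: bytes, period: int = 30) -> str:
    """Time-based OTP (RFC 6238): HOTP keyed on the current time step."""
    return hotp(secret, int(time.time()) // period)

# RFC 4226 test vector: secret "12345678901234567890", counter 0 -> "755224"
print(hotp(b"12345678901234567890", 0))
```

Note that nothing in the algorithm ties a code to the site you type it into, which is exactly why a phishing page can relay a valid code to the real service within its 30-second window.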

image 59e9b3

Stealing a Google account is usually only the first step in a longer chain leading to a full compromise. Many people use their Google email as a recovery address for a number of other online accounts. This can allow a criminal to reset as many passwords as they can get their hands on before the victim even realizes what’s happened. It’s also not unusual to get locked out of your account and then struggle to regain control of it.

Fish in a larger pond

Scammers are notorious for targeting the vulnerable, and one could say that after losing a job you probably feel this way. All too eager to regain employment, you may jump at the first opportunity and engage in a conversation that could end badly.

Many of the bots spamming via comments tie back to some kind of fraud such as the advance-fee scam where you need to pay an up-front fee in order to receive goods or services. Some job offers are also too good to be true, and you could unknowingly participate in illegal activities by helping to funnel and launder money.

The more targeted phishing attempts are dangerous not only for the individual in question but also for the company they work for. Even if you are not actively looking for a new job, a compromise of your account could have further consequences, such as giving attackers an entry point into an entire organization.

Whether you are looking for a job or already have one, you should expect to be contacted by some unknown third party at some point. Treat every such inquiry with suspicion and caution. Remember that on the internet, not everyone is who they pretend to be.

Also, consider passkeys, a newer form of authentication that was specifically designed to move away from passwords and be less prone to phishing attacks. They rely on public-key cryptography: the device proves possession of a private key that never leaves it by signing a challenge from the service’s login page, removing the need to enter passwords or codes.

If you ever fall victim to a scam, time is of the essence. Immediately:

  • be on the lookout for unusual account changes
  • proactively do a full password sweep
  • reach out to your bank and credit card company
  • inform your contacts who may receive fraudulent messages coming from you


Upload a video selfie to get your Facebook or Instagram account back

Meta, the company behind Facebook and Instagram, says it’s testing new ways to use facial recognition—both to combat scams and to help restore access to compromised accounts.

The social media giant is testing the use of video selfies and facial recognition to help users get their hijacked accounts back. Social media accounts are often lost when users forget their password, switch devices, or when they inadvertently or even willingly give their credentials to a scammer.

Another reason for Meta to use facial recognition is what it calls “celeb-bait ads.” Scammers often try to use images of public figures to trick people into engaging with ads that lead to fraudulent websites.

Since it’s trivial to set up an account that looks like a celebrity, scammers use this to attract visitors for various reasons, ranging from like-farming (a method to raise the popularity of a site or domain) to imposter scams, where accounts that seem to belong to celebrities reach out to you in order to defraud you.

5 Jon Hamm Facebook accounts with a different selfie
Several accounts that seem to belong to the same actor

Meta’s existing ad review system uses machine learning to review the millions of ads that run across Meta platforms every day. With a new facial recognition addition to that system, Meta can compare faces in an ad to the public figure’s Facebook and Instagram profile pictures, and block the ad if it’s fake.
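Meta hasn’t published the internals, but face matchers of this kind typically reduce each image to an embedding vector and compare vectors with a similarity metric. A simplified sketch of that comparison step (the embeddings and threshold below are made up for illustration):

```python
import math

def cosine_similarity(a: list, b: list) -> float:
    """Cosine of the angle between two embedding vectors (1.0 = same direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings a face-recognition model might produce
# (real models output hundreds of dimensions, not four).
ad_face = [0.12, 0.87, 0.45, 0.33]
profile_face = [0.10, 0.90, 0.40, 0.35]

THRESHOLD = 0.95  # in practice tuned on validation data
is_match = cosine_similarity(ad_face, profile_face) >= THRESHOLD
```

The interesting engineering work lives in producing good embeddings and choosing the threshold; the comparison itself is this simple.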

According to Meta:

“Early testing with a small group of celebrities and public figures shows promising results in increasing the speed and efficacy with which we can detect and enforce against this type of scam.”

Over the coming weeks, Meta intends to start informing a larger group of celebrities who have been used in scam ads that they will be enrolled in the new scheme, and to allow them to opt out if that’s what they want.

The problem of celeb-bait ads is a big one and I applaud Meta for trying to do something about it. The account recovery by video selfie, however, is something I’m far less fond of.

The idea of using facial recognition on social media is not new. In 2021, Meta shut down the Face Recognition system on Facebook as part of a company-wide move to limit the use of facial recognition in their products.

In the newly-announced system, the user can upload a video selfie, and Meta will use facial recognition technology to compare the selfie to the profile pictures on the account they’re trying to access. This is similar to identity verification tools you might already use to unlock your phone or access other apps. 

I do have a few questions though:

  • With the current development of deepfakes, how long will it take for this technology to be used for the exact opposite? Stealing your account by showing the platform a deepfake video of your face.
  • Do I want to provide Meta with even more material that might end up getting used to train its Artificial Intelligence (AI) models? Although Meta claims to delete the facial data after comparison, there are concerns about the collection and temporary storage of biometric information.
  • People have a tendency to post their best pictures and not change them as they grow older. Is a comparison always possible?
  • Is normalizing the use of biometrics for something as trivial as social media really necessary? Right now I only use a video selfie to approve bank transfers of over 1000 Euro (US$ 1075).  

There are probably good reasons why Meta is not implementing this option in the UK or the EU; the company says it needs to “continue conversations with regulators” first. The same is true for Illinois and Texas, likely due to stricter privacy laws in those states.

Surely there are better ways to reclaim a stolen account. What do you think? Let us know in the comments.


This industry profits from knowing you have cancer, explains Cody Venzke (Lock and Code S05E22)

This week on the Lock and Code podcast

On the internet, you can be shown an online ad because of your age, your address, your purchase history, your politics, your religion, and even your likelihood of having cancer.

This is because of the largely unchecked “data broker” industry.

Data brokers are analytics and marketing companies that collect every conceivable data point that exists about you, packaging it all into profiles that other companies use when deciding who should see their advertisements.

Have a new mortgage? There are data brokers that collect that information and then sell it to advertisers who believe new homeowners are the perfect demographic to purchase, say, furniture, dining sets, or other home goods. Bought a new car? There are data brokers that collect all sorts of driving information directly from car manufacturers—including the direction you’re driving, your car’s gas tank status, its speed, and its location—because some unknown data model said somewhere that, perhaps, car drivers in certain states who are prone to speeding might be more likely to buy one type of product compared to another.

This is just a glimpse of what is happening to essentially every single adult who uses the Internet today.

So much of the information that people would never divulge to a stranger—like their addresses, phone numbers, criminal records, and mortgage payments—is collected away from view by thousands of data brokers. And while these companies know so much about people, the public at large likely knows very little about them in return.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Cody Venzke, senior policy counsel with the ACLU, about how data brokers collect their information, what data points are off-limits (if any), and how people can protect their sensitive information, along with the harms that come from unchecked data broker activity—beyond just targeted advertising.

“We’re seeing data that’s been purchased from data brokers used to make decisions about who gets a house, who gets an employment opportunity, who is offered credit, who is considered for admission into a university.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

Internet Archive attackers email support users: “Your data is now in the hands of some random guy”

Those who hacked the Internet Archive haven’t gone away. Users of the Internet Archive who have submitted helpdesk tickets are reporting replies to the tickets from the hackers themselves.

Internet Archive, best known for its Wayback Machine, is a digital library that allows users to look at website snapshots from the past. It is often used for academic research and data analysis. Earlier in October, the Internet Archive suffered a data breach and DDoS attack.

During that breach the attackers were able to steal a user authentication database containing 31 million records.

While the Wayback Machine is almost fully functional again, in a recent turn of events the attackers have started replying to those users that have opened a support ticket with the Internet Archive.

This is one of the replies a user reported:

“It’s dispiriting to see that even after being made aware of the breach 2 weeks ago, IA has still not done the due diligence of rotating many of the API keys that were exposed in their gitlab secrets.

As demonstrated by this message, this includes a Zendesk token with perms to access 800K+ support tickets sent to info@archive.org since 2018.

Whether you were trying to ask a general question, or requesting the removal of your site from the Wayback Machine—your data is now in the hands of some random guy. If not me, it’d be someone else.

Here’s hoping that they’ll get their shit together now.”

An Application Programming Interface (API) token is like a special pass that allows a computer program or app to access and use services provided by another program or website. It is used as proof that the user or app has permission to access the service.
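As a sketch of how such a token is used in practice (the subdomain and token below are made up, and the exact authentication scheme varies by service and integration type):

```python
from urllib.request import Request

# Hypothetical values for illustration only -- not a real subdomain or token.
API_TOKEN = "zd_live_0123456789abcdef"
URL = "https://example.zendesk.com/api/v2/tickets.json"

# The token rides along in a header on every call. The service checks the
# token, not a password -- so anyone who obtains it gets the same access
# until it is revoked, which is why rotating leaked tokens matters so much.
request = Request(URL, headers={"Authorization": f"Bearer {API_TOKEN}"})
```

Revoking the old token and issuing a new one instantly cuts off anyone still holding the stolen value.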

It appears as if the Internet Archive uses Zendesk to manage its support tickets. Having the Internet Archive’s Zendesk token would certainly explain why the hackers can reply to customer tickets.

Changing a Zendesk API token is not very hard, but it can have unexpected consequences, so it may require some advance planning to minimize potential disruptions. This could be why the Internet Archive may not have gotten round to it yet. But not changing API keys that would grant the attackers access to the organization’s important infrastructure like Zendesk would be a serious omission.

On October 18, 2024, Internet Archive founder Brewster Kahle posted an update stating that the stored data of the Internet Archive is safe and that work on resuming services safely is in progress.

“We’re taking a cautious, deliberate approach to rebuild and strengthen our defenses. Our priority is ensuring the Internet Archive comes online stronger and more secure.”

So far, the Internet Archive has not responded to the new developments, and the motivation for the attacks on the Internet Archive remains unclear. We’ll keep you posted.

A week in security (October 14 – October 20)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Unauthorized data access vulnerability in macOS is detailed by Microsoft

The Microsoft Threat Intelligence team disclosed details about a macOS vulnerability, dubbed “HM Surf,” that could allow an attacker to gain access to the user’s data in Safari. The data the attacker could access without users’ consent includes browsed pages, along with the device’s camera, microphone, and location.

The vulnerability, tracked as CVE-2024-44133, was fixed in the September 16 update for Mac Studio (2022 and later), iMac (2019 and later), Mac Pro (2019 and later), Mac Mini (2018 and later), MacBook Air (2020 and later), MacBook Pro (2018 and later), and iMac Pro (2017 and later).

It is important to note that this vulnerability would only impact Mobile Device Management (MDM) managed devices. MDM managed devices are typically subject to centralized management and security policies set by the organization’s IT department.

By exploiting this vulnerability, an attacker could bypass the macOS Transparency, Consent, and Control (TCC) technology and gain unauthorized access to a user’s protected data.

Users may notice Safari’s TCC in action when they browse a website that requires access to the camera or the microphone. They may see a prompt like this one:

Safari TCC prompt
Image courtesy of Microsoft

What Microsoft discovered was that Safari maintains its own separate TCC policy, stored in various local files under the user’s home directory.

Microsoft then figured out it was possible to modify these sensitive files by swapping the current user’s home directory back and forth. The files are protected by TCC while they sit in the home directory, but by changing the home directory to another location, modifying the file, and then restoring the original home directory, an attacker can make Safari use the modified files.

The exploit only works on Safari because third-party browsers such as Google Chrome, Mozilla Firefox, or Microsoft Edge do not have the same private entitlements as Apple applications. Therefore, those apps can’t bypass the macOS TCC checks.

Microsoft noted that it observed suspicious activity in the wild associated with the Adload adware that might be exploiting this vulnerability. But it could not be entirely sure whether the exact same exploit was used.

“Since we weren’t able to observe the steps taken leading to the activity, we can’t fully determine if the Adload campaign is exploiting the HM surf vulnerability itself. Attackers using a similar method to deploy a prevalent threat raises the importance of having protection against attacks using this technique.”

We encourage macOS users to apply these security updates as soon as possible if they haven’t already.


Malwarebytes for Mac takes out malware, adware, spyware, and other threats before they can infect your machine and ruin your day. It’ll keep you safe online and your Mac running like it should.

23andMe will retain your genetic information, even if you delete the account

Deleting your personal data from 23andMe is proving to be hard.

There are good reasons for people wanting to delete their data from 23andMe: The DNA testing platform has a lot of problems, so let’s start with a recap.

A little over a year ago, cybercriminals put up information belonging to as many as seven million 23andMe customers for sale on criminal forums following a credential stuffing attack against the genomics company.

In December 2023, we learned that the attacker was able to directly access the accounts of roughly 0.1% of 23andMe’s users, which is about 14,000 of its 14 million customers. So far not too many people affected, but with the breached accounts at their disposal, the attacker used 23andMe’s opt-in DNA Relatives (DNAR) feature—which matches users with their genetic relatives—to access information about millions of other users.

For a subset of these accounts, the stolen data contained health-related information based upon the user’s genetics.

In January 2024, 23andMe had the audacity to lay the blame at the feet of victims themselves in a letter to legal representatives of victims. 23andMe reasoned that the customers whose data was directly accessed re-used their passwords, gave permission to share data with other users on 23andMe’s platform, and that the medical information was non-substantive.

And in September 2024, we found out that the company would pay $30 million to settle a class action lawsuit, as that was all that 23andMe could afford to pay. And that’s only because the expectation was that cyberinsurance would cover $25 million.

As a result, the value of 23andMe plummeted. And last month the company said goodbye to all its board members except for CEO Anne Wojcicki, who stood by her plans to take the company private.

This uncertainty about the future of the company and, with that, who will be the future holder of all the customer personal information, has caused a surge of users looking to close their accounts and delete their data.

However, it turns out it’s not as easy as just asking for the data to be removed. You can delete your data from 23andMe, but the company says it will retain some of that data (including genetic information) to comply with its legal obligations, according to its privacy policy.

“23andMe and/or our contracted genotyping laboratory will retain your Genetic Information, date of birth, and sex as required for compliance with applicable legal obligations, including the federal Clinical Laboratory Improvement Amendments of 1988 (CLIA), California Business and Professions Code Section 1265 and College of American Pathologists (CAP) accreditation requirements, even if you chose to delete your account. 23andMe will also retain limited information related to your account and data deletion request, including but not limited to, your email address, account deletion request identifier, communications related to inquiries or complaints and legal agreements for a limited period of time as required by law, contractual obligations, and/or as necessary for the establishment, exercise or defense of legal claims and for audit and compliance purposes.”

In addition, any information you previously provided and consented to be used in 23andMe research projects cannot be removed from ongoing or completed studies, although the company says it will not use it in any future ones.

This is unfortunate, and is yet another reminder about how once you give information away you cannot always get it back. Let’s hope the policy gets changed and customers are allowed to fully delete their data soon.

It’s still worth deleting as much as possible, though. So here’s how to do that.

How to delete (most of) your data from 23andMe

  • Log into your account and navigate to Settings.
  • Under Settings, scroll to the section titled 23andMe data. Select View.
  • It will ask you to enter your date of birth for extra security. 
  • In the next section, you’ll be asked which personal data, if any, you’d like to download from the company (onto a personal, not public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  • You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically, and you’ll immediately lose access to your account. 

When you set up your 23andMe account, you had the option to either have the saliva sample that you sent to them securely destroyed or have it stored for future testing. If you chose to store your sample but now want to delete your 23andMe account, the company says it will destroy the sample for you as part of the account deletion process.

Check your digital footprint

If you want to find out if your personal data was exposed through the 23andMe breach, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you used to register at 23andMe) and we’ll send you a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

“Nudify” deepfake bots remove clothes from victims in minutes, and millions are using them

Millions of people are turning normal pictures into nude images, and it can be done in minutes.

Journalists at Wired found at least 50 “nudify” bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. Although there is no sure way to tell how many of those users are unique, the number is appalling, and it’s highly likely there are many more bots than the ones Wired found.

The history of nonconsensual intimate image (NCII) abuse, as the use of explicit deepfakes without consent is often called, started near the end of 2017, when Motherboard (now Vice) found an online video in which the face of Gal Gadot had been superimposed onto an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The username of the person who claimed responsibility for the video gave the phenomenon its name: “deepfake.”

Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. Now, with the advancement of AI, more sophisticated methods like Generative Adversarial Networks (GANs) are available to the public.

However, most of the uncovered bots don’t use this advanced type of technology. Some of the bots on Telegram are “limited” to removing clothes from existing pictures, an extremely disturbing act for the victim.

These bots have become a lucrative source of income. Using such a Telegram bot usually requires a certain number of “tokens” to create images. Of course, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or that render low-quality images.

Besides being disturbing, using AI to generate explicit content is costly, comes with no guarantees of privacy (as we saw the other day when AI Girlfriend was breached), and can even end with a malware infection.

The creation and distribution of explicit nonconsensual deepfakes raises serious ethical issues around consent, privacy, and the objectification of women, not to mention the creation of child sexual abuse material. Researchers in Italy found explicit nonconsensual deepfakes to be a new form of sexual violence, with potential long-term psychological and emotional impacts on victims.

To combat this type of sexual abuse there have been several initiatives:

  • The US has proposed legislation in the form of the Deepfake Accountability Act. Combined with the recent policy change by Telegram to hand over user details to law enforcement in cases where users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.
  • Some platforms have adjusted their policies (e.g. Google banned involuntary synthetic pornographic imagery from its search results).

However, so far these steps have shown no significant impact on the growth of the market for NCIIs.

Keep your children safe

We’re sometimes asked why it’s a problem to post pictures on social media that can be harvested to train AI models.

We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.

  • Deepfakes: AI generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or defraud people you know.
  • Metadata: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could be sold to third parties or used in ways the photographer never intended.
  • Intellectual property: Never upload anything you didn’t create or own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
  • Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
  • Facial recognition: Although facial recognition is no longer the hot topic it once was, it still exists, and actions or statements attributed to your images (real or not) may be linked to your persona.
  • Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.
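The metadata point above is something you can act on yourself before uploading. As a minimal sketch (pure standard library, and it only handles well-formed baseline JPEG segment layouts; dedicated tools such as exiftool or the Pillow library are far more robust), the following removes the APP1 segment, which is where Exif data such as GPS coordinates lives, from a JPEG byte stream:

```python
import struct

def strip_exif(jpeg: bytes) -> bytes:
    """Return a copy of a JPEG byte stream with APP1 (Exif) segments removed."""
    if jpeg[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i < len(jpeg):
        marker = jpeg[i:i + 2]
        if marker == b"\xff\xda":            # start of scan: raw image data follows
            out += jpeg[i:]                  # copy the remainder verbatim
            break
        # each segment stores its own length (big-endian, includes the 2 length bytes)
        (length,) = struct.unpack(">H", jpeg[i + 2:i + 4])
        if marker != b"\xff\xe1":            # keep every segment except APP1 (Exif)
            out += jpeg[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Running a photo through a function like this before posting discards the embedded location data while leaving the visible image untouched.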

If you want to continue using social media platforms, that is obviously your choice, but consider the above when uploading pictures of yourself, your loved ones, or even complete strangers.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.