IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

260 romance scammers and sextortionists caught in huge Interpol sting

Online crime of all kinds is deplorable, but romance scammers and sextortionists who target the most vulnerable victims are among the worst. Now, there’s likely a place for 260 of them in jail, thanks to international law enforcement.

Interpol’s Operation Contender 3.0 targeted alleged criminals from several countries across Africa. It arrested 260 people and seized 1,235 electronic devices. Investigators linked 1,463 victims to the scams, and said their losses amounted to around $2.8 million.

The images from Interpol’s press release tell just as lurid a story as the numbers do. In one, over 30 phones lie on a table, each with a different case. These were the devices that the scammers likely used to carry out their crimes, which focused on romance scams and extortion.

Criminals lured victims with fake online identities built from stolen photos and forged documents, then exploited them through romance scams that demanded bogus courier or customs fees. Others ran sextortion schemes, secretly recording explicit video chats to extort money.

What to watch for

Romance scams are all too familiar to those in the know, but still catch out plenty of lonely people looking for affection online. A criminal half a world away will get to know a victim, often beginning the relationship via an ‘accidental’ text message, or via a dating site or social media. A fake social media account, usually with a stolen photo, lends them credibility. They will gradually get to know the victim, luring them into what seems like a romantic relationship. If you’re talking to someone who claims to be in the military and therefore unable to travel, be very wary. This is a common scam tactic.

Eventually the request for money will come, in some form or other. In some scams, it’ll be a recommendation to invest in a fraudulent investment scheme (this used to be called ‘pig butchering’ but now Interpol prefers the more humane term ‘romance baiting’).

In other variations of the scam, there will be a plan to visit the victim – except, of course, there’s some financial hurdle that the perpetrator must overcome before they can travel. If the victim sends the money, the requests will keep coming, always with another excuse for why they can’t make the trip just yet.

Talking with someone you’ve never met who’s asking for financial help with a medical emergency, or to solve a legal or business issue? Think twice before sending the funds. Then think a third time. Then don’t do it.

A loneliness epidemic

In an era where people are increasingly lonely, romance scams are a surprisingly effective tactic. Americans lost $1.2 billion to romance scammers last year, with median losses hitting $2,000.

The extortion side of things is even more horrid. People aren’t just lonely these days; they’re lusty. That leads to many people doing things online with strangers that they shouldn’t, including sharing intimate images or videos of themselves. Once a criminal has those assets, they can use them to extort the victims by threatening to send the material to their friends, family, and professional contacts.

Romance scams and other forms of financial fraud can come from anywhere, including in your own country. But Africa does seem to be a hotbed for it. Last year’s Interpol Africa Cyberthreat Assessment Report found that cybercrime accounted for 30% of all reported crime in Western and Eastern Africa. Criminals engage in many kinds of digital crime, according to the report, including business email compromise and banking malware, but online scams are especially popular—as are digital sextortion and harassment.

Interpol arrested eight people a year ago in Nigeria and Côte d’Ivoire for financial fraud including romance scams as part of its Contender 2.0 operation. And in 2022, it dismantled a South African gang for swindling companies, but also suspected it of being involved in romance scams.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Amazon pays $2.5B settlement over deceptive Prime subscriptions

Another day, another settlement. Amazon has settled a lawsuit filed by the Federal Trade Commission (FTC) over misleading customers who signed up for Amazon Prime—though it claims it did nothing wrong.

The FTC alleged that Amazon used deceptive methods to sign up consumers for Prime subscriptions—and made it exceedingly difficult to cancel.

In the settlement, Amazon will be required to pay a $1 billion civil penalty, provide $1.5 billion in refunds to consumers harmed by its deceptive Prime enrollment practices, and cease unlawful enrollment and cancellation practices for Prime.

The FTC claimed in its lawsuit that Amazon had used:

“manipulative, coercive, or deceptive user-interface designs known as ‘dark patterns’ to trick consumers into enrolling in automatically-renewing Prime subscriptions.” 

Dark patterns are tricks on websites or in apps that nudge or mislead people toward choices they wouldn’t normally make, like spending more money or signing up for recurring services without realizing it. Instead of helping users, these designs obscure, confuse, or pressure them into acting quickly or accidentally.

Some common examples are:

  • Large, colorful “Yes” buttons, but almost hidden “No” options
  • Confusing cancellation steps with unclear language
  • Pre-checked boxes for paid extras
  • Endless popups urging you not to leave a page

Former FTC commissioner Alvaro Bedoya described Amazon’s “End Your Prime Membership” method as:

“a 4-page, 6-click, 15-option cancellation journey that Amazon itself compared to that slim airport read, Homer’s Iliad.”

Due to Amazon’s use of dark patterns, millions of people ended up signing up for Prime, some without realizing they’d agreed to recurring charges. Others gave up trying to cancel due to the exhausting steps.

The FTC found this to be a violation of the Restore Online Shoppers’ Confidence Act, which was signed into law in 2010 to prevent companies from using deception to prompt or encourage online purchases.

Amazon issued a statement saying:

“Amazon and our executives have always followed the law and this settlement allows us to move forward and focus on innovating for customers. We work incredibly hard to make it clear and simple for customers to both sign up or cancel their Prime membership, and to offer substantial value for our many millions of loyal Prime members around the world. We will continue to do so, and look forward to what we’ll deliver for Prime members in the coming years.”

Customers who enrolled in Prime between June 23, 2019 and June 23, 2025 may be eligible for a refund. Those who rarely used Prime benefits will automatically get back their fees—capped at $51—while others who meet the criteria can apply for a refund of up to the same amount.

As we argued a few days ago, settlements like these highlight a worrying trend: big tech pays off privacy violations, class actions grab headlines, and lawyers collect fees—while consumers hand over personal details again for a token payout.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Sex offenders, terrorists, drug dealers, exposed in spyware breach

We’ve covered spyware and stalkerware leaks many times before, but we don’t often see such exposure in software used by law enforcement.

According to a report by Straight Arrow News (SAN), the hacker “wikkid” said the intrusion against RemoteCOM was “one of the easiest” they’d ever carried out.

RemoteCOM describes itself as “the premier computer, smartphone and tablet monitoring service for the management of pretrial, probation and parole clients”. According to a leaked training manual, its software, sold as “SCOUT”, can be used to track targets ranging from sex offenders, sex traffickers, and stalkers to terrorists, hackers, and gang members.

Behind its official branding, SCOUT behaves like spyware: it records keystrokes, captures screenshots, and even sends out an alert if the tracked person types certain keywords.

The hacker accessed two key files:

  • “officers” (6,896 entries): the names, phone numbers, work addresses, email addresses, unique IDs, and job titles of people working in the criminal justice system who have used RemoteCOM’s services.
  • “clients” (around 14,000 entries): individuals currently or previously monitored by SCOUT, listed with names, email addresses, IP addresses, home addresses, and phone numbers, alongside the names and emails of their probation officers.

The files also contained details of the offenses clients were charged with, ranging from sex offenses, weapons, and narcotics cases to terrorism, stalking, domestic violence, sex trafficking, fraud, violence, and hacking.

[Image: example client data fields from RemoteCOM. Courtesy of SAN]

This type of data leak can be dangerous for both sides of the app. Clients tagged with the keyword “sex” are not necessarily convicted sex offenders—they could be suspects under surveillance or have not yet been to trial—but that distinction might not stop any vigilantes out there.

For officers, the leak of names, contact details, and workplaces could expose them and their families to threats of violence. One officer even had the app installed on the phones of their sister-in-law and fiancé, making the breach especially personal.

Speaking to SAN, a spokesperson for RemoteCOM said:

“We are assessing the situation currently along with your article that you posted.”

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice it offers.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of 2FA can be phished just as easily as a password, but 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the company’s website to see if it’s contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring, which alerts you if your personal information is found being traded illegally online and helps you recover after.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

A week in security (September 22 – September 28)

Last week on Malwarebytes Labs:

Stay safe!



Hackers threaten parents: Get nursery to pay ransom or we leak your child’s data

Just when you think extortionists can’t sink any lower, along comes a lowlife that manages to surprise you.

The BBC reported that a group calling itself “Radiant” claims to have stolen sensitive data related to around 8,000 children from nursery chain Kido, which operates in the UK, US, China, and India.

The data the group says it stole includes names, photos, addresses, dates of birth, and details about their parents or carers. The hack also reportedly exposed safeguarding notes and medical information.

To prove their possession of the data, the criminals posted samples, including pictures and profiles of ten children on their darknet website. They then issued a ransom demand to Kido, threatening to release more sensitive data unless they were paid.

When contacted by the BBC about their extortion attempt, the group defended their actions, claiming to:

“… deserve some compensation for our pentest.”

They should educate themselves before continuing. In most jurisdictions, to carry out this type of “penetration testing” legally, they need to get explicit permission from the company first (or choose a company that runs a bug bounty program).

As if stealing children’s data and publishing it on the dark web weren’t bad enough, Joe Tidy at the BBC reported that the group also called some of the children’s parents—telling them to put pressure on the nursery chain to pay the ransom, or they’ll leak their child’s data.

If history has taught us anything, the next step is that they will try to extort the parents individually, as happened in the case of the Finnish psychotherapy practice Vastaamo. Trust me, these things never end well. In Vastaamo’s case, the clinic went bankrupt, at least one suicide has been linked to the case, and the attackers have been sentenced to jail time.

Kido has not issued a public statement. While the investigation is ongoing, it has contacted parents to confirm the incident and offer reassurance.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Google and Flo to pay $56 million after misusing users’ health data

Popular period-tracking app Flo Health shared users’ intimate health data—such as menstrual cycles and fertility information—with Google and Meta, allegedly for targeted advertising purposes, according to multiple class-action lawsuits filed in the US and Canada.

Between 2016 and 2019, the developers of Flo Health shared intimate user data with companies including Facebook and Google, mobile marketing firm AppsFlyer, and Yahoo!-owned mobile analytics platform Flurry. 

Google and Flo Health reached settlements with plaintiffs in July, just before the case went to trial. The terms, disclosed this week in San Francisco federal court, stipulate that Google will pay $48 million and Flo Health will pay $8 million to compensate users who entered information about menstruation or pregnancy between November 2016 and February 2019.

In an earlier trial, co-defendant Meta was found liable for violating the California Invasion of Privacy Act by collecting the information of Flo app users without their consent. Meta is expected to appeal the verdict.

The FTC investigated Flo Health and concluded in 2021 that the company misled users about its data privacy practices. This led to a class-action lawsuit that also involved the now-defunct analytics company Flurry, which settled separately for $3.5 million in March.

Flo and Google denied the allegations despite agreeing to pay settlements. Big tech companies have increasingly chosen to settle class action lawsuits while explicitly denying any wrongdoing or legal liability—a common trend in high-profile privacy, antitrust, and data breach cases.

This reflects a worrying trend where big tech pays off victims of privacy violations and other infractions. High-profile class-action lawsuits against, for example, Google, Meta, and Amazon grab headlines for holding tech giants accountable. But the only significant winners are often the lawyers, leaving victims to submit personal details yet again in exchange for, at best, a token payout.

By settling, companies can keep a grip on the potential damages and avoid the unpredictability of a jury verdict, which in large classes could reach into billions. Moreover, settlements often resolve legal uncertainty for these corporations without setting a legal precedent that could be used against them in future litigation or regulatory actions.

Looking at it from a cynical perspective, these companies treat such settlements as just another operational expense and continue with their usual practices.

In the long run, such agreements may undermine public trust and accountability, as affected consumers receive minimal compensation but never see a clear acknowledgment of harm or misconduct.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Neon App pays users to record their phone calls, sells data for AI training

TechCrunch reports on a “bizarre app” that invites you to record and share your audio calls so it can sell the data to AI companies. And if that’s not weird enough on its own, it was ranking No. 2 in Apple’s US App Store at the time of writing.

The name of the app is Neon Mobile and it promises to pay users hundreds or even thousands of dollars per year. Why would you do it? Its reasoning is the old “they already know everything about you anyway” adage and “you might as well get paid for it then.”

Neon will sell the data collected by the app to “AI companies for the purpose of developing, training, testing, and improving machine learning models, artificial intelligence tools and systems, and related technologies.”

The payment is $0.15 per minute if you’re the only Neon Mobile user in the conversation, and this doubles when the person on the other end uses the app as well. With a maximum payout of $30 per day and the understanding that the call has to be made through the app, you’ll have to be on the phone a lot to make thousands of dollars, but we can see why this might be an attractive offer to some people.
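A quick back-of-the-envelope check on those numbers (our own arithmetic, using only the rates quoted above) shows how much talking it takes to reach that cap:

```python
# Back-of-the-envelope math on Neon's advertised rates (figures from the
# article). Working in cents avoids floating-point rounding issues.
RATE_SOLO_CENTS = 15      # $0.15/min when only you use Neon
RATE_BOTH_CENTS = 30      # rate doubles when both parties use the app
DAILY_CAP_CENTS = 3000    # $30/day maximum payout

# Minutes on the phone needed to hit the daily cap
minutes_solo = DAILY_CAP_CENTS // RATE_SOLO_CENTS   # 200 minutes (over 3 hours)
minutes_both = DAILY_CAP_CENTS // RATE_BOTH_CENTS   # 100 minutes

# Even hitting the cap every single day, the annual ceiling is fixed
annual_ceiling_dollars = DAILY_CAP_CENTS * 365 // 100   # $10,950

print(minutes_solo, minutes_both, annual_ceiling_dollars)
```

In other words, "thousands of dollars per year" means maxing out the cap day after day.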

Some people are already planning to do it as a second job. One commenter says:

“They just want the voice data. So if you and a friend agree to talk about pretend situations for an hour a day to make $900 a month that seems pretty easy. If both parties are doing it that comes out to $18 an hour which is pretty good.”

Neon Mobile promises that:

  • It will never sell your personal data to any third party.
  • It does not knowingly or intentionally collect personal data about children under 16.
  • It will only record your side of the call unless it’s with another Neon user.

We have some doubts about how it will accomplish the one-sided recording technically, as well as how it will automatically filter out names, numbers, and other personal details.

As always, there are some caveats. Looking at the Privacy Policy we noticed:

  • Neon collects a lot of personal and technical data about you, like identifiers, contact details, usage data, payment information, event participation, account activity, testimonials, and other data from third-party sources.
  • Third-party integrations are your responsibility. These integrations may ask for permission to access your personal data, or send information to your Neon Account, and it is up to you to review them.
  • Your personal data is shared with others. Neon regularly passes personal data to service providers and “trusted partners” for things like hosting, marketing, sales support, and analytics.
  • You have certain rights, but not absolute ones. You can request to access, delete, or correct the personal data Neon Mobile collects or maintains about you, but Neon may deny requests when the law allows.
  • You need to watch out for opt-outs. If Neon wants to use your data in a new way, or if it plans to disclose it to another third party not already covered in the Privacy Policy, it will give you the choice to refuse this new use or disclosure. But this is an “opt-out” opportunity, so you will have to pay close attention to every change in the Terms of Service and the Privacy Policy.
  • The disclosure rights are quite broad. Neon reserves the right to disclose data to comply with legal obligations, protect rights and safety, investigate fraud, or respond to law enforcement requests, with broad latitude for “compelled disclosure”.

In other words, Neon gathers and combines a wide range of personal and usage data, shares it with partners and third parties, and reserves broad rights to repurpose or disclose it—leaving users to monitor policy changes and opt out if they don’t agree.

Given the breadth of the data collection and the numerous caveats (while framed as protections against abuse), I’d argue that Neon Mobile is paying a low price for users’ privacy.

It’s also worth noting that if you become disappointed with the app or its returns, it takes more than just deleting the app from your device.

“If you delete the Neon app (but do not close your Neon account), your calls can still be recorded when other Neon users who have the app call you. If you want to stop call recordings with other Neon users, close your account through your profile settings.”

I’d also advise anyone using the app to inform the person on the other end that the conversation will be recorded, since failing to do so may have legal implications.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

New SVG-based phishing campaign is a recipe for disaster

We’ve written in the past about cybercriminals using SVG files for phishing and clickjacking campaigns. We recently found a new, rather sophisticated example of an SVG file involved in phishing.

For readers who missed the earlier posts: SVG files are not always simply image files. Because they are written in XML (eXtensible Markup Language), they can contain HTML and JavaScript code, which cybercriminals can exploit for malicious purposes.

Another advantage for phishers is that, on a Windows computer, SVG files get opened by Microsoft Edge, regardless of what your default browser is. Since most people prefer to use a different browser, such as Chrome, Edge can often be overlooked when it comes to adding protection like ad-blockers and web filters.
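To make that concrete, here is a harmless sketch of our own (not taken from this campaign) showing how a perfectly valid SVG image can carry executable JavaScript:

```xml
<!-- A harmless illustration (ours, not from the campaign): a valid SVG
     image that also runs JavaScript as soon as a browser renders it -->
<svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
  <rect width="100" height="100" fill="lightblue"/>
  <script type="text/javascript">
    // A real phishing SVG would redirect instead, e.g.:
    // window.location.replace("https://phishing.example/");
    alert("This image just ran code in your browser");
  </script>
</svg>
```

Opened in an image editor, this is just a blue square; opened in a browser, the script runs.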

The malicious SVG we’ve found uses a rather unusual method to send targets to a phishing site.

Inside RECElPT.SVG we found a script containing a lot of food/recipe-related names (“menuIngredients”, “bakingRound”, “saladBowl”, etc.), which are all simply creative disguises for obfuscating the code’s malicious intentions.

This is the part of the code where the phishers hid a redirect:

[Image: the function that defines the “ingredients”]

Upon close inspection, the illusion of an edible recipe quickly disappears. 141 cups of eggs, anyone?

But picking the code apart, we noticed that the decoder works like this:

  1. Search for data-ingredients=”…” in the given text.
  2. Split the string inside the attribute by commas to get a list. E.g., 219cups_flour, 205tbsp_eggs,…
  3. For each element, extract the leading numeric value (e.g., 219 from 219cups_flour).
  4. Subtract 100 from this value.
  5. If the result is an ASCII printable character (ranging from 32–126), then convert it to the character with that number.
  6. Join all characters together to form the final decoded string.
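The decoding steps above can be sketched in Python (our own re-implementation, not the phishers’ original JavaScript). The sample attribute below is shortened and hypothetical, though its first two values appear in the real file:

```python
import re

def decode_ingredients(text):
    # 1. Find the data-ingredients="..." attribute in the given text.
    match = re.search(r'data-ingredients="([^"]*)"', text)
    if not match:
        return ""
    chars = []
    # 2. Split the attribute value by commas into a list of "ingredients".
    for item in match.group(1).split(","):
        # 3. Extract the leading numeric value (e.g. 219 from 219cups_flour).
        num = re.match(r"\s*(\d+)", item)
        if not num:
            continue
        # 4. Subtract 100 from that value.
        code = int(num.group(1)) - 100
        # 5. Keep only printable ASCII (32-126), converted to a character.
        if 32 <= code <= 126:
            chars.append(chr(code))
    # 6. Join the characters into the final decoded string.
    return "".join(chars)

# Shortened, hypothetical sample: 219 -> 'w', 205 -> 'i', 210 -> 'n'
sample = '<div data-ingredients="219cups_flour,205tbsp_eggs,210tsp_salt"></div>'
print(decode_ingredients(sample))  # -> "win", the start of "window..."
```

Applied to the full attribute in the real file, the same routine produces the redirect described next.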

Using this method we arrived at window.location.replace("https://outuer.devconptytld[.]com.au/");

window.location.replace is a JavaScript method that replaces the current resource with the one at the provided URL. In other words, it redirects the target to that location if they open the SVG file.

When redirected, the user will see this prompt, which is basically intended to hide the real location of the server behind Cloudflare services, but also provides some sense of legitimacy for the visitor.

[Image: a “Verify you're not a robot” check]

It doesn’t matter what the user does here; they will be forwarded again, with the code passing the e parameter (the target’s email address) on to the next destination.

But this is where our adventure ended. For us, the next site was an empty one.

We couldn’t determine what conditions had to be met to get to the next stage of the phishing expedition. But it is highly likely it will display a fake login form (almost certainly Microsoft 365- or Outlook-themed), to capture the target’s username and password.

Microsoft flagged a similar campaign which was clearly obfuscated with AI assistance and appeared even more legitimate at first glance.

Some remarks we want to share about this campaign:

  • We found several versions of the SVG file dating back to August 26, 2025.
  • The attacks are very targeted, with the target’s email address embedded in the SVG file.
  • The phishing domain could be a typosquat of the legitimate devconptyltd.com.au, which could mean the targets were doing business with Devcon Pty Ltd, the owner of that domain. This is a tactic we often see in Business Email Compromise (BEC) attacks.
  • We found several subdomains of devconptytld[.]com.au associated with this campaign. The domain’s TLS certificate dates back to August 24, 2025 and is valid for 3 months.

How to stay safe from SVG phishing attacks

SVG files are an uncommon attachment to receive, so it’s good to keep in mind that:

  • They are not always “just” image files.
  • Several phishing and malware campaigns use SVG files, so they deserve the same treatment as any other attachment: don’t open one until a trusted sender confirms they sent it.
  • Always check the address of a website asking for credentials. Better yet, use a password manager: it will not auto-fill your details on a fake website.
  • Use real-time anti-malware protection, preferably with a web protection component. Malwarebytes blocks the domains associated with this campaign.
  • Use an email security solution that can detect and quarantine suspicious attachments.


LinkedIn will use your data to train its AI unless you opt out now

LinkedIn plans to share user data with Microsoft and its affiliates for AI training. Framed as “legitimate interest”, it won’t ask for your permission—instead you’ll have to opt out before the deadline.

Microsoft has made major investments in ChatGPT’s creator OpenAI, and as we know, the more data we feed a Large Language Model (LLM) the more useful answers the AI chatbot can provide. This explains why LinkedIn wants your data, but not how it went about it.

The use of personal data for AI improvements and product personalization always raises privacy concerns, and we would expect a much lower participation rate if users had to sign up for it. The problem in this case is that you are opted in by default, and your data will be used up to the point where you opt out.

To opt out, you should go to your LinkedIn privacy settings:

  • Navigate to Settings & Privacy > Data privacy > Data for Generative AI Improvement.
  • Toggle off Use my data for training content creation AI models.
  • Optionally, file a Data Processing Objection request to formally object. To do this, access the Data Processing Objection Form, select Object to processing for training content-generating AI models, and send a request. Non-members can also file an objection if their personal data was shared on LinkedIn by a member.

You should also review and clean up older or sensitive posts, profiles, or resumes to reduce exposure. Again, opting out only stops future training on new data; it does not retract data already used.

The data LinkedIn might share is pretty extensive:

  • Profile data, which includes your name, photo, current position, past work experience, education, location, skills, publications, patents, endorsements, and recommendations.
  • Job-related data, such as resumes, responses to screening questions, and application details.
  • The content you posted, such as posts, articles, poll responses, contributions, and comments.
  • Feedback, including ratings and responses you provide.

Who is affected and how?

There are some contradictory statements going around about which countries the new update to LinkedIn’s terms applies in. The official statement says members in the EU, EEA, Switzerland, Canada, and Hong Kong have until November 3, 2025, to opt out. (The EEA is the EU plus Iceland, Liechtenstein, and Norway.) Other sources say that UK users are affected as well. We’d advise anyone who has that setting and doesn’t want to participate to turn the “Use my data…” setting off.

Reportedly, a quarter of the over 1 billion LinkedIn users are in the US, so they can provide a lot of valuable data. In the terms update, users in the US are included in the part where it says:

“Starting November 3, 2025, we will share additional data about members in your region with our Affiliate Microsoft so that the Microsoft family of companies can show you more personalized and relevant ads. This data may include your LinkedIn profile data, feed activity data, and ad engagement data; it does not include any data that your settings do not allow LinkedIn to use for ad purposes.”

You can review those settings and adjust them to match how you prefer your data to be handled.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

TikTok is misusing kids’ data, says privacy watchdog

A group of privacy commissioners in Canada has accused TikTok of scooping up information about hundreds of thousands of children who shouldn’t have been on the platform.

The Chinese social media giant is also accused of collecting data on Canadian users without properly explaining what it does with that information, the watchdogs added.

In a report issued last week, the Federal Privacy Commissioner, along with commissioners in British Columbia, Québec and Alberta, accused the service of failing to keep children under 13 off its platform. The service’s terms and conditions prohibit people under that age from using TikTok. From the report:

“The tools implemented by TikTok to keep children off its platform were largely ineffective. This was particularly true in respect of the majority of users who are ‘lurkers’ or ‘passive users’, who view videos on the platform without posting video or text content.”

Inadequate age gates and inappropriate data collection

TikTok relied on a voluntary age gate to keep very young users off the platform. That system simply trusts a person to correctly enter their birth date, the report found.

It used stronger protection, in the form of facial analytics, to stop those under 18 from using its TikTok LIVE live-streaming function. However, when it did use facial analysis, the company didn’t explain to users that it would use that information to determine their age and gender for ads and content recommendations, the privacy commissioners added.

TikTok collects significant data on its users, explained the report. This includes their demographics, interests, and location. A demonstration of its advertising portal even highlighted the possibility of targeting people with ads based on their transgender status. The report said:

“TikTok claimed that this was not supposed to be possible but was unable to explain how or why this option had been available.”

The company also failed to adequately explain to young users how it would use their data. It used the same messaging that it gave to adults, said the privacy commissioners, who added that even that messaging was inadequate. The report added:

“The investigation uncovered that TikTok removes approximately 500,000 underage users from the platform each year. Where these children were engaging with the platform before being removed, TikTok was already collecting, inferring and using information about them to serve them targeted ads and recommend tailored content to them.”

What TikTok has agreed to do

TikTok has disagreed with the commissioners’ findings, but will nevertheless build three new age assurance systems into its service that will be better at keeping underage users off the platform. It will also make its privacy policy clearer about how it targets advertising and recommends content, and how it uses biometric data, and it will publish a plain-language policy for teens.

Finally it will put a ‘Privacy Settings Checkup’ system in place, making it easier for Canadians to review and set their privacy choices.

This isn’t the first time that Canada’s government has clashed with TikTok. It had already ordered TikTok Technology Canada to wind down operations last November, citing national security concerns about its owner ByteDance operating on Canadian soil. This didn’t affect people’s ability to use the app in Canada, though. The move prompted TikTok to challenge the order in federal court.

South of the border, a group of investors including Oracle chair Larry Ellison, Dell Technologies chair Michael Dell, and Rupert and Lachlan Murdoch are negotiating the acquisition of TikTok’s US operation. A successful bid would see the US data stored in Oracle’s Cloud system.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.