IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

TikTok: Major investigation launched into platform’s use of children’s data

TikTok is the subject of yet another major investigation, reports BBC News. This time around, the UK’s Information Commissioner’s Office (ICO) is going to look at how the data of 13 to 17-year-olds feeds the algorithm that decides what further content to show.

The ICO introduced a children’s code for online privacy in 2021, which requires companies to take steps to protect children’s personal information online. Social media platforms use complex algorithms to decide which content will keep users engaged. This approach tends to deliver content that increases in intensity, and can end up serving content that is considered harmful to children.

TikTok has defended itself, saying its recommender systems operate under “strict and comprehensive measures that protect the privacy and safety of teens”. TikTok also said the platform has “robust restrictions on the content allowed in teens’ feeds”.

The ICO said it expects to find that there will be many benign and positive uses of children’s data in TikTok’s algorithm but is concerned about whether these are “sufficiently robust to prevent children being exposed to harm, either from addictive practices on the device or the platform, or from content that they see, or from other unhealthy practices.”

This isn’t TikTok’s first run-in with the ICO. In 2023, the ICO fined TikTok to the tune of $15.6M (£12.7M) for failing to protect 1.4 million UK children under the age of 13 from accessing its platform in 2020. The ICO imposed the fine after finding the company used children’s data without parental consent.

TikTok has been under scrutiny for many reasons in many countries. In the US, its ownership by the Chinese company ByteDance has been a main factor, and many governments have banned TikTok from government devices for that reason.

But the EU has also fined TikTok in the past for violating children’s privacy.

Last year, the Federal Trade Commission (FTC) announced it had referred a complaint against TikTok and parent company ByteDance to the Department of Justice. One of the main issues in that case was TikTok’s failure to get parental consent before collecting personal information from children under 13.

TikTok is not the only platform under investigation by the ICO; the regulator is also looking at the forum site Reddit and the image-sharing site Imgur. For those two, the investigation will focus on the companies’ use of age assurance measures, such as how they estimate or verify a child’s age.

The ICO stated:

“If we find there is sufficient evidence that any of these companies have broken the law, we will put this to them and obtain their representations before reaching a final conclusion.”

Advice for parents

For parents whose children spend a lot of time on social media platforms like TikTok, here are some useful guidelines:

  • Establish rules and limits for social media use. This will be particular to your family and what you feel comfortable with.
  • Make use of built-in parental controls. TikTok, for example, offers Family Pairing, which allows you to manage privacy settings, limit screen time, and set content restrictions.
  • Have regular, open conversations about your child’s online experiences. Show an interest in what they are sharing.
  • Teach your child about the importance of privacy settings and what you think is appropriate online behavior.
  • Teach your child to question sources, consider different perspectives, and be aware of potential biases in what they encounter online.
  • Talk to your child about what makes a good online citizen, including how they treat other people online.
  • Set a good example, so be mindful of your own screen time and online behavior.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

A week in security (February 24 – March 2)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Millions of stalkerware users exposed again

There are many reasons not to use stalkerware, but the risk of getting exposed yourself seems to be a recurring deterrent, according to a new investigation.

As we have reported many times before, stalkerware-type apps are coded so badly that it’s possible to gain access to the back-end databases and retrieve data about everyone who has the app on their device—and those are not just the victims.

Stalkerware is a term used to describe the tools—software programs and mobile apps—that enable someone to secretly spy on another person’s private life via their mobile device. Many stalkerware-type applications market themselves as parental monitoring tools, but they can be and often are used to stalk and spy on a person. A commonly recorded use of stalkerware is in situations of domestic abuse, in which abusers will load these programs onto their partner’s computer or mobile device without their knowledge.

Stalkerware apps are notoriously badly coded and secured. In the past we have written about similar problems with:

  • mSpy, a mobile monitoring app which suffered multiple data breaches.
  • pcTattleTale, another stalkerware app that faced significant security issues. Among others, it was found to upload victim screenshots to an unsecured AWS server.
  • TheTruthSpy, which exposed photographs of children taken by the app on the internet because of poor cybersecurity practices by the app vendor.

As reported by TechCrunch, researchers found a vulnerability in three very similar stalkerware apps called Spyzie, Cocospy, and Spyic. The bug not only exposed data from the victim’s device, such as messages, photos, and location data, but also allowed the researcher to collect 518,643 unique email addresses of Spyzie customers, 1.81 million email addresses of Cocospy customers, and 880,167 email addresses of Spyic customers.

Apparently, the bug is so easy to exploit that TechCrunch and the researcher decided not to reveal any details, since anyone would have been able to abuse it.

Our advice: don’t use stalkerware

If you are thinking about installing such an app, and you are reading this:

  1. Don’t!
  2. It is illegal in almost every country, unless it’s done with government consent or to monitor your children (and even then, the rules can be murky).
  3. We have never heard of anyone who was able to solve a problem by using stalkerware. Usually resorting to stalkerware only makes the problems worse.
  4. Consider the consequences of someone finding out what you did, and remember that this is a very real possibility.
  5. Listen to this podcast.

Malwarebytes, as one of the founding members of the Coalition Against Stalkerware, makes it a priority to detect and remove stalkerware from your device. It is good to keep in mind however that by removing any stalkerware-type app, you will alert the person spying on you that you know the app is there. If you are facing domestic abuse, we recommend that you first develop a safety plan with an organization like National Network to End Domestic Violence before removing any stalkerware-type app from your device.

Stalkerware apps are usually hidden or camouflaged as other apps, so to find them on your phone, we recommend scanning with an anti-malware app that is able to identify stalkerware.

Malwarebytes also provides a free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.

PayPal’s “no-code checkout” abused by scammers

We recently identified a new scam targeting PayPal customers with very convincing ads and pages. Crooks are abusing both Google and PayPal’s infrastructure in order to trick victims calling for assistance to speak with fraudsters instead.

Combining official-looking Google search ads with specially crafted PayPal pay links makes this scheme particularly dangerous on mobile devices, given their limited screen size and the likelihood that they lack security software.

Overview

Scammers are creating ads impersonating PayPal from various advertiser accounts that may have been hacked. The ad displays the official website for PayPal, yet is completely fraudulent.

A weakness within Google’s policies for landing pages (also known as final URLs) allows anyone to impersonate popular websites, so long as the landing page and display URL (the webpage shown in an ad) share the same domain.
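To see why such ads slip through, here is a minimal, hypothetical sketch of the kind of same-domain comparison the policy implies. The function name and the simplified hostname handling are our own illustration, not Google’s actual implementation:

```python
from urllib.parse import urlparse

def same_domain(display_url: str, final_url: str) -> bool:
    """Sketch of a 'same domain' policy check: compare hostnames only,
    ignoring a leading 'www.'. A real check would use a public suffix
    list, but the weakness is the same: the path is never inspected."""
    def host(url: str) -> str:
        h = urlparse(url).netloc.lower()
        return h[4:] if h.startswith("www.") else h
    return host(display_url) == host(final_url)

# A scammer-controlled no-code pay link lives on paypal.com itself,
# so an ad displaying "paypal.com" passes the check.
print(same_domain("https://www.paypal.com",
                  "https://www.paypal.com/ncp/payment/EXAMPLE123"))  # True
```

Because the pay link’s path is scammer-controlled while its domain is genuinely paypal.com, nothing in a domain-only comparison can flag it.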


The page victims are directed to is known as a “no-code checkout pay link”. This is a feature PayPal promotes to enable merchants to have a simple and yet secure option to take payments:

Small businesses that want to accept payments online or in person can set up pay links, buttons, and QR codes to accept payments on the website. You don’t need a developer, coding knowledge, or a website to accept payments.

Essentially, crooks are abusing this feature to create a bogus pay link. They can customize the page by creating various fields with text designed to trick users, such as promoting a fraudulent phone number as “PayPal Assistance”.

Mobile experience

Phones are the best medium for this type of scam due to the device’s constraints, but more than anything because that’s how victims will get in touch with bogus tech support agents.

In a screenshot taken on an iPhone during our investigation, the top sponsored result from a Google search was impersonating PayPal. We often encountered more than one malicious ad, although the others redirected to different kinds of pages and did not abuse the same scheme.

Due to the reduced screen size, users must scroll past the ads and the AI Overview to see organic search results. This is no coincidence, of course, and is why search advertising is worth billions of dollars.


Screen size plays a factor again when users click on the ad: the browser’s address bar correctly identifies the site as “paypal.com”. As we saw above, pay links are on the same domain as paypal.com, from which they inherit trust.

We did not follow up with the provided phone number; however, we believe it likely ends with victims handing over their personal information to scammers and getting fleeced.

Conclusion

Tech support scammers are like vultures circling above the most popular Google search terms, especially when it comes to any kind of online assistance or customer service.

We saw how easy it is to get an ad that mimics an official brand, as long as the destination URL is on the same domain as the ad URL. The rest is just a matter of creativity on the part of scammers, who forcefully inject their lures via spam, search queries, shopping lists, and more.

Whenever looking up an official phone number or website, it is safer to scroll past the ads and choose a more trusted organic link. There are also security solutions that can block ads and malicious links, such as Malwarebytes for mobile devices.

We have reported this campaign to Google and PayPal, but urge caution as new ads using the same trick are still appearing.


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Indicators of Compromise

Archived example:

https://urlscan.io/result/3ea0654e-b446-4947-b926-b549624aa8b0

Malicious pay links:

hxxps[://]www[.]paypal[.]com/ncp/payment/8X7JHDGLK9Z46
hxxps[://]www[.]paypal[.]com/ncp/payment/7QUEXNXR84X3L
hxxps[://]www[.]paypal[.]com/ncp/payment/BHR4AMJAPWNZW
hxxps[://]www[.]paypal[.]com/ncp/payment/FTJBPVUQFEJM6
hxxps[://]www[.]paypal[.]com/ncp/payment/2X92RZVSG8MUJ
hxxps[://]www[.]paypal[.]com/ncp/payment/D8X74WYAM3NJJ

Scammers’ phone numbers:

1-802[-]309-1950
1-855[-]659-2102
1-844[-]439-5160
1-800[-]782-3849
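The indicators above are defanged so they can’t be clicked or dialed by accident. A small helper like the following (a sketch that assumes only the bracket conventions used in this list) can refang them when you need the live values for blocklists or lookups:

```python
def refang(ioc: str) -> str:
    """Restore a defanged indicator to its live form.
    Handles hxxp(s), [://], [.] and [-] as used in the list above."""
    return (ioc.replace("hxxp", "http")   # turns hxxps into https too
               .replace("[://]", "://")
               .replace("[.]", ".")
               .replace("[-]", "-"))

print(refang("hxxps[://]www[.]paypal[.]com/ncp/payment/8X7JHDGLK9Z46"))
# https://www.paypal.com/ncp/payment/8X7JHDGLK9Z46
```

The same function handles the phone numbers, since they only use the `[-]` convention.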

Countries and companies are fighting at the expense of our data privacy

Data privacy issues are a hot topic in a world where we apparently don’t know who to trust anymore.

A few weeks ago, we reported how the UK had secretly ordered Apple to provide blanket access to protected cloud backups around the world. This week, Apple decided to pull the plug on Advanced Data Protection (ADP) for UK users.

ADP is an opt-in data security tool designed to provide Apple users a more secure way to protect data stored in their iCloud accounts. Enabling ADP would ensure that even Apple could not access the data, which would mean that Apple was unable to hand over any information to law enforcement.

Something similar happened when Sweden’s law enforcement and security agencies started to push legislation which would force Signal and WhatsApp to create technical backdoors, allowing them to access communications sent over the encrypted messaging apps. The proposed bill prescribes that companies like Signal and WhatsApp need to store all messages sent using the apps.

President of the Signal Foundation, Meredith Whittaker, told Swedish SVT News that the company will leave Sweden if the bill becomes a reality.

So basically, by seeking to obtain encryption backdoors, which are not likely to remain exclusive, these governments are undermining the data privacy options of their citizens. A backdoor can and will eventually be found by those that we absolutely didn’t want to snoop around in our backups and chat logs.

It doesn’t just affect the countries at the heart of the request. The US director of national intelligence is reportedly going to investigate whether the UK broke a bilateral agreement by issuing the order that would allow the British to access backups of data in the company’s encrypted cloud storage systems.

The bilateral agreement in question is the Clarifying Lawful Overseas Use of Data Act (CLOUD Act) Agreement, which—among other things—bars the UK from issuing demands for the data of US citizens and vice versa. The CLOUD Act primarily allows federal law enforcement to compel US-based technology companies via warrant or subpoena to provide requested data stored on servers regardless of whether the data is stored in the US or on foreign soil. Provisions in the act state that the United Kingdom may not issue demands for data of US citizens, nationals, or lawful permanent residents, nor is it authorized to demand the data of persons located inside the US.

Globally, governments and law enforcement agencies continue to seek more control over data through new legislation and rules. States regulate data to protect national security and provide domestic firms access to user (and anonymous) data to boost competitiveness, ease law enforcement’s qualms when accessing data, and ward off foreign surveillance.

But at the end of the day, the criminals that law enforcement agencies are after will end up using expensive private phone networks, and the general population will be left with tools that have been broken on purpose. Backdoors that have been created will be waiting for cybercriminals to find the cracks and access our “encrypted” data.

Meanwhile, privacy is recognized as a universal human right, while data protection is not. And it should be. Even if we think we “have nothing to hide,” cybercriminals will find a way to use that data against us, if only to make their phishing attempts more credible. Not to mention trade and economic secrets that could fall into the hands of competitors or “unfriendly” nations.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Roblox called “real-life nightmare for children” as Roblox and Discord sued

Last week it was reported that a lawsuit has been initiated against gaming giant Roblox and leading messaging platform Discord. 

The court action—charging them with the facilitation of child predators and misleading parents into believing the platforms are safe to use for their children—centers around a 13-year-old plaintiff who was targeted by a predator on these platforms.  

The papers filed at court in San Mateo outline that the plaintiff joined Roblox and Discord in 2023 after his father conducted thorough research and believed the platforms to be safe for children. 

Attorneys said the plaintiff’s parents “(…) learned the truth (of this) only after it was too late.”  

In 2024, the parents discovered their child had been groomed into sending explicit pictures and videos of himself to a 27-year-old man. The situation quickly escalated to the point that the plaintiff and his family had to move across the country after the predator discovered the boy’s location through Roblox. 

The suit alleges that both Roblox and Discord are aware of how easily predators can target children through their platforms by grooming and manipulating children into sending explicit material but have failed to provide adequate safety measures to protect minors from such exploitation.  

Anna Marie Murphy, an attorney with California-based law firm Cotchett, Pitre and McCarthy, said:

“Both Roblox and Discord, we allege, were negligent in the services they provided to our client, a 13-year-old boy. The predator used a function on Roblox called a whisper function that allowed him, as a complete stranger, to be in the game with our client and send a direct message. That was the first contact, and there was a request for a naked picture.”

Anapol Weiss, the other firm joining the case, explained more in a press release outlining its reasons for joining the suit:

“Roblox’s expansive and unsupervised ecosystem created an environment where predators and harmful content thrived and continue to thrive”.

The suit also alleges that the companies’ lax safety standards led to the plaintiff sending explicit material in exchange for “Robux,” Roblox’s in-game currency, stating:

“Roblox’s success and continued growth has hinged on its constant assurances to parents that its app is safe for children. In reality, Roblox is a digital and real-life nightmare for children.” 

“What happened is far from an isolated event. The plaintiff is but one of countless children whose lives have been devastated because of Roblox and Discord’s misrepresentations and defectively designed apps,” Murphy commented.

This is not the first time these companies have been sued for allegedly enabling predators to exploit minors on their platforms. In 2022, they were named in a lawsuit filed in San Francisco Superior Court, where the companies were accused of misrepresenting their platforms as safe for children while allowing predators to exploit a young girl.

A report by corporate investigation firm Hindenburg Research, released in October 2024 and referenced in the statement released by Anapol Weiss, further alleged that Roblox has continued to prioritize profit and attractiveness to investors above online safety and the protection of underage users, exposing children to an “x rated hellscape, grooming, pornography, violent content and abusive speech” despite claims to be working towards cleaning up the platform.

As this case unfolds, it highlights the ongoing concerns about child safety in online gaming and messaging platforms, emphasizing the need for stronger protective measures and parental vigilance in the digital age. 


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Android happy to check your nudes before you forward them

Sometimes the updates we install to keep our devices safe do a little bit more than we might suspect at first glance. Take the October 2024 Android Security Bulletin.

It included a new service called Android System SafetyCore. If you can find a mention of that in the security bulletin, you’re a better reader than I am. It wasn’t until a few weeks later that a Google security blog titled 5 new protections on Google Messages to help keep you safe revealed that one of the new protections was designed to introduce Sensitive Content Warnings for Google Messages.

Sensitive Content Warnings is an optional feature that blurs images that may contain nudity before viewing. When an image that may contain nudity is about to be sent or forwarded, it reminds users of the risks of sending nude imagery and helps prevent accidental shares.

Wait! What?

Yes, there is now a service on my phone that checks whether my pictures are “decent enough” to send or share. I’m not oblivious to the many users who would be better off with such a service, but I’m not so sure they’d appreciate it.

However, what really concerned me is the fact that Google is looking at my incoming and outgoing pictures. I use end-to-end encrypted (E2EE) messaging for a reason: the content is for me and the receiver, and no one else. And that definitely includes Google.

But, again, there was no mention of SafetyCore in that blog, or of what would provide the Sensitive Content Warnings feature with the necessary data.

So, you can imagine the surprise and outrage when users found this service, which doesn’t show up in the regular list of running applications yet has permissions to do almost anything on the device.

And once you look up what this app called SafetyCore is all about, all of the above starts to make sense.

The Google Play Store listing says:

“SafetyCore is a Google system service for Android 9+ devices. It provides the underlying technology for features like the upcoming Sensitive Content Warnings feature in Google Messages that helps users protect themselves when receiving potentially unwanted content. While SafetyCore started rolling out last year, the Sensitive Content Warnings feature in Google Messages is a separate, optional feature and will begin its gradual rollout in 2025. The processing for the Sensitive Content Warnings feature is done on-device and all of the images or specific results and warnings are private to the user.”

Google goes on to reassure:

  • The developer says that this app doesn’t collect or share any user data. 
  • The developer says that this app doesn’t share user data with other companies or organizations. 
  • The developer says that this app doesn’t collect user data.
  • The developer has committed to follow the Play Families policy for this app. 

Google promises that it only rates our pictures and does not collect or share them, but this feature has artificial intelligence (AI) written all over it. As we all know, an AI needs to be trained, and training an AI locally on your phone is hardly an option. I wish my phone had the necessary power, but it doesn’t.

I for one don’t see how the benefits the feature offers justify the secret installation of the service. But obviously everyone is entitled to their own opinion, and the device is yours to do with as you please.

How to uninstall or disable SafetyCore

The good people at ZDNet provided instructions on how to get rid of SafetyCore or disable it if you would like to do so.

So, if you wish to uninstall or disable SafetyCore, take these steps:

  1. Open Settings: Go to your device’s Settings app
  2. Access Apps: Tap on ‘Apps’ or ‘Apps & Notifications’
  3. Show System Apps: Select ‘See all apps’ and then tap on the three-dot menu in the top-right corner to choose ‘Show system apps’
  4. Locate SafetyCore: Scroll through the list or search for ‘SafetyCore’ to find the app
  5. Uninstall or Disable: Tap on Android System SafetyCore, then select ‘Uninstall’ if available. If the uninstall option is grayed out, you may only be able to disable it
  6. Manage Permissions: If you choose not to uninstall the service, you can also check and try to revoke any SafetyCore permissions, especially internet access

Note: depending on the software version and manufacturer of your device, these instructions may be slightly off. I personally couldn’t test them because my Samsung hasn’t received the October patch yet due to patch gaps.

If you’d like to learn more about AI and encrypted messaging, we recommend listening to our podcast The new rules for AI and encrypted messaging, with Mallory Knodel (Lock and Code S06E01).


We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Background check provider data breach affects 3 million people who may not have heard of the company

Employment screening company DISA Global Solutions has filed a data breach notification after a cyber incident on their network.

DISA says a third party had access to its environment between February 9, 2024, and April 22, 2024. The attacker may have accessed over three million files containing personal information.

DISA is a third-party administrator of employment screening services, including drug and alcohol testing and background checks. DISA discovered the breach on April 22, 2024, and has since conducted an investigation with the help of third-party forensic experts.

This is one of those cases where a company most people have never heard of has amassed a mountain of information about many people. These data brokers gather information from several sources and sell it on to interested buyers. DISA provides its services to over 55,000 companies.

During the investigation, DISA was unable to determine the specifics of the stolen data, but everyone whose data may have been compromised will get a detailed breach notification letter, specifying the type of data.

This letter will also include details about free access to 12 months of credit monitoring and identity restoration services through Experian, for which you must enroll by June 30, 2025.

Given the field that DISA is active in, that information could interest cybercriminals for use as background information for targeted phishing attempts or extortion. The Massachusetts breach report tracker shows that at least some Social Security numbers were involved:

SSN Breached: yes

DISA states that it’s not aware of any attempts to abuse the stolen information:

“While we are unaware of any attempted or actual misuse of any information involved in this incident, we are providing you with information about the incident and steps you can take to protect yourself, should you feel it necessary.”

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

Check your digital footprint

Malwarebytes has a free tool for you to check how much of your personal data has been exposed online. Submit your email address (it’s best to give the one you most frequently use) to our free Digital Footprint scan and we’ll give you a report and recommendations.

Predatory app downloaded 100,000 times from Google Play Store steals data, uses it for blackmail

A malicious app claiming to be a financial management tool has been downloaded 100,000 times from the Google Play Store. The app— known as “Finance Simplified”—belongs to the SpyLoan family which specializes in predatory lending.

Sometimes malware creators manage to get their apps listed in the official app store. This is a great benefit for them since it lends a sense of legitimacy to the app, and they don’t have to convince users to sideload the app from an unofficial site.

It gives them a much larger audience, lets them lean on the trust we place in official app stores, and means users don’t have to do anything they might perceive as suspicious.

While Google has enhanced security measures in place—including AI-powered threat detection and real-time scanning—designed to detect and block malicious apps more effectively, the cat-and-mouse game between cybercriminals and security measures continues, with each side trying to outsmart the other.

In this case, the loan app evaded detection on Google Play by loading a WebView that redirected users to an external website, from which they could download the app, hosted on an Amazon EC2 server.

Predatory lending is any lending practice where the borrower is taken advantage of by the lender. Predatory lenders impose lending terms that are unfair or abusive.

The apps in the SpyLoan family offer attractive loan terms with virtually no background checks. But once installed, they steal information from the victim’s device that can be used to blackmail the victim, especially when they miss payments on the loan.

The stolen information includes contacts, call logs, text messages, photos, and the device’s location.

Although the app has now been removed from Google Play, it may continue to run on affected devices, collecting sensitive information in the background.

The researchers found that the app only targets users in India with the recommended loan applications and the redirect to an external website.

The information stolen from users could well be used for malicious purposes or sold to other cybercriminals.

Losing data related to a financial account can have severe consequences. If you find an app from this family or another information stealer on your device, there are a few guidelines to follow to limit the damage:

  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

Surveillance pricing is “evil and sinister,” explains Justin Kloczko (Lock and Code S06E04)

This week on the Lock and Code podcast…

Insurance pricing in America makes a lot of sense so long as you’re one of the insurance companies. Drivers are charged more for traveling long distances, having low credit, owning a two-seater instead of a four, being on the receiving end of a car crash, and—increasingly—for any number of non-determinative data points that insurance companies use to assume higher risk.

It’s a pricing model that most people find distasteful, but it’s also a pricing model that could become the norm if companies across the world begin implementing something called “surveillance pricing.”

Surveillance pricing is the term used to describe companies charging people different prices for the exact same goods. That 50-inch TV could be $800 for one person and $700 for someone else, even though the same model was bought from the same retail location on the exact same day. Or, airline tickets could be more expensive because they were purchased from a more expensive device—like a Mac laptop—and the company selling the airline ticket has decided that people with pricier computers can afford pricier tickets.

Surveillance pricing is only possible because companies can collect enormous arrays of data about their consumers and then use that data to charge individual prices. A test prep company was once caught charging customers more if they lived in a neighborhood with a higher concentration of Asians, and a retail company was caught charging customers more if they were looking at prices on the company’s app while physically located in a store’s parking lot.

This matter of data privacy isn’t some invisible invasion online, and it isn’t some esoteric framework of ad targeting. This is you paying the most that a company believes you will, for everything you buy.

And it’s happening right now.

Today, on the Lock and Code podcast with host David Ruiz, we speak with Consumer Watchdog Tech Privacy Advocate Justin Kloczko about where surveillance pricing is happening, what data is being used to determine prices, and why the practice is so nefarious.  

“It’s not like we’re all walking into a Starbucks and we’re seeing 12 different prices for a venti mocha latte,” said Kloczko, who recently authored a report on the same subject. “If that were the case, it’d be mayhem. There’d be a revolution.”

Instead, Kloczko said:

“Because we’re all buried in our own devices—and this is really happening on e-commerce websites and online, on your iPad, on your phone—you’re kind of siloed in your own world and companies can get away with this.”

Tune in today for the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.