IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

A week in security (October 14 – October 20)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

Unauthorized data access vulnerability in macOS is detailed by Microsoft

The Microsoft Threat Intelligence team disclosed details about a macOS vulnerability, dubbed “HM Surf,” that could allow an attacker to gain access to the user’s data in Safari. The data the attacker could access without users’ consent includes browsed pages, along with the device’s camera, microphone, and location.

The vulnerability, tracked as CVE-2024-44133, was fixed in the September 16 update for Mac Studio (2022 and later), iMac (2019 and later), Mac Pro (2019 and later), Mac mini (2018 and later), MacBook Air (2020 and later), MacBook Pro (2018 and later), and iMac Pro (2017 and later).

It is important to note that this vulnerability only impacts devices managed through Mobile Device Management (MDM). MDM-managed devices are typically subject to centralized management and security policies set by the organization’s IT department.

By exploiting this vulnerability, an attacker could bypass the macOS Transparency, Consent, and Control (TCC) technology and gain unauthorized access to a user’s protected data.

Users may notice Safari’s TCC in action when they browse a website that requires access to the camera or the microphone. They may see a prompt like this one:

Safari TCC prompt
Image courtesy of Microsoft

What Microsoft discovered is that Safari maintains its own separate TCC policy, stored in various local files.

Microsoft then figured out it was possible to modify those sensitive files by swapping the current user’s home directory back and forth. The files are protected by TCC while they sit in the real home directory, but an attacker can point the home directory elsewhere, modify the now-unprotected files, and then restore the home directory, after which Safari will use the modified files.
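Conceptually, the flaw boils down to a permission check that locates its policy files via the current home directory, which the user can redirect. The Python sketch below is a simulation of that design weakness, not the actual exploit; the file and service names are invented:

```python
import json
import tempfile
from pathlib import Path

def tcc_allows(service: str, home: Path) -> bool:
    """Pretend TCC check: read a per-user policy file found under `home`.
    The check is only as trustworthy as the home directory itself."""
    policy_file = home / "PerSitePolicy.json"  # hypothetical file name
    if not policy_file.exists():
        return False
    policy = json.loads(policy_file.read_text())
    return policy.get(service, False)

real_home = Path(tempfile.mkdtemp())      # the "protected" real home directory
attacker_dir = Path(tempfile.mkdtemp())   # an attacker-controlled directory

# The real policy denies camera access; the attacker's copy grants it.
(real_home / "PerSitePolicy.json").write_text(json.dumps({"camera": False}))
(attacker_dir / "PerSitePolicy.json").write_text(json.dumps({"camera": True}))

print(tcc_allows("camera", real_home))     # the real policy holds
print(tcc_allows("camera", attacker_dir))  # swapping $HOME swaps the policy
```

The fix, broadly, is to not derive a security decision from a location the user can redefine.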

The exploit only works on Safari because third-party browsers such as Google Chrome, Mozilla Firefox, or Microsoft Edge do not have the same private entitlements as Apple applications. Therefore, those apps can’t bypass the macOS TCC checks.

Microsoft noted that it observed suspicious activity in the wild, associated with the Adload adware, that might be exploiting this vulnerability, but it could not be entirely sure whether the exact same exploit was used.

“Since we weren’t able to observe the steps taken leading to the activity, we can’t fully determine if the Adload campaign is exploiting the HM surf vulnerability itself. Attackers using a similar method to deploy a prevalent threat raises the importance of having protection against attacks using this technique.”

We encourage macOS users to apply these security updates as soon as possible if they haven’t already.


Malwarebytes for Mac takes out malware, adware, spyware, and other threats before they can infect your machine and ruin your day. It’ll keep you safe online and your Mac running like it should.

23andMe will retain your genetic information, even if you delete the account

Deleting your personal data from 23andMe is proving to be hard.

There are good reasons for people to want to delete their data from 23andMe. The DNA testing platform has a lot of problems, so let’s start with a recap.

A little over a year ago, cybercriminals put up information belonging to as many as seven million 23andMe customers for sale on criminal forums following a credential stuffing attack against the genomics company.

In December 2023, we learned that the attacker was able to directly access the accounts of roughly 0.1% of 23andMe’s users, about 14,000 of its 14 million customers. That is a relatively small number of directly breached accounts, but with those accounts at their disposal, the attacker used 23andMe’s opt-in DNA Relatives (DNAR) feature—which matches users with their genetic relatives—to access information about millions of other users.
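To see how a small number of breached accounts can expose data on far more people, consider a toy model of the DNA Relatives matching graph. All names and connections below are made up; the point is only that each breached account reveals the profile data of everyone it is matched with:

```python
# Hypothetical relatives graph: account -> list of DNAR matches.
relatives = {
    "alice": ["bob", "carol", "dan"],
    "bob": ["alice", "erin"],
    "carol": ["alice", "frank", "grace"],
    "dan": ["alice"],
    "erin": ["bob"],
    "frank": ["carol"],
    "grace": ["carol", "heidi"],
    "heidi": ["grace"],
}

def exposed_profiles(breached_accounts: set) -> set:
    """Profiles visible to the attacker: each breached account reveals
    its own data plus that of every relative it is matched with."""
    exposed = set(breached_accounts)
    for account in breached_accounts:
        exposed.update(relatives.get(account, []))
    return exposed

# A single breached account exposes data on several other users.
print(sorted(exposed_profiles({"alice"})))
```

At the scale of a real relatives network, this fan-out is how 14,000 breached accounts translated into data on millions of users.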

For a subset of these accounts, the stolen data contained health-related information based upon the user’s genetics.

In January 2024, 23andMe had the audacity to lay the blame at the feet of victims themselves in a letter to legal representatives of victims. 23andMe reasoned that the customers whose data was directly accessed re-used their passwords, gave permission to share data with other users on 23andMe’s platform, and that the medical information was non-substantive.

And in September 2024, we found out that the company would pay $30 million to settle a class action lawsuit, reportedly all that 23andMe could afford to pay, and only because it expected cyberinsurance to cover $25 million of that sum.

As a result, the value of 23andMe plummeted. And last month the company said goodbye to all of its board members except CEO Anne Wojcicki, who stood by her plans to take the company private.

This uncertainty about the future of the company and, with that, who will be the future holder of all the customer personal information, has caused a surge of users looking to close their accounts and delete their data.

However, it turns out it’s not as easy as just asking for the data to be removed. You can delete your data from 23andMe, but the company says it will retain some of that data (including genetic information) to comply with its legal obligations, according to its privacy policy.

“23andMe and/or our contracted genotyping laboratory will retain your Genetic Information, date of birth, and sex as required for compliance with applicable legal obligations, including the federal Clinical Laboratory Improvement Amendments of 1988 (CLIA), California Business and Professions Code Section 1265 and College of American Pathologists (CAP) accreditation requirements, even if you chose to delete your account. 23andMe will also retain limited information related to your account and data deletion request, including but not limited to, your email address, account deletion request identifier, communications related to inquiries or complaints and legal agreements for a limited period of time as required by law, contractual obligations, and/or as necessary for the establishment, exercise or defense of legal claims and for audit and compliance purposes.”

In addition, any information you previously provided and consented to be used in 23andMe research projects cannot be removed from ongoing or completed studies, although the company says it will not use it in any future ones.

This is unfortunate, and is yet another reminder about how once you give information away you cannot always get it back. Let’s hope the policy gets changed and customers are allowed to fully delete their data soon.

It’s still worth deleting as much as possible, though. So here’s how to do that.

How to delete (most of) your data from 23andMe

  • Log into your account and navigate to Settings.
  • Under Settings, scroll to the section titled 23andMe data. Select View.
  • It will ask you to enter your date of birth for extra security. 
  • In the next section, you’ll be asked which personal data, if any, you’d like to download from the company (onto a personal, not public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  • You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically, and you’ll immediately lose access to your account. 

When you set up your 23andMe account, you had the options to either have the saliva sample that you sent to them securely destroyed or to have it stored for future testing. If you chose to store your sample but now want to delete your 23andMe account, the company says it will destroy the sample for you as part of the account deletion process.

Check your digital footprint

If you want to find out whether your personal data was exposed in the 23andMe breach, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you used to register at 23andMe) and we’ll send you a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

“Nudify” deepfake bots remove clothes from victims in minutes, and millions are using them

Millions of people are turning normal pictures into nude images, and it can be done in minutes.

Journalists at Wired found at least 50 “nudify” bots on Telegram that claim to create explicit photos or videos of people with only a couple of clicks. Combined, these bots have millions of monthly users. Although there is no sure way to determine how many unique users that represents, it’s appalling, and it’s highly likely there are many more bots than the ones Wired found.

The history of nonconsensual intimate image (NCII) abuse—as the use of explicit deepfakes without consent is often called—started near the end of 2017. Motherboard (now Vice) found an online video in which the face of Gal Gadot had been superimposed on an existing pornographic video to make it appear that the actress was engaged in the acts depicted. The technique takes its name from the username of the person who claimed responsibility for the video: “deepfakes.”

Since then, deepfakes have gone through many developments. It all started with face swaps, where users put the face of one person onto the body of another person. Now, with the advancement of AI, more sophisticated methods like Generative Adversarial Networks (GANs) are available to the public.

However, most of the uncovered bots don’t use this advanced type of technology. Some of the bots on Telegram are “limited” to removing clothes from existing pictures, an extremely disturbing act for the victim.

These bots have become a lucrative source of income. Using such a Telegram bot usually requires a certain number of “tokens” to create images. Of course, cybercriminals have also spotted opportunities in this emerging market and are operating bots that are non-functional or render low-quality images.

Besides being disturbing, the use of AI to generate explicit content is costly, offers no guarantees of privacy (as we saw the other day when AI Girlfriend was breached), and can even get you infected with malware.

The creation and distribution of explicit nonconsensual deepfakes raises serious ethical issues around consent, privacy, and the objectification of women, let alone the creation of child sexual abuse material. Italian scientists found explicit nonconsensual deepfakes to be a new form of sexual violence, with potential long-term psychological and emotional impacts on victims.

To combat this type of sexual abuse there have been several initiatives:

  • The US has proposed legislation in the form of the Deepfake Accountability Act. Combined with the recent policy change by Telegram to hand over user details to law enforcement in cases where users are suspected of committing a crime, this could slow down the use of the bots, at least on Telegram.
  • Some platform policies (e.g. Google banned involuntary synthetic pornographic footage from search results).

However, so far these steps have shown no significant impact on the growth of the market for NCIIs.

Keep your children safe

We’re sometimes asked why it’s a problem to post pictures on social media that can be harvested to train AI models.

We have seen many cases where social media and other platforms have used the content of their users to train their AI. Some people have a tendency to shrug it off because they don’t see the dangers, but let us explain the possible problems.

  • Deepfakes: AI generated content, such as deepfakes, can be used to spread misinformation, damage your reputation or privacy, or defraud people you know.
  • Metadata: Users often forget that the images they upload to social media also contain metadata like, for example, where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer didn’t intend.
  • Intellectual property: Never upload anything you didn’t create or own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
  • Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
  • Facial recognition: Although facial recognition is not the hot topic it once was, it still exists, and actions or statements attributed to your image (real or not) may be linked to you as a person.
  • Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.

If you want to continue using social media platforms that is obviously your choice, but consider the above when uploading pictures of you, your loved ones, or even complete strangers.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Tor Browser and Firefox users should update to fix actively exploited vulnerability

Mozilla has announced a security fix for its Firefox browser which also impacts the closely related Tor Browser.

The new version fixes one critical security vulnerability which is reportedly under active exploitation. To address the flaw, both Mozilla and Tor recommend that users update their browsers to the most current versions available.

Firefox users who have automatic updates enabled should have the new version available as soon as, or shortly after, they open the browser. Once you’re updated, your version number will be 131.0.3 or higher.
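If you’d rather check a version programmatically, compare numeric tuples instead of raw strings, since text comparison would put "9" after "131". A small sketch:

```python
def parse_version(v: str) -> tuple:
    """Turn a dotted version string like '131.0.3' into a comparable tuple."""
    return tuple(int(part) for part in v.split("."))

def is_patched(installed: str, fixed_in: str = "131.0.3") -> bool:
    """True if the installed version is at or above the fixed release."""
    return parse_version(installed) >= parse_version(fixed_in)

print(is_patched("131.0.3"))  # True
print(is_patched("131.0.2"))  # False
print(is_patched("132.0"))    # True: (132, 0) > (131, 0, 3)
```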

Other users can update their browser by following these instructions:

  • Click the menu button (3 horizontal stripes) at the right side of the Firefox toolbar, go to Help, and select About Firefox/Tor Browser. The About Mozilla Firefox/About Tor Browser window will open.
  • Firefox/Tor Browser will check for updates automatically. If an update is available, it will be downloaded.
  • You will be prompted when the download is complete, then click Restart to update Firefox/Tor Browser.

To update the Tor Browser you have to Connect first, or it will fail to fetch the update. The latest version of Tor Browser is 13.5.7.

Tor Browser is up to date
Version number should be 13.5.7 or higher

The vulnerability, tracked as CVE-2024-9680, allows attackers to execute malicious code within the browser’s content process, which is the environment where it loads and renders web content.

About the vulnerability, Mozilla said:

“An attacker was able to achieve code execution in the content process by exploiting a use-after-free in Animation timelines. We have had reports of this vulnerability being exploited in the wild.”

Use-after-free (UAF) is a type of vulnerability that results from the incorrect use of dynamic memory during a program’s operation. If, after freeing a memory location, a program does not clear the pointer to that memory, an attacker can use the error to manipulate the program.
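Python manages memory for you, so it cannot exhibit a real use-after-free, but a toy allocator can illustrate the aliasing at the heart of the bug: a stale reference to a freed slot ends up reading whatever was allocated there next.

```python
class ToyHeap:
    """Toy allocator that reuses freed slots, like real heap allocators do."""

    def __init__(self):
        self.slots = []        # simulated memory
        self.free_list = []    # indices available for reuse

    def alloc(self, value):
        if self.free_list:               # reuse a freed slot first
            idx = self.free_list.pop()
            self.slots[idx] = value
            return idx
        self.slots.append(value)
        return len(self.slots) - 1

    def free(self, idx):
        # Slot is marked free, but old "pointers" to idx still dangle.
        self.free_list.append(idx)

heap = ToyHeap()
p = heap.alloc("animation timeline object")
heap.free(p)                                 # the program frees the object...
q = heap.alloc("attacker-controlled data")   # ...and the attacker reallocates the slot

# The stale "pointer" p now reads the attacker's data: the essence of UAF.
print(heap.slots[p])
```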

The AnimationTimeline interface of the Web Animations Application Programming Interface (API) represents the timeline of an animation, where the timeline is a source of time values for synchronization purposes.

Exploitation is said to be relatively easy, requires no user interaction, and can be executed over the network.



AI scammers target Gmail accounts, say they have your death certificate

Several reputable sources are warning about a very sophisticated Artificial Intelligence (AI) supported type of scam that is bound to trick a lot of people into compromising their Gmail account.

The most recent warning comes from Y Combinator CEO Garry Tan, who posted on X saying that the scammers, using AI voices, tell you someone has issued a death certificate for you and is trying to recover your account.

The scammers claim to be checking that you are alive and whether they should disregard a filed death certificate. If you click “Yes, it’s me” on the fake account recovery screen then you’ll likely lose access to your Google account.

In another recent example, Windows expert Sam Mitrovic was targeted by a very similar AI recovery scam.

He explained how the scam unfolds: It starts when he receives a notification of an alleged Gmail account recovery attempt, followed 40 minutes later by a call. The first time Sam misses the call, but when they try the same thing a week later, Sam answers.

In both cases, the notifications come from the US but the calls show “Google Sydney” as the caller. A polite American voice claims there’s been suspicious activity on Sam’s Gmail account and asks whether Sam was travelling.

The caller says there’s been a login attempt from Germany which raises suspicions, given that Sam is at home in the US. The caller says the login has been successful, and that an attacker has had access to Sam’s account for a week and downloaded account data.

Sam remembers the email and missed call from last week, and has the presence of mind to quickly check the caller ID. It looks like a legitimate Google Assistant number.

But knowing how easy it is to spoof a telephone number and pretend to be calling from that number, Sam asks for an email to confirm that the caller actually works for Google. Some typing against the typical background noises of a call center and soon enough the email arrives.

Confirmation mail sent by the attacker to prove they are working for the Google Account Security Team
Image courtesy of Sam Mitrovic

The email looks convincing. It comes from a Google domain, has a case number, claims to be from the Google Account Security Team, and it confirms the phone number and the name the caller is using.

While Sam reviews the email, the caller repeatedly says “Hello”. From the pronunciation and the spacing Sam realizes it’s an AI voice and hangs up.

Inspecting the email, Sam found that the scammers are using the legitimate Salesforce CRM (customer relationship management) tool, which allows you to set the sender address to whatever you like and send over Gmail/Google servers.

Other targets who took the scam a little further were asked to verify their 2FA codes, so it stands to reason that the scammers are looking to take over your Google account, but this time for real.

The need to confirm an account recovery, or a password reset, is a notorious method used in phishing attacks. The phishers usually try to trick the target into opening a fake login portal, where the target must enter their credentials to report the request as one they did not initiate.

Is it you trying to recover your account?
Prompt asking: Is it you trying to recover your account?

How to stay safe

There are a few signs you can use to identify this type of scam.

The “To” field of the confirmation email Sam received contains an email address cleverly named GoogleMail[@]InternalCaseTracking[.] com, which is a non-Google domain.
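A simple programmatic version of this check looks at the actual sending domain rather than the convincing local part of the address. This is a hedged sketch: Google legitimately mails from more domains than the one listed here, so treat the allowlist as illustrative only.

```python
def sender_domain(address: str) -> str:
    """Everything after the last '@', lowercased."""
    return address.rsplit("@", 1)[-1].lower()

def looks_like_google(address: str) -> bool:
    """Illustrative allowlist check: google.com or a subdomain of it."""
    domain = sender_domain(address)
    return domain == "google.com" or domain.endswith(".google.com")

# However official the local part looks, the domain gives the scam away:
print(looks_like_google("GoogleMail@InternalCaseTracking.com"))  # False
print(looks_like_google("no-reply@accounts.google.com"))         # True
```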

Google Assistant calls usually come from an automated system and only in some cases from a manual operator. Google Support, on the other hand, will not contact you unsolicited.

To verify if a security alert is from Google, users can check their Recent security activity:

  • Tap your Gmail profile photo in the top right corner
  • Tap Manage your Google Account
  • Select the Security tab
  • You will see something similar to this:
Review security activity
Here you can find the Review Security Activity button

Any messages claiming to be security alerts from Google that are not listed there are not from Google.

Do not entertain these scammers for longer than necessary. It doesn’t take them long to fingerprint your voice, which would allow their AI to impersonate you using your own voice.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Cyrus, powered by Malwarebytes.

Election season raises fears for nearly a third of people who worry their vote could be leaked

As the United States enters full swing into its next presidential election, people are feeling worried, unsafe, and afraid.

And none of that has to do with who wins.

According to new research from Malwarebytes, people see this election season as a particularly risky time for their online privacy and cybersecurity. Political ads could be hiding online scams, many people feel, and the election, they say, will likely fall victim to some type of “cyber interference.” Amidst this broader turbulence, 32% are “concerned about who could learn [their] vote”—be they family, spouses, or cybercriminals.

For this research, Malwarebytes conducted a pulse survey of its newsletter readers between September 5 and 16, 2024, via the Alchemer Survey platform. In total, 1600 people across the globe responded.

Broadly, Malwarebytes found that:

  • 74% of people “consider US election season a risky time for personal information.”
  • Despite a tight presidential race, a shocking 3% of people said they will not vote because of “privacy or security concerns.”
  • Distrust in political ads is broad—62% said they “disagree” or “strongly disagree” that the information they receive in US election-related ads is trustworthy.
  • The fears around election ads are not just about trustworthiness, but about harm. 52% are “very concerned” or “concerned” about “falling prey to a scam when interacting with political messages.” 
  • 57% have responded to these concerns with action, taking several steps to protect their personal information during this election season.

The electoral process is (forgive us) a lot like cybersecurity: It scares people, it’s hopelessly baroque, and, through a lack of participation, it can produce unwanted results.

Here is what Malwarebytes discovered about the intersection of cybersecurity and elections, with additional guidance on how to protect personal information this season.

Open distrust

Getting more than 70% of people to agree on anything is remarkable. And yet, 74% of survey participants said that they “consider US election season a risky time for personal information.” Drilling further into the data, 56% said they were “extremely concerned” or “very concerned” about the security of their personal information during this election season.

The reasons could be obvious. Unlike any other season in America, election season might bring the highest volume of advertisements sent directly to people’s homes, phones, and email accounts—and the accuracy and speed at which they come can feel invasive. The network of data brokers that political campaigns rely on to target voters with ads is enormous, as one Washington Post reporter found in 2020, with “3,000 data points on every voter.”

Escaping this data collection regime has proven difficult for most people. Just 9.6% of survey participants said they “have not received any election related ads” this year.

Elsewhere, 60% had received election-related ads through emails, 58% through physical mailers, 55% through text messages, 40% through social media, and 29% through phone calls.

Those ads may be falling on deaf ears, though. When asked whether they trust the information they receive from US election-related ads, just a combined 5% said they “agree” or “strongly agree” with the sentiment.

A focus on cybercrime

While people hold a sense of distrust for election-related ads, they also revealed another emotion towards them: Fear.

That’s because the majority of survey participants said they were worried that these ads and other political messages could be hiding dangerous scams underneath. Most people (52%) said they were “very concerned” or “concerned” about “falling prey to a scam when interacting with political messages.” 

It’s a well-founded concern as, once again during this election season, cybercriminals are trying to lure Americans into online scams with messages about updated voter registrations, campaign donations, and more.

Survey participants also showed widespread fear about whether cybercriminals could reveal who they voted for.

Remember that 32% of participants said they were worried that someone “could learn about [their] vote.” When asked who, specifically, they were worried about, 73% said cybercriminals. A revealing 2% held fears around their votes being exposed to a family member or a spouse.

Finally, though Malwarebytes did not directly tie the concept of “cybercrime” to the election itself, survey participants were asked about “cyber interference.” When rating their own confidence level in whether the election process will be free from cyber interference, a combined 74% said they were “not very confident” or “not confident at all.”

This statistic should not be interpreted to mean that 74% of people believe the election will be “hacked” or that votes will be switched by an adversarial government—a scenario that has never provably occurred in the US. Instead, it may point to how people interpret “cyber interference.” It could include, for example, the pilfering of personal data for political advertisements, or the wanton online distribution of political disinformation to sway voters.

Taking action

With distrust rampant and anxiety widespread, people are refusing to enter this election season without some precautions.

Two thirds of survey participants (66%) have either taken steps or plan to take steps to secure their personal data during this election season. Malwarebytes asked about several cybersecurity and online privacy measures that, particularly when facing off against online scams, could protect people from having their accounts taken over, their identities stolen, or even their personal information exposed for marketing reasons.

Survey participants took on the following measures:

  • 77% enabled Two Factor Authentication (2FA) or Multi-Factor Authentication (MFA) across their accounts
  • 47% actively use a password manager
  • 41% purchased identity theft protection services
  • 31% researched the origins of the campaigns they engage with
  • 24% locked down their social media profiles
  • 12% used a data broker removal service

On the reverse, Malwarebytes found a small but critical number of people who will refuse to vote during this election “due to privacy or security concerns”—a combined 3% “agreed” or “strongly agreed” with this sentiment.

Staying safe

There’s good reason this election season for Americans to be concerned about their online privacy and security—but that doesn’t mean that Americans have to spend the next month riddled with anxiety. This month, people can take the following advice to secure their personal information, lock down their sensitive accounts, and, overall, stay safe from malicious scammers and cybercriminals.

  • Watch out for fake emails and text messages. Unless you directly reach out, avoid clicking on links or engaging with these political communications. Instead, go directly to the campaign’s website for information or links to donate.  
  • Be mindful of sharing personal information. As a general rule, don’t engage in surveys that ask for personal information. You can check what information is already available about you on the dark web with our free Digital Footprint scan or take the first step in removing your personal information from the network of data brokers online with our Personal Data Remover scan.  
  • Avoid robocalls and phone scams. Hackers can spoof phone numbers and impersonate official organizations. Be suspicious of unsolicited phone calls. Immediately hang up, don’t share personal information, and report the phone number.  

Robot vacuum cleaners hacked to spy on, insult owners

Multiple robot vacuum cleaners in the US were hacked to yell obscenities and insults through the onboard speakers.

ABC News was able to confirm reports of this hack in robot vacuum cleaners of the type Ecovacs Deebot X2, which are manufactured in China. Ecovacs is considered the leading service robotics brand and is a market leader in robot vacuums.

One of the victims, Minnesota lawyer Daniel Swenson, said he heard sound snippets that seemed similar to a voice coming from his vacuum cleaner. Through the Ecovacs app, he then saw someone not in his household accessing the live camera feed of the vacuum, as well as the remote control feature.

Thinking it was a glitch, he rebooted the vacuum cleaner and reset the password, just to be on the safe side. But that didn’t help for long. Almost instantly, the vacuum cleaner started to move again.

Only this time, the voice coming from the vacuum cleaner was loud and clear, and it was yelling racist obscenities at Swenson and his family. The voice sounded like a teenager according to Swenson.

Swenson said he turned off the vacuum and dumped it in the garage, never to be turned on again.

While this may seem bad enough as it is, it could have been much worse. What if the hackers had decided to keep quiet and just spy on the victim’s family? In 2020 we talked about such an occurrence in our Lock & Code podcast, where a photo taken by a Roomba vacuum cleaner of a woman sitting on a toilet was shared on Facebook.

Within a few days, various similar incidents involving the Ecovacs Deebot X2 were reported in the US. And, even though Swenson had several communications with a US representative of Ecovacs, the response didn’t explain what had happened.

The Ecovacs representative claimed the victim’s credentials must have been acquired by the hacker and used in a credential stuffing attack, in which an attacker uses login information obtained in breaches of other sites to log in to another one—in this case Ecovacs.

But that did not make sense, because even with a valid password the attacker shouldn’t have been able to access the video feed or control the robot remotely. These features are supposed to be protected by a four-digit PIN.

In 2023, however, two security researchers demonstrated a method to bypass that protection. The weakness is that the PIN is checked only in the app, not on the server or by the robot itself. So, if you control the device running the app and have the necessary technical knowledge, you can have it send a signal to the server claiming that the correct PIN was entered.
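The design flaw, a secret checked only on the client, can be sketched in a few lines of Python. The field names are hypothetical, not Ecovacs’ actual protocol; the point is only the difference between trusting a client-supplied flag and re-checking the secret server-side.

```python
SERVER_PIN = "4321"  # hypothetical PIN stored server-side

def server_grant_access_broken(request: dict) -> bool:
    # Broken design: the server trusts a flag computed on the client.
    return request.get("pin_ok") is True

def server_grant_access_fixed(request: dict) -> bool:
    # Sound design: the server re-checks the PIN itself.
    return request.get("pin") == SERVER_PIN

# An attacker who controls the app never needs to know the PIN:
forged = {"pin_ok": True}                  # no PIN supplied at all
print(server_grant_access_broken(forged))  # True (access granted)
print(server_grant_access_fixed(forged))   # False (rejected)
```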

And though Ecovacs claimed to have fixed this flaw, one of the hackers who disclosed it said the fix was insufficient.

The same Ecovacs spokesperson said the company “sent a prompt email” instructing customers to change their passwords following the incident. However, Swenson says he never received any communication about the issue with the PIN codes, even though he specifically asked whether it had happened to other people.

Ecovacs told ABC News it would issue a security upgrade for owners of its X2 series in November. Until that happens, you might want to do the same as Swenson and turn the vacuum off.



A week in security (October 7 – October 13)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!



Modern TVs have “unprecedented capabilities for surveillance and manipulation,” group reveals

Your television is debuting the latest, most captivating program: You.

In a report titled “How TV Watches Us: Commercial Surveillance in the Streaming Era,” the Center for Digital Democracy (CDD) spotlighted a massive data-driven surveillance apparatus that ensnares the public through modern television sets.

“The widespread technological and business developments that have taken place during the last five years have created a connected television media and marketing system with unprecedented capabilities for surveillance and manipulation.”

In cooperation with data brokers, streaming video programming networks, Connected Television (CTV) device companies, and smart TV manufacturers are creating detailed digital dossiers about viewers, based on a person’s identity information, viewing choices, purchasing patterns, and thousands of online and offline behaviors.

Because of these findings, the CDD has called on the Federal Trade Commission (FTC), the Federal Communications Commission (FCC), and California regulators to investigate connected TV practices.

The report provides a detailed overview of the many ways in which streaming services and streaming hardware target viewers, practices that amount to severe privacy infringements.

Earlier, we covered a paper by researchers at Cornell University about a tracking approach called Automatic Content Recognition (ACR). ACR is a technology that periodically captures the content displayed on a TV’s screen and matches it against a content library to determine what is being shown at any given moment.

The researchers found that ACR works even when the smart TV is used as a “dumb” external display. There are two types of ACR fingerprinting: one for acoustic media (ACR Audio) and one for video content (ACR Video).
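The core idea can be sketched in a few lines: reduce each captured frame to a compact fingerprint, then match it against a library of known fingerprints. Production ACR systems use far more robust perceptual hashes and audio features; this toy example, with invented frames and show titles, only illustrates the matching principle.

```python
# Toy illustration of video ACR: average-hash each grayscale frame and
# match it against known content by Hamming distance.

def fingerprint(frame: list) -> int:
    """One bit per pixel: 1 if the pixel is brighter than the frame mean."""
    pixels = [p for row in frame for p in row]
    mean = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two fingerprints."""
    return bin(a ^ b).count("1")

def identify(frame, library, max_distance=4):
    """Return the library title closest to the frame, if close enough."""
    fp = fingerprint(frame)
    title, ref = min(library.items(), key=lambda kv: hamming(fp, kv[1]))
    return title if hamming(fp, ref) <= max_distance else None

# A tiny "content library" of known fingerprints (4x4 frames here).
show_a = [[200, 10, 200, 10]] * 4
show_b = [[10, 200, 10, 200]] * 4
library = {"Show A": fingerprint(show_a), "Show B": fingerprint(show_b)}

# A slightly noisy capture of Show A still matches.
capture = [[190, 20, 210, 15]] * 4
print(identify(capture, library))  # "Show A"
```

Because the fingerprint depends only on what is rendered on screen, this also suggests why ACR keeps working when the TV is used as an external display: the capture happens after the pixels arrive, regardless of their source.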

Brands use ACR for several reasons, most obviously frequency optimization, unique reach, and improved targeting. With the advent of CTV, more and more people are opting out of cable television, which opens the door to more targeted advertising aimed at specific audiences.

Free Ad-Supported Streaming TV (FAST) channels such as Tubi, Pluto TV, and many others are commonplace, and present advertisers with a key opportunity to monetize viewer data and target viewers with sophisticated new forms of interactive marketing.

CTV has unleashed a powerful arsenal of interactive advertising techniques, including virtual product placement that is inserted into programming and altered in real time. CTV companies operate cutting-edge advertising technologies that gather, analyze, and then target consumers with ads, delivering them to households in the blink of an eye. These can be hyper-targeted advertisements personalized for individual viewers.

The report profiles major players in the connected TV industry, along with the wide range of technologies they use to monitor and target viewers. Some household names you might be interested in include:

  • Disney(+)
  • Netflix
  • Amazon
  • Roku
  • Vizio
  • Comcast (NBCU)
  • LG
  • Samsung
  • Google (YouTube)

“Many of these entities offer misleading and disingenuous ‘privacy policies’ and self-serving descriptions of their systems that fail to explain the complex processes they use to extract data from consumers, track viewing and other behaviors, and facilitate targeted marketing.”

Combine the data these companies are gathering about us with other information that data brokers possess, and you are way past anything we should find acceptable.

Experian offers “over 240 politically relevant audience” segments for sale, based on a detailed set of criteria, including “audience interactions, preferences, demographics, behaviors, location, income and more.”

The US is one of only two countries that allow direct-to-consumer advertising of pharmaceutical products, and pharmaceutical marketers there are heavily invested in connected TV advertising.

Industry research shows that families with young children tend to watch more streaming TV content. Children and teens play a powerful role in determining the viewing patterns of their families, serving as decision-makers when it comes to streaming content. Disney Advertising even calls the cohorts of children, teens and adults viewing its Disney+ and other content “Generation Stream.”

Report co-author Kathryn C. Montgomery, Ph.D. stated:

“Policy makers, scholars, and advocates need to pay close attention to the changes taking place in today’s 21st century television industry. In addition to calling for strong consumer and privacy safeguards, we should seize this opportunity to re-envision the power and potential of the television medium and to create a policy framework for connected TV that will enable it to do more than serve the needs of advertisers. Our future television system in the United States should support and sustain a healthy news and information sector, promote civic engagement, and enable a diversity of creative expression to flourish.”


Personal Data Remover

It may feel like keeping your sensitive data away from data brokers is a losing fight, but there are ways to stop those data brokers from collecting new information and, where possible, to have it deleted from their rosters. For people in the United States, Malwarebytes Personal Data Remover provides:   

  • Immediate, deep scans across roughly 175 databases to find your personal data. 
  • Personalized, in-depth reports on what data is being sold and who is selling it.  
  • Automatic data removal requests for subscribers, which can save 300+ hours of manual work in wiping sensitive details off the internet, along with free DIY guides to tackle each site individually.  
  • Recurring scans and data removal requests that will make it harder for invasive websites to rebuild their digital portraits of you.