IT News

Explore the MakoLogics IT News for valuable insights and thought leadership on industry best practices in managed IT services and enterprise security updates.

Not Black Mirror: Meta’s smart glasses used to reveal someone’s identity just by looking at them

Like something out of Black Mirror, two students have demonstrated a way to use smart glasses and facial recognition technology to immediately reveal people’s names, phone numbers, and addresses.

The Harvard students have dubbed the system I-XRAY and it works like this: When you look at someone’s face through the glasses—they used Ray-Ban Meta smart glasses—a connected Artificial Intelligence (AI) platform will look up that face on the internet and pull up all the information it can find about the person.

The Ray-Ban Meta glasses have the ability to livestream video to Instagram. A program monitors that stream and uses the AI to identify faces. It extracts a still image, which is then run against publicly available face search engines and databases. Depending on the person’s online presence, this can reveal their name, address, phone number, and even relatives.

And as if it wasn’t creepy enough already, it only takes a few seconds before that information shows up on the user’s phone.

If you’d like to see this system in action, one of the students posted a demonstration on X that shows just how effective it can be.

Facial recognition is a technology that has evolved quickly. That’s not always a bad thing, but it becomes a privacy issue when the consent of the person in the database is missing. Many people have become used to being monitored for much of the time they spend in public, especially in large cities. But when facial recognition adds an extra layer of tracking, or immediate identification, it becomes worrying.

In 2021 we wrote:

“For an individual to identify another individual would require access to a large database or an enormous amount of luck.”

But, thanks to the advancement of AI, this is no longer true. Identification can be done in seconds, for almost everybody that has an online presence, and just from public databases.

In the demo, the students claim they were able to identify dozens of people without their knowledge, although in some cases the system gave the wrong name.

It’s quite obvious that in the wrong hands this could be used to defraud or track people. The students have no intention of sharing their code, but they are not the first ones to come up with the idea or even make it work.

In 2022, a company called Clearview AI was permanently banned from selling its faceprint database within the United States. The facial recognition software and surveillance company was known for scraping images of people from social networking sites, particularly Facebook, YouTube, Venmo, and other websites. Clearview’s app was able to show you additional photos of a person—after taking a snap of them—along with links to where these appeared. Now, Clearview sells its product to law enforcement, and it’s also explored a pair of smart glasses that would run its facial recognition technology.

Also in 2022, a company called PimEyes was accused of “surveillance and stalking on a scale previously unimaginable.” PimEyes is an online face search engine that searches the internet to find pictures of particular faces. The search engine uses Artificial Intelligence (AI) for facial recognition combined with reverse image search technology to find other photos of a person published online, based on a picture submitted by the user.

In 2023, the New York Times published a story, “the technology Facebook and Google didn’t dare release,” about how the two companies halted development of technology that used facial recognition to identify people.

What’s changed since then:

  • The glasses look like any other pair of Ray-Bans, so you won’t notice you’re being identified
  • Facial recognition has become significantly more accurate
  • AI can be used to quickly gather and analyze data

Sadly, there’s not a huge amount you can do to stop someone looking you up in this way. However, there are ways to limit how much information is out there about you. Be careful about how much information you post about yourself online, and as much as possible make sure social media posts aren’t publicly accessible.

You can also check and remove yourself from people search databases. The students suggested a few that you can opt out of.

Remove yourself from Reverse Face Search Engines

The major, most accurate reverse face search engines, PimEyes and FaceCheck.ID, offer free services to remove yourself.

Remove yourself from People Search Engines

Most people don’t realize that, from just a name, one can often find a person’s home address, phone number, and relatives’ names. The students listed several major people search engines that allow you to opt out.

Scrub your data

If you’re in the US, you can also use Malwarebytes Personal Data Remover to help find and remove your personal information from data broker sites.

Radiology provider exposed tens of thousands of patient files

An anonymous person has disclosed that they used stolen credentials to gain online access to a radiology provider’s platform that hosted patient information.

I-MED Radiology is Australia’s leading medical imaging provider. Their clinics offer a range of imaging procedures including MRI, CT, x-ray, ultrasound, and nuclear medicine. The person said they found the credentials in a data set that came from another breach, meaning it’s highly likely that the account holder used the same credentials for more than one service.

Cybercriminals often use leaked credentials and try them out on other websites and services. This type of attack is called credential stuffing. Criminals with access to the credentials from Site A will then try them on sites B and C, often in automated attacks. If the user has reused their password, the accounts on those additional sites will also be compromised.
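If you want to check whether a password you use has already shown up in a known breach (and is therefore a prime candidate for credential stuffing), the free Have I Been Pwned “Pwned Passwords” service can be queried without ever sending the full password. Below is a minimal sketch in Python using only the standard library; the k-anonymity API is real, but the example password is obviously hypothetical.

```python
# Minimal sketch: check a password against the Have I Been Pwned
# "Pwned Passwords" k-anonymity API. Only the first 5 characters of the
# SHA-1 hash ever leave your machine; the full password is never sent.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # The response lists "HASH_SUFFIX:COUNT" for every leaked hash sharing the prefix.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate.strip() == suffix:
            return int(count)
    return 0

print(breach_count("hunter2"))  # hypothetical password; a non-zero result means "never reuse this"
```

A non-zero count doesn’t mean your account was breached, only that the password itself is circulating and will be among the first ones tried in credential stuffing attacks.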

The whistleblower told Crikey they found log-in details for three accounts in the data that belonged to a hospital. The credentials gave them access to I-MED’s radiology patient portal, and with that, to files showing patients’ full names, dates of birth, sex, which scans they received, and dates of the scans.

The credentials had been available online to cybercriminals for over a year. And to make things worse, the accounts used passwords of only three to five letters and were not protected by two-factor authentication (2FA). It also seemed as if these accounts were shared among several people.

This level of authentication is below par by any standard, but it’s especially unacceptable when it concerns sensitive patient data.
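To put those password lengths in perspective, here’s a quick back-of-the-envelope calculation. The guess rate is an assumption picked purely for illustration (offline cracking of fast or unsalted hashes can be far quicker); the point is the gap between a five-letter password and even a modest passphrase.

```python
# Back-of-the-envelope: search space of a lowercase-only password vs. its length.
guesses_per_second = 10_000_000_000  # assumed 10 billion guesses/s, purely illustrative

for length in (3, 5, 12):
    keyspace = 26 ** length                 # lowercase letters only
    seconds = keyspace / guesses_per_second
    print(f"{length} letters: {keyspace:,} combinations (~{seconds:g} s to exhaust)")

# 3 and 5 letters are exhausted effectively instantly; even 12 lowercase letters
# only buys on the order of 100 days at this assumed rate, which is why length,
# character variety, and 2FA all matter.
```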

When queried, I-MED said:

“We have… further strengthened our system surveillance and are working with cyber experts to respond.”

The news about the leak comes at a bad time for I-MED, following recent accusations that it allowed a startup to use patient data to train an Artificial Intelligence (AI) model without consent.

Protecting yourself after a data breach

There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened, and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you (see the example sketch after this list).
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims, and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online, and helps you recover after.
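As a simple illustration of the “let something random choose for you” advice above, here is a minimal sketch of generating a strong, unique password with Python’s standard library. Any serious password manager does this (and stores the result) for you; the length and character set below are just reasonable assumptions.

```python
# Minimal sketch: generate a strong random password with the standard library.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets uses a cryptographically secure random source, unlike random.choice().
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())  # different every run; store it in a password manager
```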

Check your digital footprint

If you want to find out what personal data of yours has been exposed online, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you most frequently use) and we’ll send you a free report.

Fake Disney+ activation page redirects to pornographic scam

A common way to activate digital subscriptions such as Netflix, Prime or Disney+ on a new TV is to visit a website and enter the code seen on your screen. It’s much easier than having to authenticate using a remote and typing a username and password.

Scammers are creating fake activation pages and getting them indexed by Google to lure in victims. Once someone lands on one of these pages, they are redirected to a fake Microsoft scanner that claims child pornography was found on their computer.

Getting from the family-friendly Disney activation page to a very graphic alert is sure to get many victims to panic, even if they have done absolutely nothing wrong. You can see what this scheme looks like in the animation below:

[Animation: fake Disney+ activation page redirecting to the scam]

Malicious Google search results

The scammers are using Search Engine Optimization (SEO) techniques to place their fraudulent sites on Google’s search results page. Unlike what we have seen before, these are not malicious ads but rather organic search results.

One of the fake websites, disneyplusbegins[.]com, is a play on the official website, as can be seen when you do a Google search for ‘disney plus begin’:

[Image: Google search results for ‘disney plus begin’ showing the fake site]

Clicking on the link will take you to the aforementioned fake site that appears to prompt users to enter their code:

[Image: the fake site prompting users to enter their activation code]

When interacting with the page, victims are automatically redirected to another site hosted on Microsoft Azure. A fake Windows Defender scanner claims that “Access to this PC has been blocked for security reasons. Alureon Spyware With Child Pornography Download Detected”:

[Image: the fake Windows Defender scanner warning]

The page contains a background image with pornographic material, as if it were from sites victims may have visited:

[Image: the scam page’s pornographic background image]

Despite the scary warning page, this is all a scam and you do not need to call the phone number shown on screen. Scammers are waiting for people to call in so they can impersonate Microsoft, remotely log into your computer and either make you send them money or steal directly from your bank account.

Safety tips

Visiting a website to activate a new product or service is something we all do at some point. It is easier to quickly type a few keywords into Google rather than entering the full website URL.

However, Google search results can be laced with malicious ads or links to fraudulent pages. If there is a QR code to scan on your TV, you may want to use that instead (with caution) or maybe spend the extra few seconds it takes to type the full URL (making sure you don’t typo it!).
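If you’re ever unsure whether a domain in your search results is the real one, a crude but useful trick is to compare it to the domain you expected. The sketch below uses Python’s standard library to flag lookalikes; the similarity threshold is an arbitrary assumption, not an official heuristic, and it’s no substitute for typing the known-good URL yourself.

```python
# Minimal sketch: flag domains that look suspiciously similar to a legitimate one.
from difflib import SequenceMatcher

def looks_like(suspect: str, legitimate: str, threshold: float = 0.6) -> bool:
    ratio = SequenceMatcher(None, suspect.lower(), legitimate.lower()).ratio()
    # Very similar (but not identical) to the real domain is a classic typosquat signal.
    return suspect.lower() != legitimate.lower() and ratio >= threshold

print(looks_like("disneyplusbegins.com", "disneyplus.com"))  # True: close enough to deceive
print(looks_like("example.org", "disneyplus.com"))           # False: unrelated
```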

Finally, know that these fake warning pages are just that: fake. You can simply close them by clicking the ‘X’ at the top right. Be careful not to click anywhere else on the page, in particular buttons or images that say something like “return to safety”. For more practical tips, check out this article on CNBC, in particular the “How to click without getting into online trouble” part.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Android users targeted on Facebook and porn sites, served adware

Android users, be on your guard against adware trying to infect your device.

The adware—known as MobiDash—is spreading via several channels, according to ThreatDown research.

One of the characteristics that makes MobiDash stand out is that it can be added to legitimate apps without changing how the original app functions. Say, for example, you install a calculator app: You still get the calculator, but you get adware served to you on the side.

Another devious feature is that MobiDash often waits a few days before it becomes active, making it harder for the user to work out where the ads are coming from. The app they downloaded works, and because there’s no immediate sign of infection, there’s no reason to suspect that app.

The ThreatDown investigation started by researching a domain that recently popped up in a phishing campaign. We found that besides the phishing campaign, links to this domain were being spread on Facebook.

Link in Facebook post

And not just on Facebook: we found that MobiDash was also being spread on certain sites that specialize in explicit content.

link on site with explicit content

When victims click the link, it starts a chain of redirects (lookebonyhill.com > apkretro.com > 3-dl-app.com) that ends in the automatic download of an .apk file, although some users reportedly had to use the Download button.

Download website

Within a few days, the user will start to see ads pop up out of nowhere, until the app is uninstalled.
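For the curious (or for defenders triaging reports), a redirect chain like the one above can be mapped without ever tapping the link on a phone. This is a rough sketch that assumes the third-party requests library is installed; the URL is a placeholder, and this kind of check should only be done deliberately, in an isolated analysis environment, never on a device or network you care about.

```python
# Minimal sketch: follow a suspicious link's redirect chain and print every hop.
import requests

def trace_redirects(url: str, timeout: int = 10) -> None:
    # HEAD keeps the download small; some servers only behave correctly with GET.
    resp = requests.head(url, allow_redirects=True, timeout=timeout)
    for hop in resp.history:              # every intermediate 3xx response
        print(hop.status_code, hop.url)
    print(resp.status_code, resp.url)     # final landing page

trace_redirects("https://example.com/suspicious-link")  # placeholder URL, not the real campaign
```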

How to avoid/remove adware

  1. Be careful what you click on: In the Facebook example above, you can see there is an unusual-looking link. Don’t be tempted to click through to a site you don’t know.
  2. Don’t install apps from unknown sources: Use the Google Play Store as much as you can.
  3. Look out for the Download website we posted a screenshot of above: The fact that the site displays no name for the APK you just downloaded should be a red flag that it may not be the one you wanted, or that it has extra adware attached to it.
  4. Use Malwarebytes for Android. We’ll detect and remove MobiDash from your device, as well as block the start of the redirect chain.
Malwarebytes blocks lookebonyhill[.]com

Facebook and Instagram passwords were stored in plaintext, Meta fined

Ireland’s privacy watchdog Data Protection Commission (DPC) has fined Meta €91M ($101M) after the discovery in 2019 that Meta had stored 600 million Facebook and Instagram passwords in plaintext.

The DPC ruled that Meta was in violation of GDPR on several occasions related to this breach. It determined that the company failed to “notify the DPC of a personal data breach concerning storage of user passwords in plaintext” without delay, and failed to “document personal data breaches concerning the storage of user passwords in plaintext.”

The DPC also said that Meta violated GDPR by not using appropriate technical measures to ensure the security of users’ passwords against unauthorized processing.

While the DPC has not disclosed the number of passwords, several news reports at the time quoted internal sources at Facebook who said 600 million passwords were freely accessible to employees. Most of these passwords belonged to Facebook Lite users, but other Facebook and Instagram users were affected as well.

Facebook discovered during a code review that it had been logging the passwords in plaintext by mistake.
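For contrast with storing passwords in plaintext, the widely accepted approach is to store only a salted, deliberately slow hash, so that even someone with database access (or a stray log file) cannot read the passwords back. Here is a minimal sketch using Python’s standard library scrypt function; the cost parameters are common illustrative defaults, not a recommendation tuned for any particular system.

```python
# Minimal sketch: store and verify a salted, slow password hash instead of plaintext.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)  # unique per user
    digest = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return salt, digest    # store both; never store the password itself

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode("utf-8"), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")  # example passphrase
print(verify_password("correct horse battery staple", salt, digest))  # True
print(verify_password("wrong guess", salt, digest))                   # False
```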

An ongoing issue

Over the years, several data sets belonging to Facebook users have circulated on Dark Web marketplaces. We’ve seen country-specific sets for Iran, Sudan, and Hong Kong. The largest data set that is still publicly accessible contains 303,081,505 records and was shared on a Telegram channel in February 2022. The data contains email addresses, names, phone numbers and additional personal information.

In April 2021, a cybercriminal posted over half a billion scraped Facebook profiles for free on a hacking forum. The data encompassed profiles from over 100 countries and included emails, Facebook IDs, birthdays, phone numbers, and other Personally Identifiable Information (PII). Several other forums mirrored this data set.

Last February, we reported how personal data belonging to Facebook Marketplace users was published online. That leak consisted of around 200,000 records that contained names, phone numbers, email addresses, Facebook IDs, and Facebook profile information.

In 2019, a private security researcher reported finding a database with the names, phone numbers, and unique user IDs of over 267 million Facebook users. The hosting company took the database offline after a tip-off from the security researcher.

Social media accounts contain a lot of personal information which, combined with our email addresses, provides cybercriminals with information they can use to add credibility to their phishing attempts.

It’s a good idea to check what personal information of yours is out there, and for that you can use our free Digital Footprint scan. Fill in the email address you use most frequently to sign up for sites and services, and we’ll give you a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

A week in security (September 23 – September 29)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Millions of Kia vehicles were vulnerable to remote attacks with just a license plate number

In June 2024, security researchers uncovered a set of vulnerabilities in the Kia dealer portal that allowed them to remotely take over any Kia vehicle built after 2013—and all they needed was a license plate number.

According to the researchers:

“These attacks could be executed remotely on any hardware-equipped vehicle in about 30 seconds, regardless of whether it had an active Kia Connect subscription.”

How was this possible?

First, it’s important to understand that the Kia “dealer portal” is where authorized Kia dealers match customer accounts with the Vehicle Identification Number (VIN) of a newly purchased car. For the customer account, Kia would ask the buyer for their email address at the dealership and send a registration link to that address, where the customer could either set up a new Kia account or add their newly purchased vehicle to an existing Kia account.

The researchers found that by sending a specially crafted request they could create a dealer account for themselves. After some more manipulation, they were able to access all dealer endpoints, which gave them access to customer data like names, phone numbers, and email addresses.

As the new “dealer,” the security researchers were also able to search by VIN, which is a unique identifier for a vehicle. With the VIN and the email address of the rightful owner, the researchers were able to demote the owner of the vehicle so that they could add themselves as the primary account holder.

Unfortunately, the rightful owner would not receive any notification that their vehicle had been accessed or that their access permissions had been modified.

But to find a car’s VIN you need physical access to the vehicle, right? Not necessarily.

In several countries, including the US and the UK, there are vehicle databases that you can query for a VIN based on the license plate number. The researchers used a third-party API to convert the license plate number to a VIN.

Depending on the vehicle and whether Kia Connect was active, the primary account holder is able to remotely lock/unlock, start/stop, honk, and locate the vehicle.

The researchers created a proof-of-concept tool into which they could enter a license plate and, in two steps, retrieve the owner’s personal information and then execute remote commands on the vehicle.

Demonstration tool created by the researchers

The researchers responsibly disclosed their findings to Kia, which has since remediated the vulnerabilities. Kia said the vulnerabilities had not been exploited maliciously.

Vulnerabilities in cars are not new. In fact, the researchers who found these vulnerabilities did so as a follow-up to their earlier research. And too often we find that car makers are more interested in adding new features than in securing their existing ones. So we can expect that vulnerabilities like these will continue to be uncovered, and we should be glad that these researchers chose to report their findings to Kia and give it a chance to fix the vulnerabilities before publishing them.

Privacy watchdog files complaint over Firefox quietly enabling its Privacy Preserving Attribution

A European privacy watchdog has filed a complaint against Mozilla for quietly enabling Privacy Preserving Attribution (PPA) in its Firefox browser.

Noyb (none of your business) argues that despite its reassuring name, the feature allows the browser to track your online behavior. By design, Privacy Preserving Attribution shifts the tracking from the websites to the browser.

With this shift, it seems that Mozilla is following Google’s example. Google is focusing on its Privacy Sandbox to replace the despised third-party tracking cookies, which likewise puts the browser (Chrome and other Chromium-based browsers) in charge of the tracking.

The problem noyb has with PPA is not so much the tracking itself, which is less invasive than what we are used to, but the fact that it was introduced without giving users a chance to think about it. Mozilla simply turned it on by default after a recent update, which noyb says is disappointing coming from a company that is supposed to be privacy friendly.

And, even though the Firefox PPA offers more privacy than third-party cookies, noyb says this move means that Mozilla is caving in to advertisers.

Felix Mikolasch, data protection lawyer at noyb, said:

“Mozilla has just bought into the narrative that the advertising industry has a right to track users by turning Firefox into an ad measurement tool.”

Mozilla says that PPA allows advertisers to measure the effectiveness of their advertising without compromising users’ privacy. Admittedly, users benefit indirectly, as the sites they visit are often supported by advertising. Making advertising better also makes it possible for more sites to function using the support that advertising provides.

The costs of getting rid of third-party cookies by using PPA are small, Mozilla says:

  • CPU, network, and battery costs for generating and submitting reports. Here, this cost is negligible, particularly relative to what sites are already able to use. This design could replace some of those costs, which might lead to improvements in some cases.
  • Privacy loss from use of their information. Attribution information will be aggregated and will include noise that protects the contribution that each person makes. This design is structured so that advertisers learn about what many people do as a group, not what any single person does.
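The “aggregated, with noise” idea in that last point is essentially differential privacy: the browser’s reports are summed across many users and random noise is added, so the advertiser sees a fuzzy total rather than any individual’s behavior. Here is a toy sketch of the concept; the numbers, the scale of the noise, and the simple Laplace mechanism are illustrative assumptions, not Mozilla’s actual implementation.

```python
# Toy sketch: aggregate many users' 0/1 conversion reports and add Laplace noise,
# so only a fuzzy group-level count is released. Not Mozilla's real PPA code.
import math
import random

def laplace_noise(scale: float) -> float:
    # Standard inverse-CDF sampling of a Laplace(0, scale) random variable.
    u = random.uniform(-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

reports = [random.random() < 0.05 for _ in range(10_000)]  # simulated users, ~5% converted
true_count = sum(reports)

epsilon = 1.0                                   # privacy budget: smaller = noisier = more private
noisy_count = true_count + laplace_noise(1.0 / epsilon)

print(f"true conversions:  {true_count}")
print(f"reported (noisy):  {noisy_count:.1f}")
# The advertiser learns roughly how many people converted, but the noise makes it
# impossible to tell whether any single person's report is included in the total.
```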

If this is the price we must pay to get rid of third-party cookies and some degree of targeted advertising, is that worth it to you? Let us know in the comments.

Noyb has asked the Austrian data protection authority (DSB) to investigate Mozilla’s behavior. They say Mozilla should properly inform everyone about Firefox’s data processing activities and effectively switch to an opt-in system, as well as delete all unlawfully processed data.

How can I disable PPA?

If you want to disable PPA, this is what you need to do:

  1. Click the menu button and select Settings.
  2. In the Privacy & Security panel, find the Website Advertising Preferences section.
  3. Uncheck the box labeled Allow websites to perform privacy-preserving ad measurement.

Protection, in the browser

Malwarebytes’ free Browser Guard extension can help you block ads and other unwanted content in Firefox.

Telegram will hand over user details to law enforcement

Last month we reported how Telegram CEO Pavel Durov was indicted on charges of complicity in the distribution of child sex abuse images, aiding organized crime, drug trafficking, fraud, and refusing lawful orders to give information to law enforcement.

Now, in a potentially related development, chat app Telegram has changed its privacy policy to reflect that it will share users’ IP addresses and telephone numbers if they are suspected of committing a crime.

“8.3. Law Enforcement Authorities

If Telegram receives a valid order from the relevant judicial authorities that confirms you’re a suspect in a case involving criminal activities that violate the Telegram Terms of Service, we will perform a legal analysis of the request and may disclose your IP address and phone number to the relevant authorities. If any data is shared, we will include such occurrences in a quarterly transparency report published at: https://t.me/transparency.”

Durov said the changes were made to discourage the criminal abuse of Telegram Search, a feature that is known to be used for buying and selling illegal goods. A dedicated team of moderators will use Artificial Intelligence to make the search safer. These moderators will also go over reports submitted by users through the @SearchReport bot about search terms that can be used to find illegal content.

All these measures together should discourage criminals. Telegram is meant for finding friends and discovering news, not for trading illegal goods, Durov emphasized:

“We won’t let bad actors jeopardize the integrity of our platform for almost a billion users.”

It should be clear that this is all a work in progress. The bot for the transparency reports is not yet ready for action, for example.

The Telegram transparency report bot is not ready yet

“This bot can give you a Telegram transparency report as per section 8.3 of the Telegram Privacy Policy.

We are updating this bot with current data. Please come back within the next few days.”

All in all, the future will show how adequately the moderators act on reports, and how easy, or difficult, it will be for law enforcement to submit a “valid order.”

But criminals are probably already looking for alternatives as we speak.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Don’t share the viral Instagram Meta AI “legal” post

A new variation of a hoax that has been doing the rounds on Facebook for years has crossed over to Instagram.

We’ve suddenly been seeing this post a lot on Instagram Stories over the last few days. It’s usually shared as a screenshot on Instagram Stories, but it’s also been spotted on Facebook and Threads as copy-and-paste text.

[Image: the hoax post shared on Instagram Stories]

“Repub

Goodbye Meta AI. Please note an attorney has advised us to put this on, failure to do so may result in legal consequences. As Meta is now a public entity all members must post a similar statement. If you do not post at least once it will be assumed you are okay with them using your information and photos. I do not give Met or anyone else permission to use any of my personal data, profile information or photos.”

The fact that this post has been shared by some celebrities is a possible explanation for its sudden popularity. And, as is often the case, true stories about Facebook scraping photos to train its Artificial Intelligence (AI) can rekindle the popularity and urgency of posting this type of useless notification.

Instagram has started to flag versions of the post as false information, which means people need to click ‘See post’ to view it. But what often happens is that somebody starts fresh with a slightly revamped version that isn’t flagged yet.

While some may think it doesn’t hurt to share these posts just to be sure, it really isn’t a good idea. It spreads the false posts further, and people may feel they have opted out of their images being used after posting this, when in reality they haven’t. In many cases it would even contradict the terms and conditions they agreed to.

Meta, the company that owns Facebook and Instagram, published new terms and conditions, effective June 26, 2024, which specifically allow it to use posts, images, and online tracking data to train its large language model, Llama 3.

On inspection of the links in the notification, it became clear that the company will use years of personal posts, private images, and online tracking data for an “AI technology” that can ingest personal data from any source and share any information with undefined “third parties.”

European and UK users can opt out of this. For others, sadly, it’s not so easy.

How to opt out of Meta using your images for AI training

Logged-in users in the EU and the UK can visit the Meta Privacy Center from either their Facebook account or their Instagram account.

How to opt out of Meta using your data to train AI on Facebook

  • Tap on your profile picture after logging in
  • Tap Settings and Privacy
  • Scroll down to the Privacy Center
  • Under Privacy topics, tap AI at Meta
  • Tap Information you’ve shared on Meta products and services
  • From there you’ll be presented with a form to fill out and Submit when you’re done.
AI at Meta in the Privacy Center

How to opt out of Meta using your data to train AI on Instagram

  • Tap on the hamburger menu from your profile (three stacked lines)
  • Scroll down to the Privacy Center
  • Under Privacy topics, tap AI at Meta
  • Tap Information you’ve shared on Meta products and services
  • From there you’ll be presented with a form to fill out and Submit when you’re done.

Whether you use Facebook or Instagram to opt out, you should then receive both an email and a notification on your account confirming whether your request has been successful.

Users in the US or other countries without national data privacy laws don’t have any foolproof ways to prevent Meta from using their data to train AI.

My advice: insist that your politicians make some noise and get you similar opt-out options.