
US Cyber Trust Mark logo for smart devices is coming

The White House announced the launch of the US Cyber Trust Mark, which aims to help buyers make an informed choice when purchasing wireless internet-connected devices, such as baby monitors, doorbells, thermostats, and more.

The cybersecurity labeling program for wireless consumer Internet of Things (IoT) products is voluntary, but the participants include several major manufacturers, retailers, and trade associations for popular electronics, appliances, and consumer products. The companies and groups said they are committed to increasing cybersecurity for the products they sell.

Justin Brookman, director of technology policy at the consumer watchdog organization Consumer Reports, lauded the government effort and the companies that have already pledged their participation.

“Consumer Reports is eager to see this program deliver a meaningful U.S. Cyber Trust Mark that lets consumers know their connected devices meet fundamental cybersecurity standards,” Brookman said in a news release. “The mark will also inform consumers whether or not a company plans to stand behind the product with software updates and for how long.”

The Federal Communications Commission (FCC) proposed and created the labeling program and hopes it will raise the bar for cybersecurity across common devices, including smart refrigerators, smart microwaves, smart televisions, smart climate control systems, smart fitness trackers, and more.

The idea is that the Cyber Trust Mark logo will be accompanied by a QR code that consumers can scan for easy-to-understand details about the security of the product, such as the support period for the product and whether software patches and security updates are automatic.

The program is challenging because of the wide variety of consumer IoT products on the market that communicate over wireless networks. These products are built on different technologies, each with their own security pitfalls, so it will be hard to compare them, but at least the consumer will be able to find some basic—but important—information.

Even though participation is voluntary, manufacturers will have an incentive to make their smart devices more secure, to keep the business of consumers who choose only products that carry the Cyber Trust Mark.

As we explained recently, the “Internet of Things” is the now-accepted term to describe countless home products that connect to the internet so that they can be controlled and monitored from a mobile app or from a web browser on your computer. The benefits are obvious for shoppers. Thermostats can be turned off during vacation, home doorbells can be answered while at work, and gaming consoles can download videogames as children sleep.

And in 2024 we saw several mishaps, ranging from privacy risks to downright unacceptable abuse. So, if we can keep these incidents from happening again, the program is surely worth the trouble.

Whether a product deserves the Cyber Trust Mark will be tested by accredited labs against established cybersecurity criteria from the National Institute of Standards and Technology (NIST).

The Cyber Trust Mark has been under construction for quite a while. It was approved in a unanimous bipartisan vote last March, and we can expect the first logos to show up this year. Anne Neuberger, deputy national security adviser for cyber, revealed plans for another executive order stating that, beginning in 2027, the federal government will only buy devices that carry the Cyber Trust Mark label.

For now, the program does not apply to personal computers, smartphones, and routers.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

AI-supported spear phishing fools more than 50% of targets

One of the first things everyone predicted when artificial intelligence (AI) became more commonplace was that it would assist cybercriminals in making their phishing campaigns more effective.

Now, researchers have conducted a scientific study into the effectiveness of AI-supported spear phishing, and the results line up with everyone’s expectations: AI is making it easier to commit crimes.

The study, titled Evaluating Large Language Models’ Capability to Launch Fully Automated Spear Phishing Campaigns: Validated on Human Subjects, evaluates the capability of large language models (LLMs) to conduct personalized phishing attacks and compares their performance with human experts and AI models from last year.

To this end the researchers developed and tested an AI-powered tool to automate spear phishing campaigns. They used AI agents based on GPT-4o and Claude 3.5 Sonnet to search the web for available information on a target and use this for highly personalized phishing messages.

With these tools, the researchers achieved a click-through rate (CTR) that marketing departments can only dream of, at 54%. The control group received arbitrary phishing emails and achieved a CTR of 12% (roughly 1 in 8 people clicked the link).

Another group was tested against emails generated by human experts, which proved to be just as effective as the fully AI-automated emails, with a 54% CTR. But the human experts did this at 30 times the cost of the AI-automated tools.

The AI tools with human assistance outperformed both groups with a 56% CTR, at 4 times the cost of the fully automated AI tools. This means that some (expert) human input can improve the CTR, but is it worth the extra time? Cybercriminals are proverbially lazy, preferring efficiency and minimal effort in their operations, so we don’t expect them to consider the extra 2% worth the investment.
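The trade-off can be made concrete with a little arithmetic. The CTRs and relative cost multipliers below come from the study; expressing cost as a multiple of the fully automated tooling is our simplification:

```python
# Click-through rates (CTRs) reported in the study, with cost expressed
# as a multiple of the fully automated AI tooling (the paper's "30x"
# and "4x" figures).
approaches = {
    "fully AI-automated":       {"ctr": 0.54, "cost": 1},
    "human experts":            {"ctr": 0.54, "cost": 30},
    "AI with human assistance": {"ctr": 0.56, "cost": 4},
}

for name, a in approaches.items():
    # Clicks obtained per unit of cost: higher is "better" for the attacker.
    print(f"{name}: {a['ctr']:.0%} CTR, {a['ctr'] / a['cost']:.3f} clicks per cost unit")
```

By this measure the fully automated approach wins by a wide margin, which is why the marginal 2% CTR gain is unlikely to tempt attackers.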

The research also showed a significant improvement in the deceptive capabilities of AI models compared to last year, when studies found that AI models needed human assistance to perform on par with human experts.

The key to a phishing email’s success is the level of personalization the AI-assisted method can achieve, and the basis for that personalization can be provided by an AI web-browsing agent that crawls publicly available information about the target.

Example from the paper showing how collected information is used to write a spear phishing email

Based on information found online about the target, they are invited to participate in a project that aligns with their interest and presented with a link to a site where they can find more details.

The AI-gathered information was accurate and useful in 88% of cases and only produced inaccurate profiles for 4% of the participants.

More bad news: the researchers found that the guardrails that are supposed to stop AI models from assisting cybercriminals are not a noteworthy barrier to creating phishing emails with any of the tested models.

The good news is that LLMs are also getting better at recognizing phishing emails. Claude 3.5 Sonnet scored well above 90% with only a few false alarms, and it detected several emails that had passed human detection, although it struggled with some phishing emails that are clearly suspicious to most humans.

If you’re looking for guidance on how to recognize AI-assisted phishing emails, we’d like you to read: How to recognize AI-generated phishing mails. But the best defense is the general advice: don’t click on links in unsolicited emails.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Dental group lied through teeth about data breach, fined $350,000

A US chain of dental offices known as Westend Dental LLC denied a 2020 ransomware attack and its associated data breach, instead telling their customers that data was lost due to an “accidentally formatted hard drive.”

Unfortunately for the organization, the truth came out. Westend Dental agreed to settle several violations of the Health Insurance Portability and Accountability Act (HIPAA) with a $350,000 penalty.

In October 2020, Westend Dental was attacked by the Medusa Locker ransomware group. Medusa Locker is ransomware that operates under a Ransomware-as-a-Service (RaaS) model, primarily targeting large enterprises in sectors such as healthcare and education. It is known for double extortion tactics: the attackers encrypt victims’ data while also threatening to release sensitive information unless a ransom is paid.

Westend Dental decided not to submit the mandatory notification within 60 days, waiting until October 28, 2022—two years later—to submit a data breach notification form to the State of Indiana.

The Indiana Office of Inspector General (OIG) later uncovered evidence that Westend Dental had experienced a ransomware attack on or around October 20, 2020, involving state residents’ protected health information, but Westend Dental still denied there had been a data breach. The investigation was prompted by a consumer complaint from a Westend Dental patient regarding an unfulfilled request for dental records.

In January 2023 a witness confirmed there had been a data breach, which prompted the Indiana OIG to initiate a wider investigation to assess compliance with the HIPAA rules and state laws. This investigation revealed extensive HIPAA violations.

A selection of the other violations that were found during the investigation include:

  • HIPAA policies and procedures were not given to or made readily available to employees.
  • The company provided no HIPAA training for employees prior to November 2023.
  • There was no evidence that a HIPAA-compliant risk analysis had ever been conducted (lists of usernames and passwords were stored in plain text on the compromised server).
  • There were no password policies until at least January 2024 (the same username and password were used for all Westend Dental servers that contained protected health information).
  • No physical safeguards were implemented to limit access to servers containing patient data. (Some servers were located, unprotected, in employee break rooms and bathrooms.)

Court documents also reveal that because Westend Dental did not conduct a forensic investigation, the exact number of people affected by the breach is unknown. We do know that Westend Dental served around 17,000 patients across all companies and practices at the time of the ransomware attack.

The attackers initially gained access to at least one server, but since no monitoring software was in place, it is unknown how far they were able to infiltrate other systems. And since the backups made by a third party turned out to be incomplete, Westend Dental was also unable to inform all affected patients.



Some weeks in security (December 16 – January 5)

During the holiday period on Malwarebytes Labs we covered:

And on the ThreatDown blog we covered:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

“Can you try a game I made?” Fake game sites lead to information stealers

The background and the IOCs for this blog were gathered by an Expert helper on our forums and Malwarebytes researchers. Our thanks go out to them.

A new, malicious campaign is making the rounds online, and it starts simple: unwitting targets receive a direct message (DM) on a Discord server asking about their interest in beta testing a new videogame (targets can also receive a text message or an email). Often, the message appears to come from the developer themselves; asking someone to try a game you personally made is a common lure.

If interested, the victim will receive a download link and a password for the archive containing the promised installer.

The archives are offered for download from various locations, like Dropbox, Catbox, and often the Discord content delivery network (CDN), using compromised accounts, which adds extra credibility.

What the target actually downloads and installs is an information-stealing Trojan.

There are several variations going around. Some use NSIS installers, but we have also seen MSI installers. Various information stealers are being spread through these channels, such as Nova Stealer, Ageo Stealer, and Hexon Stealer.

Nova Stealer and Ageo Stealer are Malware-as-a-Service (MaaS) stealers, where criminals rent out the malware and its infrastructure to other criminals. They specialize in stealing credentials stored in most browsers, session cookie theft for platforms like Discord and Steam, and information theft related to cryptocurrency wallets.

Part of the Nova Stealer’s infrastructure is a Discord webhook, which lets the malware push stolen data to the criminals whenever a certain event occurs. Instead of having to check for new information regularly, they are alerted as soon as it comes in.
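To see why webhooks are so convenient for this kind of real-time alerting, here is a minimal, generic sketch of pushing a message to a Discord webhook over its documented JSON API. The URL is a placeholder and the `notify` helper is our own name, not anything from the malware:

```python
import json
from urllib import request

# Placeholder only; real webhook URLs look like
# https://discord.com/api/webhooks/<id>/<token>
WEBHOOK_URL = "https://discord.com/api/webhooks/0000/example-token"

def notify(message: str, dry_run: bool = True) -> bytes:
    """Push a message to a Discord webhook.

    With dry_run=True, return the JSON body that would be POSTed
    instead of touching the network.
    """
    payload = json.dumps({"content": message}).encode()
    if dry_run:
        return payload
    req = request.Request(
        WEBHOOK_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.read()

print(notify("new event received"))
```

A single POST like this lands instantly in a Discord channel the sender controls, which is exactly the push-not-poll property that makes webhooks attractive, for legitimate alerting and criminal exfiltration alike.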

The Hexon stealer is relatively new, but we know it is based on Stealit Stealer code and capable of exfiltrating Discord tokens, 2FA backup codes, browser cookies, autofill data, saved passwords, credit card details, and even cryptocurrency wallet information.

One of the stealers’ main interests seems to be Discord credentials, which can be used to expand the network of compromised accounts. This also helps the criminals because some of the stolen information includes the accounts of victims’ friends. By compromising an increasing number of Discord accounts, criminals can fool other Discord users into believing that their everyday friends and contacts are speaking with them, emotionally manipulating those users into falling for even more scams and malware campaigns.

But the end goal of this scam, and most others, is monetary gain. So keep an eye on your digital and fiat currency if you’ve fallen for one of these scams.

How to recognize the fake game sites

There is one very active campaign that uses a standard template for the website. This makes it easier for the cybercriminals to change names and locations, but also easier for us to recognize the sites.

Standard layout of the fake game websites

The websites are hosted by various companies that are very unresponsive to takedown requests, and they are usually protected by Cloudflare, which adds an extra layer of difficulty for researchers looking to get the sites taken down. Even when a site does get taken down, the criminals can easily set up a new one.

Another campaign uses Blogspot to host its malware. These Blogspot sites have a different, but equally standardized, design.

Example of a fake game site hosted on Blogspot

Other effective measures to stay safe from these threats include:

  • Having an up-to-date and active anti-malware solution on your computer.
  • Verifying invitations from “friends” through a different channel, such as texting them directly or contacting them on another social media platform. Remember, their current account may have been compromised.
  • Remembering to not act upon unsolicited messages and emails, especially when they want you to download and install something.

IOCs

Download sites:

dualcorps[.]fr

leyamor[.]com

crystalsiege[.]com

crystalsiege[.]online

dungeonofdestiny[.]pages.dev

mazenugame[.]blogspot.com

mazenugames[.]blogspot.com

yemozagame[.]blogspot.com

domenubeta[.]blogspot.com

domenugame[.]blogspot.com

The known download sites will be blocked by Malwarebytes/ThreatDown products which will also detect the information stealers.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

Connected contraptions cause conniptions for 2024

The holidays are upon us, which means now is the perfect time for gratitude, warmth, and—because modern society has thrust it upon us—gift buying.

It’s Bluey and dig kits and LEGOs for kids, Fortnite and AirPods and backpacks for tweens, and, for an adult you particularly love, it’s televisions, air fryers, e-readers, vacuums, dog-feeders, and more, which all seemingly require a mobile app to function.

“The Internet of Things” is the now-accepted term to describe countless home products that connect to the internet so that they can be controlled and monitored from a mobile app or from a web browser on your computer. The benefits are obvious for shoppers. Thermostats can be turned off during vacation, home doorbells can be answered while at work, and gaming consoles can download videogames as children sleep on Christmas Eve.

And while some of these internet-connected smart devices can sound a little dumb—cue the $400 “smart juicer” that a Bloomberg reporter outperformed with her bare hands—there are legitimate cybersecurity risks at hand. Capable of collecting data and connecting to the internet, smart devices can also fall victim to leaking that data to hackers online.

As we enter the new year, Malwarebytes Labs is looking back at some of the strangest, scariest, and most dumbfounding smart device stories this past year. Read on and enjoy.

The million-car track hack

The latest vehicles filling up the local car lot might not strike you as “smart devices,” but upon closer inspection, these cars, SUVs, trucks, and minivans absolutely are. Replete with cameras and sensors that can track location, speed, distance traveled, seatbelt use, and even a driver’s eye movements, modern vehicles are, as many have described them, “smartphones on wheels.”

For years, the cybersecurity of these particularly mobile mobiles (sorry) was passable.

When a group of security researchers hacked into a Jeep Cherokee’s steering and braking systems in 2015, they were building off earlier research that required the hacker’s laptop to be physically connected to the Jeep itself—a comforting reality when imagining shadowy cybercriminals wresting control from a driver while sitting behind a computer console hundreds of miles away. And while the security researchers were eventually able to hack the Jeep Cherokee without a physical connection, the work involved was still substantial.

But early this year, a separate group of security researchers revealed that they could remotely lock, unlock, start, turn off, and geolocate more than a million Kia vehicles by knowing nothing more than the vehicle’s license plate number. The researchers could also activate a vehicle’s horn, lights, and camera.

The vulnerability wasn’t in the vehicles themselves, but in Kia’s online infrastructure (Kia fixed the vulnerability before the researchers published their findings).

In short, the researchers posed as Kia dealers when using an online Kia web portal and, by entering the Vehicle Identification Number—which could be revealed separately through license plate numbers—they could assign certain features like remote start and geolocation to a new account, which the security researchers controlled.

While the vulnerability did not provide access to any of the vehicles’ steering, acceleration, or braking systems, it still posed an enormous privacy and security risk, researcher Sam Curry told Wired:

“If someone cut you off in traffic, you could scan their license plate and then know where they were whenever you wanted and break into their car. If we hadn’t brought this to Kia’s attention, anybody who could query someone’s license plate could essentially stalk them.”

Whispering sweet nothings into your air fryer

There’s a modern urban legend that our smartphones listen to our in-person conversations to deliver eerily relevant ads, and while there’s no proof this is true, there are certainly a few stories that raise suspicions.

Take, for instance, the case of the nosy air fryers.

On November 5, a UK consumer rights group named “Which?” published research into several categories of smart devices—TVs, smart watches, etc.—and what they discovered about three models of air fryers surprised many.

In testing the Xiaomi Mi Smart Air Fryer, the Cosori CAF-LI401S, and the self-titled Aigostar, the researchers discovered that the associated air fryer mobile apps for Android requested a startling amount of information:

“[As] well as knowing customers’ precise location, all three products wanted permission to record audio on the user’s phone, for no specified reason.”

The connected app for the Xiaomi air fryer “connected to trackers from Facebook, Pangle (the ad network of TikTok for Business), and Chinese tech giant Tencent (depending on the location of the user).” When creating an account for the Aigostar air fryer, the connected app asked users to divulge their gender and date of birth, without stating why that was necessary. The request, however, could be declined. The Aigostar app, like the Xiaomi app, sent personal data to servers in China, but this data sharing practice was clarified in the companies’ privacy policies, the researchers wrote.

The companies in the report pushed back on the characterizations from Which?, with a Xiaomi representative clarifying that “the permission to record audio on Xiaomi Home app is not applicable to Xiaomi Smart Air Fryer which does not operate directly through voice commands and video chat.” A representative for Cosori expressed frustration that the researchers at Which? did not share “specific test reports” with the company, as well.

The truth, then, may be somewhere in the middle. The air fryer mobile apps in question may request more information than necessary to function, but that could also be because the apps are trying to service the needs of many different types of products all at once.

A toothbrush tall tale

Let’s play a game of telephone: A Swiss newspaper interviews a cybersecurity expert about a hypothetical cyberattack and, days later, dozens of news outlets report that hypothetical as the truth.

This story did happen, and it would be far more disturbing if it wasn’t also tinged with a hint of humor, and that’s because the tall-tale-turned-“truth” involved a massive cyberattack launched by… toothbrushes.

In February, a Swiss newspaper article included an anecdote about a “Distributed Denial-of-Service” attack, or DDoS attack. DDoS attacks involve sending loads of connection requests (like regular internet traffic) to a certain webpage as a way to briefly overwhelm that web page and take it down. But the interesting thing about DDoS attacks is that they don’t require a laptop or a smartphone to make those requests—all they need is a device that can connect to the internet.

In the Swiss newspaper’s account, those devices were 3 million toothbrushes. Translated from the article’s original German, it read:

“She’s at home in the bathroom, but she’s part of a large-scale cyberattack. The electric toothbrush is programmed with Java, and criminals have, unnoticed, installed malware on it—like on 3 million other toothbrushes. One command is enough and the remote-controlled toothbrushes simultaneously access the website of a Swiss company. The site collapses and is paralyzed for four hours. Millions of dollars in damage is caused.”

The article, which noted that the whole scenario could’ve been ripped out of “Hollywood,” contextualized why the attack was so scary: It “actually happened.”

But it most assuredly had not.

The article had no details about the attack, so there was no company named, no toothbrush model specified, and no response from any organization. But none of that mattered as countless news outlets aggregated the original article, because what did matter was the virality of the whole affair. There was a device that seemingly everyone agreed had no reason to connect to the internet, and—would you look at that—it led to major consequences.

In reality, the consequences were to the truth.

This holiday season, let’s do better than the silliest, most needless IoT devices, and let’s connect to what matters instead.



Data breaches in 2024: Could it get any worse?

It may sound weird when I say that I would like to remember 2024 as the year of the biggest breaches. That’s mainly because that would mean we’ll never see another year like it.

To support this nomination, I will remind you of several high-profile breaches, some of a size almost beyond imagination, some that really left us worried because of the type of data that was stolen, and a few duds.

Huge increase in numbers

As we reported in July, the number of data breach victims went up 1,170% in Q2 2024, compared to Q2 2023 (from 81,958,874 victims to 1,041,312,601).

The huge increase is no big surprise if you look at the size of some of these breaches. Remember these headlines?

5. Dell notifies customers about data breach (49 million customers)

4. “Nearly all” AT&T customers had phone records stolen in new data breach disclosure (73 million people).

3. 100 million US citizens officially impacted by Change Healthcare data breach.

2. Ticketmaster confirms customer data breach (560 million customers).

1. Stolen data from scraping service National Public Data leaked online (somewhere between 2.9 billion people (unconfirmed) and 272 million unique social security numbers).

The reason I counted down to the biggest one is that the first four are household names, and people will know whether they might be affected because they are customers of those companies. But National Public Data is a company most people had never heard of before they read about the data breach.

The data gathered by National Public Data was “scraped,” meaning it was pulled from various sources and then combined in a large database. This also made it hard to get an exact number of affected people. The initially reported 2.9 billion people seemed a stretch, so we looked into it; our researchers estimate that the database contains 272 million unique Social Security Numbers. That could mean the majority of US citizens were affected, although numerous people confirmed that it also included information about deceased relatives.
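The gap between the raw record count and the number of unique people is a simple deduplication question: the same person can appear in many scraped sources. A toy sketch with entirely made-up data illustrates the idea:

```python
# Toy illustration: a scraped dataset can contain far more rows than
# unique people, because the same person appears in multiple sources.
# All values below are fabricated.
records = [
    {"name": "Jane Doe", "ssn": "123-45-6789", "source": "voter roll"},
    {"name": "J. Doe",   "ssn": "123-45-6789", "source": "marketing list"},
    {"name": "John Roe", "ssn": "987-65-4321", "source": "voter roll"},
]

# Counting distinct identifiers, not rows, gives the meaningful figure.
unique_ssns = {r["ssn"] for r in records}
print(f"{len(records)} records, {len(unique_ssns)} unique SSNs")
```

This is, in miniature, why the 2.9 billion figure (rows) and the 272 million figure (unique identifiers) can both describe the same dataset.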

Sensitive data

Some of the huge breaches we listed contained Social Security Numbers (SSNs), which are challenging to change, but other breaches revealed all kinds of sensitive information.

Financial information was leaked by MoneyGram, Slim CD, Evolve Bank, Truist Bank, Prudential, and American Express.

Medical information was leaked in the earlier-mentioned Change Healthcare breach, but we also saw several smaller incidents at healthcare providers: Australia’s leading medical imaging provider I-MED Radiology; DocGo, a US- and UK-based healthcare provider that offers mobile health services, ambulance services, and remote monitoring for patients; CODAC Behavioral Healthcare, a nonprofit outpatient provider of treatment for Opioid Use Disorder (OUD); and DNA testing companies.

Ransomware incidents are also a big source of data breaches. When victims refuse to pay, the ransomware groups publish stolen data, as we saw with pharmacy chain Rite Aid.

Other sensitive data might have surfaced in hacktivist breaches at the Heritage Foundation, The Real World, and the Internet Archive. And sometimes it may be hard to not feel a bit of schadenfreude, as in the breach of the userbase of mobile monitoring app mSpy.

Anticlimaxes

In a few cases, there was a lot of fuss about something that turned out not to be so bad after all.

In February, a cybercriminal offered a business contact information database containing 132.8 million records for sale. It turned out to be a two-year-old third-party database containing around 122 million unique business email addresses. That would have put it in our top five, but the information in such a database ages rather quickly: as soon as you move to a new job, your old email address gets decommissioned and becomes worthless to phishers and other cybercriminals.

In July, a user leaked a file containing 9,948,575,739 unique plaintext passwords. The list was referred to as RockYou2024 because of its filename, rockyou.txt. However, without the associated usernames or email addresses, the list is of limited use to cybercriminals. If you don’t reuse passwords and never use “simple” passwords, like single words, this release should not concern you.

If you were in any way affected by a data breach, we encourage you to have a look at our guide: Involved in a data breach? Here’s what you need to know.



Is nowhere safe from AI slop? (Lock and Code S05E27)

This week on the Lock and Code podcast…

You can see it on X. You can see it on Instagram. It’s flooding community pages on Facebook and filling up channels on YouTube. It’s called “AI slop” and it’s the fastest, laziest way to drive engagement.

Like “click bait” before it (“You won’t believe what happens next,” reads the trickster headline), AI slop can be understood as the latest online tactic for getting eyeballs, clicks, shares, comments, and views. With this go-around, however, the methodology is turbocharged with generative AI tools like ChatGPT, Midjourney, and MetaAI, which can all churn out endless waves of images and text with few restrictions.

To rack up millions of views, a “fall aesthetic” account on X might post an AI-generated image of a candle-lit café table overlooking a rainy, romantic street. Or, perhaps, to make a quick buck, an author might “write” and publish an entirely AI-generated crockpot cookbook—they may even use AI to write the glowing reviews on Amazon. Or, to sway public opinion, a social media account may post an AI-generated image of a child stranded during a flood with the caption “Our government has failed us again.”

There is, currently, another key characteristic to AI slop online, and that is its low quality. The dreamy, Vaseline sheen produced by many AI image generators is easy (for most people) to spot, and common mistakes in small details abound: stoves have nine burners, curtains hang on nothing, and human hands sometimes come with extra fingers.

But little of that has mattered, as AI slop has continued to slosh about online.

There are AI-generated children’s books being advertised relentlessly on the Amazon Kindle store. There are unachievable AI-generated crochet designs flooding Reddit. There is an Instagram account described as “Austin’s #1 restaurant” that only posts AI-generated images of fanciful food, like Moo Deng croissants, and Pikachu ravioli, and Obi-Wan Canoli. There’s the entire phenomenon on Facebook that is now known only as “Shrimp Jesus.”

If none of this is making much sense, you’ve come to the right place.

Today, on the Lock and Code podcast with host David Ruiz, we’re speaking with Malwarebytes Labs Editor-in-Chief Anna Brading and ThreatDown Cybersecurity Evangelist Mark Stockley about AI slop—where it’s headed, what the consequences are, and whether anywhere is safe from its influence.

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

2024 in AI: It’s changed the world, but it’s not all good

A popular saying is: “To err is human, but to really foul things up you need a computer.”

Although the saying is older than you might think, it still postdates the concept of artificial intelligence (AI).

And for all the years we have waited for AI technology to become commonplace, if AI has taught us one thing this year, it’s that when humans and AI cooperate, amazing things can happen. But amazing is not always positive.

There have been some incidents in the past year that have made many people even more afraid of AI than they already were.

We started off 2024 with a warning from the British National Cyber Security Centre (NCSC) telling us it expects AI to heighten the global ransomware threat.

A lot of AI-related stories this year dealt with social media and other public sources being scraped to train AI models.

For example, X was accused of unlawfully using the personal data of more than 60 million users to train its AI, Grok. Underlining that unease, we saw a hoax go viral on Instagram Stories that told people they could stop Meta from harvesting their content by copying and pasting some text.

Facebook had to admit that it scrapes the public photos, posts, and other data from the accounts of Australian adult users to train its AI models, which no doubt contributed to Australia’s ban on social media for children under the age of 16.

As with many developing technologies, sometimes the race to stay ahead is more important than security. This was best demonstrated when an AI companion site called Muah.ai got breached and details of all its users’ fantasies were stolen. The hacker described the platform as “a handful of open-source projects duct-taped together.”

We also saw an AI supply-chain breach when a chatbot provider exposed 346,000 customer files, including ID documents, resumes, and medical records.

And if the accidents didn’t scare people off, there were also some outright scams targeting people who were eager to use some of the popular applications of AI. A free AI editor lured victims into installing an information stealer that came in both Windows and macOS flavors.

We saw further refinement of an ongoing type of AI-supported scam known as deepfakes. Deepfakes are realistic AI-generated media, created with the aim of tricking people into thinking the content of a video or image actually happened. Deepfakes can be used in scams and in disinformation campaigns.

A deepfake of Elon Musk was named the internet’s biggest scammer as it tricked an 82-year-old into paying $690,000 through a series of transactions. And AI-generated deepfakes of celebrities, including Taylor Swift, led to calls for laws to make the creation of such images illegal.

Video aside, we reported on scammers using AI to fake the voices of victims’ loved ones, claiming they’ve been in an accident. Reportedly, with the advancements in technology, only one or two minutes of audio—perhaps taken from social media or other online sources—are needed to generate a convincing deepfake recording.

Voice recognition doesn’t always work the other way around, though. Some AI models have trouble understanding spoken words. McDonald’s ended its AI drive-through ordering experiment with IBM after too many incidents, including customers getting 260 Chicken McNuggets or bacon added to their ice cream.

To sign off on a positive note, a mobile network operator is using AI in the battle against phone scammers. AI Granny Daisy uses several AI models working together to listen to what scammers have to say and then respond in a lifelike manner, giving the scammers the idea they are working on an “easy” target. Playing on the scammers’ biases about older people, Daisy usually acts as a chatty granny, wasting time the scammers could otherwise spend on real victims.

What do you think? Do the negatives outweigh the positives when it comes to AI, or is it the other way round? Let us know in the comments section.

Our Santa wishlist: Stronger identity security for kids

Sorry for the headline, but we have to get creative to get anyone to read an article on a Friday like this one, even if it is an important story.

As we enter the holidays and parents begin to rest after another hectic year of shopping for their kids, Malwarebytes Labs wants to draw some attention to a part of most children’s lives that deeply affects their online security: The education system.

Although children in the US can’t take out loans or get credit cards on their own, they can still become victims of identity theft, which can turn into a lifelong burden in the form of bad credit ratings and even criminal records.

An older study by Experian estimated that 25 percent of children will be victims of identity fraud or theft by the time they are 18 years old. In the current system it’s even possible for a newborn to be assigned a Social Security Number (SSN) that has already been used by a criminal.

The Social Security Administration (SSA) has already assigned more than half of all available SSNs, and because there is no check before a number gets issued, a baby could end up with one that carries a bad history.

But usually, it happens later on. According to Javelin’s 2022 Child Identity Fraud Study, approximately 1.7 million US children had their personal information exposed and potentially compromised due to data breaches in 2021.

Much of the leaked information about children comes, unsurprisingly, from the educational institutions they attend.

Breaches in education don’t follow a pattern, nor do they affect only children. They range from leaky school apps to ransomware, and from childcare providers to teachers’ retirement systems.

Even though the Taxpayer First Act of 2019 mandates that the IRS notify taxpayers, including parents and guardians, when there is suspected identity theft, it has been criticized for not complying with this obligation.

So, parents and guardians need to be vigilant themselves.

How to keep an eye on your children’s identities

There are a few things you can do.

  • Contact the three major credit bureaus (Equifax, Experian, and TransUnion) to check if your child has a credit report. Generally, children under 18 should not have a credit report. If a report exists, it may indicate identity theft. If a credit report is found, inform the credit bureau it may be fraudulent. You may need to provide documents to credit bureaus to verify your child’s identity and your own.
  • If your child is under 16, you can request a free credit freeze to prevent new accounts from being opened in their name. This freeze remains in place until you request it to be removed. The process for getting a freeze for a minor is different than getting one for an adult. The credit bureaus give specific instructions at these three sites: Experian, Equifax, and TransUnion.
  • Limit who you share your child’s Social Security number with and only provide it when absolutely necessary. Don’t be afraid to ask why the information is needed and how it will be protected.
  • If you suspect your child’s identity has been stolen, report it to the Federal Trade Commission (FTC) at IdentityTheft.gov. Also, contact your local law enforcement to get a police report and notify the fraud departments of companies where fraudulent accounts were opened in your child’s name.

Check your digital footprint

If you want to find out what personal data of yours has been exposed online, you can use our free Digital Footprint scan. Fill in the email address you’re curious about (it’s best to submit the one you most frequently use) and we’ll send you a free report.


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.