
Age verification: Child protection or privacy risk?

With governments demanding real age verification on websites that host adult content, and platforms such as social media networks and Roblox introducing restrictions based on a user’s age, the controversy around the different types of age verification and their implications is growing.

Last week, Roblox announced new age estimation technology which, it says, should help to confirm users’ ages and unlock a feature called Trusted Connections for those aged 13 and older. Trusted Connections allows teens aged between 13 and 17 to add adult users (18+) they know in real life. It’s billed as an option to keep out predators, which is good. But the age estimation technology raises concerns and questions.

While Roblox didn’t release any details about how its new technology works, the age estimation processes we know of are based on Artificial Intelligence (AI) tools that scan selfies or short videos and compare them against a database to estimate the user’s age. Needless to say, they are not always right, and the approach opens the system up to deepfakes and spoofing.

This kind of technology is more effective than asking users to provide their birthday or check a box saying they are over 18, but it’s not foolproof.

We see similar concerns when it comes to age verification for sites that host adult content. As of this Friday, websites operating in the UK with pornographic content must “robustly” age-check users.

The regulator, Ofcom, lists a number of allowed methods which all have their pros and cons:

Facial age estimation

Show your face, get a guess. You take a selfie or a short video, and an algorithm tries to figure out if you look over 18. The tech claims to keep data private, but facial scans are sensitive. And, as we pointed out, accuracy is far from perfect. If you’ve ever been asked to provide an ID in real life because you “look young,” you can expect a digital déjà vu.
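
To make that accuracy tradeoff concrete, here is a minimal sketch of the decision logic such a service might use. Everything in it is an illustrative assumption rather than any vendor’s real system: the point is that a fuzzy estimate forces a buffer, and borderline users get routed to a fallback check.

```python
# Minimal sketch of the decision logic behind facial age estimation.
# "EstimationResult" and the buffer rule are illustrative assumptions,
# not any vendor's actual system.
from dataclasses import dataclass

@dataclass
class EstimationResult:
    estimated_age: float  # the model's best guess, in years
    error_margin: float   # the model's typical absolute error, in years

def is_probably_adult(result: EstimationResult, threshold: int = 18) -> bool:
    # Because estimates are imprecise, a cautious service only passes users
    # whose estimate clears the threshold by the error margin; everyone else
    # gets routed to a fallback method such as photo-ID matching.
    return result.estimated_age - result.error_margin >= threshold

print(is_probably_adult(EstimationResult(22.4, 3.0)))  # True: clearly over 18
print(is_probably_adult(EstimationResult(19.0, 3.0)))  # False: borderline, needs a fallback
```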

Open banking

Banks know your age, so why not let them confirm it? Here, you allow the age-check service to peek at your bank account. No bank statements get handed over, just a yes/no to the question: “Is this customer an adult?” It’s easy, but convincing users to link their bank to a porn site might be a different story.
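
As a rough sketch of that yes/no flow (the endpoint and field names below are hypothetical; a real deployment would sit behind the bank’s consent process), the exchange could look something like this:

```python
# Hypothetical sketch of an open-banking-style age check. The endpoint and
# field names are made up; a real flow starts with a consent step at the
# user's bank. The key point: only a boolean ever comes back.
import requests

def check_is_adult(consent_token: str) -> bool:
    resp = requests.post(
        "https://age-check.example/v1/verify",  # hypothetical age-check provider
        json={"consent_token": consent_token, "question": "is_adult"},
        timeout=10,
    )
    resp.raise_for_status()
    # The site never sees balances or statements, just the answer.
    return bool(resp.json()["is_adult"])
```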

Digital identity services

This is the world of digital wallets for your ID. Think of it as carrying your driver’s license in your phone, but only showing the “over 18” part when needed. Sounds great and it is, but you’ll need yet another app in your digital life just to vouch for your adulthood everywhere you go.

Credit card age checks

Simple logic: you need to be 18+ to have a credit card, so showing a valid one counts as proof. The age-checker pings the payment processor to see if your card is legit. It’s quick and familiar, but not everyone over 18 has a credit card. Plus, your purchase trail grows with every verification.

Email-based age estimation

Enter your email address. The system tries to deduce your age using records of that email in other “adult” places, like financial or utility services. Basically, you’re allowing digital snooping, and the effectiveness depends on your online life elsewhere having already tipped off your age somewhere along the line.

Mobile network operator checks

The system queries your phone provider to see if you have any age restrictions on your account. No parental controls? Looks like you’re an adult. Fast, but only as reliable as the information stored at your carrier, and not an option for users on pay-as-you-go or burner numbers.

Photo-ID matching

You upload your ID and a fresh selfie. The system checks if the faces and ages match up. Classic, effective, and widely used, but you’re giving away a lot of personal information, and trusting that it’ll be kept safe.

Privacy concerns

None of these options is perfect or without risk, and many have privacy implications for the user. Or, as a commenter told BBC News:

“Sure, I will give out my sensitive information to some random, unproven company or… I will use a VPN. Difficult choice.”

A VPN is a popular option to circumvent regulations that only apply in certain countries or states. VPNs offer a secure connection when you’re using the internet and they have a variety of uses, but one is getting around blocks based on your location.

Work is being done on “double-anonymity” solutions, but implementation seems to be hard. Double anonymity keeps the information held by the two providers separate from each other. The first provider (the website asking for age confirmation) only gets the requester’s age and no other information. The second provider (the age verifier) never learns which service or website the age verification is for.

In essence, the system answers only the question “Is this user of the required age?” to the site, and the third party never knows for what purpose or where this answer is used. This approach is becoming a regulatory standard in places like France to balance protecting minors online with adult users’ privacy.
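
Here is a deliberately simplified sketch of that separation. Real double-anonymity schemes use public-key cryptography and blinding so the two providers cannot be linked; this toy version uses an HMAC purely to show what data each side sees:

```python
# Deliberately simplified sketch of double anonymity. Real schemes use
# public-key signatures and blinding so the two providers can't collude;
# this toy uses an HMAC purely to show who learns what.
import hashlib, hmac, json, secrets, time

VERIFIER_KEY = secrets.token_bytes(32)  # held by the age verifier

def issue_token() -> str:
    # The verifier checks the user's age out-of-band, then signs ONLY the
    # claim "over 18". It never learns which website will consume the token.
    claim = json.dumps({
        "over_18": True,
        "nonce": secrets.token_hex(8),      # single-use
        "expires": int(time.time()) + 300,  # short-lived
    })
    sig = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    return claim + "." + sig

def site_accepts(token: str) -> bool:
    # The website learns that the user is over 18 and nothing else: no name,
    # no birth date, no identity. (A real scheme would verify a public-key
    # signature instead of sharing the verifier's key.)
    claim, sig = token.rsplit(".", 1)
    expected = hmac.new(VERIFIER_KEY, claim.encode(), hashlib.sha256).hexdigest()
    data = json.loads(claim)
    return (hmac.compare_digest(sig, expected)
            and data["over_18"]
            and data["expires"] > time.time())

print(site_accepts(issue_token()))  # True
```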

We feel “double-anonymity” sounds a whole lot better than “age estimation.” But the real question remains: is age verification an effective method to protect children, or just another threat to our privacy? Let us know your opinion in the comments.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

iPhone vs. Android: iPhone users more reckless, less protected online

The smartphone wars have a winner, and it’s Android.

No, this isn’t about which device has the best camera, the snappiest processor, or the flashiest AI features—this is about which device owners are safer online, and in many ways, it is Android users who take the crown. According to a new analysis from Malwarebytes, when compared to iPhone users, Android users share less of their personal information for promotional deals, more frequently use security tools, and more regularly create and manage unique passwords for their many online accounts.

They also, it turns out, fall victim to fewer scams.

This is the latest investigation into research conducted earlier this year by Malwarebytes that surveyed 1,300 people over the age of 18 in the US, the UK, Austria, Germany, and Switzerland. In the original report released in June, Malwarebytes revealed how mobile scams have become a part of everyday life for most everyone across the globe—and how far too many individuals have essentially given up on trying to fight back.

Now, Malwarebytes can reveal how iPhone and Android users differ when scrolling, shopping, and sending messages online. This secondary analysis controlled for age, meaning that, while iPhone users did tend to skew younger in the original data set, the differences identified here can be more directly attributed to device type.

Here are some of the key takeaways:

  • Apple users are more likely to engage in risky behavior.
    • 47% of iPhone users purchased an item from an unknown source because it offered the best price, compared to 40% of Android users.
    • 41% of iPhone users sent a Direct Message (DM) on social media to a company or seller account to get a discount or discount code, compared to 33% of Android users.
  • Apple users take fewer precautions online.
    • 21% of iPhone users said they use security software on their mobile phones, compared to 29% of Android users.
    • 35% of iPhone users choose unique passwords for their online accounts, compared to 41% of Android users.
  • Apple users are more likely to be the victims of scams.
    • 53% of iPhone users have fallen victim to a scam compared to 48% of Android users.

Importantly, the behavioral splits here are largely device-agnostic.

Android users are not scanning fewer QR codes and iPhone users are not failing to make unique passwords because their respective devices are somehow incapable. Instead, iPhone users are making worse decisions about buying things online and about staying safe from all types of cyberthreats—whether that includes phishing attempts, social engineering scams, or malware infections.

The reasons for this are complex and hard to identify, but Malwarebytes’ original research can provide a clue. Namely, iPhone users were slightly more likely than Android users (55% compared to 50%) to agree with the following statement:

“I trust the security measures on my mobile/phone to keep me safe.”

That trust could have an adverse effect, in that iPhone users do not feel the need to change their behavior when making online purchases, and they have less interest in (or may simply not know about) using additional cybersecurity measures, like antivirus.

Whatever the reasons, there is room for improvement. As explained by Mark Beare, general manager of consumer business for Malwarebytes, staying safe online today cannot rely on any single platform, device, or operating system.

“Devices and operating systems are just gateways to apps and websites, and it’s often those online spaces that present cyber risks,” Beare said. “When those websites or apps serve malicious or deceptive content, it’s up to the user to decide what’s real, what’s a scam, and where they should or shouldn’t click.”

Here is where iPhone users should pay the most attention when using the internet.

Unsafe shopping

It’s getting harder to shop safely online.

For years, the cybersecurity industry warned people about the most obvious red flags when making a purchase or offering a donation online: Don’t click on unknown links, don’t share personal information, don’t send messages directly to strangers, and don’t scan QR codes that can lead to unknown locations. Behind all of these could lie malware, data theft, and even the slow start of a social engineering scam.

And yet, in the past few years, even legitimate businesses have asked everyday consumers to do these same, reckless things. Online stores ask that people send a Direct Message (DM) on social media for a discount code, or that they sign up their email or phone number for a promotional offer, or that they complete their payment by scanning a QR code, or that they track an upcoming delivery by clicking on a link sent via text.

Just because established businesses are leaning into these tactics does not make the tactics inherently safe, and unfortunately, iPhone users are pushing back the least.

According to Malwarebytes’ recent analysis, 63% of iPhone users signed up their phone number for text messages so they could get a coupon, discount, free trial, or other promotional offer, compared to the 55% of Android users who did the same. Similarly, 41% of iPhone users “sent a DM on social media to a company or seller account to get a discount or discount code,” compared to 33% of Android users.

Malwarebytes also found that 47% of iPhone users “purchased an item from an unknown website or supplier because it offered the best price,” compared to 40% of Android users.

In looking at the data, however, it is important to recognize that some of the behavior from iPhone users has been thrust upon them.

For example, 70% of iPhone users have “scanned a QR code to begin or complete a purchase.” Beginning in 2020, scanning a QR code became commonplace as restaurants across the world implemented several strategies to limit the spread of COVID-19. This practice isn’t the fault of iPhone users (or the 63% of Android users who have done the same), and they shouldn’t be “blamed” for what the world asked of them.

However, sharing a phone number, sending a DM to a stranger, and buying from unknown websites are decidedly not requirements today for making an online purchase. 

As Malwarebytes discussed on the Lock and Code podcast earlier this year, “data deals” in which consumers are asked to give up some of their privacy for a one-time discount are rarely, if ever, worth the cost. Separately, the most common start to a romance scam, job scam, or investment scam is through a DM sent on social media.

Though legitimate companies have co-opted these strategies to boost engagement and revenue, the public still have an opportunity to push back. If they do not, there is a real risk that these marketing tactics become so normalized that online scammers will find it easier to send malicious messages, disguise their intentions, and steal from innocent people.

Not so pro(active)

Ever since a devastatingly effective commercial was unveiled to the public some 20 years ago, there’s been a persistent belief that Apple devices are somehow impervious to viruses, malware, and all other nasty cyber infections.

The marketing ploy was wrong back then and it is still wrong today—Macs get plenty of viruses—but the damage is already done, and the consequences might be most visible in how iPhone users feel about traditional cybersecurity tools: In short, they don’t use them.

According to Malwarebytes’ new analysis, just 21% of iPhone users said they use security software on their mobile phone, compared to 29% of Android users. iPhone users were also less likely than Android users to use an ad blocker (19% of iPhone users compared to 27% of Android users).

The data gaps here are sometimes benign. The low use of “ad blockers,” in particular, should come as no surprise. These tools are mostly understood as add-ons for desktop and laptop versions of popular web browsers—such as Google Chrome, Microsoft Edge, and Mozilla Firefox. While many mobile browsers have ad blockers built in by default, this may not be known to the average user.

Also remember that, as smartphone ownership increases across the globe, so do the numbers on smartphone “dependency.” According to Pew Research Center, 15% of adults in the US only have a smartphone to connect to the internet, meaning, perhaps, that 15% of people simply cannot access the same security and privacy tools that are developed predominantly for computers.

That said, the justifications for iPhone users start to fade when looking at one last number.

Only 35% of iPhone users “choose unique and strong passwords for accounts,” compared to 41% of Android users. Creating strong, unique passwords for online accounts is foundational to staying safe online, and it has only been made easier and more accessible over time.

For users who cannot remember a unique password for every account (which is every person alive), password managers are available—some for free—to help create, store, and recall as many strong passwords as needed. For users who do not trust a third-party password manager (understandably so), Apple released its “Passwords” app with iOS 18 nearly one year ago, making password management easier by default. And for users who don’t trust password managers at all (of which there are many), the antiquated practice of physically writing usernames and passwords in a private journal isn’t that outlandish.
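
Under the hood, generating a strong, unique password is simple. This minimal sketch shows roughly what a password manager does when it creates a credential, using Python’s standard secrets module:

```python
# Roughly what a password manager does when it creates a credential:
# pick a long password uniformly at random, using a cryptographic RNG.
import secrets
import string

def generate_password(length: int = 20) -> str:
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account; never reuse them.
print(generate_password())  # e.g. 'r#Q8v...' (different every call)
```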

In short, there is little excuse for failing to create and use unique passwords for every online account, and that goes for Android users, too. The technology can be intimidating, but it’s worth the work.

Security for all

The measurably unsafe behavior of some iPhone users online comes with unfortunate, measurable consequences. The poor password hygiene, risky buying behavior, and limited antivirus protection are all paired with a higher overall rate of victimization—53% of iPhone users have fallen victim to a scam compared to 48% of Android users.

In the worst circumstances, these disparate rates could invite blame, but that’s the wrong conclusion to draw. As any scam victim knows, the statistical analysis of victimization means absolutely nothing when you are personally trying to recover your money, your reputation, your private photos, and your sense of trust in the world around you.

Every person, no matter their device, should create unique passwords for individual accounts, use security products (which can also detect malicious websites and phishing schemes), and rely on friends and family when something doesn’t feel right online. And for those who want 24/7 guidance on strange messages, phone numbers, and more, there is always Malwarebytes Scam Guard to lead the way. Try it today.

Introducing the smarter, more sophisticated Malwarebytes Trusted Advisor, your cybersecurity personal assistant

You ever get that feeling when you double-check the locks, but still wonder if you’ve missed something? That’s what a lot of people feel about cybersecurity.  

That’s where Malwarebytes Trusted Advisor comes in. You can see it as your very own cybersecurity personal assistant, giving you real-time insight into how protected you are, without all the jargon or notifications.  

Trusted Advisor checks the state of your cybersecurity tools like real-time protection, VPN connection, scheduled scans, and browser safety, and gives you a clear, color-coded view of your current risk level. 

And now, our Windows version is sharper, smarter, and more helpful than ever. 

What’s new? 

  • Stronger Wi-Fi protection*: Trusted Advisor now checks whether you’re connected to an open, unsecured Wi-Fi network.
  • Smarter security score: Your protection score is now more accurate and personalized, giving you clearer insights into your overall security health. 
  • Seamless identity protection integration: Trusted Advisor now works hand-in-hand with our identity protection, making it easier to stay ahead of threats like data leaks and identity fraud. 
  • Take control of ads*: Trusted Advisor now helps you disable Windows’ ad features, such as start menu suggestions and login screen ads, allowing you to enjoy a smoother Windows experience free from distractions.  

Cybersecurity sometimes feels like quantum physics, but it doesn’t have to. With its latest updates, Malwarebytes Trusted Advisor makes it easier than ever to understand what’s going on behind the scenes, and to take control of your digital safety without needing a degree in computer science.  

Want to see the new Trusted Advisor in action? Open Malwarebytes on Windows and check your protection dashboard.  

* Windows 11 only. 

AI-generated image watermarks can be easily removed, say researchers

Now that AI can make fake images that look real, how can we know what’s legitimate and what isn’t? One of the primary defenses has been defensive watermarking: embedding invisible markers in AI-generated images to flag them as synthetic. Now, researchers have broken that technology.

Generative AI isn’t just for writing emails or suggesting recipes. It can generate entire images from scratch. While most people use that for fun (making cartoons of your dog) or practicality (envisioning a woodworking project, say), some use it irresponsibly. One example is creating images that look like real creators’ content (producing an image ‘in the style of’ a particular artist).

Another is using it for misinformation, either intentionally or unintentionally. This image-based misinformation has grown exponentially in an AI-powered world, according to Google researchers. Misinformation can be playful or experimental, such as Katy Perry’s deepfake attendance at the Met Gala, or the Pope in a puffer jacket. But it can also be harmful, putting real people in situations they didn’t consent to and creating false narratives for ideological, financial, or other purposes.

In the early days of AI image generation, people could recognize the fakes themselves. People in pictures having the wrong number of fingers was one giveaway, as were body parts like hands and arms that didn’t fit together well, especially when people were pictured close together. As AI generation got better, we could still rely on programs to detect small inconsistencies in the images. But those fake images get more convincing every day.

Generative AI companies have been taking action to stop this. OpenAI, Google, and others committed to embedding watermarks in their AI-generated images. These are digital fingerprints, invisible to the naked eye but easily detectable by software, that prove an image was generated by AI and therefore not real.

Now, researchers at the University of Waterloo in Canada have worked out a way to subvert this defensive watermarking. Andre Kassis and Urs Hengartner at the University’s Cheriton School of Computer Science have created a tool called UnMarker.

UnMarker removes those watermarks from images, making it impossible for watermark detectors to determine that an image has been artificially generated. The scientists say that the tool is universal, defeating all watermarking schemes. These include semantic watermarks, which alter the structure of the image itself. These are more deeply embedded in an image, and traditionally tougher to counter.

The tool capitalizes on two fundamental requirements of watermarking schemes. The first is that they mustn’t degrade the quality of the image. The second is that they must survive manipulation such as cropping. That means watermarks are restricted in how they can alter an image: they have to focus on shifting the intensity of pixels in the picture.

Relying on this fact, Kassis and Hengartner’s tool analyzes the frequency spectrum of an image’s pixels to see if anything is unusual. If it finds an anomaly, it treats that as a sign that there’s a watermark. It then rearranges the pixel frequencies across the image so that the image no longer triggers a watermark detector.
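
The toy example below is not UnMarker’s actual algorithm, just an illustration of the frequency-domain idea it builds on: hunt for anomalous spectral peaks and smooth them away while leaving the visible image largely intact.

```python
# Toy illustration of the frequency-domain idea (NOT UnMarker's actual
# algorithm): flag spectral bins that are extreme outliers, then smooth
# them toward the average magnitude while keeping the phase intact.
import numpy as np

def flatten_spectral_anomalies(image: np.ndarray, z_thresh: float = 6.0) -> np.ndarray:
    spectrum = np.fft.fft2(image.astype(float))
    mag = np.abs(spectrum)
    z = (mag - mag.mean()) / (mag.std() + 1e-9)  # how unusual is each bin?
    z[0, 0] = 0                                  # ignore the DC term (overall brightness)
    mag_fixed = np.where(z > z_thresh, mag.mean(), mag)
    spectrum_fixed = mag_fixed * np.exp(1j * np.angle(spectrum))
    return np.real(np.fft.ifft2(spectrum_fixed))

# Example: a random "image" with an injected periodic pattern standing in
# for a watermark's spectral fingerprint.
img = np.random.rand(64, 64) + 0.2 * np.sin(np.arange(64) * 2.5)
cleaned = flatten_spectral_anomalies(img)
print(cleaned.shape)  # (64, 64)
```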

UnMarker, which the researchers have released publicly, works without any access to the AI algorithm’s internal workings. Neither does it need any other data to work, they add. It’s a ‘black box’ mechanism. You can just run it as a watermark eraser.

It’s not perfect, but it reduces the best detection rate to 43%, even on semantic watermarks. That means you can’t trust the detection tool’s results.

“Our findings show that defensive watermarking is not a viable defense against deepfakes, and we urge the community to explore alternatives,” the researchers said in their paper.

So the battle to fight misinformation continues. Now it’s up to watermark designers to up the ante or develop another method to flag deepfakes. We’re not sure that this cat and mouse game will ever end.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Proton launches Lumo, a privacy-focused AI chatbot

Proton, known for its privacy-focused suite of services, has announced the introduction of Lumo, a privacy-first Artificial Intelligence (AI) chatbot.

It is good to know before you dive in that Proton’s chatbot has two modes that offer a very different experience. If you want Lumo to access the internet, you will have to use the “Web search” button before you submit your prompt. If you don’t, it will tell you it doesn’t have the capability to browse the web or access real-time information.

While this somewhat limits Lumo’s usefulness when you’re asking about recent developments, it is reassuring that the model was trained on selected sources, which also reduces the risk of hallucination. Lumo is set up to recognize when it doesn’t know the answer; in such cases, it will tell you so and suggest alternative sources of information, or other ways to find the answer you’re looking for.

Private, Safeguarded, and Treated with respect

Proton believes that a privacy-first AI is essential because hundreds of millions of ordinary people currently interact with AI tools, unknowingly sharing sensitive information. This information is far more sensitive than what they would reveal in a typical Google search query.

Not only does this constant interaction with AI tools put personal information at risk, but we have also witnessed several instances where employees submitted company secrets to an AI without realizing that the information might be used to train the model or accessed by third parties.

Similarly, we recently reported on how Meta AI chats might be public without the user’s awareness, and many AI bots have spilled sensitive information for separate reasons, like someone guessing a default password.

Lumo offers a different approach to AI chatbots. Other AI companies keep a record of all the conversations on their servers, leaving this information vulnerable to data leaks or exploitation for profit. Lumo does not store any logs of your conversations on its server, and only your device can decrypt any chats you choose to save.
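
Proton hasn’t published its storage code in this announcement, but “zero-access” designs generally follow a pattern like the sketch below, where the encryption key is generated and kept on the device so the server only ever stores ciphertext:

```python
# Sketch of "zero-access" chat storage: the key is created and kept on the
# user's device, so the server only ever sees ciphertext. Proton's actual
# scheme is more elaborate; this just shows the principle.
from cryptography.fernet import Fernet

device_key = Fernet.generate_key()  # generated locally, never uploaded
cipher = Fernet(device_key)

ciphertext = cipher.encrypt(b"my saved Lumo conversation")
# Only the ciphertext is synced to the server...
print(ciphertext[:24])

# ...and only the device holding the key can read it back.
print(cipher.decrypt(ciphertext))   # b'my saved Lumo conversation'
```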

Most people recognize a significant privacy risk here: the possibility that their conversations could help train the AI. This concern makes sense because the training process might use your personal data to generate responses in other users’ conversations.

Lumo does not do that. I asked Lumo where its training data comes from, and how—if it cannot browse the web, or use conversations—it is able to learn. Lumo answered:

“Great question! My training data comes from a curated set of sources that were available up until my last training cut-off in October 2023. These sources include a wide range of books, articles, websites, and other publicly available texts. The data was carefully selected and processed to ensure quality and relevance.

While I can’t browse the web or use conversations for learning, my training data provides a broad foundation of knowledge that allows me to understand and respond to a wide variety of topics. Additionally, my ability to process and interpret information means that I can often provide helpful and accurate responses even to questions that are somewhat different from what I was specifically trained on.

As for learning, I don’t have the ability to update my knowledge or learn from new experiences after my initial training. However, my design allows me to provide useful and informative responses based on the knowledge I already have. If you have any specific questions or need information, feel free to ask, and I’ll do my best to help!”

We took Lumo for a little spin and, for now, the lack of support for direct image uploads or processing is the only thing keeping us from using it much more. But we appreciate the idea of having a privacy-focused AI chatbot available when we want our conversations to stay private.

And who can resist a purple kitty anyway?


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Startup takes personal data stolen by malware and sells it on to other companies

A tech startup is taking personal data stolen by infostealer malware, which it finds on the dark web, and selling access to that data. And it claims to be working within the law.

According to 404 Media, for as little as $50, Farnsworth Intelligence will give companies a look at records from infostealer logs.

Infostealers are a type of malware that focuses on harvesting as much data from a victim’s computer as possible. Criminals infect computers in various ways, including via malicious links and infected versions of pirated software or cheat add-ons.

The malware can do everything from monitoring every key you type to probing your internal storage and memory for secrets. Some infostealers even take snapshots of your screen to see what they can find. All this data gets beamed back to the infostealer’s criminal operators.

There is no suggestion that Farnsworth Intelligence infects computers with infostealer software itself. It claims to operate within legal frameworks, with data provided through a third-party vendor that specializes in security monitoring services.

This data is available in huge quantities. The startup offers over 20 billion records of stolen data from over 50 million computers. A professional subscription-based version of the service offers access to anything an infostealer can pilfer, including cryptocurrency wallet data, browser histories detailing what sites you’ve visited, usernames and passwords for those sites, and browser cookies that criminals could use to impersonate you on a site. Customers can also get access to a list of applications on a person’s computer.

Farnsworth Intelligence says its target audience for the service is “professionals with a legitimate use case in industries such as investigations, intelligence, journalism, law enforcement, cyber security, compliance, IP/brand protection, executive protection, etc”.

There is also a version with ad hoc searches paid for in credits. This gives you access to a subset of the data, searched via phone number, email address, username, domain, password, or autofills (the information that browsers use to fill common fields in web forms). At one credit per search, the cheapest option is the $50 package, which buys 45 credits.

The service doesn’t just provide access to a static set of data; it’s adding to it all the time. It claims to add over 185 million new records, stolen from over 40,000 computers each month.

“While historical breach data remains valuable, its utility diminishes over time as credentials change and contact information becomes outdated,” says the blurb on Farnsworth’s website (which we’re not linking to here). “Infostealer logs provide investigators with current, device-level data that offers significantly higher intelligence value than traditional breach compilations.”

Is this legal? The startup seems to think so. There’s no vetting of customers, though, at least for the consumer service, which makes us worry about how, for example, a cyberstalker or abusive ex might use such a thing. Regardless, it’s another reason why you should protect yourself from infostealers.

How to protect yourself from infostealers

All the normal cybersecurity rules apply:

  • Use a well-established, up-to-date anti-malware program on your computer.
  • Don’t click on links or download files you’re not sure about or weren’t expecting to receive.
  • Be careful when storing passwords, postal addresses, or credit card data in your browser’s built-in autofill storage. These are common targets for infostealers.
  • Use a password manager so you don’t have to type usernames and passwords to get into sites.
  • Never download or install software from suspicious sites, including torrent sites.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

‘Car crash victim’ calls mother for help and $15K bail money. But it’s an AI voice scam

A woman in Florida was tricked into giving thousands of dollars to a scammer after her daughter’s voice was AI-cloned and used in a scam.

Sharon Brightwell says she received a call from someone who sounded just like her daughter. The woman on the other end was sobbing and crying, telling her mom that she had caused a car accident in which a pregnant woman had been seriously injured. She said she’d been texting and driving and that her phone had now been taken by police.

“There is nobody that could convince me that it wasn’t her. I know my daughter’s cry.”

A man claiming to be her daughter’s attorney then allegedly took over the phone. He told Sharon that authorities were detaining her daughter and that she needed to provide $15,000 in cash for bail. He gave very specific instructions on what to do, including not telling the bank what the large withdrawal was for since, he said, it might affect her daughter’s credit rating.

Sharon withdrew the money, placed it in a box, and a driver picked it up. But that wasn’t the end. A new call followed, informing her that the pregnant woman’s unborn child had died in the accident, but that the family had agreed not to sue Sharon’s daughter if she paid them $30,000.

Luckily for Sharon, her grandson didn’t trust the whole thing and decided to call her daughter’s number. That call was answered by her daughter, who was at work, unaware of anything that had been going on.

By then it was too late for the $15,000.

“My husband and I are recently retired. That money was our savings.”

Unfortunately, we’re hearing a lot of these and similar stories. So, what’s going on and how can we protect ourselves?

Cloning voices with AI has improved considerably over the years and has become easily available to everyone, including cybercriminals. Many of our voices are online, via video or audio that’s been posted to social media. In Sharon’s case, her family believes the scammers used videos from Facebook or other social media to create the replica of her daughter’s voice.

AI-powered phone scams can range from brief, scripted robocalls to full conversations. Recent studies have shown that relying on human perception to detect AI-generated voice clones is no longer consistently reliable. I imagine it’s even harder when the voice is made to sound stressed and upset, and you believe it to be your child.

How to stay safe from AI-generated voice scams

  • Don’t answer calls from unknown callers and be careful about where you’ve posted audio and video online in which your voice features. It only takes a recording of a few seconds of your voice to create a convincing clone.
  • Agree on a family password that only you and your loved ones know. Don’t ever post or message about it online anywhere; decide on it in person and stick to it.
  • If you’ve forgotten the password, ask about a long-ago memory that hasn’t featured on social media. Be sure it is definitely your loved one that you are talking to.
  • Don’t try to handle situations like these alone. Find a friend, family member, friendly neighbor, or anyone who can sensitively give you their view, or support you if you’ve fallen for the scam. Sometimes having a second opinion, like Sharon’s grandson, can help to make you think twice before handing over any money.

And if you decide you don’t trust the situation:

  • Call the number you have for the relative or use other channels to contact them.
  • Whether you’ve fallen for the scam or not, report the incident to local authorities, the FTC, or relevant consumer protection bodies. Every report helps track and prevent future scams, and you may even help catch one of these criminals.

We don’t just report on phone security—we provide it

Cybersecurity risks should never spread beyond a headline. Keep threats off your mobile devices by downloading Malwarebytes for iOS, and Malwarebytes for Android today.

“Ring cameras hacked”? Amazon says no, users not so sure

In the last week, countless Amazon Ring users on TikTok, Reddit, and X have been saying they believe their Ring cameras were hacked starting May 28.

Many posted screenshots of their accounts, showing multiple unauthorized device logins, making these claims hard to ignore. Forbes looked into the issue, and its journalist even found several unfamiliar logins on his own account.

However, on Friday Ring claimed it’s just a minor issue with the displayed date:

“We are aware of a bug that incorrectly displays prior login dates as May 28, 2025.”

Visitors who go to Ring’s site are shown the following (correct at the time of writing):


“We are aware of an issue where information is displaying inaccurately in Control Center. This is the result of a backend update, and we’re working to resolve this. We have no reason to believe this is the result of unauthorized access to customer accounts.”

This message was posted on Friday, July 18. We spoke to one user who let us know that, as of Monday morning (July 21), he was unable to log in through the website. He was, however, able to log in through the app and saw no May 28 logins.

So, what’s Ring claiming here? That it did an update and messed up the database? In a later message it claimed:

“Ring made a backend update that resulted in prior login dates for client devices being inaccurately displayed as May 28, 2025, and device names being incorrectly displayed as ‘Device name not found’.”

But if you look at any of the plethora of screenshots, you’ll see that there are plenty of device names displayed.

The Ring software release notes show no updates for the doorbells on or around May 28, so we think it’s safe to assume that Ring is right about it being a backend update that caused this.

There is one other thing that’s interesting in this puzzle. On July 17, founder and now CEO Jamie Siminoff announced some drastic changes. Siminoff reinstated Ring’s original mission statement, “Make neighborhoods safer,” which might suggest the business is going back to its founding identity as a crime prevention tool.

Before Siminoff returned as CEO he wasn’t working at Ring, and in his absence the company leaned into a more community-focused brand, distancing itself from its image as a surveillance tool. Last year, the company discontinued “Request for Assistance,” a feature that allowed law enforcement officers to ask people for camera footage through Ring’s Neighbors app. At the time, the company said it would only let police request footage during “emergencies.”

However, in April, Ring announced a partnership with Axon that effectively reintroduces video sharing with law enforcement.

The two issues could be completely unrelated, but reintroducing this functionality does sound like it would need a backend update.

Either way, Amazon will not be happy about this issue, shortly after having to warn over 200 million Prime customers that their accounts are under attack.

Worried your Ring camera has been hacked?

Again, we should reiterate that Ring says that its cameras have not been hacked. However, if you’re worried, there are some things you can do:

  • Since there is no evidence of an actual breach yet, the best thing to do for now is wait and keep an eye on the updates by Ring about this issue.
  • In the Ring app’s Control Center, check the list of authorized devices that have access to your account and remove any unfamiliar ones.
  • If you’re worried about unauthorized access and you have an alternative camera or can cope without one for a bit, you could temporarily disable your Ring doorbell and/or cameras until we hear more on the situation.
  • Consider resetting your Ring account password using a strong, unique password that you have never used before, and enable two-step verification. There’s no harm in doing this, so you may as well take this extra security step.
  • Phishers and other scammers might try to take advantage of the situation by sending you emails or messages hoping to get you to click or hand over personal details. If you receive a message that appears to come from Ring, double check via another means that it really is from Ring.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

A week in security (July 14 – July 20)

Last week on Malwarebytes Labs:

Stay safe!


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Meta execs pay the pain away with $8 billion privacy settlement

Meta chief Mark Zuckerberg and several other members of the social media giant’s top brass agreed to settle increasingly heated privacy violation claims for the price of $8 billion.

It is far from the first time that the company, its subsidiary Facebook, or its executives have responded to alleged user privacy violations with billions upon billions of dollars.

The lawsuit at hand accused Zuckerberg and other Meta leaders of failing to prevent years of violations of Facebook users’ privacy. The claims, originally filed in September 2018, took years to process, eventually resulting in a trial at the Delaware Court of Chancery. But on just the second day of proceedings, with Zuckerberg himself set to testify early the following week, the multibillion-dollar settlement was announced, with timing that many observers found suspicious and revealing.

While nobody at Meta will confirm that the settlement was reached to avoid having to testify, it very much looks like it to yours truly.

The case was brought by shareholders who accused Meta executives of many years’ worth of negligence and failure to enforce a 2012 agreement reached with the US Federal Trade Commission (FTC), which was designed to safeguard user data. The shareholders who filed the lawsuit claimed that Zuckerberg and former Meta Chief Operating Officer Sheryl Sandberg “knowingly ran Facebook as an illegal data harvesting operation.”

The shareholders wanted the 11 defendants they sued to use their personal wealth to reimburse the company after years of alleged reputational damage due to compiling privacy fiascos. The defendants denied the allegations, which they called “extreme claims.” The parties did not disclose details of the settlement. The plaintiffs’ lawyer, Sam Closic, said the agreement “just came together quickly.”

In 2019, Facebook paid a record-breaking $5 billion penalty after the FTC concluded that the company had violated the earlier 2012 FTC order by deceiving users about their ability to control the privacy of their personal information. The FTC also ordered Facebook to implement new restrictions and overhaul its corporate structure, ensuring greater accountability in decisions related to user privacy. The investigation was triggered by the Cambridge Analytica scandal, which showed that the data of 50 million users had been obtained without express permission and used for political purposes.

The $5 billion penalty explains a large part of the $8 billion demanded by the shareholders this week. In addition, Meta has faced several fines in the European Union (EU), among them a 1.2 billion euro ($1.4 billion) fine for transferring personal data to the US without explicit consent.

All this is why the shareholders wanted Zuckerberg and others to reimburse Meta an estimated $8 billion or more for the FTC fine and other legal costs. The shareholders also questioned the timing of share sales by the executives.

By settling, Zuckerberg and other defendants avoid having to answer probing questions under oath. In January, former Meta COO Sandberg was sanctioned for deleting sensitive emails related to the Cambridge Analytica investigation, complicating her testimony.

The Delaware Chancery Court will likely manage access to the full court documents for this case through its case files, or release them via public interest or watchdog groups as the settlement process concludes. Until then, speculation about the settlement’s magnitude will run rampant. What will remain unrevealed is the true reason why Meta’s executives chose to settle. But it stands to reason that they expected a continued trial, and the associated testimony, to be even more damaging.

At a time when Meta is seeing many WhatsApp users actively switch to other messaging platforms, primarily Signal and Telegram, over growing concerns about privacy and data sharing practices, and when a data breach at Instagram has sparked global privacy concerns, the last thing the company needs is the magnifying glass of an ongoing lawsuit.

What has become very clear, even without knowing all the details, is that those in the know feel that Meta keeps abusing users’ personal data for monetary gain.

Despite promises to obtain specific user consent, offer privacy settings, and improve practices, Meta has consistently disregarded users’ privacy.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.