IT News

Explore the MakoLogics IT News for insights and thought leadership on industry best practices in managed IT services and enterprise security.

How the FBI got everything it wanted (re-air) (Lock and Code S06E15)

This week on the Lock and Code podcast…

For decades, digital rights activists, technologists, and cybersecurity experts have worried about what would happen if the US government secretly broke into people’s encrypted communications.

The weird thing, though, is that, in 2018, it already happened. Sort of.

US intelligence agencies, including the FBI and NSA, have long sought what is called a “backdoor” into the secure and private messages that are traded through platforms like WhatsApp, Signal, and Apple’s Messages. These applications all provide what is called “end-to-end encryption,” and while the technology guarantees confidentiality for journalists, human rights activists, political dissidents, and everyday people across the world, it also, according to the US government, provides cover for criminals.

But to access any single criminal or criminal suspect’s encrypted messages would require an entire reworking of the technology itself, opening up not just one person’s communications to surveillance, but everyone’s. This longstanding struggle is commonly referred to as The Crypto Wars, and it dates back to the 1950s during the Cold War, when the US government created export control regulations to keep encryption technology from reaching other countries.

But several years ago, the high stakes in these Crypto Wars became somewhat theoretical, as the FBI gained access to the communications and whereabouts of hundreds of suspected criminals, and they did it without “breaking” any encryption whatsoever.

It all happened with the help of Anom, a budding company behind an allegedly “secure” phone that promised users a bevy of secretive technological features, like end-to-end encrypted messaging, remote data wiping, secure storage vaults, and even voice scrambling. But, unbeknownst to Anom’s users, the entire company was a front for law enforcement. On Anom phones, every message, every photo, every piece of incriminating evidence, and every order to kill someone, was collected and delivered, in full view, to the FBI.

Today, on the Lock and Code podcast with host David Ruiz, we revisit a 2024 interview with 404 Media cofounder and investigative reporter Joseph Cox about the wild, true story of Anom. How did it work, was it “legal,” where did the FBI learn to run a tech startup, and why, amidst decades of debate, are some people ignoring the one real-life example of global forces successfully installing a backdoor into a company?

The public…and law enforcement, as well, [have] had to speculate about what a backdoor in a tech product would actually look like. Well, here’s the answer. This is literally what happens when there is a backdoor, and I find it crazy that not more people are paying attention to it.

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

A week in security (July 21 – July 27)

A list of topics we covered in the week of July 21 to July 27, 2025.

Last week on Malwarebytes Labs:

ThreatDown blog

Stay safe!

Steam games abused to deliver malware once again

A cybercriminal known as EncryptHub (aka Larva-208) has reportedly abused the online game platform Steam to distribute information stealers.

EncryptHub managed to sneak malicious files into the Chemia game files hosted on Steam. Chemia is an adventure-survival game that puts the player in a world ravaged by a catastrophic natural disaster… which is nothing compared to the real-world disasters that can be caused by information stealers.

Chemia has not been publicly released yet, but was available in Early Access on Steam. Steam offers Early Access to certain games primarily as a development model that allows players to purchase and play games while they are still in progress, rather than waiting for a full official release. It helps developers receive direct, ongoing feedback from the community, which they can use to find bugs, balance gameplay, and improve features.

According to security researchers at PRODAFT (Proactive Defense Against Future Threats), the initial compromise took place on July 22, 2025. EncryptHub added a Trojan downloader to the game files that runs alongside the actual application.

The downloader establishes persistence on the affected machine and distributes Fickle Stealer, HijackLoader, and Vidar.

Vidar is a Malware-as-a-Service information stealer which uses public networks such as social media, communication platforms—and Steam—as parts of its Command & Control infrastructure.

HijackLoader is a malware loader used by attackers to load additional malware (such as Trojans like Danabot or the RedLine stealer) onto infected computers.

Fickle Stealer is a relatively new information stealer which uses PowerShell scripts to bypass User Account Control (UAC) and can steal sensitive files, system information, browser-stored data, cryptocurrency wallet details, and more.

As we explained many times before, information stealers can turn your life upside down. Depending on what is stored on the infected device, the consequences can range from financial damage to identity theft.

In another case of abuse of the Steam platform, we saw a cybercriminal use a sniper video game to distribute malware to unsuspecting gamers. But that criminal didn’t circulate the malicious demo on Steam directly. Instead, the game’s Steam page featured a link to the developer’s external website promoting a demo that turned out to be malware.

A month before that, a game called PirateFi was released on Steam, but turned out to be circulating malware amongst gamers.

With Steam’s huge userbase (over 100 million monthly active users), a compromised game can serve as a direct path for cybercriminals to get hold of valuable digital assets, direct financial information, and personal information.

How to stay safe

Some tips to help gamers stay clear of downloading malicious software:

  • Do not act on direct messages or other unsolicited invitations to try out a game. Random people asking you to download something should be treated as suspicious.
  • Verify invitations from “friends” through a different channel, such as texting them directly or contacting them on another social media platform. This is because their current account may have been compromised.
  • Make sure to run an up-to-date and active anti-malware solution on your computer.

Malwarebytes blocking reaitek.com, the domain hosting the PowerShell script

If you have tried the Chemia game, run a full system anti-malware scan.

Indicators of compromise

Domains:

soft-gets[.]com

reaitek[.]com

safesurf.fastdomain-uoemathhvq.workers[.]dev

Fickle downloader hash:
ed076c27b420bfa66c251488b4121913fa461367a60c5fa32cee3953efcae32b

Fickle Stealer hash:

6fb7fd9763d6b269793c80bbc03a1be358390781af4b698fba1591cb8dbb8825

Vidar Stealer hash:

2cd8c0e75cf76381f06dfe465a542e52eefa713b0bea2557763e0c0c45b21481

HijackLoader hashes:

9a733b2de84e2bf466287abd034b04b18c8c269535606e8f6403eee2a3b288c4

12935315254175719cbbaad0b213204ddebd4100ffc551d54f8cf39ced1be227
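
If you want to check downloaded game files against these hashes yourself, a small script is enough. The sketch below is a hypothetical example, not an official Malwarebytes tool: it hashes every file under a folder and flags matches against the hashes listed above. The Steam library path is only a placeholder, so adjust it to your own installation, and treat this as a quick spot check rather than a substitute for a full anti-malware scan.

```python
import hashlib
from pathlib import Path

# SHA-256 hashes from the indicators of compromise listed above
KNOWN_BAD = {
    "ed076c27b420bfa66c251488b4121913fa461367a60c5fa32cee3953efcae32b",  # Fickle downloader
    "6fb7fd9763d6b269793c80bbc03a1be358390781af4b698fba1591cb8dbb8825",  # Fickle Stealer
    "2cd8c0e75cf76381f06dfe465a542e52eefa713b0bea2557763e0c0c45b21481",  # Vidar Stealer
    "9a733b2de84e2bf466287abd034b04b18c8c269535606e8f6403eee2a3b288c4",  # HijackLoader
    "12935315254175719cbbaad0b213204ddebd4100ffc551d54f8cf39ced1be227",  # HijackLoader
}

def sha256_of(path: Path) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MB chunks."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def scan(folder: str) -> None:
    """Flag any file under the folder whose hash matches a known IOC."""
    for path in Path(folder).rglob("*"):
        if path.is_file() and sha256_of(path) in KNOWN_BAD:
            print(f"IOC match: {path}")

if __name__ == "__main__":
    # Placeholder path; point this at your own Steam library folder
    scan(r"C:\Program Files (x86)\Steam\steamapps\common\Chemia")
```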


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Watch out: Instagram users targeted in novel phishing campaign

A phishing campaign targeting Instagram users is doing the rounds. There are plenty of those around, but when we took a look at this particular email, it seemed a bit different to the normal phishing emails that point to scammy websites.

The email looked like this, which is very similar to the one Instagram sends if it wants you to confirm your identity:

phishing mail

“Hi {name}
Someone tried to log in to your Instagram account.
If this was you, please use the following code to confirm your identity:
231342
If this wasn’t you, please [Report this user] to secure your account.”

Instead of linking to a phishing website, which is most common with emails like this, both the “Report this user” and “Remove your email address” links are mailto: links. Clicking on a mailto: link opens your default email program with a pre-addressed message with the subject line “Report this user to secure your account” or “Remove your email address from this account” for the second link.
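For the curious, a mailto: link is just a URI that encodes the recipient and optional headers such as the subject line, which is why no website is needed at all. Here’s a small, hypothetical Python sketch (the address is a placeholder, not one from the campaign) showing how such a link is put together:

```python
from urllib.parse import quote

def build_mailto(address: str, subject: str) -> str:
    """Build a mailto: URI with a pre-filled subject line."""
    return f"mailto:{address}?subject={quote(subject)}"

# Hypothetical example in the style of the campaign described above
print(build_mailto("report@example.com", "Report this user to secure your account"))
# mailto:report@example.com?subject=Report%20this%20user%20to%20secure%20your%20account
```

When the victim clicks it, their own mail client does all the work, and there is no web link for a filter to inspect.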

The email addresses in these links all used innocuous-looking domains, made to resemble legitimate ones. This technique of registering lookalike domain names is called “typosquatting.”

In the case we researched, the email addresses were:

  • prestige@vacasa[.]uk.com (typosquat of vacasa.com vacation rentals)
  • ministry@syntec[.]uk.com (typosquat of syntechnologies.co.uk hardware provider)
  • technique@pdftools[.]com.de (typosquat of pdf-tools.com software provider)
  • service@boss[.]eu.com (several possibilities)
  • threaten@famy[.]in.net (science news site, possibly compromised)
  • difficulty@blackdiamond[.]com.se (known malicious domain)
  • anticipation@salomonshoes[.]us.com (typosquat of salomon.com running shoes)

We sent an email to these addresses from a dummy account to see what happened. Unfortunately, most of them were dead in the water when our email reached them. However, we did find that a lot of the servers were hosted at the same IP address.

Research of the IP address showed that there were many more domains set up, likely with the same objective in mind.

Many email filters look for links to malicious domains, and newly spotted domains get added to blocklists fairly quickly, so phishing domains and emails are only useful for about a day on average. Using mailto: links can therefore help attackers avoid automated flagging or URL reputation checks.

It also saves the cybercriminals work. They don’t need to set up a fake website (in this case, an Instagram clone) or the back-end infrastructure needed to harvest credentials. All they need to do now is watch the email inbox and wait for victims.

A reply also confirms that the email address the phishing mail was sent to is active and in use, which opens the victim up to further attempts. By engaging in a conversation, attackers can request sensitive information directly, in a less obvious way than with a phishing form, often through continued correspondence.

Victims may feel safer replying to an email than clicking on a suspicious link. The fear of instant repercussions is smaller when you’re sending an email than when you’re visiting an unknown website.

Instagram phishing

In March 2025, security awareness provider Phishing Tackle reported on a phishing campaign targeting Meta users, which aimed to steal access to Instagram business accounts. The scam used step-by-step instructions and fake chat support to trick users.

In that case, the scammers threatened users with a suspension of the account due to advertising violations, but it showed once more that influential Instagram accounts are an attractive target for phishers. They can use compromised accounts for other campaigns or sell the harvested credentials to other cybercriminals.

But even if you’re not a business or bedecked with followers, if someone compromises your Instagram account they can lock you out and then demand money to give you back the account. Sadly, many people feel forced to pay because they don’t want to lose years of photos and their associated memories.

How to avoid Instagram phishing

Since we can expect to see more phishing campaigns that use mailto: links, here are some tips to avoid falling victim to such a scam.

  • As with regular links, scrutinize the destination of an email link. Even if the domain looks legitimate, your Instagram account isn’t secured by a shoe maker, a vacation provider, or someone using a Gmail address. The email address should be one that belongs to Instagram or Meta.
  • Remember that legitimate companies will not ask you to mail them your account details, credentials, or other sensitive information.
  • If there’s an urgency to respond to an email, take a pause before you do. This is a classic scammer trick to get you to act before you can think.
  • Don’t reply if the warning looks suspicious in any way. Sending an email will tell the phishers that your email address is active, and it will be targeted even more.
  • Do an online search about the email you received, in case others are posting about similar scams.
  • Use Malwarebytes Scam Guard to assess the message. It will tell you whether it’s a scam, or give you tips on how to find out if it isn’t sure.

iPhone vs. Android: iPhone users more reckless, less protected online

The smartphone wars have a winner, and it’s Android.

No, this isn’t about which device has the best camera, the snappiest processor, or the flashiest AI features—this is about which device owners are safer online, and in many ways, it is Android users who take the crown. According to a new analysis from Malwarebytes, when compared to iPhone users, Android users share less of their personal information for promotional deals, more frequently use security tools, and more regularly create and manage unique passwords for their many online accounts.

They also, it turns out, fall victim to fewer scams.

This is the latest investigation into research conducted earlier this year by Malwarebytes that surveyed 1,300 people over the age of 18 in the US, the UK, Austria, Germany, and Switzerland. In the original report released in June, Malwarebytes revealed how mobile scams have become a part of everyday life for most everyone across the globe—and how far too many individuals have essentially given up on trying to fight back.

Now, Malwarebytes can reveal how iPhone and Android users differ when scrolling, shopping, and sending messages online. This secondary analysis controlled for age, meaning that, while iPhone users did tend to skew younger in the original data set, the differences identified here are more directly attributable to device type.

Here are some of the key takeaways:

  • Apple users are more likely to engage in risky behavior.
    • 47% of iPhone users purchased an item from an unknown source because it offered the best price, compared to 40% of Android users.
    • 41% of iPhone users sent a Direct Message (DM) on social media to a company or seller account to get a discount or discount code, compared to 33% of Android users.
  • Apple users take fewer precautions online.
    • 21% of iPhone users said they use security software on their mobile phones, compared to 29% of Android users.
    • 35% of iPhone users choose unique passwords for their online accounts, compared to 41% of Android users.
  • Apple users are more likely to be the victims of scams.
    • 53% of iPhone users have fallen victim to a scam compared to 48% of Android users.

Importantly, the behavioral splits here are largely device-agnostic.

Android users are not scanning fewer QR codes and iPhone users are not failing to make unique passwords because their respective devices are somehow incapable. Instead, iPhone users are making worse decisions about buying things online and about staying safe from all types of cyberthreats—whether that includes phishing attempts, social engineering scams, or malware infections.

The reasons for this are complex and hard to identify, but Malwarebytes’ original research can provide a clue. Namely, iPhone users were slightly more likely than Android users (55% compared to 50%) to agree with the following statement:

“I trust the security measures on my mobile/phone to keep me safe.”

That trust could have an adverse effect, in that iPhone users do not feel the need to change their behavior when making online purchases, and they have less interest in (or may simply not know about) using additional cybersecurity measures, like antivirus.

Whatever the reasons, there is room for improvement. As explained by Mark Beare, general manager of consumer business for Malwarebytes, staying safe online today cannot rely on any single platform, device, or operating system.

“Devices and operating systems are just gateways to apps and websites, and it’s often those online spaces that present cyber risks,” Beare said. “When those websites or apps serve malicious or deceptive content, it’s up to the user to decide what’s real, what’s a scam, and where they should or shouldn’t click.”

Here is where iPhone users should most pay attention when using the internet.

Unsafe shopping

It’s getting harder to shop safely online.

For years, the cybersecurity industry warned people about the most obvious red flags when making a purchase or offering a donation online: Don’t click on unknown links, don’t share personal information, don’t send messages directly to strangers, and don’t scan QR codes that can lead to unknown locations. Behind all of these could lie malware, data theft, and even the slow start of a social engineering scam.

And yet, in the past few years, even legitimate businesses have asked everyday consumers to do these same, reckless things. Online stores ask that people send a Direct Message (DM) on social media for a discount code, or that they sign up their email or phone number for a promotional offer, or that they complete their payment by scanning a QR code, or that they track an upcoming delivery by clicking on a link sent via text.

Just because established businesses are leaning into these tactics does not make the tactics inherently safe, and unfortunately, iPhone users are pushing back the least.

According to Malwarebytes’ recent analysis, 63% of iPhone users signed up their phone number for text messages so they could get a coupon, discount, free trial, or other promotional offer, compared to the 55% of Android users who did the same. Similarly, 41% of iPhone users “sent a DM on social media to a company or seller account to get a discount or discount code,” compared to 33% of Android users.

Malwarebytes also found that 47% of iPhone users “purchased an item from an unknown website or supplier because it offered the best price,” compared to 40% of Android users.

In looking at the data, however, it is important to recognize that some of the behavior from iPhone users has been thrust upon them.

For example, 70% of iPhone users have “scanned a QR code to begin or complete a purchase.” Beginning in 2020, scanning a QR code became commonplace as restaurants across the world implemented several strategies to limit the spread of COVID-19. This practice isn’t the fault of iPhone users (or the 63% of Android users who have done the same), and they shouldn’t be “blamed” for what the world asked of them.

However, sharing a phone number, sending a DM to a stranger, and buying from unknown websites are decidedly not requirements today for making an online purchase. 

As Malwarebytes discussed on the Lock and Code podcast earlier this year, “data deals” in which consumers are asked to give up some of their privacy for a one-time discount are rarely, if ever, worth the cost. Separately, the most common start to a romance scam, job scam, or investment scam is through a DM sent on social media.

Though legitimate companies have co-opted these strategies to boost engagement and revenue, the public still have an opportunity to push back. If they do not, there is a real risk that these marketing tactics become so normalized that online scammers will find it easier to send malicious messages, disguise their intentions, and steal from innocent people.

Not so pro(active)

Ever since a devastatingly effective commercial was unveiled to the public some 20 years ago, there’s been a persistent belief that Apple devices are somehow impervious to viruses, malware, and all other nasty cyber infections.

The marketing ploy was wrong back then and it is still wrong today—Macs get plenty of viruses—but the damage is already done, and the consequences might be most visible in how iPhone users feel about traditional cybersecurity tools: In short, they don’t use them.

According to Malwarebytes’ new analysis, just 21% of iPhone users said they use security software on their mobile phone, compared to 29% of Android users. iPhone users were also less likely than Android users to use an ad blocker (19% of iPhone users compared to 27% of Android users).

The data gaps here are sometimes benign. The low use of “ad blockers,” in particular, should come as no surprise. These tools are mostly understood as add-ons for desktop and laptop versions of popular web browsers—such as Google Chrome, Microsoft Edge, and Mozilla Firefox. While many mobile browsers have ad blockers built in by default, this may not be known to the average user.

Also remember that, as smartphone ownership increases across the globe, so do the numbers on smartphone “dependency.” According to Pew Research Center, 15% of adults in the US only have a smartphone to connect to the internet, meaning, perhaps, that 15% of people simply cannot access the same security and privacy tools that are developed predominantly for computers.

That said, the justifications for iPhone users start to fade when looking at one last number.

Only 35% of iPhone users “choose unique and strong passwords for accounts,” compared to 41% of Android users. Creating strong, unique passwords for online accounts is foundational to staying safe online, and it has only been made easier and more accessible over time.

For users who cannot remember a unique password for every account (which is every person alive), password managers are available—some for free—to help create, store, and recall as many strong passwords as needed. For users who do not trust a third-party password manager (understandably so), Apple released its “Passwords” app on iOS 18 nearly one year ago, making password management easier by default. And for users who don’t trust password managers (of which there are many), the antiquated practice of physically writing usernames and passwords in a private journal isn’t that outlandish.
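For those who prefer to roll their own, generating a strong password takes only a few lines. This is a minimal sketch using Python’s standard secrets module; a full-featured password manager remains the more practical option for most people, since it also stores and fills what you generate.

```python
import secrets
import string

def generate_password(length: int = 20) -> str:
    """Generate a random password from letters, digits, and punctuation."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One unique password per account
print(generate_password())
```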

In short, there is little excuse for failing to create and use unique passwords for every online account, and that goes for Android users, too. The technology can be intimidating, but it’s worth the work.

Security for all

The measurably unsafe behavior of some iPhone users online comes with unfortunate, measurable consequences. The poor password hygiene, risky buying behavior, and limited antivirus protection are all paired with a higher overall rate of victimization—53% of iPhone users have fallen victim to a scam compared to 48% of Android users.

In the worst circumstances, these disparate rates could invite blame, but it’s the wrong conclusion to make. As any scam victim knows, the statistical analysis of victimization means absolutely nothing when you are personally trying to recover your money, your reputation, your private photos, and your sense of trust in the world around you.

Every person, no matter their device, should create unique passwords for individual accounts, use security products (which can also detect malicious websites and phishing schemes), and rely on friends and family when something doesn’t feel right online. And for those who want 24/7 guidance on strange messages, phone numbers, and more, there is always Malwarebytes Scam Guard to lead the way. Try it today.

Age verification: Child protection or privacy risk?

With governments demanding actual age verification on websites with adult content, and platforms like social media and Roblox introducing restrictions based on a user’s age, the controversy about different types of age verification and their implications is growing.

Last week, Roblox announced new age estimation technology which, it says, should help to confirm users’ ages and unlock a feature called Trusted Connections for those aged 13 and older. Trusted Connections allows teens aged between 13 and 17 to add adult users (18+) they know in real life. It’s billed as an option to keep out predators, which is good. But the age estimation technology raises concerns and questions.

While Roblox didn’t release any details about how its new technology works, the age estimation processes we know of are based on Artificial Intelligence (AI) tools that scan selfies or short videos and compare them against a database to estimate the user’s age. Needless to say, they are not always right, and they open up the system to deepfakes and spoofing.

This kind of technology is more effective than asking the user to provide their birthday or check a box saying they are over 18, but it’s not foolproof.

We see similar concerns when it comes to age verification for sites that host adult content. As of this Friday, websites operating in the UK with pornographic content must “robustly” age-check users.

The regulator, Ofcom, lists a number of allowed methods which all have their pros and cons:

Facial age estimation

Show your face, get a guess. You take a selfie or a short video, and an algorithm tries to figure out if you look over 18. The tech claims to keep data private, but facial scans are sensitive. And, as we pointed out, accuracy is far from perfect. If you’ve ever been asked to provide an ID in real life because you “look young,” you can expect a digital déjà vu.

Open banking

Banks know your age, so why not let them confirm it? Here, you allow the age-check service to peek at your bank account. No bank statements get handed over, just a yes/no to the question: “Is this customer an adult?” It’s easy, but convincing users to link their bank to a porn site might be a different story.

Digital identity services

This is the world of digital wallets for your ID. Think of it as carrying your driver’s license in your phone, but only showing the “over 18” part when needed. Sounds great and it is, but you’ll need yet another app in your digital life just to vouch for your adulthood everywhere you go.

Credit card age checks

Simple logic: you need to be 18+ to have a credit card, so showing a valid one counts as proof. The age-checker pings the payment processor to see if your card is legit. It’s quick and familiar, but not everyone over 18 has a credit card, so it’s not for everyone. Plus your purchase trail grows with every verification.

Email-based age estimation

Enter your email address. The system tries to deduce your age using records of that email in other “adult” places, like financial or utility services. Basically, you’re allowing digital snooping, and the effectiveness depends on your online life elsewhere having already tipped off your age somewhere along the line.

Mobile network operator checks

The system queries your phone provider to see if you have any age restrictions on your account. No parental controls? Looks like you’re an adult. Fast, but only as reliable as the information stored at your carrier, and not an option for users on pay-as-you-go or burner numbers.

Photo-ID matching

You upload your ID and a fresh selfie. The system checks if the faces and ages match up. Classic, effective, and widely used, but you’re giving away a lot of personal information, and trusting that it’ll be kept safe.

Privacy concerns

None of these options is perfect or without risk, and many of them have privacy implications for the user. As one commenter told BBC News:

“Sure, I will give out my sensitive information to some random, unproven company or… I will use a VPN. Difficult choice.”

A VPN is a popular option to circumvent regulations that only apply in certain countries or states. VPNs offer a secure connection when you’re using the internet and they have a variety of uses, but one is getting around blocks based on your location.

Work is being done on “double-anonymity” solutions, but implementation seems to be hard. Double anonymity basically separates the information of two providers from each other. The first provider (website asking for age confirmation) will only get the requester’s age and no other information. The second provider (the age verifier) will not receive information about the service or website the age verification is needed for.

In essence, the system answers only the question “Is this user of the required age?” to the site, and the third party never knows for what purpose or where this answer is used. This approach is becoming a regulatory standard in places like France to balance protecting minors online with adult users’ privacy.
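To make the idea concrete, here is a deliberately simplified sketch (our own illustration, not any regulator’s actual protocol): the verifier signs a bare “over 18” claim without learning which site asked, and the site checks the signature without learning who the user is. A real deployment would also need one-time tokens, expiry, and other safeguards against replay and tracking.

```python
from typing import Optional
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Age-verification provider: knows the user's age, never learns the destination site
provider_key = Ed25519PrivateKey.generate()
provider_public_key = provider_key.public_key()

def issue_age_token(user_is_over_18: bool) -> Optional[bytes]:
    """Sign a bare 'over 18' claim; the token carries no identity and no audience."""
    return provider_key.sign(b"age>=18") if user_is_over_18 else None

# Website: learns only that the bearer is an adult, never who they are
def site_accepts(token: bytes) -> bool:
    try:
        provider_public_key.verify(token, b"age>=18")
        return True
    except InvalidSignature:
        return False

token = issue_age_token(user_is_over_18=True)
print(site_accepts(token))  # True
```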

We feel “double-anonymity” sounds a whole lot better than “age estimation.” But the real question remains: is age verification an effective way to protect children, or just another threat to our privacy? Let us know your opinion in the comments.


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Introducing the smarter, more sophisticated Malwarebytes Trusted Advisor, your cybersecurity personal assistant

You ever get that feeling when you double-check the locks, but still wonder if you’ve missed something? That’s what a lot of people feel about cybersecurity.  

That’s where Malwarebytes Trusted Advisor comes in. You can see it as your very own cybersecurity personal assistant, giving you real-time insight into how protected you are, without all the jargon or notifications.  

Trusted Advisor checks the state of your cybersecurity tools like real-time protection, VPN connection, scheduled scans, and browser safety, and gives you a clear, color-coded view of your current risk level. 

And now, our Windows version is sharper, smarter, and more helpful than ever. 

What’s new? 

  • Stronger Wi-Fi protection*: Trusted Advisor now checks whether you are connected to an open, unsecured Wi-Fi network.
  • Smarter security score: Your protection score is now more accurate and personalized, giving you clearer insights into your overall security health. 
  • Seamless identity protection integration: Trusted Advisor now works hand-in-hand with our identity protection, making it easier to stay ahead of threats like data leaks and identity fraud. 
  • Take control of ads*: Trusted Advisor now helps you disable Windows’ ad features, such as start menu suggestions and login screen ads, allowing you to enjoy a smoother Windows experience free from distractions.  

Cybersecurity sometimes feels like quantum physics, but it doesn’t have to. With its latest updates, Malwarebytes Trusted Advisor makes it easier than ever to understand what’s going on behind the scenes, and to take control of your digital safety without needing a degree in computer science.  

Want to see the new Trusted Advisor in action? Open Malwarebytes on Windows and check your protection dashboard.  

* Windows 11 only. 

AI-generated image watermarks can be easily removed, say researchers

Now that AI can make fake images that look real, how can we know what’s legitimate and what isn’t? One of the primary ways has been the use of defensive watermarking, which means embedding invisible markers in AI-generated images to show they were made up. Now, researchers have broken that technology.

Generative AI isn’t just for writing emails or suggesting recipes. It can generate entire images from scratch. While most people use that for fun (making cartoons of your dog) or practicality (envisioning a woodworking project, say) some use it irresponsibly. One example is creating images that look like real creators’ content (producing an image ‘in the style of’ a particular artist).

Another is using it for misinformation, either intentionally or unintentionally. This image-based misinformation has grown exponentially in an AI-powered world, according to Google researchers. Misinformation can be playful or experimental, such as Katy Perry’s deepfake attendance at the Met Gala or the puffer-jacket Pope. But it can also be harmful, putting real people in situations they didn’t consent to and creating false narratives for ideological, financial, or other purposes.

In the early days of AI image generation, people could recognize the fakes themselves. People in pictures having the wrong number of fingers was one giveaway, as were body parts like hands and arms that didn’t fit together well, especially when people were pictured close together. As AI generation got better, we could still rely on programs to detect small inconsistencies in the images. But those fake images get more convincing every day.

Generative AI companies have been taking action to stop this. OpenAI, Google, and others committed to embedding watermarks in their AI-generated images. These are digital fingerprints, invisible to the naked eye but easily detectable by software, that prove an image was generated by AI and therefore not real.

Now, researchers at the University of Waterloo in Canada have worked out a way to subvert this defensive watermarking. Andre Kassis and Urs Hengartner at the University’s Cheriton School of Computer Science have created a tool called UnMarker.

UnMarker removes those watermarks from images, making it impossible for watermark detectors to determine that an image has been artificially generated. The scientists say that the tool is universal, defeating all watermarking schemes. These include semantic watermarks, which alter the structure of the image itself. These are more deeply embedded in an image, and traditionally tougher to counter.

The tool capitalizes on two fundamental requirements of watermarking tools. The first is that they mustn’t degrade the quality of the image. The second is that they must be immune to manipulation such as cropping. That means watermarks are restricted in how they can alter an image: they have to focus on shifting the intensity of pixels in the picture.

Relying on this fact, Kassis and Hengartner’s tool analyzes the frequency of pixels in an image to see if anything is unusual. If it finds an anomaly, it uses that as a sign that there’s a watermark. It then rearranges the pixel frequency across the image so that it won’t trigger a watermark detector.
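As a rough mental model, you can picture the attack as spectral filtering: transform the image into the frequency domain, look for components whose strength is a statistical outlier, and dampen them before transforming back. The sketch below is our own conceptual illustration of that idea, not the researchers’ UnMarker code, whose optimization is considerably more sophisticated.

```python
import numpy as np

def suppress_spectral_outliers(image: np.ndarray, z_threshold: float = 4.0) -> np.ndarray:
    """Dampen unusually strong frequency components in a grayscale image."""
    spectrum = np.fft.fft2(image)
    magnitude = np.abs(spectrum)
    log_mag = np.log1p(magnitude)
    mean, std = log_mag.mean(), log_mag.std()
    outliers = log_mag > mean + z_threshold * std   # suspiciously strong frequencies
    target = np.expm1(mean)                         # a "typical" magnitude to fall back to
    scale = np.where(outliers, target / (magnitude + 1e-9), 1.0)
    return np.real(np.fft.ifft2(spectrum * scale))  # back to pixel space
```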

UnMarker, which the researchers have released publicly, works without any access to the AI algorithm’s internal workings. Neither does it need any other data to work, they add. It’s a ‘black box’ mechanism. You can just run it as a watermark eraser.

It’s not perfect, but it reduces the best detection rate to 43%, even on semantic watermarks. That means you can’t trust the detection tool’s results.

“Our findings show that defensive watermarking is not a viable defense against deepfakes, and we urge the community to explore alternatives,” the researchers said in their paper.

So the battle to fight misinformation continues. Now it’s up to watermark designers to up the ante or develop another method to flag deepfakes. We’re not sure that this cat and mouse game will ever end.


We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.

Proton launches Lumo, a privacy-focused AI chatbot

Proton, known for its privacy-focused set of services, announced the introduction of Lumo, a privacy-first Artificial Intelligence (AI) chatbot.

It is good to know before you dive in that Proton’s chatbot has two modes that offer very different experiences. If you want Lumo to access the internet, you will have to use the “Web search” button before you submit your prompt. If you don’t, it will tell you it doesn’t have the capability to browse the web or access real-time information.

While this somewhat limits Lumo’s usefulness when you’re looking into recent developments, it is reassuring that the model was trained on selected sources, which also reduces the risk of hallucination. Lumo is set up to recognize when it doesn’t know the answer; in such cases, it will tell you so and suggest alternative sources of information, or other ways to find the answer you’re looking for.

Lumo’s privacy promise: Private, Safeguarded, and Treated with respect

Proton believes that a privacy-first AI is essential because hundreds of millions of ordinary people currently interact with AI tools, unknowingly sharing sensitive information. This information is far more sensitive than what they would reveal in a typical Google search query.

Not only does this constant interaction with AI tools put personal information at risk, but we have also witnessed several instances where company secrets were submitted to an AI by people who didn’t realize the information might be used to train the AI or be accessed by third parties.

Similarly, we recently reported on how Meta AI chats might be public without the user’s awareness, and many AI bots have spilled sensitive information for separate reasons, like someone guessing a default password.

Lumo offers a different approach to AI chatbots. Other AI companies keep a record of all the conversations on their servers, leaving this information vulnerable to data leaks or exploitation for profit. Lumo does not store any logs of your conversations on its server, and only your device can decrypt any chats you choose to save.
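Proton hasn’t published Lumo’s storage code in this announcement, but the general pattern, often called zero-access or client-side encryption, is straightforward: the chat is encrypted on the device with a key the server never sees, so the server can only ever store ciphertext. A minimal, hypothetical sketch of that pattern:

```python
from cryptography.fernet import Fernet

# The key is generated and kept on the device; the server only ever sees ciphertext.
device_key = Fernet.generate_key()
cipher = Fernet(device_key)

def save_chat(plaintext: str) -> bytes:
    """Encrypt a conversation locally before it is uploaded or stored."""
    return cipher.encrypt(plaintext.encode("utf-8"))

def load_chat(ciphertext: bytes) -> str:
    """Only a client holding device_key can decrypt the stored chat."""
    return cipher.decrypt(ciphertext).decode("utf-8")

stored = save_chat("Draft a polite follow-up email to my landlord.")
print(load_chat(stored))
```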

Most people recognize a significant privacy risk here: the possibility that their conversations could help train the AI. This concern makes sense because the training process might use your personal data to generate responses in other users’ conversations.

Lumo does not do that. I asked Lumo where its training data comes from, and how—if it cannot browse the web, or use conversations—it is able to learn. Lumo answered:

“Great question! My training data comes from a curated set of sources that were available up until my last training cut-off in October 2023. These sources include a wide range of books, articles, websites, and other publicly available texts. The data was carefully selected and processed to ensure quality and relevance.

While I can’t browse the web or use conversations for learning, my training data provides a broad foundation of knowledge that allows me to understand and respond to a wide variety of topics. Additionally, my ability to process and interpret information means that I can often provide helpful and accurate responses even to questions that are somewhat different from what I was specifically trained on.

As for learning, I don’t have the ability to update my knowledge or learn from new experiences after my initial training. However, my design allows me to provide useful and informative responses based on the knowledge I already have. If you have any specific questions or need information, feel free to ask, and I’ll do my best to help!”

We took Lumo for a little spin and, for now, the lack of support for direct image uploads or processing is the only thing keeping us from using it much more. But we appreciate the idea of having a privacy-focused AI chatbot available when we want our conversations to stay private.

And who can resist a purple kitty anyway?


We don’t just report on privacy—we offer you the option to use it.

Privacy risks should never spread beyond a headline. Keep your online privacy yours by using Malwarebytes Privacy VPN.

Startup takes personal data stolen by malware and sells it on to other companies

A tech startup is taking personal data stolen by infostealer malware and found on the dark web, then selling access to that data. And it claims to be working within the law.

According to 404 Media, for as little as $50, Farnsworth Intelligence will give companies a look at records from infostealer logs.

Infostealers are a type of malware that focus on harvesting as much data from a victim’s computer as possible. Criminals infect computers in various ways, including via malicious links and infected versions of pirated software or cheat add-ons.

The malware can do everything from monitoring every key you type to probing your internal storage and memory for secrets. Some infostealers even take snapshots of screens to see what they can find. All this data gets beamed back to the infostealer’s criminal operators.

There is no suggestion that Farnsworth Intelligence infects computers with infostealer software itself. It claims to operate within legal frameworks, with data provided through a third-party vendor that specializes in security monitoring services.

This data is available in huge quantities. The startup offers over 20 billion records of stolen data from over 50 million computers. A professional, subscription-based version of the service offers access to anything an infostealer can pilfer, including cryptocurrency wallet data, browser histories detailing what sites you’ve visited, usernames and passwords for those sites, and browser cookies that criminals could use to impersonate you on a site. Customers can also get access to a list of applications on a person’s computer.

Farnsworth Intelligence says its target audience for the service is “professionals with a legitimate use case in industries such as investigations, intelligence, journalism, law enforcement, cyber security, compliance, IP/brand protection, executive protection, etc”.

There is also a version with ad hoc searches paid for in credits. This gives you access to a subset of the data, searched via phone number, email address, username, domain, password, or autofills (the information that browsers use to fill common fields in web forms). At one credit per search, the cheapest tier is the $50 option, which buys users 45 credits.

The service doesn’t just provide access to a static set of data; it’s adding to it all the time. It claims to add over 185 million new records, stolen from over 40,000 computers each month.

“While historical breach data remains valuable, its utility diminishes over time as credentials change and contact information becomes outdated,” says the blurb on Farnsworth’s website (which we’re not linking to here). “Infostealer logs provide investigators with current, device-level data that offers significantly higher intelligence value than traditional breach compilations.”

Is this legal? The startup seems to think so. There’s no vetting of customers, though, at least for the consumer service, which makes us worry about how, for example, a cyberstalker or abusive ex might use such a thing. Regardless, it’s another reason why you should protect yourself from infostealers.

How to protect yourself from infostealers

All the normal cybersecurity rules apply:

  • Use a well-established, up-to-date anti-malware program on your computer.
  • Don’t click on links or download files you’re not sure about or weren’t expecting to receive.
  • Be careful when storing passwords, postal addresses, or credit card data in your browser’s built-in autofill storage. These are common targets for infostealers.
  • Use a password manager that saves you from having to type usernames and passwords to get into sites.
  • Never download or install software from suspicious sites, including torrent sites.

We don’t just report on threats—we remove them

Cybersecurity risks should never spread beyond a headline. Keep threats off your devices by downloading Malwarebytes today.