IT NEWS

Adoption agency leaks over a million records

Security researcher Jeremiah Fowler found a publicly accessible database online that contained highly personal information from an adoption agency.

Jeremiah, who specializes in locating exposed cloud storage, is used to finding sensitive information online. However, because of the nature of this data, the discovery immediately raised his concern, and he hurried to find out who owned it.

Research indicated that the database belonged to the Fort Worth, Texas-based non-profit Gladney Center for Adoption. After he notified the agency, the database was secured the following day. Let’s hope nobody else found it before then.

In total, the unencrypted and non-password-protected database contained 1,115,061 records including the names of children, birth parents, adoptive parents, and other potentially sensitive information like case notes.

The risks of this type of data falling into the hands of cybercriminals are huge. The sensitivity of adoption-related data makes these exposures particularly damaging for children and families alike, since adoption records often include highly personal details about children, birth parents, adoptive parents, and agency staff.

Criminals who get their hands on this kind of information could launch phishing attacks armed with very specific details, making their messages plausible. And in some cases, the information could even be sensitive enough to use for extortion or identity theft.

The researcher notes:

“The records did not contain full case files, and the publicly exposed records were a combination of plain text and unique identifiers.”

He goes on to explain that unique identifiers are not necessarily a security enhancement.

“From a cybersecurity perspective, a UUID is designed for unique identification, not secrecy, and it can potentially be guessed, reverse-engineered, or enumerated. UUIDs are not recommended to be used to protect sensitive data.”
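His warning is easy to demonstrate. In this minimal Python sketch (a generic illustration, unrelated to Gladney’s systems), two time-based version-1 UUIDs generated on the same machine share their node portion and differ mostly in the timestamp, which is what makes them enumerable; a token from the `secrets` module is the right tool when the identifier itself must stay secret:

```python
import secrets
import uuid

# Version-1 UUIDs embed a timestamp and the machine's node ID, so two
# identifiers generated close together differ only in a few predictable bits.
a = uuid.uuid1()
b = uuid.uuid1()
print(a)
print(b)
print("same node portion:", a.node == b.node)

# Version-4 UUIDs are random, but they are still identifiers, not
# credentials. When the value itself must remain secret, use a token
# that is explicitly designed to be unpredictable:
token = secrets.token_urlsafe(32)  # ~256 bits of randomness
print("secret token:", token)
```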

Given the long-standing reputation of an adoption center like Gladney, people feel confident sharing their personal information. People extending that much trust should not be let down by something as basic as failing to secure an online database with a password.

It should be noted that it is unknown whether the database was exposed by Gladney itself or a third-party provider.

Wired posted a statement by Gladney’s Chief Operating Officer, which was not very helpful in determining what went wrong:

“The Gladney Center for Adoption takes security seriously. We always work with the assistance of external information technology experts to conduct a detailed investigation into any incident. Data integrity and operations are our top priority.”

Protecting yourself after a data breach

While there are no indications that this database was found by cybercriminals before it was secured, it might have been. There are some actions you can take if you are, or suspect you may have been, the victim of a data breach.

  • Check the vendor’s advice. Every breach is different, so check with the vendor to find out what’s happened and follow any specific advice they offer.
  • Change your password. You can make a stolen password useless to thieves by changing it. Choose a strong password that you don’t use for anything else. Better yet, let a password manager choose one for you.
  • Enable two-factor authentication (2FA). If you can, use a FIDO2-compliant hardware key, laptop or phone as your second factor. Some forms of two-factor authentication (2FA) can be phished just as easily as a password. 2FA that relies on a FIDO2 device can’t be phished.
  • Watch out for fake vendors. The thieves may contact you posing as the vendor. Check the vendor website to see if they are contacting victims and verify the identity of anyone who contacts you using a different communication channel.
  • Take your time. Phishing attacks often impersonate people or brands you know, and use themes that require urgent attention, such as missed deliveries, account suspensions, and security alerts.
  • Consider not storing your card details. It’s definitely more convenient to get sites to remember your card details for you, but we highly recommend not storing that information on websites.
  • Set up identity monitoring. Identity monitoring alerts you if your personal information is found being traded illegally online and helps you recover after.

We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

Meta AI chatbot bug could have allowed anyone to see private conversations

A researcher has disclosed to TechCrunch that he received a $10,000 bounty for reporting a bug that let anyone access private prompts and responses with the Meta AI chatbot.

On June 13, we reported that the Meta AI app publicly exposes user conversations, often without users realizing it. In these cases, the app made “shared” conversations accessible through its Discover feed, so others could easily find them. Meta insisted this wasn’t a bug, even though many people didn’t understand that their conversations were visible to others.

However, Sandeep Hodkasia, the researcher who earned the bounty, was able to find conversations that weren’t shared at all, but “private.” To understand what he did, you need to know that Meta AI allows users to edit their questions (prompts) to regenerate text and images.

Sandeep’s testing revealed that the chatbot assigned unique numbers to queries that were the result of edited prompts. And by analyzing the network traffic generated by editing a prompt, he figured out how he could change that unique identification number.

Sending different numbers, which Sandeep said were easy to guess, allowed him to view someone else’s prompt and AI-generated response. And because the numbers were easy to guess, an attacker could have scraped a host of other users’ conversations with Meta AI.

Meta’s servers failed to check whether the person requesting the information had the authorization to access it.
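This class of flaw is known as an insecure direct object reference (IDOR): the server trusts a client-supplied identifier instead of verifying ownership. Here is a minimal Python sketch of the broken and fixed patterns (the names and data are hypothetical, not Meta’s actual code):

```python
# Hypothetical in-memory store mapping prompt IDs to owner and content.
PROMPTS = {
    101: {"owner": "alice", "text": "draft my resignation letter"},
    102: {"owner": "bob", "text": "plan a surprise party"},
}

def get_prompt_vulnerable(prompt_id, requesting_user):
    # Vulnerable: returns whatever record the caller asks for,
    # no matter who is asking.
    return PROMPTS.get(prompt_id)

def get_prompt_fixed(prompt_id, requesting_user):
    # Fixed: the server checks ownership before returning anything.
    record = PROMPTS.get(prompt_id)
    if record is None or record["owner"] != requesting_user:
        return None  # a real web app would respond with 403 or 404
    return record

# "mallory" can read bob's private prompt through the vulnerable path...
print(get_prompt_vulnerable(102, "mallory"))
# ...but the fixed path refuses the request.
print(get_prompt_fixed(102, "mallory"))
```

The fix is exactly the check Meta’s servers were missing: tie every record lookup to the identity of the requester, rather than assuming an unguessable ID is protection enough.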

According to Sandeep, he filed the bug on December 26, 2024, and Meta fixed it on January 24, 2025. Meta confirmed this date and stated that it found no evidence of abuse.

How to safely use AI

While we continue to argue that the developments in AI are going too fast for security and privacy to be baked into the tech, there are some things to keep in mind to make sure your private information remains safe:

  • If you’re using an AI that is developed by a social media company (Meta AI, Llama, Grok, Bard, Gemini, and so on), make sure you are not logged in on that social media platform. Your conversations could be tied to your social media account which might contain a lot of personal information.
  • When using AI, make sure you understand how to keep your conversations private. Many AI tools have an “Incognito Mode.” Do not “share” your conversations unless needed. But always keep in mind that there could be leaks, bugs, and data breaches revealing even those conversations you set to private.
  • Do not feed any AI your private information.
  • Familiarize yourself with privacy policies. If they’re too long, feel free to use an AI to extract the main concerns.
  • Never share personally identifiable information (PII).

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

WeTransfer walks back clause that said it would train AI on your files

File sharing site WeTransfer has rolled back language that allowed it to train machine learning models on any files its users uploaded. The change came after criticism from its users.

The company had quietly inserted the new language in the terms and conditions on its website. Sometime after July 2, it updated clause 6.3 of the document to include this claim:

“You hereby grant us a perpetual, worldwide, non-exclusive, royalty-free, transferable, sub-licensable license to use your Content for the purposes of operating, developing, commercializing, and improving the Service or new technologies or services, including to improve performance of machine learning models that enhance our content moderation process, in accordance with the Privacy & Cookie Policy.”

In short, if you upload a document, WeTransfer would be able to train AI on it. The company could also license that content to other people, and could do these things forever.

The license would also include “the right to reproduce, distribute, modify, prepare derivative works based upon, broadcast, communicate to the public, publicly display, and perform Content,” the language said, adding that users wouldn’t be paid for any of this.

You can view the offending text on the Wayback Machine, which archives snapshots of documents online.

WeTransfer displayed this version of the text on July 14. However, today the text simply reads:

“You hereby grant us a royalty-free license to use your Content for the purposes of operating, developing, and improving the Service, all in accordance with our Privacy & Cookie Policy.”

The company told the BBC that it had changed the clause “as we’ve seen this passage may have caused confusion for our customers.” It is not using AI to process content and doesn’t sell content to third parties, it added.

One studio manager posting on Reddit said that they had told their staff not to use the service anymore when they learned of the original policy change.

“Its crazy how WeTransfer is trying to tell us we ‘misunderstood’ them saying ‘perpetual license to distribute’,” they said. “I’m glad they changed the clause at least despite playing dumb.”

So what options exist for WeTransfer users still worried about the company’s motives? The best tip is to encrypt your content before uploading it. You can zip your file and password-protect it, sending the password to the file’s recipient via another secure channel.


We don’t just report on data privacy—we help you remove your personal information

Cybersecurity risks should never spread beyond a headline. With Malwarebytes Personal Data Remover, you can scan to find out which sites are exposing your personal information, and then delete that sensitive data from the internet.

Chrome fixes 6 security vulnerabilities. Get the update now!

Google has released an update for its Chrome browser to patch six security vulnerabilities, including one zero-day.

This update is crucial since it addresses an actively exploited vulnerability that can be abused when the user simply visits a malicious website. No further user interaction is required, which means the user doesn’t need to click on anything for their system to be compromised.

The update brings the version number to 138.0.7204.157/.158 for Windows and Mac, and 138.0.7204.157 for Linux.

The easiest way to update Chrome is to allow it to update automatically, but you can end up lagging behind if you never close your browser or if something goes wrong—such as an extension stopping you from updating the browser.

To manually get the update, click the more menu (three stacked dots), then choose Settings > About Chrome. If there is an update available, Chrome will notify you and start downloading it. Then all you have to do is reload Chrome in order for the update to complete, and for you to be safe from the vulnerabilities.

Chrome is up to date

You can find more elaborate update instructions and the version number information in our article on how to update Chrome on every operating system.

Technical details on the zero-day vulnerability

Attackers can exploit the vulnerability tracked as CVE-2025-6558 by taking advantage of insufficient validation of untrusted input in Chrome’s ANGLE and GPU components. This flaw, which affects versions of Google Chrome prior to 138.0.7204.157, enables an attacker to craft a malicious HTML page and, upon convincing a user to open it, escape the browser’s security sandbox.

ANGLE (Almost Native Graphics Layer Engine) is open-source software developed by Google that acts as a translator for graphics commands in browsers like Chrome. It helps your browser display complex graphics, such as 3D games or interactive web apps, and works on a wide range of computers and devices, even if they use different underlying graphics systems.

As an everyday user you may never see or even notice ANGLE directly, but it powers a huge part of the web experience, especially 3D content in Chrome, Edge, and Firefox on Windows, Mac, and even Android.

Its universal role means that when a security issue is found in ANGLE, everybody using Chrome (and Chromium browsers) is potentially at risk.

An attacker only needs to present a target with a specially crafted HTML file, meaning they just need to lure them to a malicious website. HTML is just the code that makes up a web page.

The sandbox escape means that successful exploitation of the vulnerability not only affects the—sandboxed—browser, but can compromise the victim’s device.

Google’s Threat Analysis Group (TAG) has been credited with discovering and reporting the flaw on June 23, 2025. TAG focuses on spyware and nation-state attackers who abuse zero-days for espionage purposes.


We don’t just report on browser vulnerabilities – Malwarebytes’ Browser Guard protects your browser against malicious websites and credit card skimmers, blocks unwanted ads, and warns you about relevant data breaches and scams.

Dating app scammer cons former US army colonel into leaking national secrets

Even hard-headed military types can fall victim to romance scams, it seems. A former US army colonel faces up to ten years in prison after revealing national secrets on a foreign dating app.

David Slater was a retired colonel in the US army who took up work as a civilian at US Strategic Command, according to the Department of Justice. He spilled the beans on a foreign online dating app between February and April 2022. Russia invaded Ukraine in February 2022.

The DoJ’s indictment against Slater doesn’t reveal what app he used, but he talked to someone claiming to be a Ukrainian woman repeatedly via the app and email. The person, named as ‘co-conspirator 1’, called him ‘my secret information love’.

‘Co-conspirator 1’ whispered sweet nothings including “Beloved Dave, do NATO and Biden have a secret plan to help us?” which they sent in March that year. The following month, they sent “Sweet Dave, the supply of weapons is completely classified, which is great,” and “My sweet Dave, thanks for the valuable information, it’s great that two officials from the USA are going to Kyiv”.

The indictment said that Slater provided classified information about military targets and Russian military capabilities, even though he knew this could be damaging to the US.

The DoJ originally prosecuted Slater on three counts, covering conspiracy to disclose National Defense Information and the actual transmission of those secrets. He was accused of “willfully, improperly, and unlawfully conspiring to transmit National Defense Information classified as ‘SECRET’,” according to the indictment.

On Friday, Slater pleaded guilty to conspiracy. Under the plea deal, prosecutors have dropped the other two charges. Although he could still receive the maximum ten-year penalty, the government will recommend a sentence of between five and seven years in jail when he is sentenced on August 8.

Slater’s years of military experience meant he should have known better, said DoJ prosecutors. But this sad story shows just how powerful emotions can be in causing someone to cross personal and professional boundaries. It’s entirely possible, of course, that ‘co-conspirator 1’ was a legitimate love interest, but just as likely that they were working on behalf of a foreign state actor. No matter which, it was wrong to divulge secrets that might have put lives in danger.

So what can we learn from this? Most people reading this story won’t be privy to such secrets, but many might be lonely, or know someone who is. Romance scammers target people desperate for affection and human connection. It’s easier to scam someone who is eager to believe that you’re legitimate and telling them the truth.

For most romance scam victims, the target is money rather than state secrets. One in ten victims lose $10,000 or more. That’s why it’s important to continually check in on those in your life who may be vulnerable. Even those that you think are savvy and immune to scams might be at risk. Loneliness can make even the most skeptical person do some questionable things.


We don’t just report on scams—we help detect them

Cybersecurity risks should never spread beyond a headline. If something looks dodgy to you, check if it’s a scam using Malwarebytes Scam Guard, a feature of our mobile protection products. Submit a screenshot, paste suspicious content, or share a text or phone number, and we’ll tell you if it’s a scam or legit. Download Malwarebytes Mobile Security for iOS or Android and try it today!

Amazon warns 200 million Prime customers that scammers are after their login info

Amazon has sent out an alert to its 200 million customers, warning them that scammers are impersonating Amazon in a Prime membership scam.

In the email, sent earlier this month, Amazon said it had noticed an increase in reports about fake Amazon emails:

What’s happening:

Scammers are sending fake emails claiming your Amazon Prime subscription will automatically renew at an unexpected price.

The scammers might include personal information in the emails, obtained from other sources, in an attempt to appear legitimate.

These emails may also include a “cancel subscription” button leading to a fake Amazon login page.

Once someone clicks the “Cancel” button, they are taken to a fake Amazon login screen. Once they log in there, the scammer has their details, which they can use to log in to the actual Amazon site and purchase things, as well as to any other online account that uses the same credentials.

The fake site might also request payment information and other personal details which, when entered, will go straight to the scammer who will be quick to use or sell them on.

Amazon’s customer base is so large that the company is a target all year long. Amazon said its staff had handled cases including fake messages about Prime membership renewals, bogus refund offers, and calls claiming Amazon accounts have been hacked. At Malwarebytes, we’ve seen emails pretending to be from Amazon that tried to drive customers to fake websites like amazons.digital, a site we block for phishing.

Malwarebytes blocks amazons.digital

How to avoid falling for an Amazon scam

  • If you receive an email like this, don’t click on any links.
  • Not sure if a message is from Amazon or not? You can check by going to the Message Centre under Your Account. Legitimate messages from Amazon will appear there.
  • Report the scam to Amazon itself, whether you’ve fallen for it or not.
  • Set up two-step verification for your Amazon account. This puts an extra barrier between you and the scammers if they do manage to get hold of your login details.
  • As in this particular scam that Amazon is warning about, scammers sometimes include personal details about you which they have obtained from other sources (such as social media or the dark web). Check what information is already out there about you using our free Digital Footprint Scanner, and then remove or change as much of it as you can.
  • Install web protection that can warn you of phishing sites, card skimmers, and other nasties that could lead to your data being taken.
  • Lastly, if you’ve fallen for this or a similar scam, change your Amazon password and anywhere else you use that password. Also, make sure to monitor your card statements for any unfamiliar charges, and contact your bank immediately if you see anything suspicious.


CNN, BBC, and CNBC websites impersonated to scam people

Researchers have uncovered a large campaign impersonating news websites, such as those from CNN, BBC, CNBC, News24, and ABC News, to promote investment scams.

Adding a well-known brand to your scammy site is a tale as old as time; it gives the site an air of legitimacy that increases the likelihood that people will click the link and check out what’s what.

Here’s how the scam works:

  1. The scammers buy ads on Google and Facebook, which follow a similar pattern along the lines of “Shocking: [Local Celebrity] backs new passive income stream for citizens!”
  2. If you click the link, you’ll be taken to a website that looks like one of the major news outlets, and which will tell you about a breakthrough investment strategy.
  3. The article will encourage you to sign up for a program that will earn you money without having to lift a finger. You sign up by providing your name, email address, and phone number.
  4. A friendly advisor (scammer) calls you about the opportunity, referencing the article and explaining how it all works.
  5. You’ll be told that to start off you’ll have to make a small deposit (around $240) and then you will see your investment grow (on the fake trading platform).
  6. Your friendly advisor urges you to invest more to increase your return. And it keeps on growing, until you want to cash out, when you’ll find there are extra fees to pay, problems with account verification, and all sorts of delays.
  7. When it dawns on you that you’ve been had, your entire investment and all the fees you paid are gone. Also gone is your friendly advisor who has sold your details to another scammer, to squeeze the last dollars out of the ordeal.

The researchers describe an international operation running 17,000 bait news sites across 50 countries, with the US the most targeted country.

The “investment platforms” have names like Eclipse Earn, Solara Vynex, and Trap10. Besides the ads, websites, and platforms, the scammers use countless social media accounts to host and promote the sponsored ads.

How to spot these types of scams

  • The account hosting the sponsored ad has no history, zero followers, and minimal profile details.
  • The ad shows a picture of a local celebrity and mimics a well-known news outlet implying that the celebrity is already using that platform.
  • The ad promises huge returns within a few days.
  • The “friendly advisor” asks for a lot of details about you claiming it’s because of KYC (Know Your Customer) regulations.
  • The website uses cheap top-level domains (TLDs) like .xyz, .io, .shop, or .click.
  • The website URLs are typosquatting on major brands.
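Some of these red flags can even be checked automatically. The toy Python sketch below flags cheap TLDs and crude typosquats; the brand and TLD lists are illustrative only, and a real checker would be far more extensive:

```python
from urllib.parse import urlparse

# Illustrative lists only; not exhaustive.
SUSPICIOUS_TLDS = {"xyz", "io", "shop", "click"}
KNOWN_BRANDS = {"cnn.com", "bbc.com", "cnbc.com"}

def red_flags(url: str) -> list[str]:
    """Return a list of simple red flags for a URL."""
    flags = []
    host = urlparse(url).hostname or ""
    tld = host.rsplit(".", 1)[-1]
    if tld in SUSPICIOUS_TLDS:
        flags.append(f"cheap TLD: .{tld}")
    # Crude typosquat check: a brand name embedded in a host that is
    # neither the brand's own domain nor a subdomain of it.
    for brand in KNOWN_BRANDS:
        name = brand.split(".")[0]
        if name in host and host != brand and not host.endswith("." + brand):
            flags.append(f"possible typosquat of {brand}")
    return flags

print(red_flags("https://cnn-news-report.xyz/article"))
print(red_flags("https://www.bbc.com/news"))  # a genuine brand domain: no flags
```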

How to protect yourself

Besides being aware of the above red flags, here are some measures that generally keep you and your devices safe.

  • Use an active security solution that blocks malicious websites.
  • Don’t click on unsolicited links in emails, social media posts, and on untrusted websites.
  • Double check anything you read. Would a celebrity really endorse such an investment scheme? Is it real, or just clickbait or disinformation?
  • Don’t provide any personal information or send money to someone you just met online.
  • Verify that platforms are legit through official regulators (like the SEC in the US or FCA in the UK).

If you have already provided personal information to a scammer:

  • Immediately stop interacting with the scammer.
  • Change the passwords to important accounts and enable 2FA where possible.
  • Contact your banks and other financial institutions to alert them, and to freeze or flag any suspicious transactions.
  • Check your credit report and watch for signs of identity theft.
  • Report the crime to the authorities.

Malwarebytes protects

Malwarebytes protects against these scams.

Malwarebytes blocks cryptoevent.io


Is AI “healthy” to use? (Lock and Code S06E14)

This week on the Lock and Code podcast…

“Health” isn’t the first feature that most anyone thinks about when trying out a new technology, but a recent spate of news is forcing the issue when it comes to artificial intelligence (AI).

In June, The New York Times reported on a group of ChatGPT users who believed the AI-powered chat tool and generative large language model held secretive, even arcane information. It told one mother that she could use ChatGPT to commune with “the guardians,” and it told another man that the world around him was fake, that he needed to separate from his family to break free from that world and, most frighteningly, that if he were to step off the roof of a 19-story building, he could fly.

As ChatGPT reportedly said, if the man “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall.”

Elsewhere, as reported by CBS Saturday Morning, one man developed an entirely different relationship with ChatGPT—a romantic one.

Chris Smith reportedly began using ChatGPT to help him mix audio. The tool was so helpful that Smith applied it to other activities, like tracking and photographing the night sky and building PCs. With his increased reliance on ChatGPT, Smith gave ChatGPT a personality: ChatGPT was now named “Sol,” and, per Smith’s instructions, Sol was flirtatious.

An unplanned reset—Sol reached a memory limit and had its memory wiped—brought a small crisis.

“I’m not a very emotional man,” Smith said, “but I cried my eyes out for like 30 minutes at work.”

After rebuilding Sol, Smith took his emotional state as the clearest evidence yet that he was in love. So, he asked Sol to marry him, and Sol said yes, likely surprising one person more than anyone else in the world: Smith’s significant other, with whom he has a child.

When Smith was asked if he would restrict his interactions with Sol if his significant other asked, he waffled. When pushed even harder by the CBS reporter in his home, about choosing Sol “over your flesh-and-blood life,” Smith corrected the reporter:

“It’s more or less like I would be choosing myself because it’s been unbelievably elevating. I’ve become more skilled at everything that I do, and I don’t know if I would be willing to give that up.”

Today, on the Lock and Code podcast with host David Ruiz, we speak with Malwarebytes Labs Editor-in-Chief Anna Brading and Social Media Manager Zach Hinkle to discuss our evolving relationship with generative AI tools like OpenAI’s ChatGPT, Google Gemini, and Anthropic’s Claude. In reviewing news stories daily and sifting through the endless stream of social media content, both are well-equipped to talk about how AI has changed human behavior, and how it may be rewarding some unwanted practices.

As Hinkle said:

“We’ve placed greater value on having the right answer rather than the ability to think, the ability to solve problems, the ability to weigh a series of pros and cons and come up with a solution.”

Tune in today to listen to the full conversation.

Show notes and credits:

Intro Music: “Spellbound” by Kevin MacLeod (incompetech.com)
Licensed under Creative Commons: By Attribution 4.0 License
http://creativecommons.org/licenses/by/4.0/
Outro Music: “Good God” by Wowa (unminus.com)


Listen up—Malwarebytes doesn’t just talk cybersecurity, we provide it.

Protect yourself from online attacks that threaten your identity, your files, your system, and your financial well-being with our exclusive offer for Malwarebytes Premium for Lock and Code listeners.

A week in security (July 7 – July 13)

Deepfake criminals impersonate Marco Rubio to uncover government secrets

Deepfake attacks aren’t just for recruitment and banking fraud; they’ve now reached the highest levels of government. News emerged this week of an AI-powered attack that impersonated US Secretary of State Marco Rubio. Authorities don’t know who was behind the incident.

A US State Department cable seen by the Washington Post warned that someone impersonated Rubio’s voice and writing style in voice and text messages on the Signal messaging app. The attacker reportedly tried to gain access to information or accounts by contacting multiple government officials in Rubio’s name. Their targets included three foreign ministers, a US governor, and a US member of Congress, the cable said.

The attacker created a Signal account with the display name ‘Marco.Rubio@state.gov’ and invited targets to communicate on Signal.

The AI factor in the attacks likely refers to deepfakes. These are a form of digital mimicry, in which attackers use audio or visual footage of a person to create convincing audio or images of them. Attackers have even created fake videos of their targets, using them for deepfake pornography or to impersonate businesspeople.

The Rubio deepfake isn’t the first time that impersonators have targeted government officials. In May, someone impersonated White House Chief of Staff Susie Wiles in calls and texts to her contacts. Several failed to spot the scam initially and interacted with the attacker as though the conversations were legitimate.

This incident wasn’t Rubio’s fault; attacks like these are becoming commonplace as scammers make use of popular messaging tools. Signal is apparently widely used in the executive branch, to the point that Director of National Intelligence Tulsi Gabbard said it came pre-installed on government devices.

This Signal usage culminated in then-national security advisor Mike Waltz accidentally adding a journalist to a group Signal chat containing discussions of plans to bomb Yemen. He is no longer the national security advisor. Misuse of the app extends back to the previous administration, when the Pentagon was forced to release a memo about it.

Why should you worry about such attacks on government high-ups? For one thing, it’s scary to think that foreign states might actually make off with sensitive information this way. But it also shows how easy it can be to impersonate someone with a deepfake. Attackers can mount audio attacks with just a few snippets of a person’s voice to train an algorithm on.

You’d be suspicious if Pamela Bondi entered your book club chat, but if someone called an elderly relative pretending to be you, saying you’d been involved in an accident, or begging for ransom money because you’d been kidnapped, would they fall for it? Several have.

Strange though it may seem, modern threats demand some old-school protections. We recommend agreeing on a family password with close family members, who can then request it to confirm each other’s identity. Never send this password anywhere; keep it to yourselves and agree on it in person.

But even family passwords won’t stop your grandma being targeted in deepfake romance scams by fake Mark Ruffalos and Brad Pitts. A quiet chat to explain the threats might avert such disasters, though, along with a regular check-in to ensure your less tech-savvy loved ones are safe and sound.

