IT NEWS

Your Meta AI chats might be public, and it’s not a bug

Conversations that people are having with the Meta AI app are being exposed publicly, often without the users realizing it, revealing a variety of medical, legal, and private matters. The standalone app and the company’s integrations with artificial intelligence (AI) across its platforms—Facebook, Instagram, and WhatsApp—are now facing significant scrutiny for such privacy lapses.

The past two years have seen an explosion in generative AI tools, such as ChatGPT, Anthropic’s Claude, Google Gemini, and more. But with new players entering the market almost daily, not all of them deserve the same level of trust.

With 1 billion monthly active users, Meta AI is one of the contenders aiming for ChatGPT’s crown. To monetize this success, Meta CEO Mark Zuckerberg said “there will be opportunities to either insert paid recommendations” or offer “a subscription service so that people can pay to use more compute.”

Similar to ChatGPT, Meta AI can generate text, answer questions, and help users plan, brainstorm, and strategize on various issues. In the Meta AI app, after submitting a question, users can also press a “share” button, which opens a preview of the post that they can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world.

Other users can access “shared” conversations through the app’s Discover feed.

With only one glance at that feed, I found an example of a conversation the user probably didn’t want to be publicly visible. A teacher had shared an email thread about their upcoming arbitration over a job termination they felt was unjust. The Meta AI app responded:

Image: a teacher sharing a story of unfair termination

“Hello {name}, it’s great to hear from you again. I can sense the emotional weight of revisiting this experience, and I appreciate your trust in sharing it with me. It sounds like a really tough situation, and I’m here to listen and support you as you process it. No rush, take your time to gather your thoughts, and we can continue the conversation whenever you’re ready. How’s the preparation for the arbitration going so far?”

The emphasis is mine. If the AI recognizes how much trust it takes to share that story with anyone, even if it’s only a bot, then how must the user feel when they find out the whole world can find and read it? I didn’t even have to log in to find this conversation.

TechCrunch reported on people asking for help with tax evasion and other personal matters, calling Meta AI a “privacy disaster.” Other outlets found threads about medical topics and conversations about other delicate and private issues.

What people may not realize is that if they use the integrated versions on social media, they are logged in and their conversation can be tied to their social media profile.

How not to share your chats with Meta AI

Your best option is not to use Meta AI at all. If you do use it, make sure you are not logged in, and double-check that if you are logged in to any of the Meta platforms on the same device. And never, ever use the Share button unless you mean to.

Meta spokesperson Daniel Roberts wrote in an emailed statement to WIRED that users’ chats with Meta AI are private unless users go through a multistep process to share them on the Discover feed.

When asked about these mishaps, the Meta AI chatbot said:

“Some users might unintentionally share sensitive info due to misunderstandings about platform defaults or changes in settings over time.”

However, Meta has not provided detailed, easily accessible in-app guidance at the moment of sharing, which contributes to user confusion. I decided to ask Meta AI.

Question: What do I need to do so you don’t share our conversation with the rest of the world? And is it true that the information I share with you might be used for advertising? Or, at least for “sponsored” recommendations?

Answer: Your conversations with me are used to improve the model and might be reviewed by our team to ensure quality. We may also use your conversations to train our models to be more helpful and accurate. However, we take steps to protect your privacy.

If you have concerns about how your data is used, you can visit Meta’s Privacy Center to learn more about how Meta handles data: https://www.facebook.com/privacy/center/.

Did that actually answer my question? No.

So, here are the steps you may want to take if you decide to engage with Meta AI for all the different platforms where you may encounter it.

Meta AI App

  • To ensure your prompts and conversations are only visible to you:
    • Tap your profile icon in the Meta AI app.
    • Go to Data & Privacy under App settings.
    • Select Manage your information.
    • Set Make all your prompts visible to only you.
  • Avoid using the Share button unless you are certain you want your conversation to appear publicly on the Discover feed.

WhatsApp, Facebook, and Instagram

Note: Conversations with Meta AI on WhatsApp are not protected by end-to-end encryption and may be used for AI training.

To limit Meta’s use of your data for AI training:

  • Go to Settings & Privacy > Privacy Center.
  • Scroll to Privacy Topics and select AI at Meta.
  • Under Submit an objection request, click Your messages with AIs on WhatsApp (or whichever platform you’re looking for) and fill out the form to request that Meta not use your data for AI training.

Deleting AI conversation data

Meta has introduced commands to delete information shared in any chat with an AI:

  • For example, type /reset-ai in a conversation on Messenger, Instagram, or WhatsApp to delete your AI messages.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.

US airline industry quietly selling flight data to DHS

A data broker owned by some of America’s biggest airlines has been selling access to customer flight data to the US Department of Homeland Security (DHS).

The data, compiled by data broker Airlines Reporting Corporation (ARC), includes names, flight itineraries, and financial details. It also covers flights booked via US travel agencies.

ARC makes this data available to Customs and Border Protection (CBP) and Immigration and Customs Enforcement (ICE), both of which took over functions of the US Customs Service when they were created in 2003, and both of which are agencies under DHS.

ARC is owned and operated by eight major US airlines and, according to the data broker’s contract with ICE, is the only financial intermediary between the airline industry and US travel agencies. ARC also provides payment settlement services for travel agencies and airlines, which has created a huge database of travel information that the data broker then makes available under its Travel Intelligence Program (TIP).

ARC’s most recently revealed contract, uncovered by tech news outlet 404 Media, is with US Customs and Border Protection. A statement of work with that agency revealed that the TIP pilot program “generated meaningful results to current [redacted] cases and will continue to do so once fully accessible to [redacted] analysts across [redacted] Offices.”

The CBP contract mandates silence from DHS on where it got the data. The statement of work, which began in June 2024 and could optionally run until June 2029, states that the CBP will “not publicly identify vendor, or its employees, individually or collectively, as the source of the Reports unless the Customer is compelled to do so by a valid court order or subpoena and gives ARC immediate notice of same.”

ARC’s contract with ICE, meanwhile, provides a view into travel agencies’ data obligations. As the contract stated:

“Daily, travel agencies must submit ticket sales and funds for over 240 airlines worldwide to ARC. This process enables ARC’s TIP, an essential intelligence tool integrated into HSI INTEL’s investigative mission.”

HSI INTEL stands for the Homeland Security Investigations Office of Intelligence. It investigates criminal networks, and also any “individual or organization that threatens national security or seeks to exploit the customs and immigration laws of the United States,” per the DHS website.

Those with access to the TIP database can search across 39 months of flight booking data. Flight itineraries and passenger name records, along with travel dates, flight dates, and even credit card numbers are available from the database.

Other agencies that have purchased access to the database include the Secret Service, the Securities and Exchange Commission, the Drug Enforcement Administration, and the US Marshals Service, according to 404 Media.

Delta, Southwest, United, Lufthansa, Air France, American Airlines, Air Canada, Alaska Airlines, and JetBlue all have seats on the ARC board. The company also partners with hundreds of airlines and travel agencies around the world.

23andMe raked over the coals by Congress on privacy, sale of genetic data

In a congressional hearing aptly titled “23 and You: The Privacy and National Security Implications of the 23andMe Bankruptcy,” 23andMe executives addressed concerns about the privacy implications of the company’s sale and the handling of the associated genetic data.

For those who missed the latest developments: in May 2025, we reported that 23andMe had agreed to sell itself to the pharmaceutical company Regeneron for $256 million. As part of that agreed sale, Regeneron would also acquire the genetic data of 23andMe’s customers. But in early June, 23andMe’s former CEO Anne Wojcicki put forth a last-minute bid of $305 million, throwing Regeneron’s purchase into question and placing 23andMe itself back on the auction block.

The bid was made through the TTAM Research Institute, a nonprofit medical research organization recently set up by Wojcicki.

We explained earlier how consumers could (and why they might want to) delete their genetic data from 23andMe. Apparently, people listened. Interim CEO Joe Selsavage said at the hearing that since the company’s March bankruptcy filing, 1.9 million of its 15 million customers have chosen to delete their data.

Committee chairman James Comer said in opening remarks:

“It is imperative that 23andMe … ensure there is absolutely no legal or illegal way for foreign adversaries or anyone else to access or manipulate and abuse Americans’ genetic data to advance their nefarious agendas.”

The urgency of the matter, undoubtedly enhanced by the way 23andMe has handled data sales and breaches in the past, lies in the impending sale of the company.

The committee criticized the company for failing to structure the potential transfer of customers’ genetic data in the upcoming sale around an “opt-in” framework, and argued that 23andMe made it too cumbersome for consumers to delete their data—23andMe’s biggest asset in the sale.

US Representative Suhas Subramanyam of Virginia said:

“If there simply was a ‘delete my data’ page or button somewhere more prominent then I think it would be easier for a lot of people to feel that control.”

During the hearing, interim CEO Selsavage and former CEO Wojcicki repeatedly declined to commit to establishing a customer opt-in mechanism, specifically one that would require consumers’ approval before their data could be sold and transferred to a new owner, despite multiple requests from committee members.

Beyond the threat of genetic data falling into foreign hands, many raised concerns that the sale could enable targeted advertising aimed at individuals with mental health conditions, drive up insurance premiums, or restrict access to credit.

23andMe assured the committee that regardless of who wins the auction, the company will not be sold to any entity unless it agrees to uphold the existing privacy policy.

23andMe’s privacy statement tells users that any new owner must adhere to its existing data protection guidelines, which include not providing user data to insurers, employers, public databases, or law enforcement without a court order, search warrant, or subpoena.

What can consumers do to protect their data?

Customers should actively manage their data on 23andMe by reviewing policies, deleting data if desired, and staying vigilant about how their sensitive genetic information is used.

People who have submitted samples to 23andMe have three options, each providing a different level of privacy.

1. Delete your genetic data from 23andMe

For 23andMe customers who want to delete their data from 23andMe:

  • Log into your account and navigate to Settings.
  • Under Settings, scroll to the section titled 23andMe data. Select View.
  • You will be asked to enter your date of birth for extra security. 
  • In the next section, you’ll be asked which personal data, if any, you’d like to download from the company (make sure you’re using a personal, not public, computer). Once you’re finished, scroll to the bottom and select Permanently delete data.
  • You should then receive an email from 23andMe detailing its account deletion policy and requesting that you confirm your request. Once you confirm you’d like your data to be deleted, the deletion will begin automatically, and you’ll immediately lose access to your account. 

2. Destroy your 23andMe test sample

If you previously opted to have your saliva sample and DNA stored by 23andMe, but want to change that preference, you can do so from your account settings page, under “Preferences.”

3. Revoke permission for your genetic data to be used for research

If you previously consented to 23andMe and third-party researchers using your genetic data and sample for research, you may withdraw consent from the account settings page, under Research and Product Consents.

Check if you were caught in the 23andMe data breach

Additionally, you may want to check if your data was exposed in the 2023 data breach. We recommend that you run a scan using our free Digital Footprint Portal to see if your data was exposed in the breach, and then to take additional steps to protect yourself (we’ll walk you through those).


We don’t just report on threats – we help safeguard your entire digital identity

Cybersecurity risks should never spread beyond a headline. Protect your—and your family’s—personal information by using identity protection.

44% of people encounter a mobile scam every single day, Malwarebytes finds

It’s become so troublesome owning a phone.

Malicious texts pose as package delivery notifications, phishing emails impersonate trusted brands, and unknown calls hide extortion attempts, virtual kidnapping schemes, or AI threats. Confusingly, even legitimate businesses now lean on outreach tactics that have long been favored by online scammers—asking people to scan QR codes, download mobile apps, and trade direct messages with, essentially, strangers.

All this junk is adding up, and it’s hurting everyday people.

According to new research conducted by Malwarebytes, 44% of people encounter a mobile scam every single day, while 78% encounter scams at least weekly. The victims of those scams—be they people who accidentally clicked on a link, filled out their information on a malicious webpage, or simply believed the person on the other side of a social media account—also suffered serious harms to their finances, emotions, and reputations. As Malwarebytes learned, 25% of scam victims were harassed or blackmailed, 19% had private info exposed, and 15% permanently lost their money.

As shared by one scam victim writing about their experience:

“I felt like I was in a horror movie. I never thought it would happen to me like this.”

These are the latest findings from original research conducted by Malwarebytes to understand the reach, frequency, and impact that mobile scams have across multiple countries. By surveying 1,300 people over the age of 18 in the US, UK, Austria, Germany, and Switzerland, Malwarebytes can reveal a mobile reality full of tension: high concern, low action, and increasingly blurred lines between what’s safe and what’s not.

The complete findings can be found in the latest report, “Tap, swipe, scam: How everyday mobile habits carry real risk.”

Here are some of the key findings:

  • 77% of people worry about mobile scams and threats. The biggest fears are around financial loss and fraud (73%), account and device lockout (70%), and identity theft (68%).
  • 66% worry about the future of AI and how realistic scams are going to become.
  • Just 15% of people strongly agreed: “I am confident in my ability to tell when something on my mobile phone is a scam.”
  • 74% of people have encountered a social engineering scam in their lives, such as phishing attempts, fake FedEx notifications, or romance scams, and 36% have fallen victim.
  • 37% of people have encountered an extortion scam and 17% have fallen victim, including 7% who were harmed specifically by a sextortion scam.  
  • 10% of people have a “safe word” in their family to “protect against things like kidnapping and extortion scams.”
  • 52% of scam victims suffered financially: 18% had to freeze their credit, 15% lost money permanently, and 8% had accounts opened fraudulently in their name.
  • Only 20% of people use traditional security measures like antivirus, a VPN, and identity theft protection.
  • 25% of people do not worry about scams at all because “it’s not something I can control.”

This is the mobile world that the public is forced to live in, and the mobile world that future generations may soon inherit. While broad, bold action is required to meaningfully catch and stop scammers, everyday people can lean on many cybersecurity best practices to stay safe and secure online. From using unique passwords, to implementing multifactor authentication (MFA), there is plenty at hand to make life more difficult for scammers.

Importantly, there’s also help from Malwarebytes.

With the launch of our free, AI-powered digital safety companion Scam Guard, users can review any concerning text, email, phone number, link, image, or online message and receive on-the-spot guidance to avert and report scams. Try it today and take the fear out of being online.

Scam Guard is available for both free and paid users of Malwarebytes Mobile Security (iOS and Android), without having to install an additional app.  

Try it out for yourself: Download Malwarebytes Mobile Security for iOS or Android.  

GirlsDoPorn owner faces life in jail after pleading guilty to sex trafficking

Michael James Pratt, the owner of pornographic websites GirlsDoPorn and GirlsDoToys, has pleaded guilty to sex trafficking in a US court.

Pratt ran the websites, which lured and coerced young women into filming pornographic videos, from 2013 to 2019. Pratt and his accomplices lured women from across the US and Canada to San Diego, where the filming would take place in hotel rooms or short-term rental units.

The group would advertise via online sites including Craigslist. In some cases the advertisements would promote clothed modelling jobs. It would later transpire that the work involved performing in sex videos.

When women showed doubt about appearing in such videos, Pratt and his team would convince them that the videos they made would only be distributed to a small base of private collectors outside the US, and that no one who knew the women would ever see them. Pratt would also pay other young women known as ‘reference girls’ to seal the deal by helping to persuade the victims, on the expectation that another young woman’s testimony would be more convincing.

Victims were coerced

Once the victims arrived at the airport, they would be taken to the shoot, where Pratt and his accomplices would rush them into signing contracts without giving them copies.

The perpetrators would tell the women that the shooting sessions would be short, but in fact they would take hours. Pratt and his group would bully victims into having sex on camera, according to the FBI. Sometimes they would refuse to let them leave until they completed the shoots, pressing the women to perform acts that they had previously declined. Pratt and his associates would threaten to cancel flights home or publish what had already been filmed if victims did not comply.

The operation targeted hundreds of people. It posted the videos on its own sites, which were available in the US for a subscription fee. To drum up business, the operators also posted free versions of the videos on the giant adult content site Pornhub, where they were often viewed millions of times. The scheme generated over $17 million in revenue for Pratt, who originally conceived it. When one victim contacted Pratt and asked him to remove a video from the site, he did not reply.

A fugitive from justice

Pratt played various parts in the operation, including recruiting victims, transporting them to the shooting sessions, and filming. He had already been charged in a US court in 2019 but fled to Spain, making it onto the FBI’s most wanted list. He was arrested there in 2022 and, following extradition, pleaded not guilty in March 2024 to 19 felony counts, including sex trafficking (of both adults and minors), production of child pornography, and conspiracy to commit money laundering.

The pornography operation had already been ordered to pay $18 million to victims in 2021, after 22 women sued it for damages. US authorities have also prosecuted other people involved in the sites. Matthew Wolfe, who moved from New Zealand to work for Pratt in 2011 and ran multiple parts of the business, received a 14-year sentence in March 2024. Cameraman Theodore Gyi was sentenced to four years in 2022. Adult film performer Ruben Andre Garcia received 20 years in June 2021. Office manager Valerie Moser will be sentenced on September 12.

Pratt will be sentenced on September 8 this year on two counts. He faces a minimum sentence of 15 years for sex trafficking and a maximum penalty of life in prison. A second count, sex trafficking conspiracy, also carries a maximum life sentence.

Google bug allowed phone number of almost any user to be discovered

Google has fixed a set of vulnerabilities that made it possible to retrieve the phone number of almost any Google user. The flaws were found in the flow that allows users to recover their Google account using a phone number.

A cybersecurity researcher known as Brutecat was able to figure out the phone number linked to any Google account, information that is usually not public and is considered sensitive.

Brutecat found that the page where users can recover their Google account if they have forgotten their login details lacked BotGuard protection. BotGuard is a cloud-based cybersecurity solution designed to protect websites and web applications from malicious bots, automated attacks, crawlers, and scrapers.

However, BotGuard does not work on pages that do not use JavaScript, because many of its advanced detection techniques rely on executing JavaScript in the visitor’s browser to gather client-side data. If a website does not serve JavaScript, or if a user or bot disables it, BotGuard cannot collect the information it needs for fingerprinting or behavioral analysis.
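
To make that concrete, here is a minimal sketch in Python with Flask of the general idea of a JavaScript-challenge gate. This is an illustration of the principle, not BotGuard’s actual mechanism, and the endpoint names and token value are hypothetical: the point is that a fallback flow that works without JavaScript never produces a client-side token for the server to verify, so a scripted client can call it directly.

```python
# Illustrative sketch only: a toy JavaScript-challenge gate, not BotGuard's
# actual mechanism. The endpoint names and token value are hypothetical.
from flask import Flask, abort, request

app = Flask(__name__)

# In a real system this would be a per-session value computed by a challenge
# script running in the visitor's browser; a client without a JS engine
# cannot produce it.
EXPECTED_TOKEN = "value-computed-by-client-side-js"

@app.post("/recover-js")
def recover_with_js():
    # JS-enabled flow: the browser ran the challenge script and echoes a token.
    if request.form.get("challenge_token") != EXPECTED_TOKEN:
        abort(403)  # scripted clients that never ran the JS fail here
    return "account recovery flow (gated)"

@app.post("/recover-nojs")
def recover_without_js():
    # No-JS fallback flow: there is no token to verify, so nothing stops a
    # script from hammering this endpoint directly.
    return "account recovery flow (ungated)"
```

That ungated fallback is the role the no-JavaScript version of Google’s recovery form played in the research described above.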

Brutecat also had to use rotating IP addresses and a trick to bypass the occasional CAPTCHA, but was able to manage 40,000 requests per second. At that rate, if the attacker knew the country code of the phone number, brute-forcing a US number would take about 20 minutes; a UK number would take about 4 minutes, because UK phone numbers are shorter.

For those doing the math and finding this impossible: Google displays the last two digits of the phone number as a hint, and Brutecat used Google’s own libphonenumber library to generate only valid number formats.
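
As a rough illustration of how much those two facts shrink the search space, here is a minimal sketch using the phonenumbers package, the Python port of Google’s libphonenumber. The UK-mobile digit layout assumed here (a national number of “7” plus nine more digits) and the function name are illustrative assumptions, not Brutecat’s actual tooling.

```python
# A minimal sketch of the candidate-generation idea, assuming the "phonenumbers"
# package (pip install phonenumbers), the Python port of Google's libphonenumber.
# The UK mobile layout (national number = "7" + 9 more digits) is an assumption
# made for illustration.
from itertools import islice

import phonenumbers

def uk_mobile_candidates(last_two: str):
    """Yield valid UK numbers (E.164) that end in the two known digits."""
    for middle in range(10**7):  # the 7 unknown digits between prefix and suffix
        national = f"7{middle:07d}{last_two}"
        try:
            number = phonenumbers.parse(f"+44{national}")
        except phonenumbers.NumberParseException:
            continue
        if phonenumbers.is_valid_number(number):  # prunes impossible ranges
            yield phonenumbers.format_number(
                number, phonenumbers.PhoneNumberFormat.E164
            )

# Example: the first few candidates for a number ending in "42".
for candidate in islice(uk_mobile_candidates("42"), 3):
    print(candidate)
```

With the last two digits known, at most 10^7 candidates remain before libphonenumber prunes invalid ranges; at 40,000 requests per second that is roughly 250 seconds, which lines up with the 4-minute figure for UK numbers.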

But the researcher also needed the full display name of a targeted account, and discovered a method to leak Google account display names by exploiting a feature in Looker Studio (formerly Google Data Studio). The researcher created a report in Looker Studio, then changed the document’s owner to the victim’s Google account using the victim’s email address. After the transfer, the victim’s full name automatically appeared in the “Recent documents” list on the Looker Studio home page, even though the victim never opened the document, interacted with it, or knew about it. The key was that Looker Studio still displayed names for document transfers without requiring any action from the victim, unlike other Google services, which now require prior interaction.

Google spokesperson Kimberly Samra told TechCrunch:

“This issue has been fixed. We’ve always stressed the importance of working with the security research community through our vulnerability rewards program and we want to thank the researcher for flagging this issue. Researcher submissions like this are one of the many ways we’re able to quickly find and fix issues for the safety of our users.”

Google also says it’s not aware of any confirmed reports of these vulnerabilities being exploited.

Nonetheless, a weakness allowing an attacker to trace phone numbers to Google accounts like this creates a massive risk for phishing and SIM-swapping attacks—especially since the majority of users will have their primary phone number as their account recovery number.



How and where to report an online scam

If you’ve been scammed, it’s really important to report it if you can: reporting helps prevent others from falling for the same scam and gives authorities a chance to catch the criminals behind it.

How you report a scam varies according to the country you’re in, the platforms involved, and the outcome of the scam, so here are the most common methods you may need. Remember to report to both the authorities and the platforms the scammers are using.

How to report a scam in the United States

  • Report to the FBI’s Internet Crime Complaint Center (IC3): File a complaint online at ic3.gov as soon as possible. This is the main hub for cybercrime reports and helps investigators gather intelligence about scams and the people behind them. Rapid reporting can also support the recovery of lost funds.
  • Contact local law enforcement: If you lost money, you should also file a report with your local police department.
  • Notify your bank or credit card company: Inform them about the fraud in order to freeze accounts or reverse charges where possible.

How to report a scam in Canada

  • Canadian Anti-Fraud Centre (CAFC): Call 1-888-495-8501 or report online. The CAFC collects fraud reports nationwide and coordinates with law enforcement and the National Cybercrime Coordination Centre (NC3).
  • Local police: Report the scam to your local police department, especially if you lost money.
  • Credit bureaus: It is advisable to contact Equifax Canada and TransUnion Canada to order a free credit report immediately and ask that a fraud alert be put on your file.
  • Financial institutions: Notify your bank or credit card issuer immediately, and also notify the financial institution that transferred the money, in case that’s a different one.

How to report a scam in the United Kingdom

  • Action Fraud: Report online at actionfraud.police.uk or call 0300 123 2040 (Monday to Friday, 8 am to 8 pm). Action Fraud is the national reporting center for fraud and cybercrime. It collects reports about fraud on behalf of the police in England, Wales and Northern Ireland. For fraud in Scotland please report it directly to Police Scotland.
  • Local police: For urgent matters or ongoing threats, contact your local police. If the police decide not to investigate your case as a crime, you might still be able to get compensation or money back by bringing a civil case yourself. Talk to a solicitor or asset recovery agent to find out more.
  • Financial institutions: Alert your bank or credit card company to suspicious transactions.

In all countries it’s also helpful to report on the platforms where the scam took place or was initiated. Use built-in reporting tools on platforms like Facebook and WhatsApp to report scam accounts or messages:

WhatsApp

  • Open the chat with the suspicious business or individual.
  • Tap the business name or contact info at the top.
  • Scroll down and select Report Business or Report Contact.
  • Block the contact to stop further messages. The last five messages in the chat will be sent to WhatsApp.

Facebook

  • Click the three dots on the post, profile, or message you want to report.
  • Select Find support or report post/profile/message.
  • Follow the prompts to specify whether it’s a scam or fraudulent activity.
  • Facebook reviews these reports and may remove or restrict the scammer’s account so it can’t be used to defraud others.

Other platforms (e.g. Instagram, X, eBay)

  • Look for “Report” or “Help” links on the profile or message.
  • Follow platform-specific instructions to flag fraudulent behavior.
  • Provide as much detail as possible about the scam.

Been scammed online? Here’s what to do

Unfortunately, getting scammed online is a frequent event. Scammers are getting better at social engineering and are using AI to sound more authentic and to eliminate spelling errors.

It really can happen to anyone, so there’s no need to feel embarrassed if you have been scammed. Importantly, acting quickly can limit the damage. So here are some things you can do if you’ve been scammed.

1. Stop all communication immediately

Cut off contact with the scammer. Don’t reply to messages or calls, as this can prevent further manipulation or requests for even more money or information.

2. Secure your accounts

Change the passwords on all your online accounts, especially financial and email accounts. Use strong, unique passwords and enable multi-factor authentication (MFA) wherever possible. Start with the accounts the scammer may have gained access to, but don’t stop there: check all your other important accounts as well.

3. Monitor your financial statements

Check your bank, credit card, and payment service accounts for unauthorized transactions. Report suspicious activity to your bank and credit card company immediately to freeze or reverse fraudulent charges. Let them know what happened and find out how they can help you.

4. Avoid sponsored search results

It’s really important that you don’t click on sponsored search results when searching for help with resolving a scam. This kind of topic is seen by scammers as a perfect opportunity to scam you even more and they are known to outbid the rightful owners of certain brands. If you’re using a search engine, type the domain name yourself or scroll down to the regular search results.

5. Place fraud alerts and check credit reports

Contact credit bureaus to place fraud alerts on your file. This warns lenders to verify your identity before opening new accounts. Regularly review your credit reports for unfamiliar activity. If this is an option where you live, add a security freeze, more commonly called a credit freeze, to all of your credit reports for free.

6. Try to recover your lost funds

Sadly, recovering your lost funds will not always be possible. However, you may have some options:

  • If you paid the scammer by credit card, request a chargeback through your card provider.
  • If you paid via bank transfer or wire, contact your bank immediately since they may be able to initiate a recall in some cases.
  • If you sent the money via payment apps (e.g. PayPal, Venmo, Cash App), contact the provider to inquire about recovery options.

Never fall for people who claim they can recover payments made in cryptocurrency. These are known as recovery scams.

7. Gather evidence

Keep all records related to the scam: emails, texts, receipts, screenshots, and any communication details. This documentation supports investigations and helps law enforcement track scammers.

8. Scan your device

If you clicked any links or downloaded something during the course of the scam, make sure to scan your device with an antimalware solution. The scammer could have planted something for later use.

9. Report the scam

Reporting is crucial. It helps authorities track criminal patterns and may assist in recovering lost funds. Report to the appropriate national agencies, local police, and the platform where the scam occurred. For more details, see our article on how to report online scams.

10. Set up ongoing protection

Firstly, make sure to protect your device with a security solution like Malwarebytes Premium. Then, protect yourself in the browser using our free Browser Guard. Finally, if you want to check if something is a scam, Scam Guard—our new feature in Malwarebytes Mobile Security—allows you to upload a text, email, or DM to find out if it’s legit or a scam.



A week in security (June 1 – June 7)

Last week on Malwarebytes Labs:

Last week on ThreatDown:

Stay safe!


Our business solutions remove all remnants of ransomware and prevent you from getting reinfected. Want to learn more about how we can help protect your business? Get a free trial below.

OpenAI forced to preserve ChatGPT chats

OpenAI has protested a court order that forces it to retain its users’ conversations. The creator of the ChatGPT AI model objected to the order, which is part of a copyright infringement case against it by The New York Times and other publishers.

The news organizations argued that ChatGPT was presenting their content in its responses to the point where users were reading this material instead of accessing their paid content directly.

The publishers said that deleted ChatGPT conversations might show users obtaining this proprietary published content via the service.

The issue came up for debate at a hearing in January, where Judge Ona T. Wang suggested that users who heard about the legal case might delete their conversations to cover their tracks. She denied the publishers’ request for a preservation order at the time, but also asked why OpenAI couldn’t segregate and anonymize data from users who had requested deletion. OpenAI failed to address this, Wang said, leading to her order, granted May 13.

OpenAI served with court order

Wang’s order last month said:

“OpenAI is NOW DIRECTED to preserve and segregate all output log data that would otherwise be deleted on a going forward basis until further order of the Court (in essence, the output log data that OpenAI has been destroying), whether such data might be deleted at a user’s request or because of ‘numerous privacy laws and regulations’ that might require OpenAI to do so.”

ChatGPT already retains user conversations by default, using them to train its AI model for future conversations. However, it provides an option to turn off that setting, causing all conversations with a user to be forgotten. The service also has an ad hoc temporary chat feature, which deletes a chat as soon as it’s concluded.

In a letter objecting to the order, OpenAI said it was being forced to compromise users’ privacy.

“OpenAI is forced to jettison its commitment to allow users to control when and how their ChatGPT conversation data is used, and whether it is retained,” it said. “Every day the Preservation Order remains in place is another day OpenAI’s users are forced to forgo the privacy protections OpenAI has painstakingly put in place.”

The publishers have no evidence that the deleted conversations contain more of their content, OpenAI added. It warned that users frequently share sensitive details in conversations that they expect to be deleted, including everything from financial information to intimate discussions about wedding vows.

Engineering the retention of data would take months, the AI giant added.

The background to the case

Three publishers (The New York Times, the New York Daily News and the Center for Investigative Reporting) had been suing OpenAI separately for copyright infringement. In January this year, the publishers joined their cases into a single lawsuit.

OpenAI argued that it could use the content under fair use rules because its AI model transformed the content, breaking it into tokens that it then blends with other information to serve its users.

ChatGPT has a memory

Even when it does delete chats, ChatGPT retains a separate memory of details shared in conversations that it can use to understand you better. These might include details you enter about your friends and family, or about how you like your conversations formatted. The service allows users to turn off references to these memories, or to delete them altogether.

Caution is key when giving information to any online service, especially AI services, where conversations are often fluid and free-flowing. It’s also a good idea to think twice before sharing anything you’d rather others didn’t see.