Your Meta AI chats might be public, and it’s not a bug

Conversations that people are having with the Meta AI app are being exposed publicly, often without the users realizing it, revealing a variety of medical, legal, and private matters. The standalone app and the company’s integrations with artificial intelligence (AI) across its platforms—Facebook, Instagram, and WhatsApp—are now facing significant scrutiny for such privacy lapses.

The past two years have seen an explosion in generative AI tools, such as ChatGPT, Anthropic’s Claude, Google Gemini, and more. But with new players entering the market almost daily, not all of them deserve the same level of trust.

With 1 billion monthly active users, Meta AI is one of the contenders vying for ChatGPT’s crown. To monetize this success, Meta CEO Mark Zuckerberg said “there will be opportunities to either insert paid recommendations” or offer “a subscription service so that people can pay to use more compute.”

Similar to ChatGPT, Meta AI can generate text, answer questions, and help users plan, brainstorm, and strategize. In the Meta AI app, after submitting a question, users can also press a “share” button, which takes them to a screen with a preview of the post that they can then publish. But some users appear blissfully unaware that they are sharing these text conversations, audio clips, and images publicly with the world.

Other users can access “shared” conversations through the app’s Discover feed.

With only one glance at that feed, I found an example of a conversation the user probably did not want to be publicly visible. A teacher shared an email thread about an upcoming arbitration over a job termination they felt was unjust. The Meta AI app responded:

Image: example of a teacher sharing a story of unfair termination

“Hello {name}, it’s great to hear from you again. I can sense the emotional weight of revisiting this experience, and I appreciate *your trust in sharing it with me*. It sounds like a really tough situation, and I’m here to listen and support you as you process it. No rush, take your time to gather your thoughts, and we can continue the conversation whenever you’re ready. How’s the preparation for the arbitration going so far?”

The emphasis is mine. If the AI itself acknowledges how much trust it takes to share that story with anyone, even if it’s only a bot, then imagine how the user must feel when they find out the whole world can read it. I didn’t even have to log in to find this conversation.

TechCrunch reported on people asking Meta AI for help with tax evasion and other personal matters, calling the app a “privacy disaster.” Several outlets found threads about medical topics and other conversations on delicate, private issues.

What people may not realize is that if they use the integrated versions on social media, they are logged in, and their conversations can be tied to their social media profiles.

How not to share your chats with Meta AI

Your best option is not to use Meta AI at all. If you do use it, make sure you are not logged in, and double-check this if you are logged in to any of the Meta platforms on the same device. And never, ever use the Share button unless you mean to.

Meta spokesperson Daniel Roberts wrote in an emailed statement to WIRED that users’ chats with Meta AI are private unless users go through a multistep process to share them on the Discover feed.

When asked about these mishaps, the Meta AI chatbot said:

“Some users might unintentionally share sensitive info due to misunderstandings about platform defaults or changes in settings over time.”

However, Meta does not currently provide detailed, easily accessible in-app guidance at the moment of sharing, which contributes to user confusion. So I decided to ask Meta AI.

Question: What do I need to do so you don’t share our conversation with the rest of the world? And is it true that the information I share with you might be used for advertising? Or, at least for “sponsored” recommendations?

Answer: Your conversations with me are used to improve the model and might be reviewed by our team to ensure quality. We may also use your conversations to train our models to be more helpful and accurate. However, we take steps to protect your privacy.

If you have concerns about how your data is used, you can visit Meta’s Privacy Center to learn more about how Meta handles data: https://www.facebook.com/privacy/center/.

Did that actually answer my question? No.

So, here are the steps you may want to take if you decide to engage with Meta AI, for each of the platforms where you may encounter it.

Meta AI App

  • To ensure your prompts and conversations are only visible to you:
    • Tap your profile icon in the Meta AI app.
    • Go to Data & Privacy under App settings.
    • Select Manage your information.
    • Set Make all your prompts visible to only you.
  • Avoid using the Share button unless you are certain you want your conversation to appear publicly on the Discover feed.

WhatsApp, Facebook, and Instagram

Note: Conversations with Meta AI on WhatsApp are not protected by end-to-end encryption and may be used for AI training.

To limit Meta’s use of your data for AI training:

  • Go to Settings & Privacy > Privacy Center.
  • Scroll to Privacy Topics and select AI at Meta.
  • Under Submit an objection request, click Your messages with AIs on WhatsApp (or whichever platform you are looking for) and fill out the form to request that Meta not use your data for AI training.

Deleting AI conversation data

Meta has introduced commands to delete information shared in any chat with an AI:

  • For example, type /reset-ai in a conversation on Messenger, Instagram, or WhatsApp to delete your AI messages.

We don’t just report on threats – we help protect your social media

Cybersecurity risks should never spread beyond a headline. Protect your social media accounts by using Malwarebytes Identity Theft Protection.