Snapchat is reserving the right to use your selfie images to power Cameos, Generative AI, and other experiences on Snapchat, including ads, according to our friends at 404 Media.
The Snapchat Support page about its My Selfie feature says:
“You’ll take selfies with your Snap camera or select images from your camera roll. These images will be used to understand what you look like to enable you, Snap and your friends to generate novel images of you. If you’re uploading images from the camera roll, only add images of yourself.”
A Snapchat spokesperson told 404 Media:
“You are correct that our terms do reserve the right, in the future, to offer advertising based on My Selfies in which a Snapchatter can see themselves in a generated image delivered to them… As explained in the onboarding modal, Snapchatters have full control over this, and can turn this on and off in My Selfie Settings at any time.”
However, according to 404 Media, the “See My Selfie in Ads” feature is on by default, so you’d have to know the feature exists in the first place in order to turn it off.
We also wonder how Snapchat plans to check whether the user is uploading real selfies and not pictures of someone else.
Once again, we see the assumption by a social media platform that it’s OK to use content posted on its platform for training Artificial Intelligence (AI). It isn’t!
It’s even worse to do it without explicit user consent. Burying it deep in a mountain of legalese called a privacy policy that nobody actually reads is not real consent. This lack of transparency and control over personal data is upsetting. The fact that some people may not want their likeness used for commercial purposes, or to train systems they don’t support, doesn’t seem to bother anyone at these social media giants.
How to change your My Selfie settings
You can change or clear your My Selfie in Snapchat’s Settings:
- Tap the gear icon in My Profile to open Settings
- Tap My Selfie under My Account
- Tap Update My Selfie or Clear Selfie
Why AI training on your images is bad
We have seen many cases where social media and other platforms have used their users’ content to train their AI models. Some people tend to shrug this off because they don’t see the dangers, so let us explain the possible problems:
- Deepfakes: AI-generated content, such as deepfakes, can be used to spread misinformation, damage your reputation, invade your privacy, or defraud people you know.
- Metadata: Users often forget that the images they upload to social media also contain metadata, such as where the photo was taken. This information could potentially be sold to third parties or used in ways the photographer never intended. (The sketch after this list shows how to inspect and strip this data yourself.)
- Intellectual property: Never upload anything you didn’t create or don’t own. Artists and photographers may feel their work is being exploited without proper compensation or attribution.
- Bias: AI models trained on biased datasets can perpetuate and amplify societal biases.
- Facial recognition: Although facial recognition is not the hot topic it once was, it still exists, and actions or statements tied to images of you (real or generated) may be linked to your persona.
- Memory: Once a picture is online, it is almost impossible to get it completely removed. It may continue to exist in caches, backups, and snapshots.
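To make the metadata point concrete: here is a minimal Python sketch, using the Pillow imaging library, that lists the EXIF tags embedded in a photo (including any GPS coordinates) and writes a metadata-free copy. The file names are placeholders, and this illustrates the general technique rather than anything Snapchat provides.

```python
# Minimal sketch: inspect and strip photo metadata with Pillow
# (pip install Pillow). File names below are hypothetical examples.
from PIL import Image
from PIL.ExifTags import TAGS

SRC = "holiday.jpg"        # photo you are about to upload (placeholder)
DST = "holiday_clean.jpg"  # metadata-free copy (placeholder)

img = Image.open(SRC)
exif = img.getexif()

# Print every top-level EXIF tag by name (camera model, timestamps, etc.).
for tag_id, value in exif.items():
    print(f"{TAGS.get(tag_id, tag_id)}: {value}")

# 0x8825 is the GPSInfo IFD: if present, the photo records where it was taken.
if exif.get_ifd(0x8825):
    print("Warning: this photo contains GPS location data!")

# Saving without passing exif= writes a copy with the metadata dropped.
# Note: this re-encodes the JPEG, so the image is slightly recompressed.
img.save(DST, quality=95)
print(f"Wrote metadata-free copy to {DST}")
```

If you would rather not recompress the image, a dedicated tool such as exiftool can remove metadata without touching the image data itself.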
If you want to continue using social media platforms, that is obviously your choice, but consider the above when uploading pictures of yourself, your loved ones, or even complete strangers.