Deepfakes or not: new GAN image stirs up questions about digital fakery

Subversive deepfakes that enter the party unannounced, do their thing, then slink off into the night without anybody noticing are where it’s at. Easily debunked clips of Donald Trump yelling THE NUKES ARE UP or something similarly ludicrous are not a major concern. We’ve already dug into why that’s the case.

What we’ve also explored are the people-centric ways you can train your eye to spot stand-out flaws and errors in deepfake imagery—essentially, GANs (generative adversarial networks) gone wrong. There will usually be something a little off in the details, and it’s up to us to discover it.

Progress is being made in the realm of digital fraud detection, too, with some nifty techniques available to help sort what’s real from what isn’t. As it happens, there’s a story in the news that combines subversion, the human eye, and even a splash of automated examination for good measure.

A deepfake letter to the editor

A young chap, “Oliver Taylor”, studying at the University of Birmingham, found himself with editorials published in major news sources such as The Times of Israel and The Jerusalem Post. His writing “career” apparently kicked into life in late 2019, with additional articles appearing in various places throughout 2020.

After a stream of these pieces, everything exploded in April when a new article from “Taylor” landed, making some fairly heavy accusations against a pair of UK-based academics.

After the inevitable fallout, it turned out that Oliver Taylor was not studying at the University of Birmingham. In fact, he was apparently not real at all, and almost all online traces of the author vanished into the ether. His mobile number was unreachable, and nothing came back from his listed email address.

Even more curiously, his photograph bore all the hallmarks of a deepfake (or, controversially, not a “deepfake” at all; more on the growing clash over descriptive names later). However you choose to class this man’s fictitious visage, in plain terms it is an AI-generated image designed to look as real as possible.

Had someone created a virtual construct and bided their time with a raft of otherwise unremarkable blog posts simply to get a foothold on major platforms before dropping what seems to be a grudge post?

Fake it to make it

Make no mistake, fake entities pushing influential opinions is most definitely a thing. Right-leaning news orgs have recently stumbled into just such an issue. Not so long ago, an astonishing 700 pages with 55 million followers were taken down by Facebook in a colossal AI-driven disinformation blowout dubbed “Fake Face Swarm.” This large slice of Borg-style activity made full use of deepfakes and other tactics to consistently push political messaging with a strong anti-China lean.

Which leads us back to our lone student, with his collection of under-the-radar articles, culminating in a direct attack on confused academics. That kind of end point—700 pages’ worth of political shenanigans and a blizzard of fake people—could easily begin with one plucky fake human with a dream and a mission to cause aggravation for others.

How did people determine he wasn’t real?

Tech steps up to the plate

A few suspicions, and the right people with the right technology in hand, is how they did it. There’s a lot you can do to weed out bogus images, and there’s a great section over on Reuters that walks you through the various stages of detection. No longer do users have to manually pick out the flaws; technology will (for example) isolate the head from the background, making the frequently distorted details around it easier to spot. Or it can generate algorithmic heatmaps to highlight the areas most suspected of digital interference.
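
If you want a feel for how that kind of heatmap can work, here’s a minimal Python sketch based on error level analysis, one common starting point for this sort of thing. To be clear, this is not the tooling described in the Reuters piece, and the filename is a made-up placeholder; it simply shows the idea of letting software flag the regions worth a closer human look.

    # Minimal "suspicion heatmap" via error level analysis (illustrative only).
    from PIL import Image, ImageChops
    import numpy as np

    def ela_heatmap(path, quality=90):
        # Re-save the photo as a JPEG and measure how much each pixel changes.
        # Pasted-in or heavily edited regions often recompress differently
        # from the rest of the image, so they stand out in the difference.
        original = Image.open(path).convert("RGB")
        original.save("resaved.jpg", "JPEG", quality=quality)
        resaved = Image.open("resaved.jpg")

        diff = ImageChops.difference(original, resaved)
        heat = np.asarray(diff, dtype=np.float32).max(axis=2)  # per-pixel change
        heat /= max(float(heat.max()), 1.0)                    # normalise to 0..1
        return heat  # higher values = areas more worth a second look

    heat = ela_heatmap("suspect_headshot.jpg")  # placeholder filename
    print("pixels flagged:", int((heat > np.quantile(heat, 0.99)).sum()))

It’s crude, and a clean result proves nothing on its own, but it illustrates why manually squinting at every pixel is increasingly optional.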

Even better, there are tools readily available which will give you an under-the-hood summary of what’s happening with a given image.

Digging in the dirt

If you edit a lot of photographs on your PC, you’re likely familiar with EXIF metadata. This is a bundle of information recorded at the moment a photo is taken: camera/phone type, lens, GPS coordinates, colour details—the sky’s the limit. On the flipside, some of it, like location data, can potentially be a privacy risk, so it’s good to know how to remove it if needs be.
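
By way of illustration, here’s a short Python sketch using the Pillow imaging library to peek at a photo’s EXIF tags, and to save a copy that leaves the metadata behind. The filenames are placeholders and a proper tool will handle more edge cases; this is just the general shape of it.

    # Read EXIF tags from a photo, then save a copy without any metadata.
    from PIL import Image, ExifTags

    def show_exif(path):
        exif = Image.open(path).getexif()
        if not exif:
            print("No EXIF data found at all")
        for tag_id, value in exif.items():
            name = ExifTags.TAGS.get(tag_id, tag_id)  # e.g. Model, DateTime, GPSInfo
            print(f"{name}: {value}")

    def strip_exif(path, out_path):
        # Copy only the pixel data into a fresh image; the metadata stays behind.
        img = Image.open(path)
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))
        clean.save(out_path)

    show_exif("holiday_snap.jpg")  # placeholder filenames
    strip_exif("holiday_snap.jpg", "holiday_snap_clean.jpg")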

As with most things, it really depends on what you want from it: the same metadata that is a privacy risk in one context can be useful evidence in another. AI-generated images are often no different.

There are many ways to stitch together GAN imagery, and the process leaves traces unless the creator deliberately obfuscates them or strips information out. There are ways to dig into the underbelly of a GAN image and bring back useful results.
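
One example of such traces, borrowed from published research rather than from this particular investigation: the upsampling tricks many GANs use to stitch a face together can leave odd regularities in an image’s frequency spectrum. The sketch below is a toy check along those lines, with a placeholder filename, and absolutely not a production detector.

    # Toy check for unusual high-frequency energy, one known class of GAN trace.
    import numpy as np
    from PIL import Image

    def highfreq_share(path):
        # Fraction of spectral energy sitting in the finest image detail.
        img = np.asarray(Image.open(path).convert("L"), dtype=np.float32)
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(img)))
        h, w = spectrum.shape
        yy, xx = np.ogrid[:h, :w]
        radius = np.sqrt((yy - h // 2) ** 2 + (xx - w // 2) ** 2)
        outer = radius > 0.4 * min(h, w)  # the outer ring of the spectrum
        return float(spectrum[outer].sum() / spectrum.sum())

    # A score well out of line with a batch of known-real photos is a reason
    # to look closer; it is nowhere near proof on its own.
    print(highfreq_share("suspect_headshot.jpg"))  # placeholder filename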

Image swiping: often an afterthought

Back in November 2019, I thought it would be amusing if the creators of “Katie Jones” had just lazily swiped an image from a face generation website, as opposed to agonising over the fake image details.

For our fictitious university student, it seems that the people behind him may well have done just that [1], [2]. The creator of the site the image was likely pulled from has said they’re looking to make their images no longer downloadable, and/or place people’s heads in front of a 100 percent identifiably fake background such as “space.” They also state that “true bad actors will reach for more sophisticated solutions,” but as we’ve now seen in two high-profile cases, bad actors with big platforms and influential reach are indeed just grabbing whatever fake image they desire.

This is probably because, ultimately, the image is just an afterthought: the cherry on an otherwise bulging propaganda cake.

Just roll with it

As we’ve seen, the image wasn’t tailor-made for this campaign. It almost certainly wasn’t at the forefront of the plan for whoever came up with it, and they weren’t mapping out their scheme for world domination starting with fake profile pics. They simply needed a face, and (it seems) they did indeed just grab one from a freely available face generation website. It could just as easily have been a stolen stock model image, but that is of course somewhat easier to trace.

And that, my friends, is how we end up with yet another subtle use of synthetic technology whose presence may ultimately not have mattered all that much.

Are these even deepfakes?

An interesting question, and one that seems to pop up whenever a GAN-generated face is attached to dubious antics or an outright scam. Some would argue a static, fully synthetic image isn’t a deepfake because it’s a totally different kind of output.

To break this down:

  1. The more familiar type of deepfake, where you end up with a video of [movie star] saying something baffling or doing something salacious, is produced by feeding a tool multiple images of that person. This nudges the AI into making the [movie star] say the baffling thing, or perform actions in a clip they otherwise wouldn’t exist in. The incredibly commonplace porn deepfakes would be the best example of this.
  2. The image used for “Oliver Taylor” is a headshot sourced from a GAN which is fed lots of images of real people, in order to mash everything together in a way that spits out a passable image of a 100 percent fake human. He is absolutely the sum of his parts, but in a way which no longer resembles them.
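
To make the second of those a little more concrete, here is a deliberately tiny toy sketch of the adversarial tug-of-war inside a GAN, written in PyTorch. Every size and name in it is illustrative, and it is nowhere near the scale or architecture that produces photorealistic faces; it simply shows the two-network loop of one model faking and another judging.

    # Toy GAN sketch (illustrative only): two small networks playing against each other.
    import torch
    import torch.nn as nn

    latent_dim = 100  # size of the random "seed" each fake face grows from

    # Generator: random noise in, flattened fake "photo" out (values in -1..1).
    generator = nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 64 * 64 * 3), nn.Tanh(),
    )

    # Discriminator: flattened photo in, "probability this is a real photo" out.
    discriminator = nn.Sequential(
        nn.Linear(64 * 64 * 3, 256), nn.LeakyReLU(0.2),
        nn.Linear(256, 1), nn.Sigmoid(),
    )

    criterion = nn.BCELoss()
    opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)

    def train_step(real_faces):
        # real_faces: a batch of real photos flattened to (batch, 64*64*3), scaled to -1..1
        batch = real_faces.size(0)
        real_labels = torch.ones(batch, 1)
        fake_labels = torch.zeros(batch, 1)

        # 1) Teach the discriminator to separate real photos from generated ones.
        fakes = generator(torch.randn(batch, latent_dim))
        d_loss = criterion(discriminator(real_faces), real_labels) + \
                 criterion(discriminator(fakes.detach()), fake_labels)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # 2) Teach the generator to produce faces the discriminator accepts as real.
        g_loss = criterion(discriminator(fakes), real_labels)
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()
        return d_loss.item(), g_loss.item()

Scale that loop up with convolutional networks, millions of real face photos, and days of training, and you arrive at the sort of face generation site the “Oliver Taylor” image was likely pulled from.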

So, when people say, “That’s not a deepfake,” they want to keep a firm split between “fake image or clip based on one person, generated from that same person” and “fake image or clip based on multiple people, to create one totally new person.”

The other common mark set against calling synthetic GAN imagery a deepfake is that the digital manipulation isn’t what makes it effective. How can it be a deepfake, the argument goes, if the fakery itself isn’t even very good?

Call the witnesses to the stand

All valid points, but the counterpoints are also convincing.

If we’re going to dismiss an image’s right to deepfake status because the digital manipulation isn’t what made it effective, then we’re going to end up with very few bona fide deepfakes. Here, the manipulation didn’t carry the deception because it wasn’t very good; by the same token, we’d never know when a manipulation has produced something genuinely convincing, because it would fly under the radar and we’d miss it entirely.

Even the best movie-based variants tend to contain some level of not-quite-rightness, and I have yet to be handed a batch of images in which I couldn’t spot at least nine out of 10 GAN fakes mixed in with the real photos.

As interesting and as cool as the technology is, the output is still largely a bit of a mess. From experience, the combo of a trained eye and some of the detection tools out there makes short work of the faker’s ambitions. The idea is to do just enough to push whatever fictional persona or intent is attached to the image over the line and make it all plausible—be it blogs, news articles, opinion pieces, bogus job postings, whatever. The digital fakery works best as an extra chugging away in the background. If it’s part of a larger operation, you don’t really want to draw attention to it.

Is this umbrella term a help or a hindrance?

As for keeping the tag “deepfake” away from fake GAN people, while I appreciate the difference in image output, I’m not 100 percent sure that this is necessarily helpful. The word deepfake is a portmanteau of “deep learning” and “fake.” Whether you end up with Nicolas Cage walking around in The Matrix, or a pretend face sourced from an image generation website, they’re both still fakes born of some form of deep learning.

The eventual output is the same: a fake thing doing a fake thing, even if the path taken to get there is different. Some would argue that splitting up or discarding a catch-all definition which helpfully and accurately applies to both scenarios above—and no doubt others—is needless.

It would be interesting to know if there’s a consensus in the AI deep learning/GAN creation/analyst space on this. From my own experience talking to people in this area, the bag of opinions is as mixed as the quality of GAN outputs. Perhaps that’ll change in the future.

The future of fakery detection

I asked Munira Mustaffa, security analyst, whether automated detection techniques would eventually surpass the naked eye for good:

I’ve been mulling over this question, and I’m not sure what else I could add. Yes, I think an automated deepfake checking can probably make better assessment than the human eye eventually. However, even if you have the perfect AI to detect them, human review will always be needed. I think context also matters in terms of your question. If we’re detecting deepfakes, what are we detecting against?

I think it’s also important to recognise that there is no settled definition for what is a deepfake. Some would argue that the term only applies to audio/videos, while photo manipulations are “cheapfakes”. Language is critical. Semantics aside, at most, people are playing around with deepfakes/cheapfakes to produce silly things via FaceApp. But the issue here is really not so much about deepfakes/cheapfakes, but it is the intent behind the use. Past uses have indicated how deepfakes have been employed to sway perception, like that Nancy Pelosi ‘dumbfake’ video.

At the end of the day, it doesn’t matter how sophisticated the detection software is if people are not going to be vigilant with vetting who they allow into their network or who is influencing their point of view. I think people are too focused on the concept that deepfakes’ applications are mainly for revenge porn and swaying voters. We have yet to see large scale ops employing them. However, as the recent Oliver Taylor case demonstrated to us, deepfake/cheapfake applications go beyond that.

There is a real potential danger that a good deepfake/cheapfake that is properly backstopped can be transformed into a believable and persuasive individual. This, of course, raises further worrying questions: what can we do to mitigate this without stifling voices that are already struggling to find a platform?

We’re deepfakes on the moon

We’re at a point where it could be argued deepfake videos are more interesting conceptually than in execution. MIT’s Center for Advanced Virtuality has put together a rendition of the contingency speech prepared for Richard Nixon had the moon landing ended in tragedy. It is absolutely a chilling thing to watch; however, the actual clip itself is not the best technically.

The head does not play well with the light sources around it, the neckline of the shirt is all wrong against the jaw, and the voice has multiple digital oddities throughout. It also doesn’t help that they use his resignation speech for the body, as one has to wonder about the optics of shuffling papers as you announce astronauts have died horribly.

No, the interesting thing for me is the decision to show the deceptive nature of deepfakes by using a man who was born in 1913 and died 26 years ago. Does anyone under the age of 40 remember his look, or the sound of his voice outside of parody and movies, well enough to make a comparison? Or is the disassociation from a large chunk of collective memory the point? Does that make it more effective, or less?

I’m not sure, but it definitely adds weight to the idea that, for now, deepfakes—whether video or static image—are more effective as small aspects of bigger disinformation campaigns than as attention-drawing pieces of digital trickery.

See you again in three months?

It’s inevitable we’ll have another tale before us soon enough, explaining how another ghostly entity has primed a fake ID long enough to drop their payload, or sow some discord at the highest levels. Remember that the fake imagery is merely one small stepping stone to an overall objective and not the end goal in and of itself. It’s a brave new world of disruption, and perhaps by the time you’re pulling up another chair, I might even be able to give you a definitive naming convention.
