
The liar’s dividend of deepfake tech means we don’t trust what we can see

A video of Barack Obama was manipulated to make him appear to call Donald Trump a ‘dipshit’

Deepfake videos featuring everyone from Barack Obama to Tom Cruise are everywhere on social media. They could be fun memes – or they could be dangerous disinformation, writes Elena Siniscalco

“The FBI has informed me that it is very likely that these are visitors of extraterrestrial nature”, said US President Joe Biden last week in a speech about the flying objects identified over US airspace. “I advise the American people to remain calm”, he added in a video that racked up millions of views.

Except he didn’t. The video was a deepfake – a piece of multimedia content altered through artificial intelligence to look real.

If you have an active internet presence, you have encountered deepfakes. They’re everywhere. There was the viral video of Mark Zuckerberg admitting control of “billions of people’s stolen data” in 2019 – a piece of satire forming part of an art exhibition. There is a Tom Cruise deepfake account making waves on TikTok. And there are much darker examples like deepfakes of Ukrainian president Volodymyr Zelensky calling on his citizens to surrender.

Deepfakes permeate online culture; they have penetrated the meme industry, from TikTok to Twitter. There’s nothing wrong with that: as is often the case with tech, deepfake technology is not inherently good or bad.

But they are a symptom of a broader development: we are past the age when we can believe what we see with our own eyes. Where audio and video were once considered definitive proof – a recording of a phone conversation could be evidence in a police investigation, footage of a protest could document police brutality – they can now just as easily be faked. We are in the world of synthetic media, or “ChatGPT on steroids”, as Touradj Ebrahimi, professor at the Swiss Federal Institute of Technology in Lausanne, calls it.

Ebrahimi says we have witnessed a “true democratisation of the manipulation of visual and audio content”. This democratisation is precisely the memes’ strength: because they are accessible and almost anyone can create and enjoy them, they build community. They bring people together, allowing them to engage with topics they would not necessarily encounter otherwise.

This is true particularly for politics. A video of Barack Obama calling Donald Trump “a total and complete dipshit” broke the Internet four years ago. It was a deepfake made by American filmmaker Jordan Peele to warn people about the importance of relying on trusted news sources in an era of disinformation. In becoming an Internet sensation, it also drew attention to what was soon to become a key topic of public conversation.

When deepfakes emerged in 2017, it was hard to make good ones: you needed powerful processors, high-end desktops and specialist skills. Now you can make them on your phone. The viral Twitter meme of Elon Musk face-swapped into The Rock was made with Reface, an app that has been downloaded more than 250 million times. Chief executive and co-founder Dima Shvets says predictions of synthetic media “causing an infocalypse” have so far not been realised.

Yet the problem is not just disinformation. The “liar’s dividend” is what really worries Jeffrey McGregor, chief executive of software company Truepic. “Bad actors can undermine all digital imagery by suggesting it may be fake”, he says. In a world where conspiracy theories are more mainstream than they have ever been, the impact of this is momentous. It has already happened in Gabon in 2019, when the president, who had been ill for months, appeared in a video explaining he had suffered a stroke but was now stable. Many thought the video looked odd; the military went further, staging an attempted coup and claiming the video was a deepfake and the president was actually dead. Needless to say, he wasn’t.

Governments have started to wake up to the danger. California has made it illegal to create or distribute videos, images or audio of politicians modified to resemble real footage within 60 days of an election. But with such chameleonic AI, legislation alone can’t do all the heavy lifting. “Users are also accomplices: they retweet, they share, they contribute”, says Siwei Lyu, professor at the Department of Computer Science and Engineering at the University at Buffalo. For him, the best way to tackle the issue is to improve users’ awareness of, and resilience to, fake media.

The line between memes and disinformation becomes thin with deepfakes. We should train ourselves to detect what’s real and what’s fake, and we should think about the motivation behind making a deepfake. There are clues: Ricardo Amper, chief executive of Incode, suggests looking for “unusual skin tones, strange lighting, and oddly positioned shadows, comparing all these elements to an original reference”. And ultimately, in a world that increasingly relies on everything digital, we must take more responsibility for everything we share.
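The same intuition can be roughed out in code. The sketch below is a minimal illustration only, not a technique described by any of the experts quoted here: it assumes Python with the Pillow imaging library and a hypothetical saved frame called suspect_frame.jpg, and uses error level analysis, a basic image-forensics trick in which a picture is recompressed and compared with itself so that regions edited after the last save may stand out.

```python
from PIL import Image, ImageChops

def error_level_analysis(path: str, quality: int = 90) -> Image.Image:
    """Re-save an image as JPEG and amplify the difference.

    Areas pasted or retouched after the original save often recompress
    differently, so they can appear brighter in the returned map. This is
    a blunt instrument: it raises questions, it does not prove anything.
    """
    original = Image.open(path).convert("RGB")
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")

    diff = ImageChops.difference(original, resaved)

    # The raw differences are faint; rescale them so they become visible.
    max_diff = max(band_max for _, band_max in diff.getextrema()) or 1
    return diff.point(lambda value: min(255, int(value * 255 / max_diff)))

if __name__ == "__main__":
    # "suspect_frame.jpg" is a placeholder for whatever frame you exported.
    error_level_analysis("suspect_frame.jpg").save("ela_map.png")
```

A bright patch in the output is only a prompt to look closer; serious detection relies on trained models and, as Lyu argues, on sceptical users.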
