This viral ‘photo’ of the pope in a puffer jacket fooled the internet. How can we spot AI deepfakes?

A viral photo of Pope Francis decked out in a stylish Balenciaga puffer jacket and bejeweled crucifix is making the rounds on social media, racking up millions of views on Twitter, Reddit and more.

There’s just one problem — the pope never posed for the pic. The image, which at first glance looks like a photograph, was actually generated by an artificial intelligence program called Midjourney.

The image was initially posted to Reddit last Friday in the subreddit r/midjourney. A day later, the picture had already been reposted to multiple social media platforms, fooling thousands on Twitter.

“I thought the pope’s puffer jacket was real and didn’t give it a second thought,” model and TV personality Chrissy Teigen wrote on Twitter. “No way am I surviving the future of technology.”

Who made the Balenciaga pope deepfake?

The original creator, a Reddit user who went by u/trippy_art_special, has since been suspended from the site. BuzzFeed News identified the creator as Chicago-based Pablo Xavier, a 31-year-old construction worker. He declined to share his last name, fearing he’d be attacked for creating the images.

“I was just blown away,” Pablo Xavier told BuzzFeed News after the pictures went viral. “I didn’t want it to blow up like that.”

Although he harboured no ill will toward the pontiff — “I just thought it was funny to see the Pope in a funny jacket” — Pablo Xavier said he’s seen the image being used to criticize the church for lavish spending.

It’s “definitely scary,” he said. “People are running with it and thought it was real without questioning it.”

The pope pictures are just the latest in a line of convincing “deepfakes”: synthetic media that use AI and deep learning to fabricate realistic images or videos of events that never happened. Days earlier, another controversial AI image went mega-viral, this time depicting former U.S. President Donald Trump being arrested.

Although convincing, the technology isn’t perfect yet. Here are some tips on how to tell current AI images from real photos.

How to spot AI images

While convincing from afar, current AI images tend to fall apart when you pay attention to the details — especially the eyes, hands and clothing. Generators commonly produce misshapen facial features, eyes looking in different directions, mismatched or missing earrings, and accessories that blend into the skin or clothing.

Take this pope picture for instance. Zoom into the eyes and you might notice something off about his glasses — the shadow of the frame is distorted, and it appears the right lens is both merging into his cheek and jabbing through his eyelid.

Now look at his right hand, gripping the air above a floating, distorted coffee cup. Until recently, AI image generators struggled to depict hands, often drawing them as multi-fingered messes. The latest version of Midjourney has largely fixed the issue, but it still has difficulty showing how fingers interact with their surroundings.

This is partly because the image generator works by mashing together concepts without understanding exactly how they fit together — it has no notion of physics or gravity, which explains the floating cup and the intangible glasses.

Keen-eyed viewers might also have noticed that the pope’s crucifix is missing half of its chain and that the folds of clothing around his collar seem to twist into each other. The crucifix itself is asymmetrical and crooked.

Though it doesn’t apply here, one of the biggest giveaways is text in the image. For now, AI image generators struggle to reproduce writing; any text tends to come out as illegible scribbles instead.

Finally, take a look at the title and comment section of the post. What are people saying about it? You can also run a reverse image search to see where else the image has appeared; this could surface articles or experts calling the image out.

Yet these tips may soon become outdated. AI technology is developing at a breakneck pace and is steadily improving on all of these pitfalls. Researchers are working on detection systems, but many of those currently available appear to be only intermittently accurate.

As deepfakes grow more convincing over time, one thing has become clear: seeing is no longer believing.
