For several years now, the public has been warned about the risk posed by images created by artificial intelligence, also known as deepfakes. But until recently, it was relatively easy to tell an AI-generated image from a photograph.
Not anymore. Over the weekend, an AI-generated image of Pope Francis wearing a Balenciaga down jacket went viral online, fooling many internet users.
In just a few months, publicly available AI image tools have become powerful enough to create photorealistic images. While the image of the Pope did contain clear signs of a fake, it was convincing enough to fool many internet users, including the celebrity Chrissy Teigen. “I thought the Pope’s puffer jacket was real and didn’t give it a second thought,” she wrote. “No way am I surviving the future of technology.”
While AI-generated images have gone viral before, none have fooled as many people as quickly as the image of the Pope. The picture was bizarre, and, it should be noted, much of its virality came from people deliberately sharing it as a joke. But history may remember the Balenciaga Pope as the first truly viral disinformation event fueled by deepfake technology, and a sign of worse to come.
Deepfake victims, especially women targeted by non-consensual deepfake pornography, have been warning about the risks of this technology for years. But in recent months, image-generation tools have become far more accessible and powerful, producing better fakes of every kind. As AI advances rapidly, it becomes increasingly difficult to tell whether an image or video is real. This could significantly increase public susceptibility to foreign influence operations, enable targeted harassment of individuals, and erode trust in the news.
Here are some tips to help you recognize AI-generated images today, and avoid being fooled by even more convincing generations of the technology in the future.
How to recognize an image created by AI today
If you look closely at the Balenciaga Pope image, you will find several clear signs that it was created with the help of AI. The crucifix on his chest hangs inexplicably in mid-air, with only white down jacket where the other half of the chain should be. In his right hand he holds what looks like a blurry coffee cup, but his fingers are closed around thin air rather than the cup itself. His eyelid somehow merges with his glasses, which in turn blend into their own shadow.
Today, identifying an AI-generated image often comes down to examining these intricate details. AI image generators are essentially pattern replicators: they have learned what the Pope looks like and what a Balenciaga down jacket might look like, and they can seamlessly merge the two. But they do not (yet) understand the laws of physics. They have no idea that a crucifix cannot float in the air without a chain supporting it, or that glasses and the shadow behind them are not a single object. It is in these often peripheral parts of an image that humans can intuitively spot inconsistencies the AI misses.
But soon, the technology will improve enough to correct these kinds of errors. Just a few weeks ago, Midjourney, the AI tool used to create the image of the Pope, was unable to render realistic human hands. To check whether an image of a person was AI-generated, you could look for seven blurry fingers or some other alien appendage. Not anymore. The newest version of Midjourney can generate realistic-looking hands, removing perhaps the easiest way to identify an AI image. Since AI image generators are evolving so quickly, it is safe to assume the advice above could quickly become outdated.
How not to be deceived in the future
For now, media literacy techniques may be your best defense against AI-generated images. Asking the following questions won’t help you spot 100% of fakes, but it will help you catch more of them, and make you more resilient to other forms of misinformation. Always ask: where did this image come from? Who is sharing it, and why? Does it conflict with other reliable information you have access to?
When it comes to viral images, it’s best to check what others are saying. Google and other search engines offer reverse image search tools that let you see where an image has already been posted online and what people are saying about it. With these tools, you can find out whether experts or trusted publications have determined an image is fake, and trace where it was first posted. If an image supposedly taken by a news photographer first appeared under a pseudonym on a social networking site, that is reason to doubt its authenticity.
If you’re a Twitter user, the community notes feature can often give you more information about an image, though usually only some time after the tweet was first posted. After the image of the Pope had already gone viral, a note was appended to the original post: “This image of Pope Francis is an AI-generated image, not a real one,” the note reads. “The image was created in the Midjourney AI image-generation app.”
Are there technological solutions?
There is plenty of software on the market that claims to detect deepfakes, including an offering from Intel that claims 96% accuracy in detecting deepfake videos. But the free online tools available to ordinary users are far less reliable. One free AI image detector, hosted on the artificial intelligence platform Hugging Face, correctly determined with 69% confidence that the Balenciaga Pope image was AI-generated. But when presented with an AI-generated image of Elon Musk, also created with the latest version of Midjourney, the tool got it wrong, saying it was 54% sure the image was genuine.
When AI researchers at Nvidia and the University of Naples set out to find out how difficult it would be to build an AI-generated image detector, they found several limitations, according to a paper they published in November 2022. They found that AI image generators do indeed leave invisible telltale marks on the images they create, and that these “hidden artifacts” look slightly different depending on which program produced the image. The bad news is that these artifacts tend to become harder to detect when an image is resized or its quality is reduced, as often happens when images are posted and re-posted on social media. According to Annalisa Verdoliva, one of the paper’s co-authors, the researchers built a tool that could determine the Balenciaga Pope image was AI-generated. But while the tool’s code is available online, it is not built into a web app, making it hard for the average user to access.
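To give a rough sense of what extracting such “hidden artifacts” involves: detectors of this kind typically look at an image’s high-frequency noise residual, the faint texture left over after the visible content is smoothed away. The sketch below is a deliberately minimal stand-in for that idea, using a simple box blur rather than the learned denoisers real detectors rely on; the function and image names are illustrative, not from the paper.

```python
import numpy as np

def noise_residual(img: np.ndarray, k: int = 3) -> np.ndarray:
    """Return the high-frequency residual of a grayscale image.

    A crude stand-in for hidden-artifact extraction: smooth the image
    with a k x k mean filter, then subtract the smoothed version from
    the original. What remains is the fine-grained noise pattern that
    AI-detection research examines for generator fingerprints.
    """
    h, w = img.shape
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Box blur built from shifted views, to avoid extra dependencies.
    smoothed = np.zeros((h, w))
    for dy in range(k):
        for dx in range(k):
            smoothed += padded[dy:dy + h, dx:dx + w]
    smoothed /= k * k
    return img.astype(float) - smoothed

# Toy demo: a perfectly flat image has no residual at all, while added
# noise shows up clearly in the residual.
rng = np.random.default_rng(0)
flat = np.full((32, 32), 128.0)
noisy = flat + rng.normal(0, 5, size=(32, 32))
print(np.abs(noise_residual(flat)).mean())   # essentially zero
print(np.abs(noise_residual(noisy)).mean())  # clearly nonzero
```

This also illustrates why re-posting degrades detection: resizing and recompression smooth away exactly the fine-grained residual the method depends on.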
Another method, first described in a paper published earlier this month, may generalize better. A team of researchers found that when an advanced AI image generator known as a diffusion model is fed an AI-generated image, it can easily produce a near-exact copy of the input. In contrast, the model struggles to reproduce even a rough copy of a real photograph. Their discovery has not yet been turned into an accessible online tool, but it offers hope that future applications could reliably determine whether an image was created by artificial intelligence.
Write to Billy Perrigo at billy.perrigo@time.com.