Most People Struggle to Tell Real Images From AI Fakes, Study Finds

Just a few years ago, making a realistic fake image required abundant time and skill. Now it can be done quickly with generative AI platforms, and as a result it's becoming increasingly difficult for internet users to tell authentic images apart from AI-generated fakes.
New research from Microsoft's AI for Good Lab reports that many people struggle to identify AI-generated or AI-modified images. In August 2024, Microsoft introduced the "Real or Not" quiz: a simple game that asks users to determine whether images are "real" or "artificial." The study drew on more than 287,000 image evaluations from over 12,500 people who played the quiz. On average, participants correctly identified AI-generated images just 62% of the time.
Researchers also found that people were better at judging photos containing humans, with 65% accuracy, than pictures of nature, at 59%. They suggest the difference may stem from the human brain's strong tendency to recognize faces.
The study also suggests that authentic images containing odd yet genuine elements, such as lighting that seems unnatural at first glance, often lead people to mistake a real photo for an AI-edited one. In other words, people's intuitions about what "looks fake" can be unreliable. Images made with generative adversarial networks (GANs) proved especially difficult, with players misjudging them 55% of the time, as reported by Windows Central.
The Microsoft research team is developing its own AI image detector, which is said to achieve a 95% accuracy rate. Still, the researchers caution that even new detection tools can make mistakes.
With the "Real or Not" quiz still live, you can test how skilled you are at spotting AI-generated images yourself.