It’s never been more urgent to know how to identify AI fakes.
Generative AI (artificial intelligence that creates new material) has been around for decades, but it exploded into broader cultural awareness and use in 2021 with the launch of DALL·E. Midjourney was released the following year, opening the floodgates for a new world of generated images, videos, and text.
There are many valuable uses for AI. In fact, at Influenced® we use AI to generate images that support our digital safety training resources (we always clearly indicate we use AI imagery). However, this extremely powerful technology can be used for more sinister purposes.
One example is the rampant use of AI to create sexually explicit, non-consensual “deepfakes”: pictures and videos that mimic a person so convincingly that it’s hard to tell whether the imagery is real. These violating images have affected people from a wide variety of backgrounds, from superstar Taylor Swift to cyberbullied kids in schools.
This can have ramifications far beyond the trauma of being humiliated online. In a recent case, a French woman lost $850,000 when scammers used AI-generated imagery to con her into believing that she was long-distance dating Brad Pitt. The FBI has even issued a warning that scammers are using AI to generate realistic-looking social media profile images, posing as a target’s friends and family members to ask for money.
It’s easy to feel like these cases are extreme and could never fool you. But studies have shown that the average person has, at best, a 50/50 chance of telling AI-generated photos from real ones.
How can you tell if an image you find online is real or fake? Here are five practical tips to identify AI fakes.
1. Be suspicious of perfection
Is someone’s skin impossibly smooth? Are they totally free of freckles and moles? This might be an indicator that the image is AI-generated.
“An aesthetic sort of smoothing effect leaves skin looking incredibly polished,” Henry Ajder, an AI expert from Latent Space Advisory, told AP.
We downloaded a stock image and described it in detail to Midjourney, asking it to create something similar. Can you spot the differences?


Another classic tell is that AI image generation models tend to prioritize faces and bodies that are culturally favored and aesthetically pleasing. This has been shown to produce significant racial, gender, and body-size bias. AI is trained disproportionately on certain kinds of bodies, recognizes certain kinds of bodies better than others, and reflects the biases of the groups who built and control this technology.
Because we know these biases exist, signs to look for include universally light skin, thin bodies, stereotypical gender presentation, perfect hair with no frizz or fly-away strands, and faces that are symmetrical and conventionally attractive. In group shots especially, check whether any diversity is represented. If the people look uncannily homogeneous, it could be AI.
2. Check gravity and physics
AI-generated images often break the basic rules of gravity and physics. For example, a staircase might lead to an impossible destination, or an object might be balanced in an impossibly precarious way. Architecture could look warped, distorted, or simply unrealistic.
If motion is involved, someone might be jumping improbably high or moving in a way that looks unnatural. A lack of motion blur, or hair and clothes moving in an unrealistic way while in motion, could also be signs of AI. This is especially evident in generated or modified videos.
You can also ask yourself, “What would the photographer have had to do in order to take this photo? Would that be possible?”
The photo below, while adorable, has multiple clear signs of being AI: the lack of definition in the fur, the way the subject is hyper-crisp despite being in motion, and the fact that the photographer would have had to lie in a near-impossible position to take it.

3. Look at light and shadows
This is closely related to checking gravity and physics. Generative AI often incorrectly creates shadows — or leaves them out altogether. Check where the light source is in an image. Are the shadows falling accurately on the opposite side of objects?
Lighting might also appear unnaturally bright or tinted. If a series of photos all have a perfect sunset glow, it might be cause for a deeper look. If the coloring feels inconsistent, unrealistic, or hyper-saturated, that could also be a sign.
You can also check things like glare and reflections. Often, AI does not create glare on objects like glasses, windows, or mirrors. Notice where an image may be missing glare, or where a glare seems unrealistic or overblown. Sometimes you can find errors in something as simple as a missing catchlight (the small reflection of the light source) in a subject’s eyes.
4. Be detail-oriented
Even as AI generators get increasingly powerful, they continue to struggle with accuracy in the details. A prime example is anything that includes words or text, such as signs, t-shirts, or logos. AI’s ability to generate readable, correctly spelled text continues to be hit or miss. Check the signage in the background of city scenes in particular for garbled or misspelled text.
Experts at MIT say, “Pay attention to the cheeks and forehead. Does the skin appear too smooth or too wrinkly? Is the agedness of the skin similar to the agedness of the hair and eyes? DeepFakes may be incongruent on some dimensions.”
Check the background details as well as the subject of an image. Do objects get weirdly pixelated or start blurring into each other? Are the proportions unrealistic on things like furniture? Is there a type of plant or foliage that is out of place?
Sometimes, being detail-oriented also means noticing what isn’t there. For example, if an image of someone in a kitchen portrays a space with no clutter, stains, or other signs of being lived-in, that might be a flag.
Consider zooming in on different parts of an image. Does it hold up to scrutiny?
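If you want to go beyond pinching and zooming in your photo viewer, a few lines of Python can crop a region of an image and blow it up for closer inspection. This is only a sketch using the Pillow library; the file name and crop coordinates are placeholders you would swap for the image and region you actually want to scrutinize.

```python
# Sketch: crop a region of an image and enlarge it for close inspection.
# Requires Pillow (pip install pillow). File name and coordinates are placeholders.
from PIL import Image

def zoom_region(path, box, scale=4):
    """Crop `box` (left, upper, right, lower) and enlarge it `scale` times."""
    image = Image.open(path)
    region = image.crop(box)
    # Nearest-neighbor resampling keeps the original pixels intact instead of
    # smoothing them, so artifacts like warped text or blended edges stay visible.
    return region.resize(
        (region.width * scale, region.height * scale),
        Image.Resampling.NEAREST,
    )

if __name__ == "__main__":
    # Example: blow up a 200x200-pixel patch (hands, jewelry, background signage).
    zoom_region("suspect_image.jpg", box=(400, 600, 600, 800)).show()
```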
5. Examine the context
Context is your most powerful tool in figuring out if imagery was created by AI. AI expert Mike Caulfield has developed a leading framework for evaluating online content, called S.I.F.T.:
- Stop
- Investigate the source
- Find better coverage
- Trace the original context
While some of that primarily applies to text, it can be adapted for images. For example, you might be able to investigate the source by reverse image searching a picture using Google Lens. Can you identify where the photo came from? Is there another version of the image posted elsewhere?
You can also examine the supporting text. Social media platforms are increasingly adding tags to images that are created with the help of AI. Many websites will clarify in the fine print if an image was created using AI.
You can also look for inconsistencies between the image and the article or caption around it. For example, is the article about World War II, but the image includes technology that wouldn’t have existed then?
Examining content also involves assessing intent. What would the platform hosting the content have to gain by generating an image? Does the image support a seemingly sensationalist or extreme claim that would otherwise be difficult to believe? Is there any additional evidence for that claim outside of the image (such as links to external, trusted sources)?
Next steps in identifying AI images
Obviously, not every image requires this level of scrutiny. If someone is posting a playful AI-generated image on social media, it doesn’t require a deep dive. But the higher the stakes of the claim (whether it portrays a current event, a person you know, or another subject with real-world implications), the more care you should invest in checking it.
Fortunately, more and more tools exist to quickly assess whether an image has been generated or altered by AI. We fed Fake Image Detector and Sight AI’s tools some test AI images, and both correctly identified them. AI Or Not works similarly. There is even an extension for Google Chrome developed by V7. Several major tech companies, such as Microsoft, are developing more robust AI-detection tools that can be used at the corporate level.
All of these tools are in the early stages of development, and none are foolproof. However, they can be a helpful part of the process of detecting generated images.
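For readers comfortable with a little code, this kind of quick check can also be scripted. The sketch below assumes an open image-classification model hosted on Hugging Face that has been trained to flag AI-generated images; the model ID is a placeholder, not a recommendation, and the output should be treated as one more signal rather than a verdict.

```python
# Sketch: run a local AI-image check with Hugging Face's transformers library.
# Requires transformers and a backend such as PyTorch (pip install transformers torch).
from transformers import pipeline

# The model ID below is a placeholder; substitute an AI-image detector you trust.
detector = pipeline("image-classification", model="your-org/ai-image-detector")

# The pipeline accepts a file path, URL, or PIL image.
results = detector("suspect_image.jpg")
for result in results:
    print(f"{result['label']}: {result['score']:.2%}")
```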
How can I get AI images of myself or a friend taken down?
If someone is spreading AI-generated deepfakes of you or a friend, there are steps you can take.
If you encounter explicit or troubling AI images, you can report them at takeitdown.ncmec.org. The tool can be used with total anonymity, and it is designed to help remove sexualized content involving minors.
Next, you can report the content to the platform itself. RAINN lays out guidelines for reporting and then blocking. It’s important to document all posts and harmful content with screenshots before blocking the account posting them; that way, you can still access the evidence if you need it.