Can You Spot an AI-Generated Image? (Be Honest)

by Claire L. Brady, EdD

A few weeks ago, my colleague and friend Josie Ahlquist asked me a simple question: “Do you think you can spot an AI-generated image?”

My answer was immediate: No.

Then I paused and added, “But I can spot a bad AI image.”

That distinction matters.

For a long time, AI images were easy to identify because they were sloppy: extra fingers, strange shadows, warped faces, or text that made no sense. What we were really spotting wasn’t “AI”; it was poor execution. As the tools have improved, those giveaways have largely disappeared.

Which is why I’m throwing down a challenge for higher education professionals this week:

Can you spot an AI-generated image?

Honestly, I don’t think you can.

Leon Furze recently updated his Real or Fake AI image game—Can You Spot an AI-Generated Image?—using the newest generation of image tools, including Google’s Nano Banana Pro. The quiz presents 10 pairs of images and asks viewers to identify which ones were created by AI.

I took it.

I didn’t do as well as I expected.

And that’s exactly the point.

For years, we taught people to look for those “tells.” The cues made AI images feel uncanny: almost right, but not quite. That phase is over. We’ve left the uncanny valley. Today’s AI-generated images are clean, coherent, and contextually convincing. In many cases, they’re indistinguishable from real photographs.

That shift matters for higher education.

Why “Spot the AI” Is No Longer Enough

Many institutions still operate under an implicit assumption that faculty, staff, or students will notice when something isn’t real. Visual detection is often treated as a sufficient safeguard.

Leon Furze’s quiz quietly dismantles that assumption.

Even educators who understand AI, and who actively teach media literacy, struggle to reliably identify AI-generated images by sight alone. And the technology is improving faster than most institutional guidance, training, and policy frameworks can adapt.

This isn’t just a technical issue. It’s social, legal, and political.

AI-generated images now shape perception, influence trust, and move faster than verification. They raise questions about consent, likeness, and ownership that the law is still catching up to. And they’re already being used—intentionally or not—to mislead, persuade, and reinforce existing narratives.

For higher education professionals, the risk isn’t that people will fail a quiz. The risk is that we continue to build teaching strategies, academic integrity practices, and communication policies on outdated assumptions about what people can reliably detect.

What We Should Be Teaching Instead

Rather than asking “Can you spot the AI?” we need to help people ask better questions:

  • What is the purpose of this image?

  • What context is missing?

  • Who benefits if this image is taken at face value?

  • What verification steps are appropriate in this setting—classroom, marketing, research, or crisis response?

AI literacy now requires interpretation, not just detection. It requires slowing down judgment, triangulating sources, and understanding how meaning is constructed—not just how images are produced.

A Leadership Moment for Higher Ed

Leon Furze’s quiz isn’t about winning or losing. It’s about recalibrating our assumptions.

Higher education has always been in the business of sense-making—helping people interpret information, question sources, and understand how knowledge is constructed. AI-generated images don’t change that mission. They sharpen it.

What has changed is the usefulness of our old rules of thumb. We can’t train our way forward by pointing out warped fingers or odd shadows, and we can’t policy our way forward by assuming people will “just know.” Visual detection is no longer a reliable safeguard.

Discernment now matters more than detection. Context matters more than confidence. And institutional guidance matters more than individual intuition.

So yes—take the quiz if you’re curious.

But the more important question for higher education leaders is this: how are we preparing our communities to interpret, verify, and responsibly use information in a world where “real” is no longer something you can confirm with a glance?

Checklist: Using AI-Generated Images Without Losing Trust

Before publishing an AI-generated image, pause and walk through the following.

1. Purpose Check

☐ Is this image clarifying, illustrating, or supporting the message?

☐ Would a real photo or simple graphic serve the purpose better?

☐ Is this image adding meaning—not just visual polish?

2. Representation & Accuracy

☐ Does the image depict people or places that don’t exist, or events that never occurred?

☐ Could someone reasonably mistake this for a real photograph?

☐ Is the image appropriate for the context (instructional, marketing, informational)?

The more closely an image appears to represent reality, the more accuracy and transparency matter.

3. Context & Placement

☐ Is it clear how this image should be interpreted?

☐ Does the surrounding text provide sufficient context?

☐ Is the image supporting the content—not distracting from it?

4. Transparency & Disclosure

☐ Is AI use disclosed where appropriate (caption, alt text, credit line)?

☐ Is the language neutral and clear (e.g., “AI-generated image”)?

☐ Would viewers feel informed—not misled—if they noticed the disclosure?

5. Consistency & Standards

☐ Is this consistent with how AI images are used elsewhere by your unit or institution?

☐ Does the visual style align with your broader communications?

☐ Would repeated use of this style build familiarity rather than confusion?

6. Accessibility & Inclusion

☐ Is alt text accurate and descriptive?

☐ Does the image avoid reinforcing stereotypes or unintended bias?

☐ Does it support accessibility rather than creating barriers?

7. Final Trust Check

☐ Does this image help people understand—or just persuade?

☐ Would you feel comfortable explaining why this image was AI-generated?

☐ Does it align with institutional values around honesty and representation?

Bottom line:

AI-generated images work best when they clarify ideas—not when they blur the line between illustration and reality.
