When AI Sees Optical Illusions — What It Reveals About Brains

  • Deep neural networks such as PredNet can be fooled by motion illusions, supporting predictive-coding explanations of perception.
  • Key differences remain: AI models often lack attention mechanisms and perceive scenes differently from humans.
  • Quantum-inspired networks can reproduce bistable switches (Necker cube, Rubin vase) with human-like timing.
  • These AI experiments offer a controllable, ethical way to probe how brains extract visual information and how perception changes in space.

How researchers tested AI with optical illusions

Researchers trained a predictive deep neural network called PredNet on videos captured by head-mounted cameras to approximate natural human visual input. PredNet learns to predict future frames from past frames, mirroring the predictive-coding theory of perception.

When presented with variations of the rotating snakes illusion — a static pattern that humans perceive as motion — PredNet produced motion-like predictions for the same images that fool people. The system had never seen illusions during training, yet it generalized learned motion rules to these stimuli.

Why this supports predictive coding

Predictive coding proposes the brain anticipates sensory input and then updates predictions when actual input deviates. PredNet’s mistake suggests the illusion contains features that trigger motion predictions, causing both artificial and biological systems to infer movement where none exists.
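The core loop of predictive coding is simple to state: hold a prediction, compare it against incoming sensory data, and nudge the prediction toward whatever error remains. A minimal sketch (this is an illustrative toy, not PredNet's actual architecture; the function name and learning rate are assumptions for the example):

```python
import numpy as np

def predictive_coding_step(prediction, observation, learning_rate=0.1):
    """One update of a minimal predictive-coding loop:
    compute the prediction error, then move the prediction
    a small step toward the observed input."""
    error = observation - prediction
    return prediction + learning_rate * error, error

# Toy run: the model starts out predicting nothing (zeros)
# and gradually converges on a constant sensory input.
prediction = np.zeros(4)
for t in range(50):
    observation = np.full(4, 1.0)
    prediction, error = predictive_coding_step(prediction, observation)

print(np.round(prediction, 3))  # close to the observed input after 50 steps
```

An illusion, in this framing, is an input whose features drive the error signal as if motion were present, so the system's best prediction contains movement that the stimulus does not.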

But the match is not perfect. Humans typically experience a gaze-dependent stabilization — a circle appears to stop when fixated — while PredNet reported all circles moving simultaneously. That discrepancy highlights a missing attention mechanism in the model; PredNet processes the whole image uniformly rather than focusing resources on a specific location.

Differences that matter

Behavioral and fMRI studies show that shape and motion perception rely on partly distinct processes. Clinical cases, such as people who regain sight after early blindness, suggest motion perception can be more resilient than shape processing. AI models can help isolate which computations produce each effect.

Quantum-inspired AI and bistable perception

Other teams explored different architectures. Ivan Maksymov combined quantum-tunneling-inspired dynamics with neural networks to model bistable images like the Necker cube and the Rubin vase. His system alternated between interpretations over time, with switching intervals similar to human observers.

Maksymov stresses this does not mean the brain is quantum; rather, quantum-inspired mathematics can be an effective way to model probabilistic decision dynamics in perception (a field called quantum cognition).
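The qualitative behavior being modeled, spontaneous alternation between two stable interpretations, can be reproduced with far simpler mathematics than Maksymov's network. The sketch below (an illustrative stand-in, not his quantum-tunneling model) uses Langevin dynamics in a double-well potential: each well represents one reading of the Necker cube, and noise occasionally kicks the state over the barrier, producing a perceptual switch.

```python
import math
import random

def simulate_bistable(steps=200_000, dt=0.01, noise=0.5, seed=0):
    """Langevin dynamics in the double well V(x) = x**4/4 - x**2/2.
    The two wells stand in for the two interpretations of a bistable
    image; returns the dwell time spent in each interpretation before
    noise drives a switch."""
    rng = random.Random(seed)
    x, side, dwell, dwells = 1.0, 1, 0, []
    for _ in range(steps):
        drift = x - x**3  # -dV/dx: pulls the state toward the nearest well
        x += drift * dt + noise * math.sqrt(dt) * rng.gauss(0, 1)
        dwell += 1
        if (x > 0) != (side > 0):  # crossed the barrier: a switch
            dwells.append(dwell * dt)
            side, dwell = -side, 0
    return dwells

dwells = simulate_bistable()
print(f"{len(dwells)} switches, mean dwell {sum(dwells)/len(dwells):.2f} time units")
```

With the noise term removed the state stays in one well forever; stochasticity is what generates the human-like irregular switching intervals.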

Broader implications: neuroscience and spaceflight

AI offers an ethical, manipulable proxy for experiments that would be difficult on humans. Models can be tweaked to test hypotheses about attention, predictive priors, and sensory deprivation.

This work also intersects with space research. Astronauts show altered bistable perception after extended time on the ISS, suggesting gravity influences depth cues. AI models that reproduce these effects could help predict and mitigate perceptual issues for long-duration missions.

Bottom line

AI is not yet a perfect mirror of human vision, but it can reproduce specific illusion-driven errors and reveal which computational principles—predictive coding, attention, stochastic switching—underpin perception. Those insights feed both neuroscience and practical applications, from diagnostics to astronaut training.
