On attention, AR, and the strange truth that more information in your field of view often means less awareness of the world.

A famous experiment, in case you haven’t seen it
In 1999, two psychologists, Daniel Simons and Christopher Chabris, ran an experiment that has since become one of the most famous demonstrations in cognitive science. They filmed a short video of six people in a room passing two basketballs back and forth — three players in white shirts, three in black. They asked viewers a simple question: count how many times the players in white shirts pass the ball.
Most people watch the video carefully, count the passes, and report a number — usually correct. Then the experimenters ask: did you see the gorilla?
The viewers stare at them. What gorilla?
They play the video again. About thirty seconds in, a person in a full gorilla suit walks into the middle of the frame, stops, faces the camera, beats their chest, and walks off the other side. The gorilla is on screen for a full nine seconds. It is not subtle. It is not hidden. It is, by any normal measure, the most interesting thing in the video.
And about half of all viewers, on the first watch, do not see it at all.
This effect has a name. It’s called inattentional blindness, and once you know about it, it changes how you think about pretty much every visual interface you’ve ever used. Including, very specifically, augmented reality.
What inattentional blindness actually is
The technical definition is simple. Inattentional blindness is the failure to notice an unexpected object that appears in plain sight, when your attention is focused on something else. It is not a failure of your eyes. Your retinas captured the gorilla just fine. The light bounced off the gorilla, hit your photoreceptors, and got transmitted to your visual cortex like it was supposed to. The failure happened later, in the part of your brain that decides what gets to enter your conscious awareness.
That part of your brain has a budget. It cannot consciously process everything your eyes are taking in — the visual system delivers far more information every second than conscious awareness can handle. So it has to choose. And what it chooses to make you aware of is heavily influenced by what you’ve told it to look for. You told your brain to count basketball passes, so your brain spent its conscious-awareness budget on basketball passes, and a person in a gorilla suit walked across the stage and never made it onto your awareness invoice at all.
This is not a quirk that happens to weak-minded people. It happens to everyone. It happens to airline pilots. It happens to surgeons. It happens to police officers in foot pursuits. There’s published research documenting all of these. The takeaway from forty years of attention research is that your conscious visual awareness is much, much smaller than you think it is, and the gap between what you see and what you think you see is the place where bad things happen.
Now let’s put that finding in front of an augmented reality display and see what happens.
The car windshield example
Imagine — and this is not science fiction, this is a thing major car companies are actively building right now — you’re driving down a city street at thirty-five miles an hour. Your windshield has an augmented reality heads-up display built into it. There’s a soft blue navigation arrow floating on the road ahead, showing you exactly where to turn. There’s a little badge in the corner that says it’s seventy-two degrees outside. There’s a notification that your favorite coffee shop on the next block is offering a special on lattes. The radar system has highlighted the car ahead of you with a faint green outline, so you know it’s being tracked.
This sounds amazing. This sounds safer, even — your eyes never have to leave the road to look at the dashboard or the GPS screen. The marketing for these systems leans hard on exactly that point. Eyes up, attention forward, all the information you need without ever having to look away.
Here’s what the research actually says. There’s a phenomenon that human factors researchers call cognitive tunneling, and AR heads-up displays cause it pretty reliably. When you put an interesting visual element directly in someone’s field of view, their attention gets stuck on that element, and the rest of the visual field gets tuned out. They are looking through the windshield. They are not seeing through the windshield.
A 2023 study published in Traffic Injury Prevention tested exactly this. The researchers put participants in a driving simulator with an AR heads-up display and showed them video of a normal urban drive. Embedded in the video were unexpected hazards — pedestrians stepping into the road, motorcycles appearing in unexpected places — that the participants were supposed to react to. The researchers measured how often the drivers failed to notice these hazards entirely.
The result was striking and a little uncomfortable. When an unexpected hazard appeared in a part of the visual field where the AR overlay was also displayed, drivers were significantly more likely to miss it. The paper’s term for this is an “on-HUD hazard,” and the rate of inattentional blindness for on-HUD hazards was high enough that the researchers explicitly recommended that future AR heads-up display designs account for this risk and build countermeasures into the interface.
Translated out of academic English: putting a navigation arrow in front of a pedestrian made drivers more likely to hit the pedestrian.
It’s worse in airplanes, and we’ve known for years
Aviation has been wrestling with this problem for longer than the auto industry, because pilots have had heads-up displays since the 1960s. There are decades of published research on what they do to pilot attention. Most of the news is good — well-designed HUDs improve pilot performance in landing, navigation, and bad-weather flying. The whole point of putting flight data on the windshield was to keep the pilot’s eyes outside the cockpit during critical phases of flight, and that goal has been mostly achieved.
But there’s a footnote in the research that doesn’t get talked about as much. A 2021 study published in Applied Ergonomics with the wonderful title “In plane sight” trained novice pilots in a flight simulator and then had them fly two flights — one normal, one while engaged in a distracting auditory task. The experimenters placed unexpected objects in the visual scene during both flights and measured how often the pilots noticed them.
When pilots were focused and undistracted, they noticed most of the objects. When they were distracted by a phone-call-style conversation while flying, they missed a lot of them. The objects were not subtle. They were not hidden. They were placed in the part of the visual field the pilots were actively looking through. The pilots’ eyes saw them and their conscious minds did not. The study’s plain-language conclusion was that inattentional blindness “poses significant flight safety risks” and needs more research.
Pilots are some of the most highly trained visual operators on Earth. They are not amateurs. If their attention can be stolen by a distracting task while they are flying an airplane, your attention can be stolen by an AR notification while you’re driving to the grocery store. There is no special category of human who is immune to this. The gorilla gets all of us.
What about the fancier kind of AR?
I want to be fair to the technology, because it would be easy to make this post sound like a hit piece. It isn’t. The same body of research that documents the inattentional blindness problem also documents real benefits from well-designed AR systems. In aviation studies, AR heads-up displays measurably reduced workload, improved situational awareness, and helped pilots distribute their gaze more evenly between instruments and the world outside. In automotive research, AR HUDs that highlight pedestrians and obstacles can sometimes improve obstacle detection rather than impair it. The technology is not bad. The technology is complicated, and what makes it work or fail is largely a question of how thoughtfully it was designed.
The pattern in the research is pretty consistent on one point, though. The more visual information you put in the field of view, the more the cognitive tunneling effect kicks in. There’s a sweet spot somewhere — enough information to be useful, not so much that it eats your attention budget. Finding that sweet spot is genuinely hard, and most of the early consumer products are not finding it. They are erring on the side of more, because more looks impressive in a demo and impressive demos sell units. The cost of erring in that direction is paid later, by the user who didn’t see the kid on the bicycle.
Why I’m telling you this before you build something
If you take nothing else from this post, take this: adding information to a display is not the same as adding awareness to the user. Those two things sound like they should be the same. They sound like they have to be the same, because more information has to mean more knowledge, right? But the human visual attention system doesn’t work that way. It works by selection. Every pixel you put on the screen is competing for a fixed-size budget of conscious attention, and the pixels that win are not always the pixels that should win.
This is one of the most important things a builder in this space can internalize, and it is one of the things the marketing materials almost never mention. When you’re designing an AR interface and you find yourself thinking “and we could also show them the weather, and the time, and a notification about their next meeting, and a little badge for the coffee shop they like” — stop. Every one of those things is a tax on the user’s attention. Every one of them is a coin you are spending out of a finite wallet. And the things that get pushed out of the wallet are the things you didn’t put there on purpose. Like the kid on the bicycle. Like the gorilla.
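If you like thinking in code, the attention-budget argument can be sketched as a design-time gate. To be clear, this is a toy illustration, not a real model of human attention: the element names, the cost numbers, and the 50% threshold are all invented for this sketch, and real per-element attention costs would have to come from human-factors testing, not a dictionary literal.

```python
# Toy sketch of the "attention budget" idea: every overlay element
# spends out of a fixed wallet, and whatever is left over is all the
# unscripted world gets. All names and numbers here are invented.

ATTENTION_BUDGET = 1.0  # normalized: the user's whole conscious budget

# Hypothetical per-element costs; higher = more attention-grabbing.
OVERLAY_COSTS = {
    "navigation_arrow": 0.30,
    "hazard_highlight": 0.25,
    "temperature_badge": 0.10,
    "coffee_promo": 0.20,
    "meeting_notification": 0.20,
}

def reserve_for_world(overlays, budget=ATTENTION_BUDGET):
    """How much budget remains for the unscripted world (the pedestrian,
    the kid on the bicycle) after the overlays take their share."""
    spent = sum(OVERLAY_COSTS[name] for name in overlays)
    return budget - spent

def can_add(overlays, new_element, minimum_world_share=0.5):
    """Design-time gate: refuse a new overlay element if it would leave
    the real world with less than minimum_world_share of attention."""
    remaining = reserve_for_world(list(overlays) + [new_element])
    return remaining >= minimum_world_share

hud = ["navigation_arrow"]
print(can_add(hud, "temperature_badge"))   # leaves 0.6 for the world
print(can_add(hud, "hazard_highlight"))    # would leave only 0.45
```

The interesting design choice is not the numbers, which are made up, but the shape of the function: the gate asks what the world keeps, not what the display gains, which is exactly the inversion the marketing materials skip.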
The good AR designers of the next decade will be the ones who understand this in their bones. They will design interfaces that show the user less than the technology is capable of showing, on purpose, because they understand that visual attention is a zero-sum game and the user’s life is on one side of the scale. That kind of restraint is rare and valuable and possibly career-defining.
Maybe it’s yours.
This is the second post in a four-part series on the cautionary side of AR. Next up: “The Pokémon Go Body Count” — what happens when the digital layer forgets you have a body in the real world.