Always On

On what we already know about minds that never look away — and what it might mean to wear the screen on your face.


A different kind of question

The first three posts in this series were about things we can measure. Eye strain, with tens of thousands of subjects across decades of optometry research. Inattentional blindness, with controlled studies in driving simulators and flight cockpits. Pedestrian deaths, with police accident reports and peer-reviewed papers. All of that is real. All of that is solid ground.

This last post is going to walk us off the solid ground a little, and I want to be honest about that up front. What happens when augmented reality moves from a thing you sometimes use to a thing you always wear is, as of this writing, an open question. The glasses are not yet ubiquitous. The contact lenses don’t exist yet. The data set we’d need to answer the big version of the question hasn’t been collected, because the experiment hasn’t been run on a big enough population for long enough.

So I’m not going to make predictions. I’m not going to tell you what AR glasses are going to do to society in 2035. I have no idea, and anybody who tells you they do is selling you something. What I’m going to do instead is something a little sneakier and a lot more honest: I’m going to walk you through what we already know about what phones have done to human attention, memory, and presence — because phones are basically AR glasses that haven’t quite made it onto your face yet, and the research on phones is a lot further along than the research on glasses. Then I’ll let you do the math.

Continue reading Always On

The Pokémon Go Body Count

What happens when an augmented reality layer forgets you still have a body in the real world — and what the first big real-world dataset has to teach the next generation of builders.


The summer the world went outside

In July of 2016, something happened that the technology industry had been predicting for about twenty years and had nonetheless completely failed to prepare for. A small company called Niantic released a free mobile game called Pokémon Go, which used your phone’s camera and GPS to overlay little cartoon monsters onto the real world. To catch them, you had to physically walk to where they were. To battle in a “gym,” you had to physically stand near the gym’s real-world location. The game’s slogan was Gotta Catch ’Em All, and within a few weeks, what felt like half of the developed world was outside trying.

If you’re old enough to remember it, you remember the surreal sight of grown adults wandering through public parks at midnight in groups of twenty, their faces lit up by phone screens, occasionally letting out a cheer when somebody caught a rare one. People who had not voluntarily been outside in years were suddenly logging miles on foot. Cardiologists wrote excited articles about it. Public health researchers ran studies on the activity benefits. For a brief shining moment, it looked like augmented reality might single-handedly solve the obesity crisis.

And then the other dataset started coming in.

Continue reading The Pokémon Go Body Count

The Gorilla You Didn’t See

On attention, AR, and the strange truth that more information in your field of view often means less awareness of the world.


A famous experiment, in case you haven’t seen it

Sometime around 1999, two psychologists named Daniel Simons and Christopher Chabris ran an experiment that has since become one of the most famous demonstrations in cognitive science. They filmed a short video of six people in a room passing two basketballs back and forth — three players in white shirts, three in black. They asked viewers a simple question: count how many times the players in white shirts pass the ball.

Most people watch the video carefully, count the passes, and report a number — usually correct. Then the experimenters ask: did you see the gorilla?

The viewers stare at them. What gorilla?

They play the video again. About thirty seconds in, a person in a full gorilla suit walks into the middle of the frame, stops, faces the camera, beats their chest, and walks off the other side. The gorilla is on screen for a full nine seconds. It is not subtle. It is not hidden. It is, by any normal measure, the most interesting thing in the video.

And about half of all viewers, on the first watch, do not see it at all.

This effect has a name. It’s called inattentional blindness, and once you know about it, it changes how you think about pretty much every visual interface you’ve ever used. Including, very specifically, augmented reality.

Continue reading The Gorilla You Didn’t See

Your Eyes Were Not Built For This

The physical cost of putting a digital layer in front of your face — and why the most exciting thing about AR is also the part nobody’s solved yet.


A weird little experiment you can try right now

Hold your finger up about six inches from your face and look at it. Really look at it — focus on the fingerprint, the little ridges, the half-moon at the base of your nail. Now, without moving your finger, shift your focus past it to something across the room. A doorway. A window. Watch what happens to your finger. It splits into two ghostly versions of itself, blurred and translucent, while the doorway snaps into clarity.

Now do the reverse. Focus on the doorway. The doorway is sharp; your finger is a fuzzy double-image. Switch back. Doorway blurs, finger sharpens. Switch again.

What you just did is two things at once, and your brain handled them so smoothly you probably never noticed. Your eyeballs physically rotated inward and outward to point at the right depth — that’s called vergence. And the lenses inside your eyes physically squished and stretched to focus at the right distance — that’s called accommodation. Your visual system has been doing these two things together, in perfect lockstep, since you were a few months old. They are so tightly coupled that neuroscientists don’t really treat them as two separate systems anymore. They’re one system with two outputs, and the outputs always agree with each other, because in the natural world they always have to agree.
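The size of that coupling is easy to put numbers on. Here’s a rough back-of-envelope sketch, assuming an interpupillary distance of about 63 mm (an average value; the names `IPD_M`, `vergence_deg`, and `accommodation_diopters` are made up for this illustration, not from any standard library):

```python
import math

IPD_M = 0.063  # assumed average interpupillary distance, in meters

def vergence_deg(distance_m):
    """Total angle the two eyes rotate inward to converge on a point, in degrees."""
    return math.degrees(2 * math.atan((IPD_M / 2) / distance_m))

def accommodation_diopters(distance_m):
    """Focusing demand on the lens, in diopters (1 / distance in meters)."""
    return 1.0 / distance_m

finger = 0.15   # your finger, roughly six inches away
doorway = 3.0   # something across the room

for label, d in [("finger (15 cm)", finger), ("doorway (3 m)", doorway)]:
    print(f"{label}: vergence {vergence_deg(d):.1f} deg, "
          f"accommodation {accommodation_diopters(d):.2f} D")
```

At 15 cm the eyes converge by roughly 24 degrees and the lens focuses at about 6.7 diopters; across the room both demands drop to nearly zero. In the natural world those two numbers always move together. A display with a single fixed focal plane can ask vergence to track a virtual object at one distance while accommodation stays pinned at another, which is exactly the disagreement the next paragraph is about.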

I’m telling you about this because in the last few years, humanity invented a way to make them disagree. And then we strapped that thing to our faces and called it the future.

Continue reading Your Eyes Were Not Built For This

You Don’t Need to Know Surgery to Build a Surgery Trainer

How real builders work with experts in fields they’ll never master — and why that’s the whole point.


A confession to start with

Let me tell you about a thing that happens to almost every builder I’ve ever met, including me.

You get an idea. A good one. The kind of idea that makes you sit up a little straighter in your chair, open Claude in a new browser tab, and start sketching on the back of a receipt. And then, about ninety seconds in, a little voice shows up and says:

“Who do you think you are? You don’t know anything about that.”

And the idea dies right there, on the back of the receipt, because the voice sounds reasonable. You don’t know anything about medicine. You don’t know anything about aerospace. You don’t know anything about education theory or marine biology or whatever the idea was touching. So clearly you have no business building anything near it. Right?

Wrong. And I want to spend the next few minutes showing you exactly why — using one of the coolest examples I know.

Continue reading You Don’t Need to Know Surgery to Build a Surgery Trainer

Realm Forge Academy: Forging the Future of Education with Modern Tools and Purpose-Driven Pedagogy

By D.W. Denney


The education landscape is shifting. Traditional four-year degrees are pricing out the very people who need them most, while the industries shaping our future — immersive technology, artificial intelligence, blockchain, game development, spatial computing — are evolving faster than any institution can keep up with. Meanwhile, aspiring creators and builders are left choosing between crushing debt and being left behind.

Realm Forge Academy was built to solve that problem.

Build Worlds. Not Debt.

That’s not just a tagline. It’s a promise — and it’s the founding principle behind everything we do at RFA.

Continue reading Realm Forge Academy: Forging the Future of Education with Modern Tools and Purpose-Driven Pedagogy

Where Two Worlds Become One: Building a Browser-Based Mixed Reality Tool for Education

By Donald Denney | Realm Forge Academy

There’s a moment in every emerging technology’s lifecycle when it crosses over from novelty to necessity. For mixed reality, that moment is now — and it’s happening inside web browsers.

As part of the Realm Forge Academy course Defining a New Reality: Enter the Metaverse, I’ve built a browser-based mixed reality tool that lets students use their device camera to detect a flat surface in their physical environment and place a 3D avatar into a fixed position in the real world. No app store. No headset. Just a browser and curiosity.
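One common way to wire up exactly this flow in a plain browser is the WebXR Device API’s hit-test feature. The sketch below is an assumption about the approach, not the tool’s actual code; the function names `startAR` and `yawTowards` are invented here, and rendering the avatar itself is left to whatever 3D library you prefer:

```javascript
// Minimal sketch of browser-based AR placement via the WebXR Device API.
// Assumes a page served over HTTPS in an AR-capable browser.

// Pure helper: yaw (radians) about the y axis that turns an avatar placed
// at `hit` to face the viewer at `cam`, while keeping it upright.
function yawTowards(cam, hit) {
  return Math.atan2(cam.x - hit.x, cam.z - hit.z);
}

async function startAR(canvas) {
  // Request an AR session that can ray-cast against detected surfaces.
  const session = await navigator.xr.requestSession('immersive-ar', {
    requiredFeatures: ['hit-test'],
  });
  const gl = canvas.getContext('webgl', { xrCompatible: true });
  await session.updateRenderState({
    baseLayer: new XRWebGLLayer(session, gl),
  });

  const refSpace = await session.requestReferenceSpace('local');
  const viewerSpace = await session.requestReferenceSpace('viewer');
  // Continuously cast a ray from the center of the view into the room.
  const hitTestSource = await session.requestHitTestSource({ space: viewerSpace });

  session.requestAnimationFrame(function onFrame(time, frame) {
    const results = frame.getHitTestResults(hitTestSource);
    if (results.length > 0) {
      // The first result is the nearest detected surface along the ray;
      // its pose is where the avatar would be anchored in world space.
      const pose = results[0].getPose(refSpace);
      // placeAvatar(pose.transform); // rendering left to your 3D library
    }
    session.requestAnimationFrame(onFrame);
  });
}

// Only attempt a session where WebXR actually exists.
if (typeof navigator !== 'undefined' && navigator.xr) {
  // startAR(document.querySelector('canvas'));
}
```

The design choice worth noting: the hit test runs every frame, but the avatar is placed once and then left in a fixed pose, which is what makes it feel anchored to the real world rather than glued to the camera.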

[Image: Screenshot of the MR tool detecting a surface and placing an avatar]

Continue reading Where Two Worlds Become One: Building a Browser-Based Mixed Reality Tool for Education

🔮✨ The Architecture of Wonder: Understanding Magic Systems in Fantasy

By Professor DeeDubs | Realm Forge Academy


When we watch a wizard cast a spell in our favorite fantasy story, something magical happens beyond the fictional incantation itself. We either lean forward, invested in whether the magic will work—or we lean back, sensing that the author will simply make whatever happens most convenient for the plot. The difference between these two experiences isn’t luck or talent alone. It’s architecture.

Fantasy author Brandon Sanderson has articulated a framework for thinking about magic systems that has quietly revolutionized how creators approach the fantastical. His observations—often called “Sanderson’s Laws of Magic”—aren’t rules to be followed rigidly, nor are they secrets known only to published authors. They’re learnable principles that help us understand why some magical moments leave us breathless while others leave us shrugging.

More importantly, they’re tools you can use in your own creative work.

Continue reading 🔮✨ The Architecture of Wonder: Understanding Magic Systems in Fantasy

The Hybrid Strategy: How Power Users Actually Work with AI

Combining Platforms for Maximum Effectiveness in the Modern Digital Landscape


In the rapidly evolving world of artificial intelligence, a quiet revolution is taking place—not in the technology itself, but in how the most sophisticated users are deploying it. While casual observers debate which AI platform reigns supreme, power users have moved beyond the binary choice. They’ve discovered something far more valuable: the strategic orchestration of multiple AI systems working in concert.

The Multi-Platform Paradigm Shift

Here’s a secret from power users: the most effective AI collaborators don’t choose one platform—they use multiple strategically. This approach, which I call the “Hybrid Strategy,” represents a fundamental shift in how we conceptualize AI assistance. Rather than viewing these tools as competing products, experienced practitioners treat them as complementary instruments in a sophisticated toolkit.

Professor Deedubs, an AI educator and practitioner, has observed this phenomenon firsthand in both academic and professional settings. “The users who extract the most value from AI aren’t the ones with the most expensive subscription,” Professor Deedubs notes. “They’re the ones who understand the unique strengths of each platform and know exactly when to deploy them.”

Continue reading The Hybrid Strategy: How Power Users Actually Work with AI

Why Claude Is My Favorite AI

A Multimedia Specialist’s Perspective on What Makes Anthropic’s Assistant Stand Apart

In a landscape crowded with AI assistants, each promising to revolutionize how we work, I’ve settled on Claude as my primary daily workspace. This isn’t a decision I made lightly. After extensive use across coding projects, research tasks, and technical documentation, Claude has consistently proven itself to be more than just another chatbot—it’s a genuinely useful instrument for getting real work done. Here’s why.

A Powerful Coding Tool

Let me start with what matters most to me professionally: coding. Claude isn’t just competent at writing code—it’s genuinely exceptional. Anthropic’s latest models have achieved industry-leading results on the SWE-bench Verified benchmark, which tests AI’s ability to solve real-world GitHub issues from popular open-source projects. We’re talking about an 80.9% success rate, surpassing other frontier models.

But benchmarks only tell part of the story. What I appreciate most is how Claude approaches code. It doesn’t just generate solutions—it understands context, follows existing patterns in your codebase, and produces clean, maintainable code.

Deep Knowledge, Accessible Delivery

Claude has broad and deep knowledge across domains—from technical documentation to complex research questions—and it presents that knowledge accessibly, without condescension. It functions like having access to a well-organized reference library combined with an expert consultant who can synthesize information on demand.

Continue reading Why Claude Is My Favorite AI