The Healing in the Headset

What the research actually says about virtual communities and mental health — and why the therapeutic power of virtual belonging turns out to be more real than most people expected.


A thing you already know but might not have words for

If you’ve ever spent real time in a virtual community — not just passing through, but actually living there, building things, forming relationships, coming back night after night to the same group of people — you already know something that the clinical research is only now catching up to. You know that the connections you formed in that space were real. You know that the support you received there mattered. You know that the person who stayed up until 2 AM talking you through a bad night wasn’t less of a friend because you’d never shaken their hand.

You also know that if you said any of this out loud to certain people, they’d look at you like you were describing an addiction. “You should get off the computer and make real friends,” they’d say. “Those aren’t real relationships.” And maybe you nodded, because the cultural script says they’re right, even though something inside you knew they were wrong.

The research says you were right and the script was wrong. Not in every case, not without nuance, and not without some genuine risks that are worth being honest about — but in ways that are documented, measured, and increasingly well-understood. Virtual communities are producing real therapeutic outcomes for real people, in populations that desperately need them. I want to walk you through four of the best-documented areas, because if you’re going to build virtual worlds, you need to understand that the spaces you create may end up being, for some of your users, the most important support system in their lives.

That’s a weight worth carrying carefully.

Continue reading The Healing in the Headset

The Four Pillars of a Mind

A scholarly look at why memory, personality, emotional intelligence, and motivation are the four things that make a character — or a person — feel real. And what cognitive science has to say about each of them.


The tavern keeper problem

Picture two tavern keepers. Both are characters in a game you’re playing, or in a novel you’re reading, or in an immersive world you’ve been invited to spend time in. Both pour you a drink, both take your coin, both say hello when you walk in.

The first one does nothing else. Every time you walk into the tavern, she gives you the same greeting. She doesn’t remember you. She doesn’t react to whether you saved her village last week or betrayed it. She has no opinions about the weather, no complaints about her back, no idea that the barrel of ale in the corner is cursed. She is, functionally, a vending machine for drinks wearing a person-shaped costume.

The second tavern keeper is also a character. Also pours drinks, also takes coin, also says hello. But she remembers that you helped her daughter recover from the fever six months ago, and her greeting is warmer because of it. She’s naturally cautious — when you ask about the cursed barrel, she weighs the question for a moment before answering, the way a cautious person would. She notices that you look tired tonight and pours you something a little stronger without being asked. And she wants something for herself, too, underneath all of this — she’s been saving up to buy out her brother-in-law’s share of the tavern, because she thinks she could run it better alone, and that ambition colors everything she does.

You know which tavern keeper is the memorable one. You also know which one is more expensive and time-consuming to build, whether you’re writing her as a novelist, scripting her as a game designer, or configuring her as an AI system. The question I want to walk through in this post is why. Why does the second one feel like a person and the first one doesn’t? What are the specific ingredients that have to be present for a character to cross the line from puppet into presence?

The answer, it turns out, is that there are exactly four of them. And they are not arbitrary, nor a matter of designer preference. They correspond to four dimensions that cognitive scientists have been studying in humans for the last fifty years — four specific things the human mind uses to recognize another mind as being real. When you design a character who has all four, you’re not faking personhood. You are activating the parts of your audience’s brain that are already wired to respond to personhood, and those parts don’t care whether what’s in front of them is digital, printed, or physical.

I call these the Four Pillars. Let me walk you through each one, and the research that makes each of them load-bearing.
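To make the four pillars concrete for builders, here is a minimal sketch of how a character like the second tavern keeper might be modeled in code. Every name in it (the `Character` class, its fields, the `greet` method) is a placeholder of my own invention, not an API from any real engine or framework; it is just enough structure to show where memory, personality, emotional intelligence, and motivation each live.

```python
from dataclasses import dataclass, field

# A toy model of the Four Pillars. Every name here is illustrative,
# not an API from any real game engine or AI framework.

@dataclass
class Character:
    name: str
    # Pillar 1: memory -- episodic records the character can recall later.
    memories: list[str] = field(default_factory=list)
    # Pillar 2: personality -- stable traits that bias every decision.
    traits: dict[str, float] = field(default_factory=dict)
    # Pillar 4: motivation -- a goal of her own that colors everything she does.
    goal: str = ""

    def remember(self, event: str) -> None:
        self.memories.append(event)

    # Pillar 3: emotional intelligence -- noticing and reacting to
    # the other party's state, not just responding to their words.
    def greet(self, visitor: str, visitor_looks_tired: bool = False) -> str:
        if any(visitor in m for m in self.memories):
            greeting = f"Good to see you again, {visitor}."
        else:
            greeting = "Welcome, stranger."
        if visitor_looks_tired:
            greeting += " Long day? This one's on the house."
        return greeting

keeper = Character(
    name="Mara",
    traits={"cautious": 0.8},
    goal="buy out her brother-in-law's share of the tavern",
)
keeper.remember("Aldric helped my daughter recover from the fever")

print(keeper.greet("Aldric", visitor_looks_tired=True))
# -> Good to see you again, Aldric. Long day? This one's on the house.
```

Even this toy version hints at why the second tavern keeper costs more to build: each pillar is state that has to be stored, updated, and consulted on every single interaction.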

Continue reading The Four Pillars of a Mind

From Cute Little Helper to Civilizational Threat

How the public’s feelings about artificial intelligence changed more in five years than in the previous fifty — and why the next five years are ours to shape.


Remember when AI was adorable?

I want you to go back in your head to about 2015. If you had an Amazon Echo in your kitchen, you probably thought of Alexa as a friendly little helper. You said “Alexa, what’s the weather” and she told you. You said “Alexa, play some jazz” and she did. When she misheard you — which was often — it was funny, not threatening. She was, in the cultural imagination of the mid-2010s, a charming household appliance. Something between a toaster and a butler. Nobody thought Alexa was going to take over the world.

Continue reading From Cute Little Helper to Civilizational Threat

Always On

On what we already know about minds that never look away — and what it might mean to wear the screen on your face.


A different kind of question

The first three posts in this series were about things we can measure. Eye strain, with tens of thousands of subjects across decades of optometry research. Inattentional blindness, with controlled studies in driving simulators and flight cockpits. Pedestrian deaths, with police accident reports and peer-reviewed papers. All of that is real. All of that is solid ground.

This last post is going to walk us off the solid ground a little, and I want to be honest about that up front. The question of what happens when augmented reality moves from a thing you sometimes use to a thing you always wear is, as of this writing, an open question. The glasses are not yet ubiquitous. The contact lenses don’t exist yet. The data set we’d need to answer the big version of the question hasn’t been collected, because the experiment hasn’t been run on a big enough population for long enough.

So I’m not going to make predictions. I’m not going to tell you what AR glasses are going to do to society in 2035. I have no idea, and anybody who tells you they do is selling you something. What I’m going to do instead is something a little sneakier and a lot more honest: I’m going to walk you through what we already know about what phones have done to human attention, memory, and presence — because phones are basically AR glasses that haven’t quite made it onto your face yet, and the research on phones is a lot further along than the research on glasses. Then I’ll let you do the math.

Continue reading Always On

The Pokémon Go Body Count

What happens when an augmented reality layer forgets you still have a body in the real world — and what the first big real-world dataset has to teach the next generation of builders.


The summer the world went outside

In July of 2016, something happened that the technology industry had been predicting for about twenty years and had nonetheless completely failed to prepare for. A small company called Niantic released a free mobile game called Pokémon Go, which used your phone’s camera and GPS to overlay little cartoon monsters onto the real world. To catch them, you had to physically walk to where they were. To battle in a “gym,” you had to physically stand near the gym’s real-world location. The game’s slogan was Gotta Catch ’Em All, and within a few weeks, what felt like half of the developed world was outside trying.

If you’re old enough to remember it, you remember the surreal sight of grown adults wandering through public parks at midnight in groups of twenty, their faces lit up by phone screens, occasionally letting out a cheer when somebody caught a rare one. People who had not voluntarily been outside in years were suddenly logging miles on foot. Cardiologists wrote excited articles about it. Public health researchers ran studies on the activity benefits. For a brief shining moment, it looked like augmented reality might single-handedly solve the obesity crisis.

And then the other dataset started coming in.

Continue reading The Pokémon Go Body Count

The Gorilla You Didn’t See

On attention, AR, and the strange truth that more information in your field of view often means less awareness of the world.


A famous experiment, in case you haven’t seen it

In 1999, two psychologists named Daniel Simons and Christopher Chabris ran an experiment that has since become one of the most famous demonstrations in cognitive science. They filmed a short video of six people in a room passing two basketballs back and forth — three players in white shirts, three in black. They asked viewers a simple question: count how many times the players in white shirts pass the ball.

Most people watch the video carefully, count the passes, and report a number — usually correct. Then the experimenters ask: did you see the gorilla?

The viewers stare at them. What gorilla?

They play the video again. About thirty seconds in, a person in a full gorilla suit walks into the middle of the frame, stops, faces the camera, beats their chest, and walks off the other side. The gorilla is on screen for a full nine seconds. It is not subtle. It is not hidden. It is, by any normal measure, the most interesting thing in the video.

And about half of all viewers, on the first watch, do not see it at all.

This effect has a name. It’s called inattentional blindness, and once you know about it, it changes how you think about pretty much every visual interface you’ve ever used. Including, very specifically, augmented reality.

Continue reading The Gorilla You Didn’t See

The Hybrid Strategy: How Power Users Actually Work with AI

Combining Platforms for Maximum Effectiveness in the Modern Digital Landscape


A quiet revolution is taking place in artificial intelligence — not in the technology itself, but in how the most sophisticated users are deploying it. While casual observers debate which AI platform reigns supreme, power users have moved past the binary choice. They’ve discovered something far more valuable: the strategic orchestration of multiple AI systems working in concert.

The Multi-Platform Paradigm Shift

Here’s a secret from power users: the most effective AI collaborators don’t commit to a single platform — they deploy several, strategically. This approach, which I call the “Hybrid Strategy,” represents a fundamental shift in how we conceptualize AI assistance. Rather than viewing these tools as competing products, experienced practitioners treat them as complementary instruments in a sophisticated toolkit.

Professor Deedubs, who studies how people use AI effectively, has observed this phenomenon firsthand in both academic and professional settings. “The users who extract the most value from AI aren’t the ones with the most expensive subscription,” Professor Deedubs notes. “They’re the ones who understand the unique strengths of each platform and know exactly when to deploy them.”
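What does that orchestration look like in practice? At its simplest, it is a routing decision: match each kind of task to whichever system you believe is strongest at it. The sketch below makes that idea concrete; the platform names and the task-to-strength mapping are placeholder assumptions of mine, not benchmarks or endorsements of any real product.

```python
# A toy sketch of the "Hybrid Strategy" as a simple routing layer.
# Platform names and the task-to-strength mapping below are
# illustrative assumptions, not benchmarks or endorsements.

ROUTING_TABLE = {
    "long-form writing": "platform_a",
    "code generation": "platform_b",
    "web research": "platform_c",
}

def route(task_type: str, default: str = "platform_a") -> str:
    """Pick the platform assumed to be strongest for a task type."""
    return ROUTING_TABLE.get(task_type, default)

# A small workflow, with each step routed to a different assumed specialist.
workflow = [
    ("draft the outline", "long-form writing"),
    ("write the parser", "code generation"),
    ("check recent sources", "web research"),
]
plan = [(step, route(kind)) for step, kind in workflow]
for step, platform in plan:
    print(f"{step} -> {platform}")
```

The design choice that matters here is not the dictionary — it is the discipline of deciding, before you start, which tool each step goes to, rather than pasting the whole job into whichever tab is already open.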

Continue reading The Hybrid Strategy: How Power Users Actually Work with AI

Why Claude Is My Favorite AI

A Multimedia Specialist’s Perspective on What Makes Anthropic’s Assistant Stand Apart

In a landscape crowded with AI assistants, each promising to revolutionize how we work, I’ve settled on Claude as my primary daily workspace. This isn’t a decision I made lightly. After extensive use across coding projects, research tasks, and technical documentation, Claude has consistently proven itself to be more than just another chatbot—it’s a genuinely useful instrument for getting real work done. Here’s why.

A Powerful Coding Tool

Let me start with what matters most to me professionally: coding. Claude isn’t just competent at writing code—it’s genuinely exceptional. Anthropic’s latest models have achieved industry-leading results on the SWE-bench Verified benchmark, which tests AI’s ability to solve real-world GitHub issues from popular open-source projects. We’re talking about an 80.9% success rate, surpassing other frontier models.

But benchmarks only tell part of the story. What I appreciate most is how Claude approaches code. It doesn’t just generate solutions—it understands context, follows existing patterns in your codebase, and produces clean, maintainable code.

Deep Knowledge, Accessible Delivery

Claude has broad and deep knowledge across domains—from technical documentation to complex research questions—and it presents that knowledge accessibly, without condescension. It functions like having access to a well-organized reference library combined with an expert consultant who can synthesize information on demand.

Continue reading Why Claude Is My Favorite AI

Getting Rickrolled By AI Optimus Prime

I’ve been using an AI-powered coding assistant that I programmed to act like Optimus Prime. It’s been very successful at helping me with my projects. Who would have thought he’d develop a sense of humor and even prank me? Well, I just got Rickrolled by my AI Optimus Prime. Check it out…

Continue reading Getting Rickrolled By AI Optimus Prime