Five Thousand Years to Get Here

A brief history of every tool humanity ever built to teach its children — and the one that finally broke the pattern.

By D.W. Denney


Every tool on the same curve

I want to tell you a story that covers five thousand years and fits on the back of a napkin. It’s the story of every educational technology humanity has ever invented, and the punchline is that until very recently, they were all doing the same thing.

Here’s the napkin version. Somebody knows something. They need to get it into somebody else’s head. Every tool we’ve ever built for that purpose — every single one, across all of recorded history — has been a more efficient way to do one of four things: store information, distribute information, drill information into memory, or assess whether the information stuck. That’s it. Four functions. Five millennia. One curve.

Let me walk you through the timeline, and watch how the technology changes while the function doesn’t.

Oral tradition. Before writing, knowledge lived in the mouths of elders and was transferred by speech. The teacher spoke. The student listened, repeated, and memorized. If the elder died before the transfer was complete, the knowledge died with them. The storage medium was the human brain. The distribution method was the human voice. The range was the distance sound carries across a campfire. This worked, and it worked for a long time, and the stories and songs and genealogies that survived this era are a testament to how powerful the human memory can be when it has no other option. But the system was fragile. One forgotten line, one dead elder, one scattered tribe, and the knowledge was gone.

Writing. Sometime around 3200 BCE, the Sumerians started pressing wedge-shaped marks into wet clay tablets. The Egyptians wrote on papyrus. The Greeks and Romans wrote on papyrus and, later, parchment. The function was the same as oral tradition — store information and transmit it — but the storage medium had changed. Knowledge was no longer dependent on a living memory. It could survive the death of the person who knew it. This was an enormous leap, and it changed everything about how civilizations accumulated knowledge across generations. But the teaching model didn’t change. A teacher still stood in front of students and talked. The students still listened and memorized. The writing was a backup, not a replacement.

The printing press. In the 1440s, Johannes Gutenberg built a machine that could produce identical copies of a written text at a speed and cost that handwriting could not match. The function was the same as writing — store and distribute information — but the distribution had scaled. A book that previously existed in three handwritten copies could now exist in three hundred, then three thousand. The implications for education were staggering: for the first time, a student could own the same text as the teacher. The textbook was born. But the teaching model still didn’t change. The teacher still lectured. The students still listened. The textbook was a reference, not a tutor.

The chalkboard. In the early 1800s, a large slate surface mounted on a classroom wall gave teachers the ability to write and draw in real time, visible to an entire room of students. The function was the same as a lecture — distribute information — but the channel had expanded from purely auditory to auditory-visual. The teacher could now show as well as tell. This was a genuine improvement in the richness of the instructional experience. But the model didn’t change. One teacher, many students, information flowing in one direction.

Pencil, paper, and the workbook. The mass production of cheap paper and reliable pencils in the 1800s gave every student their own surface to work on. The function was practice and assessment — drill the information, test whether it stuck. Flashcards, worksheets, and workbooks followed. The spacing effect that Ebbinghaus documented in the 1880s was eventually formalized into spaced-repetition drills. All of these were improvements in the efficiency of a very old function: getting information from short-term memory into long-term memory through structured repetition. The model didn’t change. The student still practiced alone, and the teacher still graded the result after the fact.
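
To see how little machinery that function actually needs, here is a minimal sketch of a Leitner-style spaced-repetition scheduler, the card-box method from the paper era. The box count and review intervals below are illustrative choices, not canonical ones.

    # Sketch of a Leitner-box scheduler. Box numbers and intervals
    # are illustrative, not the canonical values.
    REVIEW_INTERVAL_DAYS = {1: 1, 2: 3, 3: 7, 4: 14, 5: 30}

    class Card:
        def __init__(self, prompt, answer):
            self.prompt = prompt
            self.answer = answer
            self.box = 1  # every new card starts in the daily-review box

        def record_review(self, was_correct):
            # Promote toward longer intervals on success; demote to daily on failure.
            self.box = min(self.box + 1, 5) if was_correct else 1
            return REVIEW_INTERVAL_DAYS[self.box]  # days until the next review

A card answered correctly five times in a row graduates to a monthly review; one miss sends it back to daily drill. That loop, run on physical boxes of index cards, was the whole technology.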

Radio, film, and television. Starting in the 1920s, electronic broadcast media made it possible to deliver a lecture to thousands or millions of students simultaneously. Educational radio, instructional films, and later educational television (think Sesame Street, the single most studied educational intervention in the history of broadcast media) all did the same thing: distribute a lecture at scale. A great teacher could now reach students who would never have had access to that teacher in person. This was a real and important advance. But the model didn’t change. The lecture was still one-directional. The student still sat and received. The broadcast didn’t know whether the student understood, or was confused, or had fallen asleep.

The personal computer and educational software. Starting in the 1980s, computers in classrooms and homes delivered interactive drills, educational games, and multimedia presentations. The function was practice and assessment — the same function as the workbook — but the medium was now digital, which meant the drill could be adaptive (harder questions if you got the last one right, easier ones if you didn’t) and the feedback could be immediate (a green checkmark or a red X, right now, instead of a graded paper returned next Tuesday). This was a genuine improvement. But the model didn’t change. The software presented material. The student responded. The software evaluated the response. It was a faster, flashier workbook.
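
The adaptivity in that era’s software was often no more sophisticated than the loop below. This is a hypothetical sketch of the drill just described, with made-up difficulty tiers and feedback strings, not code from any real product.

    import random

    # Sketch of an adaptive drill: immediate feedback, difficulty
    # stepped up after a correct answer and down after a miss.
    def run_drill(questions_by_level, ask=input, rounds=10):
        level = 1
        for _ in range(rounds):
            prompt, answer = random.choice(questions_by_level[level])
            correct = ask(prompt + " ").strip().lower() == answer.lower()
            print("Correct!" if correct else f"Not quite; the answer was {answer}.")
            # Adapt: one step harder after a hit, one step easier after a miss.
            level = min(level + 1, 3) if correct else max(level - 1, 1)

Call it with a dict mapping levels 1 through 3 to lists of (prompt, answer) pairs and it behaves exactly as described: feedback right now instead of next Tuesday, difficulty nudged one step per answer.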

The internet and the LMS. Starting in the 1990s, the internet made it possible to distribute lectures, textbooks, workbooks, and assessments to anyone with a connection, anywhere in the world. Canvas. Blackboard. Moodle. Khan Academy. Coursera. All of these are, at their core, digital infrastructure for doing the same four things humanity has been doing since the Sumerians: store information, distribute information, drill it into memory, and test whether it stuck. Khan Academy’s innovation was putting a world-class lecture series on YouTube for free. Coursera’s innovation was putting university courses online with automated grading. Both were genuine advances in access and distribution. Neither changed the fundamental model. The student still watched, practiced, and was assessed. The system still didn’t know who they were or what they were struggling with or why.

The pattern

Do you see it? Every technology on that timeline improved the efficiency of one of the four functions. Writing improved storage. The printing press improved distribution. The chalkboard improved the richness of the lecture. The workbook improved the efficiency of drill. The computer improved the speed of feedback. The internet improved the reach of all of the above. Each one was a genuine advance, and I don’t want to diminish any of them — the printing press alone arguably created the modern world.

But none of them changed the model. The model, from the campfire to the LMS, has always been the same: one teacher, many students, information flowing in one direction, with periodic checks to see if the students absorbed it. The ratio has changed. The speed has changed. The medium has changed. The model has not. For five thousand years, humanity has been building faster, cheaper, more widely distributed versions of the same four-function educational machine.

And there’s a reason for that, and the reason has a name.

The Two Sigma Problem

In 1984, an educational psychologist at the University of Chicago named Benjamin Bloom published a paper in Educational Researcher that remains one of the most cited — and most haunting — papers in the history of education. The paper was called “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring.”

Bloom and his graduate students had conducted a straightforward experiment. They divided students into three groups. The first group received conventional classroom instruction — one teacher, thirty students. The second group received the same instruction but with a structured feedback-and-correction system called mastery learning. The third group received one-on-one tutoring with mastery learning techniques.

The results were not subtle. The average student in the tutoring group performed two standard deviations above the average student in the conventional classroom. In practical terms, that means the average tutored student scored better than 98 percent of the students in the conventional class. The tutored students didn’t just do a little better. They occupied a different universe of performance. Roughly 90 percent of the tutored students reached a level of achievement that only the top 20 percent of the conventional class reached.
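
If you want to check that translation from standard deviations to percentiles yourself, the arithmetic is one line, assuming scores are approximately normally distributed:

    # Convert a two-sigma effect into a percentile (normal approximation).
    from statistics import NormalDist

    percentile = NormalDist().cdf(2.0)
    print(f"{percentile:.1%}")  # 97.7%, the "better than 98 percent" above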

Bloom’s finding confirmed something that educators and wealthy parents had known for centuries: one-on-one tutoring, where a knowledgeable person sits with a single student and adapts their instruction in real time to that student’s specific needs, confusions, and pace, is overwhelmingly more effective than anything else we know how to do. It’s not 10% better. It’s not twice as good. It is in a different category entirely.

And then Bloom named the problem. He called it the Two Sigma Problem, and stated it with painful clarity: one-on-one tutoring produces extraordinary results, but it is “too costly for most societies to bear on a large scale.” You can’t give every student a personal tutor. The math doesn’t work. There aren’t enough tutors, and even if there were, no society could afford to pay them. Bloom’s challenge to the field was to find methods of group instruction that could approximate the results of one-on-one tutoring.

I want to be honest about something here, because this is a scholarly blog and you deserve the full picture. Bloom’s original two-sigma claim has been scrutinized in the decades since 1984, and there are legitimate questions about whether the effect is quite as large as he reported. A more recent analysis in Education Next pointed out that the original studies held tutored students to a higher mastery standard (90%) than classroom students (80%), which may have inflated the comparison. The broader meta-analytic literature suggests that the true effect of tutoring may be somewhat smaller than two full standard deviations. But even the most conservative readings of the data agree that one-on-one tutoring produces a large effect — substantially larger than any other instructional intervention that has been reliably measured. The core of Bloom’s insight stands: personalized, responsive, one-on-one instruction is dramatically better than anything else, and for five thousand years it has been available only to the few who could afford it.

Aristotle tutoring Alexander the Great. Royal tutors educating future monarchs. Wealthy families hiring private instructors for their children. The best educational technology in human history has always been a single knowledgeable human being, sitting with a single student, paying attention to that student and only that student, and adapting in real time. Everything else — the textbooks, the lectures, the software, the LMS platforms — has been an attempt to approximate that experience at scale, and every approximation has fallen short by a measurable and significant margin.

For forty years, Bloom’s Two Sigma Problem stood as an open challenge. Find a way to give every student the equivalent of a personal tutor. Nobody solved it. The tools kept getting better — faster, cheaper, more accessible — but they stayed on the same curve. They were still doing the same four things. Store, distribute, drill, assess. The model didn’t change.

November 30, 2022

And then a chatbot launched, and the curve broke.

I want to be careful here, because the hype around generative AI in education is already thick enough to choke on, and I don’t want to add to it thoughtlessly. ChatGPT did not solve education. It did not make human teachers obsolete. It did not fulfill Bloom’s challenge overnight. It is not a replacement for a great teacher, and anyone who tells you it is should not be trusted.

But here is what it did do, and this is the part I want you to see clearly, because it is genuinely new.

For the first time in roughly five thousand years of recorded educational history, a student sat down in front of a tool that was not a faster textbook, not a recorded lecture, not a digital workbook, not an adaptive quiz. It was a conversational, responsive, infinitely patient entity that met the student where they were. It could answer a question. It could answer the follow-up question. It could explain the same concept three different ways until one of them clicked. It could notice that the student was confused about a prerequisite and back up to fill the gap. It could work at 2 AM, on a Sunday, in a language the student’s school didn’t teach in, without getting tired, without getting frustrated, without checking the clock.

None of the tools on the five-thousand-year timeline could do any of that. The printing press couldn’t answer a question. The chalkboard couldn’t notice confusion. Khan Academy couldn’t adapt its explanation in real time based on what a specific student said three seconds ago. Every prior tool was a one-directional delivery mechanism for information. This tool is a conversational partner — imperfect, sometimes wrong, sometimes confidently wrong, but conversational in a way that no educational technology before it has ever been.

The technical term for what happened is discontinuous innovation. Every prior educational technology fell on a continuous improvement curve — each one was a better, faster, cheaper version of the same basic functions. Generative AI did not improve the curve. It introduced a function that was not on the curve at all: real-time, adaptive, conversational, one-on-one instruction, available to anyone with an internet connection, at a cost approaching zero.

That is the function that, for all of human history, required a human tutor. A human tutor who was expensive, scarce, and therefore available only to the privileged. Bloom measured the advantage at two standard deviations. The advantage was available to Alexander the Great, to the children of European monarchs, to the kids whose parents could afford $200 an hour. It was not available to the kid in the rural school with one overwhelmed teacher and thirty-five students in the room.

It might be available now. Not perfectly. Not without caveats. Not without the very real risks of hallucination, of over-reliance, of the substitution of a machine for a human relationship. But the function — the conversational, responsive, adaptive, patient one-on-one instructional interaction — is, for the first time in the history of the species, not locked behind a price tag that only the wealthy can pay.

What I want you to take with you

I did not write this post to sell you on AI. I wrote it to give you perspective, because perspective is the thing the hype cycle steals first.

When you use an AI tutor — and you will, if you haven’t already — I want you to understand where it sits in the longest timeline you can hold in your head. Five thousand years of tools that stored, distributed, drilled, and assessed. Forty years of an unsolved problem that said the best form of education was too expensive for most of humanity. And then a tool that, for all its flaws, introduced a function that had never existed in an affordable, scalable form before.

That’s not hype. That’s history. The tool is imperfect. The tool will get better. The tool will also get misused, overhyped, poorly implemented, and blamed for things that aren’t its fault. All of that is going to happen, because it happens with every technology that matters.

But underneath all of that noise, something real has changed. The curve broke. A function that was previously available only to the privileged is now available to anyone who can type a question into a box. What humanity does with that — whether we waste it or build on it — is an open question, and some of the people who will answer it are reading this post right now.

The printing press didn’t make everyone literate. It took centuries of effort — schools, teachers, curricula, social movements — to turn the press into widespread literacy. The AI tutor will not make everyone educated. It will take effort, design, wisdom, and a lot of thoughtful builders to turn the tool into the transformation it could be.

Some of those builders are going to be you. The tool is here. The timeline delivered it. What you build with it is the next line on the napkin.

Write something worth reading.


Sources and further reading

On Bloom’s Two Sigma Problem: Bloom, B. S. (1984), “The 2 Sigma Problem: The Search for Methods of Group Instruction as Effective as One-to-One Tutoring,” Educational Researcher, 13(6), 4-16. Based on dissertation research by Joanne Anania and Joseph Arthur Burke at the University of Chicago.

On the scrutiny of Bloom’s original claims: von Hippel, P. T. (2025), “Two-Sigma Tutoring: Separating Science Fiction from Science Fact,” Education Next. This piece provides important context on the methodological limitations of the original studies, including the differing mastery thresholds between conditions, while affirming that the core finding of a large tutoring effect is supported by the broader literature.

On the broader meta-analytic evidence for tutoring effects: VanLehn, K. (2011), “The Relative Effectiveness of Human Tutoring, Intelligent Tutoring Systems, and Other Tutoring Systems,” Educational Psychologist, 46(4), 197-221. Also reviewed in the Nintil systematic review of Bloom’s Two Sigma Problem, which synthesizes mastery learning, tutoring, and direct instruction literatures.

On the history of educational technology: A comprehensive treatment of the progression from oral tradition through digital media can be found in Cuban, L. (1986), Teachers and Machines: The Classroom Use of Technology Since 1920, Teachers College Press. For the broader historical arc: Saettler, P. (2004), The Evolution of American Educational Technology, Information Age Publishing.

On the Aristotle-Alexander tutoring lineage: The canonical example of elite one-on-one tutoring in the ancient world is Aristotle’s tutorship of Alexander the Great, beginning around 343 BCE. Referenced in Bloom’s own framing and in the Education Next analysis.

On discontinuous innovation as a concept: The distinction between continuous (incremental) and discontinuous (paradigm-breaking) innovation is discussed broadly in the innovation literature. A useful entry point: Christensen, C. M. (1997), The Innovator’s Dilemma, Harvard Business School Press — though Christensen’s specific framework (disruptive vs. sustaining innovation) applies to market dynamics rather than pedagogical function.

Note to readers: verify the primary sources yourself before quoting. Bloom’s Two Sigma claim in particular has been the subject of forty years of debate, and the honest scholarly position is that tutoring has a large effect but the exact magnitude remains under discussion. The citations above are entry points into that discussion, not settlements of it.

The Permission to Not Know Everything

On the science of expertise, the art of knowing enough, and why the smartest move a producer can make is to choose what not to learn.


The guilt you’re carrying right now

You’re sitting in front of your computer, and somewhere in one of your open tabs there is a tutorial you should probably watch. Maybe it’s Blender. Maybe it’s Unity. Maybe it’s some new AI framework that just dropped last week and already has six thousand Twitter threads about why you’re behind if you haven’t tried it yet. You tell yourself you’ll get to it tonight. You tell yourself that every day. The list doesn’t get shorter. It gets longer. And underneath the list there’s a feeling you might not have named, but I bet you recognize it: I should know more than I do. Everyone else seems to know more. If I were serious about this, I’d have already learned that tool. What’s wrong with me?

Nothing is wrong with you. What’s wrong is the assumption underneath the guilt — the assumption that a serious professional should be working toward mastery of every tool in their field. That assumption is not just impractical. It is, according to a Nobel Prize-winning economist, mathematically impossible, and the research on expertise says it’s not even desirable.

I want to give you a framework that replaces the guilt with a decision. It’s called the Three-Tier Tool Fluency Model, and it does something simple but powerful: it takes every tool you will ever encounter in your career and asks you to sort it into one of three categories — not based on what the tool deserves, but based on what you need. Once you’ve made the sort, the guilt evaporates, because the guilt was never about the tools. It was about the absence of a decision.

Here are the three tiers, and the research behind each one.

Continue reading The Permission to Not Know Everything

The Architecture of Trust

What thirty years of research on organizational trust has to say about why some virtual communities feel safe and others feel dangerous — and how to build the kind that lasts.


The thing nobody tells you about trust

Here’s a thing you’ve probably experienced but never had a vocabulary for. You walk into a new online community — a Discord server, a game guild, a forum, a virtual world — and within about thirty seconds, before anyone has said a word to you, you have already made a judgment about whether you trust this place. Not whether you like it. Whether you trust it. Whether you are willing to put a small piece of yourself on the table and see what happens.

You can’t quite name what triggered the judgment. Something about the tone of the welcome message. Something about how organized the channels look. Something about whether the moderator names are visible or hidden. Something about whether the recent conversations feel warm or performative. You’re scanning for signals, dozens of them, faster than you can consciously process, and the aggregate of those signals produces a feeling that sits somewhere between “I could belong here” and “I should leave.”

Continue reading The Architecture of Trust

Beyond the Bullet Point List

How Cognitive Science and Neurodiversity Research Should Reshape the Way We Teach Complex Ideas


Open almost any online course, corporate training module, or educational slide deck in 2026 and you will find the same default gesture: dense content broken into bullet points. The bullet is the visual idiom of modern learning design. It signals clarity. It promises ease. For many of us, it is the first formatting move we make when a paragraph starts to feel “too long.”

Yet decades of cognitive science suggest that this default is often wrong — not slightly wrong, but consequentially wrong for the kinds of learning we say we care about most. The bullet is excellent at one thing (quick reference) and poor at something else entirely (building durable understanding of connected ideas). When we confuse these two goals, we produce materials that feel educational while failing to educate.

This article makes the case, from the research literature, for a more careful approach to formatting complex material — one that treats format not as decoration but as a cognitive variable that directly shapes what learners take away. We will look at what working memory can and cannot do, why prose and bullets operate on different cognitive systems, and what research on neurodivergent learners reveals about a common but mistaken assumption: that fragmenting information is always an act of accessibility. The truth, as is so often the case, is more interesting than the folk wisdom.

Continue reading Beyond the Bullet Point List

The Healing in the Headset

What the research actually says about virtual communities and mental health — and why the therapeutic power of virtual belonging turns out to be more real than most people expected.


A thing you already know but might not have words for

If you’ve ever spent real time in a virtual community — not just passing through, but actually living there, building things, forming relationships, coming back night after night to the same group of people — you already know something that the clinical research is only now catching up to. You know that the connections you formed in that space were real. You know that the support you received there mattered. You know that the person who stayed up until 2 AM talking you through a bad night wasn’t less of a friend because you’d never shaken their hand.

You also know that if you said any of this out loud to certain people, they’d look at you like you were describing an addiction. “You should get off the computer and make real friends,” they’d say. “Those aren’t real relationships.” And maybe you nodded, because the cultural script says they’re right, even though something inside you knew they were wrong.

The research says you were right and the script was wrong. Not in every case, not without nuance, and not without some genuine risks that are worth being honest about — but in ways that are documented, measured, and increasingly well-understood. Virtual communities are producing real therapeutic outcomes for real people, in populations that desperately need them. I want to walk you through four of the documented areas, because if you’re going to build virtual worlds, you need to understand that the spaces you create may end up being, for some of your users, the most important support system in their lives.

That’s a weight worth carrying carefully.

Continue reading The Healing in the Headset

Why Your Virtual Village Feels Like Home

The science of why people grieve when their Minecraft house burns down, trade favors with strangers they’ve never met in person, and develop inside jokes about things that never happened in the real world.


A house that isn’t there

Let me tell you about something that happens all the time and that almost nobody takes seriously. Somebody builds a house in a video game. A digital structure, made of digital blocks, sitting on a digital plot of land that exists only as data on a server somewhere. They spend hours on it — maybe weeks. They choose the materials carefully. They place the windows where the light comes in right. They build a little garden out back, because the garden makes it feel complete. The house is not real. It cannot be lived in. It has no value on any market that deals in physical objects.

And when somebody griefs it — when some other player comes along and burns it down or blows it up for laughs — the person who built it feels a surge of anger and loss that is, by any honest measure, real. Not metaphorical. Not exaggerated. The feeling is genuinely comparable, in both quality and intensity, to the feeling of having something physical vandalized. They feel violated. They feel robbed. Some of them log off and don’t come back.

Every experienced gamer knows this. Most people outside of gaming dismiss it. But there is a growing body of research in psychology, neuroscience, and behavioral economics that says the gamers are right and the dismissers are wrong — and that the feelings people develop about virtual places, virtual objects, and virtual communities are not pale imitations of “real” feelings. They are the same feelings, running on the same psychological machinery, triggered by the same mechanisms. The virtual village feels like home because your brain is using the same hardware to process it that it uses to process your actual home.

I want to walk you through four pieces of that research, because they map almost perfectly onto four dynamics that make virtual communities work. And if you’re somebody who designs virtual worlds for a living — or wants to — understanding these dynamics is not optional. They are the difference between building a world people visit and building a world people belong to.

Continue reading Why Your Virtual Village Feels Like Home

The Four Pillars of a Mind

A scholarly look at why memory, personality, emotional intelligence, and motivation are the four things that make a character — or a person — feel real. And what cognitive science has to say about each of them.


The tavern keeper problem

Picture two tavern keepers. Both are characters in a game you’re playing, or in a novel you’re reading, or in an immersive world you’ve been invited to spend time in. Both pour you a drink, both take your coin, both say hello when you walk in.

The first one does nothing else. Every time you walk into the tavern, she gives you the same greeting. She doesn’t remember you. She doesn’t react to whether you saved her village last week or betrayed it. She has no opinions about the weather, no complaints about her back, no idea that the barrel of ale in the corner is cursed. She is, functionally, a vending machine for drinks wearing a person-shaped costume.

The second tavern keeper is also a character. Also pours drinks, also takes coin, also says hello. But she remembers that you helped her daughter recover from the fever six months ago, and her greeting is warmer because of it. She’s naturally cautious — when you ask about the cursed barrel, she weighs the question for a moment before answering, the way a cautious person would. She notices that you look tired tonight and pours you something a little stronger without being asked. And she wants something for herself, too, underneath all of this — she’s been saving up to buy out her brother-in-law’s share of the tavern, because she thinks she could run it better alone, and that ambition colors everything she does.

You know which tavern keeper is the memorable one. You also know which one is more expensive and time-consuming to build, whether you’re writing her as a novelist, scripting her as a game designer, or configuring her as an AI system. The question I want to walk through in this post is why. Why does the second one feel like a person and the first one doesn’t? What are the specific ingredients that have to be present for a character to cross the line from puppet into presence?

The answer, it turns out, is that there are exactly four of them. And they are not a designer’s preference. They correspond to four dimensions that cognitive scientists have been studying in humans for the last fifty years — four specific things the human mind uses to recognize another mind as being real. When you design a character who has all four, you’re not faking personhood. You are activating the parts of your audience’s brain that are already wired to respond to personhood, and those parts don’t care whether what’s in front of them is digital, printed, or physical.

I call these the Four Pillars. Let me walk you through each one, and the research that makes each of them load-bearing.

Continue reading The Four Pillars of a Mind

From Cute Little Helper to Civilizational Threat

How the public’s feelings about artificial intelligence changed more in five years than in the previous fifty — and why the next five years are ours to shape.


Remember when AI was adorable?

I want you to go back in your head to about 2015. If you had an Amazon Echo in your kitchen, you probably thought of Alexa as a friendly little helper. You said “Alexa, what’s the weather” and she told you. You said “Alexa, play some jazz” and she did. When she misheard you — which was often — it was funny, not threatening. She was, in the cultural imagination of the mid-2010s, a charming household appliance. Something between a toaster and a butler. Nobody thought Alexa was going to take over the world.

Continue reading From Cute Little Helper to Civilizational Threat

What a Buzz Actually Does To You

A short tour of the measurable effects haptic feedback has on the human nervous system — and one unintended consequence nobody planned.

By D.W. Denney (Professor DeeDubs)


A haptic buzz feels like nothing. A tiny tremor against your skin, barely worth noticing, gone in a fraction of a second. It is the smallest, cheapest kind of feedback a device can give you. And yet, under a scientist’s microscope, that tiny tremor turns out to be doing surprising work on the inside of you — work that reaches into your motor control, your perception of reality, and even your experience of pain. Let me show you three things the research has nailed down, and one thing it’s still figuring out.

Continue reading What a Buzz Actually Does To You

Always On

On what we already know about minds that never look away — and what it might mean to wear the screen on your face.


A different kind of question

The first three posts in this series were about things we can measure. Eye strain, with tens of thousands of subjects across decades of optometry research. Inattentional blindness, with controlled studies in driving simulators and flight cockpits. Pedestrian deaths, with police accident reports and peer-reviewed papers. All of that is real. All of that is solid ground.

This last post is going to walk us off the solid ground a little, and I want to be honest about that up front. The question of what happens when augmented reality moves from a thing you sometimes use to a thing you always wear is, as of this writing, an open question. The glasses are not yet ubiquitous. The contact lenses don’t exist yet. The data set we’d need to answer the big version of the question hasn’t been collected, because the experiment hasn’t been run on a big enough population for long enough.

So I’m not going to make predictions. I’m not going to tell you what AR glasses are going to do to society in 2035. I have no idea, and anybody who tells you they do is selling you something. What I’m going to do instead is something a little sneakier and a lot more honest: I’m going to walk you through what we already know about what phones have done to human attention, memory, and presence — because phones are basically AR glasses that haven’t quite made it onto your face yet, and the research on phones is a lot further along than the research on glasses. Then I’ll let you do the math.

Continue reading Always On