Why GPT-3 Sat for Two Years Before the World Noticed

On the four ingredients that made 2022 the AI moment, the interface nobody talks about, and a way of thinking about technological change that you can use for the rest of your career.


A model nobody cared about

In June of 2020, OpenAI released GPT-3. It was, at the time, the largest language model ever built — 175 billion parameters, trained on text filtered down from a 45-terabyte crawl of the web, capable of writing essays, answering questions, generating code, and producing prose that was, to many readers, indistinguishable from human writing. The technical press covered it with a mix of awe and anxiety. Researchers called it a breakthrough. Sam Altman, OpenAI’s CEO, publicly warned people not to overhype it.

And then, for about two and a half years, almost nobody outside of the AI research community used it.

GPT-3 was available through an API — a programmer’s interface that required you to write code to interact with the model. If you were a developer, you could build applications on top of it. If you were a researcher, you could run experiments with it. If you were a normal person who wanted to ask it a question, you couldn’t. There was no place to type. There was no chat window. There was no “talk to GPT-3” button anywhere on the internet. The most powerful language model in the world was sitting behind a developer console, waiting for someone to build a front door.
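To make that concrete, here is roughly what “asking GPT-3 a question” looked like in 2020. The request shape follows the original completions-style API; the prompt and parameter values are illustrative, and actually sending it required an account, an API key, and an HTTP client.

```python
import json

# In 2020 there was no chat window. "Talking to GPT-3" meant building a JSON
# request like this and POSTing it, with an API key, to OpenAI's completions
# endpoint. The prompt and parameter values here are illustrative.
request_body = {
    "prompt": "Q: What is the capital of France?\nA:",
    "max_tokens": 64,    # cap on the length of the model's reply
    "temperature": 0.7,  # randomness of the sampling
    "stop": ["\n"],      # stop generating at the end of the answer line
}

payload = json.dumps(request_body)
print(payload)
```

That gap, between a JSON payload plus an API key and a text box anyone can type into, is the entire distance between June 2020 and November 2022.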

On November 30, 2022, OpenAI built the front door. They called it ChatGPT. Within five days, it had a million users. Within two months, it had a hundred million — making it the fastest-growing consumer application in the history of the internet. The technology that had been sitting quietly for two and a half years became, overnight, the most talked-about product on Earth.

Here is the question I want to spend this post answering, because the answer teaches you something that goes far beyond AI: why did that particular tool, in that particular moment, work?

The short answer is that November 30, 2022 wasn’t a single breakthrough. It was a confluence — four ingredients arriving at the same table, finally in the right amounts, at the right time. And none of them, alone, would have been enough.

Continue reading Why GPT-3 Sat for Two Years Before the World Noticed

The AI That Saved $25 Million a Year and Couldn’t Save the Company That Built It

The story of XCON, the first commercially successful expert system — and what its triumph and its company’s collapse can teach every builder about the difference between solving a problem and leading an organization.


A company drowning in its own success

In 1978, Digital Equipment Corporation had a problem that was, in a strange way, the best kind of problem to have. They were selling too many computers and couldn’t keep up.

DEC — the second-largest computer company in the world, behind only IBM — built the VAX, a family of powerful minicomputers that businesses could customize to their specific needs. The selling point was the customization: each VAX system was configured from thousands of individual components — processors, memory modules, disk drives, controllers, cables, cabinets, power supplies — assembled into a unique combination tailored to what the customer ordered.

The problem was that configuring these systems required deep technical expertise, and even the experts got it wrong. A lot. If a customer ordered a disk drive, someone had to make sure the order also included the right disk controller, the right cables, the right power supply for the additional load, and the right cabinet space to house it all. A single VAX system could involve thousands of separate components, and the relationships between them were complex, interdependent, and poorly documented. Human configurators were getting orders wrong somewhere between 30 and 40 percent of the time. Wrong components shipped. Incompatible parts arrived at the customer site. Systems that should have worked didn’t. The manual configuration process was taking ten to fifteen weeks per order. DEC was hemorrhaging money on returns, rework, and angry customers — and the more systems they sold, the worse the problem got.
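The shape of the problem is easy to sketch. The component names and dependency rules below are invented for illustration, but they show the kind of check a human configurator had to perform by hand, across thousands of parts, for every order:

```python
# Toy dependency rules: ordering the key requires ordering the values too.
# All names here are invented; a real VAX order involved thousands of parts.
REQUIRES = {
    "rp06-disk-drive": ["disk-controller", "controller-cable", "expansion-cabinet"],
    "disk-controller": ["spare-power-supply"],
}

def missing_components(order):
    """Walk the dependency rules and report anything the order forgot."""
    missing = []
    to_check = list(order)
    while to_check:
        item = to_check.pop()
        for required in REQUIRES.get(item, []):
            if required not in order and required not in missing:
                missing.append(required)
                to_check.append(required)  # requirements can have requirements
    return sorted(missing)

print(missing_components(["vax-cpu", "rp06-disk-drive"]))
```

XCON worked this way in spirit, if not in Python: thousands of if-then rules about which parts require, exclude, or constrain which others, applied mechanically to every incoming order.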

Into this mess walked a researcher from Carnegie Mellon University named John McDermott.

Continue reading The AI That Saved $25 Million a Year and Couldn’t Save the Company That Built It

The Permission to Not Know Everything

On the science of expertise, the art of knowing enough, and why the smartest move a producer can make is to choose what not to learn.


The guilt you’re carrying right now

You’re sitting in front of your computer, and somewhere in one of your open tabs there is a tutorial you should probably watch. Maybe it’s Blender. Maybe it’s Unity. Maybe it’s some new AI framework that just dropped last week and already has six thousand Twitter threads about why you’re behind if you haven’t tried it yet. You tell yourself you’ll get to it tonight. You tell yourself that every day. The list doesn’t get shorter. It gets longer. And underneath the list there’s a feeling you might not have named, but I bet you recognize it: I should know more than I do. Everyone else seems to know more. If I were serious about this, I’d have already learned that tool. What’s wrong with me?

Nothing is wrong with you. What’s wrong is the assumption underneath the guilt — the assumption that a serious professional should be working toward mastery of every tool in their field. That assumption is not just impractical. It is, according to a Nobel Prize-winning economist, mathematically impossible, and the research on expertise says it’s not even desirable.

I want to give you a framework that replaces the guilt with a decision. It’s called the Three-Tier Tool Fluency Model, and it does something simple but powerful: it takes every tool you will ever encounter in your career and asks you to sort it into one of three categories — not based on what the tool deserves, but based on what you need. Once you’ve made the sort, the guilt evaporates, because the guilt was never about the tools. It was about the absence of a decision.

Here are the three tiers, and the research behind each one.

Continue reading The Permission to Not Know Everything

The Architecture of Trust

What thirty years of research on organizational trust has to say about why some virtual communities feel safe and others feel dangerous — and how to build the kind that lasts.


The thing nobody tells you about trust

Here’s a thing you’ve probably experienced but never had a vocabulary for. You walk into a new online community — a Discord server, a game guild, a forum, a virtual world — and within about thirty seconds, before anyone has said a word to you, you have already made a judgment about whether you trust this place. Not whether you like it. Whether you trust it. Whether you are willing to put a small piece of yourself on the table and see what happens.

You can’t quite name what triggered the judgment. Something about the tone of the welcome message. Something about how organized the channels look. Something about whether the moderator names are visible or hidden. Something about whether the recent conversations feel warm or performative. You’re scanning for signals, dozens of them, faster than you can consciously process, and the aggregate of those signals produces a feeling that sits somewhere between “I could belong here” and “I should leave.”

Continue reading The Architecture of Trust

The Four Pillars of a Mind

A scholarly look at why memory, personality, emotional intelligence, and motivation are the four things that make a character — or a person — feel real. And what cognitive science has to say about each of them.


The tavern keeper problem

Picture two tavern keepers. Both are characters in a game you’re playing, or in a novel you’re reading, or in an immersive world you’ve been invited to spend time in. Both pour you a drink, both take your coin, both say hello when you walk in.

The first one does nothing else. Every time you walk into the tavern, she gives you the same greeting. She doesn’t remember you. She doesn’t react to whether you saved her village last week or betrayed it. She has no opinions about the weather, no complaints about her back, no idea that the barrel of ale in the corner is cursed. She is, functionally, a vending machine for drinks wearing a person-shaped costume.

The second tavern keeper is also a character. Also pours drinks, also takes coin, also says hello. But she remembers that you helped her daughter recover from the fever six months ago, and her greeting is warmer because of it. She’s naturally cautious — when you ask about the cursed barrel, she weighs the question for a moment before answering, the way a cautious person would. She notices that you look tired tonight and pours you something a little stronger without being asked. And she wants something for herself, too, underneath all of this — she’s been saving up to buy out her brother-in-law’s share of the tavern, because she thinks she could run it better alone, and that ambition colors everything she does.

You know which tavern keeper is the memorable one. You also know which one is more expensive and time-consuming to build, whether you’re writing her as a novelist, scripting her as a game designer, or configuring her as an AI system. The question I want to walk through in this post is why. Why does the second one feel like a person and the first one doesn’t? What are the specific ingredients that have to be present for a character to cross the line from puppet into presence?

The answer, it turns out, is that there are exactly four of them. And they are not a designer’s preference. They correspond to four dimensions that cognitive scientists have been studying in humans for the last fifty years — four specific things the human mind uses to recognize another mind as being real. When you design a character who has all four, you’re not faking personhood. You are activating the parts of your audience’s brain that are already wired to respond to personhood, and those parts don’t care whether what’s in front of them is digital, printed, or physical.

I call these the Four Pillars. Let me walk you through each one, and the research that makes each of them load-bearing.

Continue reading The Four Pillars of a Mind

What a Buzz Actually Does To You

A short tour of the measurable effects haptic feedback has on the human nervous system — and one unintended consequence nobody planned.

By D W Denney (Professor DeeDubs)


A haptic buzz feels like nothing. A tiny tremor against your skin, barely worth noticing, gone in a fraction of a second. It is the smallest, cheapest kind of feedback a device can give you. And yet, under a scientist’s microscope, that tiny tremor turns out to be doing surprising work on the inside of you — work that reaches into your motor control, your perception of reality, and even your experience of pain. Let me show you three things the research has nailed down, and one thing it’s still figuring out.

Continue reading What a Buzz Actually Does To You

The Hybrid Strategy: How Power Users Actually Work with AI

Combining Platforms for Maximum Effectiveness


A quiet revolution is taking place in artificial intelligence — not in the technology itself, but in how the most sophisticated users are deploying it. While casual observers debate which AI platform reigns supreme, power users have moved beyond the binary choice. They’ve discovered something far more valuable: the strategic orchestration of multiple AI systems working in concert.

The Multi-Platform Paradigm Shift

The most effective AI users don’t choose one platform — they use several, strategically. This approach, which I call the “Hybrid Strategy,” is a real shift in how we think about AI assistance: rather than viewing these tools as competing products, experienced practitioners treat them as complementary instruments in a single toolkit.

Professor DeeDubs has observed this pattern firsthand in both academic and professional settings. “The users who extract the most value from AI aren’t the ones with the most expensive subscription,” DeeDubs notes. “They’re the ones who understand the unique strengths of each platform and know exactly when to deploy them.”

Continue reading The Hybrid Strategy: How Power Users Actually Work with AI

Why Claude Is My Favorite AI

A Multimedia Specialist’s Perspective on What Makes Anthropic’s Assistant Stand Apart

In a landscape crowded with AI assistants, each promising to revolutionize how we work, I’ve settled on Claude as my primary daily workspace. This isn’t a decision I made lightly. After extensive use across coding projects, research tasks, and technical documentation, Claude has consistently proven itself to be more than just another chatbot—it’s a genuinely useful instrument for getting real work done. Here’s why.

A Powerful Coding Tool

Let me start with what matters most to me professionally: coding. Claude isn’t just competent at writing code—it’s genuinely exceptional. Anthropic’s latest models have achieved industry-leading results on the SWE-bench Verified benchmark, which tests AI’s ability to solve real-world GitHub issues from popular open-source projects. We’re talking about an 80.9% success rate, surpassing other frontier models.

But benchmarks only tell part of the story. What I appreciate most is how Claude approaches code. It doesn’t just generate solutions—it understands context, follows existing patterns in your codebase, and produces clean, maintainable code.

Deep Knowledge, Accessible Delivery

Claude has broad and deep knowledge across domains—from technical documentation to complex research questions—and it presents that knowledge accessibly, without condescension. It functions like having access to a well-organized reference library combined with an expert consultant who can synthesize information on demand.

Continue reading Why Claude Is My Favorite AI

Cryptocurrency Visualization Tools For Educational Use

Cryptocurrency visualization tools provide an accessible entry point for understanding blockchain technology by breaking down complex concepts into interactive, visual experiences.

These educational platforms guide users through a natural progression: starting with cryptographic hashes that secure data, moving to individual blocks that contain transactions, then showing how blocks link together to form a blockchain. From there, learners can observe how these blockchains distribute across networks of nodes, and finally understand the coinbase transaction—where new cryptocurrency is created and mining rewards are issued.
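That same progression can be demonstrated in a few lines of code. This is a deliberately minimal sketch, not any particular tool’s implementation: a hash fingerprints data, each block records the previous block’s hash, and tampering with old data breaks every link after it.

```python
import hashlib

def sha256(data: str) -> str:
    """Fingerprint a string; change one character and the hash changes completely."""
    return hashlib.sha256(data.encode()).hexdigest()

def make_block(prev_hash: str, transactions: str) -> dict:
    """A block records its transactions plus the hash of the block before it."""
    return {
        "prev": prev_hash,
        "tx": transactions,
        "hash": sha256(prev_hash + transactions),
    }

# The first block's "coinbase" transaction is where new coins are created.
genesis = make_block("0" * 64, "coinbase: 50 coins to miner-A")
block2 = make_block(genesis["hash"], "miner-A pays B 5 coins")

# The chain is valid only while every link matches.
assert block2["prev"] == genesis["hash"]

# Rewrite the genesis transactions and the recomputed hash no longer matches
# what block2 recorded; the chain exposes the edit.
tampered_hash = sha256("0" * 64 + "coinbase: 50 coins to miner-Z")
print(tampered_hash == block2["prev"])
```

The final step in the progression, distribution across nodes, is what happens when many machines each hold a copy of such a chain and compare hashes to agree on the valid one.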

The following safety and accuracy ratings evaluate leading visualization tools that support this learning pathway, assessing their reliability and security for educational use.

Continue reading Cryptocurrency Visualization Tools For Educational Use

Solving America’s Daycare Crisis: A Proven Digital Solution Ready for Implementation

The Problem is Real, and So is the Solution

Across America, daycare centers are drowning in paperwork, losing critical records, and facing regulatory compliance nightmares. Parents struggle with outdated sign-in processes, administrators waste countless hours on manual data entry, and government oversight becomes nearly impossible with fragmented, paper-based systems. But what if I told you this problem has already been solved?

Continue reading Solving America’s Daycare Crisis: A Proven Digital Solution Ready for Implementation