FORGE: A Five-Step Method for Thinking in the Age of AI

Building on the SIFT method with a framework designed for a generation that uses AI to find the truth, not just to question it.

Suggested citation: Denney, D. W. (2026). FORGE: A five-step method for thinking in the age of AI. Realm Forge Academy Research. Published May 12, 2026. https://apps.dwdenney.com/forge-method

License and use: This research is published openly and freely. Journalists, researchers, educators, and students are welcome to read, reference, cite, and build upon this work with proper attribution using the citation above.


The method that got us here

In 2017, a digital literacy researcher at Washington State University named Mike Caulfield introduced a framework he called the Four Moves, later formalized as the SIFT method — Stop, Investigate the Source, Find Better Coverage, Trace Claims, Quotes, and Media to the Original Context (Caulfield, 2017; 2019). SIFT was designed as a fast, practical alternative to the older CRAAP test (a checklist-based evaluation method that had been the default in academic libraries for years). Where the CRAAP test asked students to evaluate a source in isolation — currency, relevance, authority, accuracy, purpose — SIFT asked them to leave the source and check what the rest of the web said about it. The technique Caulfield called “lateral reading” was borrowed directly from professional fact-checkers, and it worked.

SIFT became, and remains, one of the most widely adopted information literacy frameworks in higher education. It is taught in university libraries from Chicago to Carleton. It has a Creative Commons license, a companion course, and a substantial body of classroom adoption behind it. I want to be direct about this: SIFT is good work. It does what it was designed to do, and what it was designed to do is important. The emotional pause. The source investigation. The upstream tracing. These are genuine skills, and Caulfield deserves credit for distilling them into a framework simple enough to teach in a single class session.

But SIFT was designed for a specific information landscape — the landscape of 2017, where the primary threat was misinformation spreading through social media, and the primary question was “should I believe this?” The landscape has changed. The threats have changed. And the tools available to the person doing the evaluating have changed in ways that SIFT, through no fault of its own, does not address.

This post introduces FORGE — a five-step framework that builds on SIFT’s foundation and extends it for a generation that lives in a world SIFT wasn’t built for.

Continue reading FORGE: A Five-Step Method for Thinking in the Age of AI

Why GPT-3 Sat for Two Years Before the World Noticed

On the four ingredients that made 2022 the AI moment, the interface nobody talks about, and a way of thinking about technological change that you can use for the rest of your career.


A model nobody cared about

In June of 2020, OpenAI released GPT-3. It was, at the time, the largest language model ever built — 175 billion parameters, trained on a corpus filtered down from roughly 45 terabytes of raw text, capable of writing essays, answering questions, generating code, and producing prose that was, to many readers, indistinguishable from human writing. The technical press covered it with a mix of awe and anxiety. Researchers called it a breakthrough. Sam Altman, OpenAI’s CEO, publicly warned people not to overhype it.

And then, for about two and a half years, almost nobody outside of the developer and AI research communities used it.

GPT-3 was available only through an API — an application programming interface, which meant you had to write code to interact with the model. If you were a developer, you could build applications on top of it. If you were a researcher, you could run experiments with it. If you were a normal person who wanted to ask it a question, you couldn’t. There was no place to type. There was no chat window. There was no “talk to GPT-3” button anywhere on the internet. The most powerful language model in the world was sitting behind a developer console, waiting for someone to build a front door.
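For concreteness, here is roughly what “asking GPT-3 a question” looked like in 2020, sketched with the openai Python package of that era. The prompt and parameter values are illustrative, and the key is a placeholder; the point is that every step assumed you were a programmer.

```python
import openai  # pip install openai (the 2020-era library)

# Access required an approved API key from OpenAI's developer program.
openai.api_key = "sk-..."  # placeholder

# The original API was a raw text-completion endpoint, not a chat.
response = openai.Completion.create(
    engine="davinci",            # the largest GPT-3 model
    prompt="Explain why the sky is blue.",
    max_tokens=100,
    temperature=0.7,
)

print(response.choices[0].text)
```

No chat window, no conversation history, no front door: just a script, a key, and a completion.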

On November 30, 2022, OpenAI built the front door. They called it ChatGPT. Within five days, it had a million users. Within two months, it had a hundred million — making it the fastest-growing consumer application in the history of the internet. The technology that had been sitting quietly for two and a half years became, overnight, the most talked-about product on Earth.

Here is the question I want to spend this post answering, because the answer teaches you something that goes far beyond AI: why did that particular tool, in that particular moment, work?

The short answer is that November 30, 2022 wasn’t a single breakthrough. It was a confluence — four ingredients arriving at the same table, finally in the right amounts, at the right time. And none of them, alone, would have been enough.

Continue reading Why GPT-3 Sat for Two Years Before the World Noticed

The AI That Saved $25 Million a Year and Couldn’t Save the Company That Built It

The story of XCON, the first commercially successful expert system — and what its triumph and its company’s collapse can teach every builder about the difference between solving a problem and leading an organization.


A company drowning in its own success

In 1978, Digital Equipment Corporation had a problem that was, in a strange way, the best kind of problem to have. They were selling too many computers and couldn’t keep up.

DEC — the second-largest computer company in the world, behind only IBM — built the VAX, a family of powerful minicomputers that businesses could customize to their specific needs. The selling point was the customization: each VAX system was configured from thousands of individual components — processors, memory modules, disk drives, controllers, cables, cabinets, power supplies — assembled into a unique combination tailored to what the customer ordered.

The problem was that configuring these systems required deep technical expertise, and even the experts got it wrong. A lot. If a customer ordered a disk drive, someone had to make sure the order also included the right disk controller, the right cables, the right power supply for the additional load, and the right cabinet space to house it all. A single VAX system could involve thousands of separate components, and the relationships between them were complex, interdependent, and poorly documented. Human configurators were getting orders wrong somewhere between 30 and 40 percent of the time. Wrong components shipped. Incompatible parts arrived at the customer site. Systems that should have worked didn’t. The manual configuration process was taking ten to fifteen weeks per order. DEC was hemorrhaging money on returns, rework, and angry customers — and the more systems they sold, the worse the problem got.
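To get a feel for the knowledge a human configurator had to juggle, here is a toy sketch of the kind of dependency rule XCON would eventually encode. The real system was written in the OPS5 rule language and grew to thousands of rules; the Python below, with simplified part names and invented dependencies, only illustrates the shape of the problem.

```python
# Toy illustration: each component drags a chain of requirements behind it.
# Part names echo DEC-era hardware categories, but these rules are invented.
REQUIRES = {
    "disk_drive": ["disk_controller", "drive_cable", "cabinet_slot"],
    "disk_controller": ["bus_slot", "power_supply_margin"],
}

def missing_parts(order):
    """Return (part, requirement) pairs the order forgot to include."""
    return [
        (part, dep)
        for part in order
        for dep in REQUIRES.get(part, [])
        if dep not in order
    ]

order = ["disk_drive", "disk_controller", "drive_cable"]
for part, dep in missing_parts(order):
    print(f"{part} requires {dep}, which is missing from the order")
```

Multiply that by thousands of parts and interlocking rules, and the 30-to-40-percent error rate stops being surprising.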

Into this mess walked a researcher from Carnegie Mellon University named John McDermott.

Continue reading The AI That Saved $25 Million a Year and Couldn’t Save the Company That Built It

Five Thousand Years to Get Here

A brief history of every tool humanity ever built to teach its children — and the one that finally broke the pattern.

By D.W. Denney


Every tool on the same curve

I want to tell you a story that covers five thousand years and fits on the back of a napkin. It’s the story of every educational technology humanity has ever invented, and the punchline is that until very recently, they were all doing the same thing.

Here’s the napkin version. Somebody knows something. They need to get it into somebody else’s head. Every tool we’ve ever built for that purpose — every single one, across all of recorded history — has been a more efficient way to do one of four things: store information, distribute information, drill information into memory, or assess whether the information stuck. That’s it. Four functions. Five millennia. One curve.

Let me walk you through the timeline, and watch how the technology changes while the function doesn’t.

Oral tradition. Before writing, knowledge lived in the mouths of elders and was transferred by speech. The teacher spoke. The student listened, repeated, and memorized. If the elder died before the transfer was complete, the knowledge died with them. The storage medium was the human brain. The distribution method was the human voice. The range was the distance sound carries across a campfire. This worked, and it worked for a long time, and the stories and songs and genealogies that survived this era are a testament to how powerful the human memory can be when it has no other option. But the system was fragile. One forgotten line, one dead elder, one scattered tribe, and the knowledge was gone.

Continue reading Five Thousand Years to Get Here

The Permission to Not Know Everything

On the science of expertise, the art of knowing enough, and why the smartest move a producer can make is to choose what not to learn.


The guilt you’re carrying right now

You’re sitting in front of your computer, and somewhere in one of your open tabs there is a tutorial you should probably watch. Maybe it’s Blender. Maybe it’s Unity. Maybe it’s some new AI framework that just dropped last week and already has six thousand Twitter threads about why you’re behind if you haven’t tried it yet. You tell yourself you’ll get to it tonight. You tell yourself that every day. The list doesn’t get shorter. It gets longer. And underneath the list there’s a feeling you might not have named, but I bet you recognize it: I should know more than I do. Everyone else seems to know more. If I were serious about this, I’d have already learned that tool. What’s wrong with me?

Nothing is wrong with you. What’s wrong is the assumption underneath the guilt — the assumption that a serious professional should be working toward mastery of every tool in their field. That assumption is not just impractical. It is, according to a Nobel Prize-winning economist, mathematically impossible, and the research on expertise says it’s not even desirable.

I want to give you a framework that replaces the guilt with a decision. It’s called the Three-Tier Tool Fluency Model, and it does something simple but powerful: it asks you to sort every tool you will ever encounter in your career into one of three categories — not based on what the tool deserves, but based on what you need. Once you’ve made the sort, the guilt evaporates, because the guilt was never about the tools. It was about the absence of a decision.

Here are the three tiers, and the research behind each one.

Continue reading The Permission to Not Know Everything

The Architecture of Trust

What thirty years of research on organizational trust has to say about why some virtual communities feel safe and others feel dangerous — and how to build the kind that lasts.


The thing nobody tells you about trust

Here’s a thing you’ve probably experienced but never had a vocabulary for. You walk into a new online community — a Discord server, a game guild, a forum, a virtual world — and within about thirty seconds, before anyone has said a word to you, you have already made a judgment about whether you trust this place. Not whether you like it. Whether you trust it. Whether you are willing to put a small piece of yourself on the table and see what happens.

You can’t quite name what triggered the judgment. Something about the tone of the welcome message. Something about how organized the channels look. Something about whether the moderator names are visible or hidden. Something about whether the recent conversations feel warm or performative. You’re scanning for signals, dozens of them, faster than you can consciously process, and the aggregate of those signals produces a feeling that sits somewhere between “I could belong here” and “I should leave.”

Continue reading The Architecture of Trust

Beyond the Bullet Point List

How Cognitive Science and Neurodiversity Research Should Reshape the Way We Teach Complex Ideas


Open almost any online course, corporate training module, or educational slide deck in 2026 and you will find the same default gesture: dense content broken into bullet points. The bullet is the visual idiom of modern learning design. It signals clarity. It promises ease. For many of us, it is the first formatting move we make when a paragraph starts to feel “too long.”

Yet decades of cognitive science suggest that this default is often wrong — not slightly wrong, but consequentially wrong for the kinds of learning we say we care about most. The bullet is excellent at one thing (quick reference) and poor at something else entirely (building durable understanding of connected ideas). When we confuse these two goals, we produce materials that feel educational while failing to educate.

This article makes the case, from the research literature, for a more careful approach to formatting complex material — one that treats format not as decoration but as a cognitive variable that directly shapes what learners take away. We will look at what working memory can and cannot do, why prose and bullets operate on different cognitive systems, and what research on neurodivergent learners reveals about a common but mistaken assumption: that fragmenting information is always an act of accessibility. The truth, as is so often the case, is more interesting than the folk wisdom.

Continue reading Beyond the Bullet Point List

The Healing in the Headset

What the research actually says about virtual communities and mental health — and why the therapeutic power of virtual belonging turns out to be more real than most people expected.


A thing you already know but might not have words for

If you’ve ever spent real time in a virtual community — not just passing through, but actually living there, building things, forming relationships, coming back night after night to the same group of people — you already know something that the clinical research is only now catching up to. You know that the connections you formed in that space were real. You know that the support you received there mattered. You know that the person who stayed up until 2 AM talking you through a bad night wasn’t less of a friend because you’d never shaken their hand.

You also know that if you said any of this out loud to certain people, they’d look at you like you were describing an addiction. “You should get off the computer and make real friends,” they’d say. “Those aren’t real relationships.” And maybe you nodded, because the cultural script says they’re right, even though something inside you knew they were wrong.

The research says you were right and the script was wrong. Not in every case, not without nuance, and not without some genuine risks that are worth being honest about — but in ways that are documented, measured, and increasingly well-understood. Virtual communities are producing real therapeutic outcomes for real people, in populations that desperately need them. I want to walk you through four of the documented areas, because if you’re going to build virtual worlds, you need to understand that the spaces you create may end up being, for some of your users, the most important support system in their lives.

That’s a weight worth carrying carefully.

Continue reading The Healing in the Headset

Why Your Virtual Village Feels Like Home

The science of why people grieve when their Minecraft house burns down, trade favors with strangers they’ve never met in person, and develop inside jokes about things that never happened in the real world.


A house that isn’t there

Let me tell you about something that happens all the time and that almost nobody takes seriously. Somebody builds a house in a video game. A digital structure, made of digital blocks, sitting on a digital plot of land that exists only as data on a server somewhere. They spend hours on it — maybe weeks. They choose the materials carefully. They place the windows where the light comes in right. They build a little garden out back, because the garden makes it feel complete. The house is not real. It cannot be lived in. It has no value on any market that deals in physical objects.

And when somebody griefs it — when some other player comes along and burns it down or blows it up for laughs — the person who built it feels a surge of anger and loss that is, by any honest measure, real. Not metaphorical. Not exaggerated. The feeling is genuinely comparable, in both quality and intensity, to the feeling of having something physical vandalized. They feel violated. They feel robbed. Some of them log off and don’t come back.

Every experienced gamer knows this. Most people outside of gaming dismiss it. But there is a growing body of research in psychology, neuroscience, and behavioral economics that says the gamers are right and the dismissers are wrong — and that the feelings people develop about virtual places, virtual objects, and virtual communities are not pale imitations of “real” feelings. They are the same feelings, running on the same psychological machinery, triggered by the same mechanisms. The virtual village feels like home because your brain is using the same hardware to process it that it uses to process your actual home.

I want to walk you through four pieces of that research, because they map almost perfectly onto four dynamics that make virtual communities work. And if you’re somebody who designs virtual worlds for a living — or wants to — understanding these dynamics is not optional. It is the difference between building a world people visit and building a world people belong to.

Continue reading Why Your Virtual Village Feels Like Home

The Four Pillars of a Mind

A scholarly look at why memory, personality, emotional intelligence, and motivation are the four things that make a character — or a person — feel real. And what cognitive science has to say about each of them.


The tavern keeper problem

Picture two tavern keepers. Both are characters in a game you’re playing, or in a novel you’re reading, or in an immersive world you’ve been invited to spend time in. Both pour you a drink, both take your coin, both say hello when you walk in.

The first one does nothing else. Every time you walk into the tavern, she gives you the same greeting. She doesn’t remember you. She doesn’t react to whether you saved her village last week or betrayed it. She has no opinions about the weather, no complaints about her back, no idea that the barrel of ale in the corner is cursed. She is, functionally, a vending machine for drinks wearing a person-shaped costume.

The second tavern keeper is also a character. Also pours drinks, also takes coin, also says hello. But she remembers that you helped her daughter recover from the fever six months ago, and her greeting is warmer because of it. She’s naturally cautious — when you ask about the cursed barrel, she weighs the question for a moment before answering, the way a cautious person would. She notices that you look tired tonight and pours you something a little stronger without being asked. And she wants something for herself, too, underneath all of this — she’s been saving up to buy out her brother-in-law’s share of the tavern, because she thinks she could run it better alone, and that ambition colors everything she does.

You know which tavern keeper is the memorable one. You also know which one is more expensive and time-consuming to build, whether you’re writing her as a novelist, scripting her as a game designer, or configuring her as an AI system. The question I want to walk through in this post is why. Why does the second one feel like a person and the first one doesn’t? What are the specific ingredients that have to be present for a character to cross the line from puppet into presence?

The answer, it turns out, is that there are exactly four of them. And they are not arbitrary design preferences. They correspond to four dimensions that cognitive scientists have been studying in humans for the last fifty years — four specific things the human mind uses to recognize another mind as real. When you design a character who has all four, you’re not faking personhood. You are activating the parts of your audience’s brain that are already wired to respond to personhood, and those parts don’t care whether what’s in front of them is digital, printed, or physical.

I call these the Four Pillars. Let me walk you through each one, and the research that makes each of them load-bearing.
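For builders who want to see the pillars as an artifact rather than a list, here is a minimal sketch of a character record with all four. The field names, types, and greeting logic are illustrative inventions, not a specification from the research; each field simply maps to one pillar.

```python
from dataclasses import dataclass, field

@dataclass
class CharacterMind:
    # Pillar 1 - memory: episodic history with this particular player.
    memories: list[str] = field(default_factory=list)
    # Pillar 2 - personality: stable traits that bias every response.
    traits: dict[str, float] = field(default_factory=dict)
    # Pillar 3 - emotional intelligence: a read on the other person's state.
    perceived_mood: str = "neutral"
    # Pillar 4 - motivation: a goal of her own that colors everything she does.
    goal: str = ""

    def greet(self, player: str) -> str:
        warmth = "warmly" if any(player in m for m in self.memories) else "politely"
        note = " and pours something stronger" if self.perceived_mood == "tired" else ""
        return f"She greets {player} {warmth}{note}."

keeper = CharacterMind(
    memories=["Anna helped my daughter through the fever"],
    traits={"caution": 0.8},
    perceived_mood="tired",
    goal="buy out my brother-in-law's share of the tavern",
)
print(keeper.greet("Anna"))  # She greets Anna warmly and pours something stronger.
```

The sketch is static, of course; the hard engineering is in keeping all four fields updated as the world changes around the character. But even this toy makes the point: remove any one field and the tavern keeper snaps back toward the vending machine.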

Continue reading The Four Pillars of a Mind