FORGE: A Five-Step Method for Thinking in the Age of AI

Building on the SIFT method with a framework designed for a generation that uses AI to find the truth, not just to question it.

Suggested citation: Denney, D. W. (2026). FORGE: A five-step method for thinking in the age of AI. Realm Forge Academy Research. Published May 12, 2026. https://apps.dwdenney.com/forge-method

License and use: This research is published openly and freely. Journalists, researchers, educators, and students are welcome to read, reference, cite, and build upon this work with proper attribution using the citation above.


The method that got us here

In 2017, a digital literacy researcher at Washington State University named Mike Caulfield introduced a framework he called the Four Moves, later formalized as the SIFT method — Stop, Investigate the Source, Find Better Coverage, Trace Claims to their Origin (Caulfield, 2017; 2019). SIFT was designed as a fast, practical alternative to the older CRAAP test (a checklist-based evaluation method that had been the default in academic libraries for years). Where the CRAAP test asked students to evaluate a source in isolation — currency, relevance, authority, accuracy, purpose — SIFT asked them to leave the source and check what the rest of the web said about it. The technique Caulfield called “lateral reading” was borrowed directly from professional fact-checkers, and it worked.

SIFT became, and remains, one of the most widely adopted information literacy frameworks in higher education. It is taught in university libraries from Chicago to Carleton. It has a Creative Commons license, a companion course, and a substantial body of classroom adoption behind it. I want to be direct about this: SIFT is good work. It does what it was designed to do, and what it was designed to do is important. The emotional pause. The source investigation. The upstream tracing. These are genuine skills, and Caulfield deserves credit for distilling them into a framework simple enough to teach in a single class session.

But SIFT was designed for a specific information landscape — the landscape of 2017, where the primary threat was misinformation spreading through social media, and the primary question was “should I believe this?” The landscape has changed. The threats have changed. And the tools available to the person doing the evaluating have changed in ways that SIFT, through no fault of its own, does not address.

This post introduces FORGE — a five-step framework that builds on SIFT’s foundation and extends it for a generation that lives in a world SIFT wasn’t built for.

The broader landscape — and where FORGE fits in it

SIFT is not the only information literacy framework in the field. A 2025 cited-reference analysis published on ScienceDirect (Mnemonic Evaluative Frameworks in Scholarly Publications) identified sixteen mnemonic evaluation frameworks across 280 peer-reviewed articles. The CRAAP test (Blakeslee, 2004) remains the most widely cited. RADAR, CARS, ACT UP (Stahura, 2018), CCOW (Tardiff, 2022), and the 6QW method all occupy adjacent space. The DIG Method (Thompson, 2019) addresses the specific problem of evaluating digital images. Each of these frameworks makes a genuine contribution to information literacy instruction.

What all sixteen share, however, is a common function: they are filtering tools. They take a piece of information, run it through evaluative criteria, and produce a credibility verdict — trustworthy or not, reliable or not, credible or not. The differences between them are differences of criteria and emphasis, not of function. CRAAP evaluates the source in isolation. SIFT evaluates the source in context. RADAR reweights the criteria. ACT UP foregrounds power dynamics. But the output is the same: a judgment about whether to trust a source.

FORGE is not a competing entry in the source-evaluation category. It operates in different territory. Its first three steps (Feel, Open, Read) occupy the same evaluative ground as SIFT and its peers — and this post is transparent about that inheritance. Its fourth and fifth steps (Ground and Evaluate) move past evaluation into production — asking the student to externalize their thinking into a documented record and to form, articulate, and own a nuanced position rather than a binary verdict. No existing framework in the field includes a documentation step or a position-ownership step. Nor does any existing framework explicitly incorporate AI as a research instrument within the evaluation process itself. These are the gaps FORGE was designed to fill, and they are gaps created by changes in the information landscape that postdate the frameworks currently in use.

What changed since 2017

Three things changed, and each one creates a gap in SIFT’s coverage.

First, AI became a source of information. In 2017, the information a student encountered came from people — journalists, bloggers, academics, anonymous social media accounts. The source was always human, which meant SIFT’s “Investigate the Source” step could rest on a fundamentally human question: who is this person, and do they have a track record? In 2026, a student’s primary information source is often an AI system — a language model, a search summary, a chatbot. “Investigate the Source” doesn’t map cleanly onto a system that has no author, no institutional affiliation, and no track record in the traditional sense. The source is the model, and the model doesn’t have a LinkedIn page to check.

Second, AI became a research tool. In 2017, if you wanted to “Find Better Coverage” (SIFT’s third step), you Googled it. In 2026, a student can ask an AI assistant to search the web, find peer-reviewed papers, summarize coverage across multiple outlets, and identify the primary source — all in thirty seconds. This is a genuinely powerful capability that SIFT doesn’t account for, because the capability didn’t exist when SIFT was designed. A modern information literacy framework needs to teach students how to use AI as a research instrument, not just how to be skeptical of AI as a source.

Third, the question changed. SIFT is optimized for a binary question: should I believe this claim, or not? That’s the right question for a social media feed full of misinformation. But the claims students encounter in 2026 are often not binary. “An AI agent can take over traffic lights and turn them all green.” Is that true? Partially true? True in a lab but not in the real world? True of a specific system but not generalizable? Exaggerated by a journalist who didn’t read the paper? The answer is rarely a clean yes or no, and a framework that ends at “believe it or don’t” leaves the student without the tools to hold a nuanced position. The question in 2026 is not just “is this true?” It’s “what do I think about this, and can I defend my thinking?”

The FORGE Method

FORGE stands for Feel, Open, Read, Ground, Evaluate. It is designed to be taught alongside SIFT, not as a replacement for it. Everything SIFT does well, FORGE preserves. What FORGE adds are the parts that the current landscape demands and that SIFT was never designed to provide.

F — Feel the heat, but don’t touch it yet

You encounter a claim. Maybe it’s a headline: “New AI System Can Hack Traffic Lights in Any City.” Maybe it’s a breathless tweet. Maybe it’s a friend telling you something they heard. And you feel something — excitement, fear, outrage, vindication. Whatever the feeling is, it’s fast. It arrived before you finished reading the sentence. And it is pulling you toward an action: share this, argue about this, believe this, panic about this.

FORGE’s first step rests on the same insight Caulfield built SIFT around, and it is the most important step in any information literacy framework: notice the feeling and let it pass without acting on it. Not because feelings are bad. Feelings are data — they tell you that a claim has touched something that matters to you, which is useful information about yourself. But feelings are fast and research is slow, and the history of misinformation is largely the history of people acting on the fast thing before the slow thing had a chance to arrive.

The research on this is robust. Lewandowsky, Ecker, and Cook (2020) documented in Psychological Science in the Public Interest that emotional arousal is the single strongest predictor of whether a person will share misinformation. The feeling doesn’t have to be negative — excitement and hope drive sharing just as powerfully as outrage and fear. The mechanism is the same: the emotion creates urgency, and the urgency overrides evaluation.

Feel the heat. Name it if you can. And then don’t touch anything until you’ve done the next four steps.

O — Open your sources

Now you go looking. But not randomly, and not the way you would have in 2017.

SIFT’s second and third steps — Investigate the Source and Find Better Coverage — are both versions of the same instruction: leave the original source and see what else is out there. That instruction is still good. FORGE keeps it. But FORGE adds two things that SIFT doesn’t.

First, FORGE explicitly includes AI as a research tool. You can ask an AI assistant to search for coverage of the claim across multiple news outlets. You can ask it to find the peer-reviewed paper behind the headline, if one exists. You can ask it to identify who the researchers are and what institution they’re affiliated with. These are legitimate research moves, and a modern information literacy framework should teach students to make them — while also teaching students that AI search results are themselves subject to hallucination and error, and need to be verified against primary sources. AI is a research accelerator, not a truth oracle. The student who treats it as the former will be well-served. The student who treats it as the latter will be led astray.

Second, FORGE asks you to open multiple kinds of sources, not just more of the same kind. A student who reads three news articles about the traffic-light hack has found “better coverage” in SIFT’s terms. A student who reads three news articles and the original technical paper and a response from a traffic engineering expert has built a three-dimensional picture of the claim. The goal of this step is not just corroboration. It is triangulation — approaching the claim from multiple angles, using multiple source types, to build a picture that is richer than any single source can provide.

R — Read the research, not the reactions

This is the step where FORGE diverges most sharply from SIFT, and it is the step that reflects a specific pedagogical commitment: primary-source methodology.

SIFT’s fourth step — Trace Claims to their Origin — is a version of this principle. Caulfield rightly observes that claims mutate as they travel away from their origin, and that finding the original source is essential. FORGE agrees completely. But FORGE goes further and asks the student to prioritize sources by their proximity to the original claim, and to read the closest ones first.

The hierarchy is straightforward. A peer-reviewed paper published in a recognized journal is closer to the origin than a news article summarizing that paper. A news article written by a journalist who read the paper is closer than a tweet written by someone who read the headline of the news article. A tweet is closer than a Reddit comment about the tweet. Each step away from the origin introduces distortion — simplification, exaggeration, context loss, editorialization. The student’s job is to read upstream, toward the origin, and to be aware that every step downstream introduces noise.

This is not a new idea. It is the foundation of every rigorous intellectual tradition — legal scholarship reads statutes and case law, not op-eds about them; biblical scholarship reads the text, not books about the text; scientific practice reads the paper, not the press release. What FORGE does is make the principle explicit and teachable in the context of AI-era information literacy, where the distance between a primary source and its downstream mutations can be enormous and the mutations can happen in minutes rather than months.

G — Ground your findings

This is the step that no other information literacy framework includes, and it is the step I believe matters most for long-term intellectual development.

Write down what you found.

Not a formal paper. Not a report. A record. What was the original claim? What sources did you check? What did the primary source actually say versus what the headline said? Where do the sources agree and where do they disagree? What are you still unsure about?

The format doesn’t matter. It can be a note on your phone. A journal entry. A paragraph in a document you keep for this purpose. A conversation with a friend where you walk them through what you found. The medium is irrelevant. The act of externalizing your thinking into a durable form is the point.

The research on why this works comes from the metacognition and self-regulated learning literature. A comprehensive meta-analysis by the Education Endowment Foundation found that metacognitive strategies — specifically the practice of planning, monitoring, and evaluating one’s own learning — produce an average of seven additional months of learning progress per year (EEF, 2021). Writing is one of the most powerful metacognitive tools available, because it forces the writer to organize thoughts that would otherwise remain vague, to notice gaps in their understanding, and to commit to specific claims that can be revisited and revised later.

Grounding does three things that pure evaluation does not. It forces clarity — you cannot write down a clear summary of what you found if your understanding is still muddled. It creates a record — six months from now, when the same claim resurfaces in a new form, you don’t start from scratch; you have notes, sources, and a prior assessment you can update rather than rebuild. And it builds the habit of scholarship — the habit of not just consuming information but processing it into knowledge through the act of writing.

This is the step that transforms a student from an information consumer into an information thinker. SIFT produces a verdict. FORGE produces a document. The document is the evidence that the thinking happened, and over time, the accumulated documents become the student’s own research library — a personal record of claims encountered, sources consulted, and positions formed.

E — Evaluate and own your position

You’ve felt the heat without reacting. You’ve opened multiple kinds of sources. You’ve read the research, prioritizing primary sources over downstream reactions. You’ve grounded your findings in a written record. Now you decide what you think.

Not what CNN thinks. Not what the AI company’s press release wants you to think. Not what the loudest voice on Twitter thinks. Not what your AI assistant told you. What you think, based on the work you just did.

This step matters because it asks the student to take intellectual ownership of a position. It is not enough to evaluate a claim as true or false. The world is rarely that clean. A claim about AI traffic-light hacking might be technically true in a lab environment but practically meaningless in the real world. It might be real but exaggerated. It might be preliminary but promising. The student who has done the FORGE process has the information to form a nuanced position, and the documentation to defend it.

Owning a position also means being willing to update it. A position formed through FORGE is based on evidence, not emotion, which means new evidence can change it without threatening the student’s identity. “I looked into this six months ago and concluded X; here’s what I found at the time; here’s what’s changed since then; here’s my updated position.” That is the posture of a scholar, and it is accessible to any student who has been taught to document their thinking.

What FORGE preserves from SIFT

I want to be explicit about the relationship between these two frameworks, because intellectual honesty requires it.

FORGE’s first step (Feel) is SIFT’s first step (Stop), reframed but functionally identical. Caulfield’s insight that the emotional pause is the foundational move in information literacy is correct, and FORGE does not improve on it — it inherits it.

FORGE’s second step (Open) incorporates SIFT’s second step (Investigate the Source) and third step (Find Better Coverage), extending them to include AI-assisted research and multi-type source triangulation. The extension is additive, not corrective.

FORGE’s third step (Read) is an expansion of SIFT’s fourth step (Trace Claims to their Origin), formalized into an explicit source-proximity hierarchy with a named principle (primary-source methodology). The expansion builds on Caulfield’s insight rather than replacing it.

What FORGE adds that SIFT does not have

Three things.

The explicit inclusion of AI as a research tool (step two). SIFT was designed before AI assistants could search the web, summarize papers, and identify primary sources on demand. FORGE teaches students to use these capabilities while remaining aware of their limitations.

The documentation step (step four). SIFT ends at evaluation. FORGE asks the student to externalize their thinking into a durable written record, drawing on the metacognition and self-regulated learning literature to justify the practice. This step transforms information literacy from a consumption skill into a production skill.

The ownership of a nuanced position (step five). SIFT produces a binary verdict: credible or not credible. FORGE asks the student to form, articulate, and own a position that may be more complex than yes or no — and to maintain that position as a living document that can be updated as new information arrives.

Why the name matters

The name FORGE is deliberate. A forge is where raw material — in this case, raw information — is subjected to heat and pressure and shaped into something useful. The heat is the emotional reaction of step one. The pressure is the research of steps two and three. The shaping is the documentation and evaluation of steps four and five. The student who completes the FORGE process has not just consumed a claim. They have made something from it — a grounded, documented, defensible position that is their own.

The name also connects to Realm Forge Academy, where this framework was developed as part of a curriculum that trains students to be builders, producers, and leaders in emerging technology fields. The students FORGE was designed for are not passive consumers of AI hype. They are the next generation of people who will build AI systems, design virtual worlds, and shape the technological landscape. They need an information literacy framework that treats them accordingly — not as potential victims of misinformation, but as future builders who need to think clearly about the claims that surround their field.

What I want you to take with you

The next time a headline makes your heart rate spike — about AI, about technology, about anything that matters to you — try FORGE. Feel the heat. Open your sources, and use every tool available to you, including AI. Read the research, starting as close to the origin as you can get. Ground your findings by writing them down. Evaluate the claim and own your position.

You will not always get it right. Nobody does. But you will get it right more often than the person who reacted in the first thirty seconds, and you will have a record of your thinking that you can learn from, revise, and defend. Over time, those records accumulate into something powerful — not just a collection of notes, but a practice. A way of meeting the world that is careful without being fearful, curious without being gullible, and confident without being closed.

That practice is what this method is for. Go forge something.


Sources and further reading

On the SIFT method: Caulfield, M. (2017), Web Literacy for Student Fact-Checkers, Pressbooks (the original publication of the “Four Moves and a Habit” framework). Caulfield, M. (2019), “SIFT (The Four Moves),” published at https://hapgood.us/2019/06/19/sift-the-four-moves/ under CC BY 4.0 license. For the relationship between SIFT and the ACRL Framework for Information Literacy: Faix, A. I., and Fyn, A. F. (2023), “Six frames, four moves, one habit: Finding ACRL’s Framework within SIFT,” College & Research Libraries News.

On emotional arousal and misinformation sharing: Lewandowsky, S., Ecker, U. K. H., and Cook, J. (2020), “Misinformation and its correction: Cognitive mechanisms and recommendations for immunizing the public,” Psychological Science in the Public Interest, 19(1), 1-31.

On metacognition, self-regulated learning, and the learning gains from documentation practices: Education Endowment Foundation (2021), Metacognition and Self-Regulated Learning: Guidance Report, London. The meta-analytic finding of +7 months of additional learning progress from metacognitive strategies is from the EEF Teaching & Learning Toolkit, which synthesizes a large body of international research. Also relevant: Cromley, J., and Kunze, A. (2020), “Metacognition in education: Translational research,” Translational Issues in Psychological Science, 6(1), 15-20.

On primary-source methodology as a foundation for critical thinking: The principle of reading upstream to primary sources is foundational across legal scholarship, biblical scholarship, historical methodology, and scientific practice. For its application in information literacy contexts: the ACRL (Association of College and Research Libraries) Framework for Information Literacy for Higher Education (2015) includes the frame “Authority Is Constructed and Contextual,” which aligns with FORGE’s source-hierarchy approach.

On the broader landscape of mnemonic evaluative frameworks: The cited-reference analysis identifying sixteen frameworks across 280 peer-reviewed articles: “Mnemonic evaluative frameworks in scholarly publications: A cited reference analysis across disciplines and AI-mediated contexts,” ScienceDirect (2025). The CRAAP test: Blakeslee, S. (2004), “The CRAAP Test,” LOEX Quarterly, 31(3). The ACT UP framework: Stahura, D. (2018), “ACT UP for Evaluating Sources: Pushing against Privilege,” College & Research Libraries News, 79(10). The CCOW framework: Tardiff, A. B. (2022), “Have a CCOW: A CRAAP Alternative for the Internet Age,” Journal of Information Literacy, 16(1). The DIG Method for evaluating digital images: Thompson, C. (2019), published in Journal of Visual Literacy. For a critical assessment of evaluation frameworks generally: “Dismantling the Evaluation Framework,” In the Library with the Lead Pipe (2021).


About this publication

This post introduces original work: the FORGE method (Feel, Open, Read, Ground, Evaluate) as an information literacy framework for the AI era. The framework builds explicitly on Mike Caulfield’s SIFT method (2017/2019), which is credited throughout. The original contributions of this work are: (1) the explicit inclusion of AI as a research tool within the information literacy process, (2) the documentation step (“Ground”) as a metacognitive practice drawn from the self-regulated learning literature, and (3) the reframing of the evaluation outcome from a binary credibility verdict to an owned, nuanced, updateable position.

This work is part of the Realm Forge Academy Research series — scholarly work published openly on the author’s platform rather than behind a traditional journal paywall. Anyone can read it, cite it, and build on it, for free, immediately.

Donald W. Denney is an IT Developer, Data Manager, and educator at Skagit Valley College, and the founder of Realm Forge Academy.