How the public’s feelings about artificial intelligence changed more in five years than in the previous fifty — and why the next five years are ours to shape.

Remember when AI was adorable?
I want you to go back in your head to about 2015. If you had an Amazon Echo in your kitchen, you probably thought of Alexa as a friendly little helper. You said “Alexa, what’s the weather” and she told you. You said “Alexa, play some jazz” and she did. When she misheard you — which was often — it was funny, not threatening. She was, in the cultural imagination of the mid-2010s, a charming household appliance. Something between a toaster and a butler. Nobody thought Alexa was going to take over the world.
Go back a little further. Think about the AI-driven non-player characters in the video games you played growing up. The shopkeepers in Skyrim. The squad-mates in Mass Effect. The Sims themselves, cheerfully going about their Sims lives. These were AI — real AI, by any reasonable definition — and we loved them. We gave them names. We role-played relationships with them. We were fond of them, in a way that would have seemed strange if somebody had pointed it out.
Go back further still. Clippy. The Microsoft Word paperclip, who popped up to ask if you were writing a letter. Siri in 2011, inviting you to sing to her so she could tell you how beautiful your voice was. Cortana. The Roomba, which people literally gave pet names to and mourned when it broke down. For decades, the public relationship with artificial intelligence was something like the public relationship with a mildly useful small dog. We found it cute. We found its failures endearing. We did not, in any serious way, find it threatening.
Now fast-forward to right now. Ask somebody at the grocery store how they feel about AI and you will get a very different answer. The word most people reach for, if they’re being honest, is scared. Scared of their job being taken. Scared of deepfakes. Scared of surveillance. Scared of a future in which something we built gets away from us and does harm on a scale nobody can undo. The Pew Research Center has been tracking this for years, and the numbers are not subtle. In 2022, 38% of Americans said they were more concerned than excited about AI. By 2023, that number jumped to 52%. By 2024, it was at 51% and holding. Over the same period, the share who said they were more excited than concerned dropped to 11%. Eleven percent. That’s a rounding error away from nobody.
Something changed, and it changed fast. I want to spend the rest of this post walking through what that something was, because once you see it clearly, you can also see something more important: the story isn’t over yet, and the next chapter isn’t written.

Act one: AI as servant
For most of the history of computing, AI was something that lived safely inside its box. The chess engine played chess. The spam filter filtered spam. The recommender system told you what to watch next on Netflix. Each system was narrow — it did one thing, and if you asked it to do something else, it failed instantly and obviously. A chess engine cannot write you a poem. A spam filter cannot plan your vacation. The limits of these systems were immediately visible to anyone who used them, which made them feel safe.
This is the era of what researchers call narrow AI, and the public relationship with narrow AI was mostly warm. A 2015 Monmouth University survey found that when you asked people about specific, narrow applications — better weather prediction, more accurate GPS, spam filtering, product recommendations — the responses were largely positive. People liked AI when they could tell what it was for and see where its abilities ended. The narrow AI of the 2000s and 2010s was like a very talented specialist in a small field. You trusted the chess engine to play chess. You did not worry about the chess engine applying to your job, because it obviously couldn’t.
There’s a principle here worth naming, because it’s going to do real work in act three. People are not actually afraid of intelligence. What they’re afraid of is unbounded intelligence — intelligence they cannot predict or contain. A chess engine at grandmaster level is extraordinarily intelligent in a narrow sense, and nobody is afraid of it. Because you know exactly what it’s going to do. It’s going to play chess.
So long as AI stayed visibly narrow, the public was fine with it. More than fine, actually — the public thought it was great. A 2014 Pew study found strong enthusiasm for AI applications in medical diagnostics, search engines, and translation. The same study found that when you asked about robot caregivers for the elderly, the enthusiasm collapsed — 65% thought it was a bad idea. The difference wasn’t raw capability. The difference was that people could picture what the narrow tools did, and they could not picture what a humanoid caregiver would do or fail to do. Narrow AI felt like a microwave. General AI felt like a stranger in your house.

Act two: AI as collaborator
And then, sometime around late 2022, something crossed a line.
The launch of ChatGPT in November of that year is usually cited as the inflection point, and the public opinion data agrees. The Pew numbers I cited earlier — the jump from 38% concerned to 52% concerned in a single year — mapped almost exactly onto the period when conversational AI went from a research curiosity to something anyone could use on their phone. For the first time in the history of the technology, millions of ordinary people sat down in front of an AI system and had an experience that felt less like using a microwave and more like talking with another mind. The AI answered questions. It wrote poems. It made jokes. It made mistakes that were indistinguishable from the mistakes a tired graduate student would make. The Turing test, which had been a philosophical thought experiment for seventy years, quietly became something these systems could plausibly pass, and hardly anyone marked the moment.
To understand what happened to public perception in this moment, I want to introduce a concept from robotics that was first described more than fifty years ago, in a small Japanese journal, by a man who could not possibly have known how much his idea would matter in 2026.
In 1970, a roboticist at the Tokyo Institute of Technology named Masahiro Mori published a short essay in a magazine called Energy. The essay was titled “Bukimi no Tani” — in English, “The Uncanny Valley.” Mori’s observation was simple. He noticed that as a robot or a prosthetic hand becomes more humanlike, people’s affection for it increases — but only up to a point. When the likeness gets too close to human, without quite getting there, the affection collapses into something much darker. A sense of revulsion. Of wrongness. Of a thing that is almost a person but not quite, and therefore somehow monstrous. Mori called that drop-off the uncanny valley, and fifty-plus years of subsequent research has largely confirmed that it’s a real psychological phenomenon. You can measure it. You can reproduce it. It shows up in robotics, in animated films (this is why early motion-capture movies like The Polar Express creeped people out), in video game characters, in AI-generated human faces.
Here’s the idea I want you to hold onto. The uncanny valley was originally described for physical appearance. But the underlying psychology — perceptual tension, produced by conflicting cues to category membership — applies to more than just how something looks. It applies to how something behaves. And this is what happened to modern AI. It crossed into the behavioral uncanny valley. The old Alexa was clearly not a person, and nobody was confused. But modern large language models write like us, reason like us, argue like us, make mistakes like us, apologize like us. They are close enough to the human category that the brain tries to file them as human and then gets jolted back when something doesn’t quite fit. The jolt is the uncanny-valley feeling. And it is very, very hard for our affection to survive it.
This is one reason the public relationship with AI curdled so fast. The technology did not just get more powerful. It got more humanlike, in a specific and important sense, and it crossed the perceptual line that separates “tool” from “thing pretending to be a person.” Once you’re on the wrong side of that line, the old warmth doesn’t come back easily. The chess engine was loved because it was obviously a machine. Modern AI is feared in part because it isn’t.

Act three: AI as rival
And that brings us to the current moment, which is the strangest and most uncomfortable phase of the story so far.
The best way to describe where public sentiment is right now is that AI has moved, in the cultural imagination, from servant to collaborator to rival. Not rival as in a competitor you respect. Rival as in a force that might replace you, displace you, or worse. The Pew surveys capture this shift in specific areas: 64% of Americans now predict fewer jobs in the next 20 years because of AI. 57% are highly concerned about AI leading to less connection between people. 55% want more personal control over the role AI plays in their lives. A 2021 Stevens/Morning Consult poll found that 67% of Americans worried about AI “becoming uncontrollable,” and 51% specifically agreed with the statement that “humans won’t be able to control AI.” That is not the cultural posture of a society that finds its technology charming.
There’s a second mechanism at work in this act, and it’s the one I think almost nobody is naming correctly. I mentioned earlier that people are not afraid of intelligence — they are afraid of unbounded intelligence, the kind whose limits they cannot see. What made narrow AI feel safe was that you could tell, at a glance, what it could and could not do. What makes modern AI feel unsafe is that you cannot. A large language model will write a poem, solve a physics problem, draft a legal contract, and hallucinate a completely false historical fact, all in the same conversation, with no visible change in confidence between the tasks it does well and the tasks it does badly. The system’s capabilities are general in a way that nothing previous has been, and the generality itself is the thing people are reacting to. When you can’t tell where a tool’s abilities end, you have to assume they go further than you know. And assuming that, the reasonable posture is caution, then fear.
There’s a stark finding in the most recent Pew data that illustrates this perfectly. The researchers surveyed AI experts and the general public in parallel and asked the same questions. The experts, who work with these systems every day and understand their limits intimately, are far more optimistic than the public. 56% of experts say AI will have a positive impact on the United States in the next 20 years; only 17% of the public agrees. 47% of experts are more excited than concerned; only 11% of the public is. The gap between the two groups is among the largest Pew has documented between experts and laypeople on a major issue. And the reason for the gap is not that the experts are naive. The reason is that the experts can see the edges of the system. They know what it can and can’t do. The public can’t see the edges, so they fill the gap with fear.
That’s where we are right now, in 2026. A technology that used to feel like a friendly appliance now feels like a stranger in the house whose intentions are unknowable. The shift happened in about five years. It was driven by a genuine capability jump, an uncanny-valley crossing in the behavioral sense, and a visibility gap that makes general systems feel like unbounded threats even when their actual capabilities are more limited than the fears suggest.

The part nobody talks about, but should
Here is where I want to turn, because I did not write this post to leave you in act three. Act three is uncomfortable and it is real, but it is not the ending. There is no ending yet. The story of AI in human civilization is being written, right now, by a relatively small number of people making decisions about what to build and how to build it. Some of those people work at major AI labs. Some of them work at regulatory agencies. Some of them are professors. Some of them are artists and teachers and small-business owners figuring out how to integrate these tools into their work thoughtfully. And some of them, I hope, are reading this post.
The thing the current moment cannot see clearly — because it is too close to the ground — is that the move from act two to act three is not a one-way street. Public sentiment about a technology is not a law of nature. It responds to the technology itself, to how the technology is used, to what kinds of stories get told about it, and to what kinds of choices the people closest to it make. There is a version of the next five years in which AI continues down the current path: more powerful, less bounded, deployed carelessly, dividing society further, and ending up as the thing a generation of people will blame for real harm. And there is a version of the next five years in which the people building the next wave of these systems choose differently. Choose to build tools whose abilities are visible rather than hidden. Choose to deploy them in ways that augment human work rather than replace it. Choose to tell the truth about what the systems can and cannot do, even when a softer story would sell more units. Choose to hold off on capabilities that would cross lines nobody has consent to cross.
I am not going to pretend I know which version happens. Nobody knows. What I know is that the version that happens is going to be determined by human choices, made by specific people in specific rooms, and some of those people are going to be you.
"The AI we end up with in 2035 is not a weather system that we passively experience. It is a sequence of decisions, made one at a time"
That’s not a cliché. It is the plain state of the situation. The AI we end up with in 2035 is not a weather system that we passively experience. It is a sequence of decisions, made one at a time, by programmers and product managers and policy staff and educators and users. Every one of those decisions is a hinge. Every one of them could go two ways. And the aggregate of all of them is what the next generation inherits.
Here is the specific thing I want the students reading this to understand, because it is the piece the panic coverage almost never includes. The window for shaping the outcome is right now. Not ten years from now, when the technology has hardened into whatever it ends up being. Not when you are “ready,” because nobody is ever ready for something this big. Right now, while the shape is still soft. While the companies are still figuring out their norms. While the regulators are still figuring out what to regulate. While the culture is still figuring out what it wants. A thoughtful person who shows up now — with real skills, real care, and the intention to build something that makes people’s lives better rather than worse — has a disproportionate effect on the outcome, because the field hasn’t calcified yet. The people who show up after the calcification will inherit what the people who showed up before them decided.
This is the same pattern every time a civilization-changing technology appears. It was true of electricity. It was true of the automobile. It was true of radio and television and the internet. The early years are chaotic and frightening and full of bad actors, and then gradually the adults in the room figure out how to shape the thing into something mostly good. The adults in the room for AI are still arriving. Some of the seats are still empty.

What I actually want to leave you with
The cute-little-helper era is over. It is not coming back. The AI of 2030 is not going to feel like Alexa in 2015, any more than the internet of 2010 felt like the internet of 1995. A thing that was once small and charming has become something else, and the public’s fear is not irrational — it is a response to a real transition that is genuinely hard to predict.
But the fear is also not the full story. The fear is a description of where we are standing, not a prediction of where we are going. Where we are going depends on what the people reading this, and the millions of people like them around the world, decide to build. Every meaningful technology in the last two hundred years has gone through a period of public panic and then a period of integration, and the shape of the integration was determined by the choices of the people who engaged with the technology during the panic. Not the ones who fled. Not the ones who cashed in. The ones who stayed and built carefully.
That work is sitting there, waiting to be done. The next chapter of this story needs authors who care about where it ends up. You don’t have to be a researcher at a major lab to be one of those authors. You can be a small-business owner who figures out how to use these tools ethically in your industry. You can be a teacher who helps the next generation think clearly about what these systems are and aren’t. You can be a policy person who helps write the rules. You can be a builder, a writer, a pastor, a parent, a student. The work is not reserved for the people with the biggest titles. It is reserved for the people who show up with the right intention and stay.
The fear is real. The risks are real. The opportunity to shape what happens next is also real, and it is yours, for however long you choose to reach for it.
Reach.

Sources and further reading
On the 2022-2024 Pew Research Center longitudinal data on public concern about AI: “How the U.S. Public and AI Experts View Artificial Intelligence,” Pew Research Center, April 2025. Also: “Growing public concern about the role of artificial intelligence in daily life,” Pew Research Center, August 2023. The 38% to 52% jump in public concern from 2022 to 2023 is documented in the latter.
On the stark gap between AI expert and public opinion: “How the U.S. Public and AI Experts View Artificial Intelligence,” Pew Research Center, April 2025. Based on a survey of 5,410 U.S. adults (Aug 12-18, 2024) and 1,013 AI experts (Aug 14-Oct 31, 2024).
On the historical baseline of public opinion toward AI: Monmouth University (2015), “The Good and Mostly Bad of Artificial Intelligence”; Pew Research Center (2014), on attitudes toward robotic caregivers. Also useful: the AI Impacts Wiki compilation of 45+ high-quality surveys on U.S. public opinion on AI, which provides a longitudinal overview across the last decade.
On global AI attitudes over time: Ipsos Global AI Monitor, 2022 and 2023 waves, as summarized in the Stanford HAI AI Index Report 2024, Chapter 9 (“Public Opinion”). The 13-percentage-point rise in nervousness from 2022 to 2023 is reported there.
On the uncanny valley: Mori, M. (1970), “Bukimi no Tani” (The Uncanny Valley), Energy, 7(4), 33-35. The authoritative English translation is Mori, M., translated by MacDorman, K. F., and Kageki, N. (2012), “The Uncanny Valley,” IEEE Robotics & Automation Magazine, 19(2), 98-100. The concept originated to describe physical appearance of robots; its extension to behavioral cues is broadly discussed in the post-2010 robotics and HCI literature, including work by Karl MacDorman at Indiana University.
On the psychological mechanisms proposed to explain the uncanny valley (perceptual tension, conflicting cues, predictive coding, mortality salience): the Wikipedia article “Uncanny valley” provides a reasonable overview with citations to the primary literature. For a scholarly entry point: MacDorman, K. F., and Ishiguro, H. (2006), “The uncanny advantage of using androids in cognitive and social science research,” Interaction Studies, 7(3), 297-337.
Note to readers who plan to cite this post downstream: verify the primary sources yourself before quoting. The survey data in particular shifts from year to year as new waves are published, and the most recent numbers may supersede the ones cited here by the time you read this.