September 18, 2025
“The Future’s Not Written”
A Q&A with Dr. Shannon Vallor, the 2025 Gotto Lecturer

PROFESSOR SHANNON VALLOR serves as Director of the Centre for Technomoral Futures in the Edinburgh Futures Institute. She holds the Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the University of Edinburgh. Professor Vallor joined the Futures Institute in 2020 following a career in the United States as a leader in the ethics of emerging technologies, including a post as a visiting AI Ethicist at Google from 2018 to 2020.
Professor Vallor recently took the time to answer a few questions on what inspires her research and what she sees ahead.
With your lecture’s title being “The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking,” it seems that you will draw from your latest book, which examines the perils and potentials of artificial intelligence. What drew you to this field?
As a philosopher who focuses on the ethics of emerging technologies, I’ve been working in the field of ethics of AI and robotics since 2009, more than a decade before AI’s most recent commercial boom. But my experiences working with industry AI developers from 2018 to 2020 made me realize that the next wave of AI tools—what we now know as large language models, foundation models, generative AI chatbots, and the like—would pose huge new challenges for humanity. I wrote the book to shine a light on these dangers, but also to demystify what today’s AI is, and help readers see through the myths and hype about what kind of threat it poses.
Because the real dangers of AI aren’t the ones the media has often focused on. AI isn’t going to rise up and take over the world, or exterminate us. Even the most powerful AI models we have today are just mirrors made of math, generating useful but deeply imperfect reflections of our most common thought patterns. A reflection of human minds isn’t thinking, any more than the mirror image of a flame is burning! But AI mirrors can create powerful, almost irresistible illusions of real thought and feeling, illusions that can be exploited by other humans who want to manipulate and control us. That is the real threat.
Today, many people are already losing their own vital capabilities to think, act and judge freely, because they are being seduced by the AI mirror illusion, and the AI hype feeding it, into trusting machines to do their thinking and choosing for them. Ask yourself, who benefits from you handing over your power to think and act to a tool that Google, OpenAI, Meta or your government can control? It isn’t you, that’s for sure. My book is a call to resist that surrender, to reclaim our agency and to rebuild our faith in human potential.

Your reference pool is wide, from Plato, Aristotle, and Ovid to contemporary science fiction such as Martha Wells’s The Murderbot Diaries. How did engaging with classical and modern sources help you consider artificial intelligence, which has been a persistent fear but only recently a dawning reality?
Culture, through the arts, sciences and literature, powerfully influences how we interpret our reality. Consider that we’ve been conditioned by a century of science fiction to expect AI to arrive in the form of a mind: a sentient, conscious machine that we can personally relate to. Even the best and newest fictional depictions of AI, like the absolutely wonderful character of Murderbot, still encourage us to treat AI as another being with its own unique mind and desires. This is partly why the AI mirror illusion has captured so many of us, so quickly.
But we’ve also been falsely led by many scientists and philosophers to think of technologies as value-free objects, separate from politics, separate from the vital moral and spiritual questions of who we are and who we want to be. So we really weren’t culturally well-prepared for this moment, for AI to arrive in the form it actually took: a mindless tool that nevertheless strikes at the heart of our own agency, purpose and identity.
But my book also finds a lot of rich cultural lessons—from ancient and popular sources alike—that we can draw on to better understand and respond to our present situation, whether that’s Ovid’s depiction of Narcissus’s capture and destruction by a mirror illusion, or the computer scientist Joseph Weizenbaum warning us fifty years ago against allowing computers to displace human reason and freedom. And I draw on characters like Murderbot to reveal the contrast between a machine that could truly care, feel for and with us, and make its own choices, and a mirror designed to merely reflect whatever we want to see, or what others want us to see.
You write of “a theology of AI where we give birth to machine gods made in our diminished image…. It’s this theology which promises to engrave the instrumental values of efficiency, optimization, disposability, speed, and consistency into our institutions and persons once and for all.” This is a striking and, to be honest, horrifying observation. Every day we hear more about AI’s oncoming ubiquity. What are some ways that a person or community can resist this dehumanizing theology and the diminished image it reflects?
One of my favorite philosophers of technology, Albert Borgmann, wrote in the 1980s and 90s about how we can be moved to reclaim things of value precisely at the moment when technology threatens to make them vanish. He uses the example of the love of cooking as a social art, and how the microwave actually caused a resurgence of that art—one that is still in full swing today—as many people began to find it dull and soulless to just press a button for a meal. Or we could think about the value of clean air, which went largely unappreciated until we could no longer see blue skies in London or Los Angeles. Borgmann also talked about how technologies can even become ways to reclaim and support the things we value, just as many cooks today treasure their knives and other tools, or the way that we’ve invented countless new environmental sensors, filters and converters to keep our air cleaner.
I think there is hope for a similar cultural and spiritual movement, even stronger and more widespread, to reassert the value and irreplaceability of human agency, creativity, insight and imagination. I think people will start to feel how flat and meaningless life is when these are stripped away, and start demanding technologies and structures that enable rather than diminish and replace these treasures. There’s already a massive resistance to AI growing among artists, writers, teachers and others who are feeling the loss of these things first. But many more will follow, as they watch loved ones begin to withdraw into AI ‘friendships’ or as their formerly interesting jobs become just cleaning up after AI mistakes.
The question is whether this new movement will be strong enough, and grow quickly enough, to counter the powerful forces in industry and government that now see AI as the way to exert unprecedented control over what people know, believe, want, and can do. Aligned with them are the media and other voices telling us that the AI future we’re being sold, the one where humans are in the back seat, is the only future that we can have. But the future’s not written. I take heart from the many times in history that humans have broken free of the chains that others would use to bind us. Today the chains are made of silicon rather than iron, but we are just as capable of throwing them off, if we can exercise our moral and political will to be free together.