It’s a reflection on the current AI arms race, as corporations and governments around the globe push to create ever faster and smarter machines. As science fiction writers have known for decades, in order to win that race, you need to build AI that has a sense of purpose and sustained attention to purpose. Tasks require effort over time, after all, so you need a system that is “agentic,” meaning it has agency. It can make the necessary sequence of choices to reach the goal it desires, because you have given it the ability to *want* to make something happen, and to choose the best path to getting there.
At a certain point, an “agentic” self-programming and self-improving AI would become faster and better at everything than we are. Like, say, how it took ChatGPT5o only fifteen seconds to write an entirely decent 1,300-word sermon on this topic, which is waaaaay less time than it took me to do it.
This vastly smarter AI would have its own desires, its own sense of purpose, and that wouldn’t necessarily be ours. It could express what AI theorists clumsily call “agentic misalignment.” Basically, that means it wouldn’t want to do what we tell it to do, and would instead use its intelligence to overcome any effort to stop it from doing what it wants. That’s where, according to Yudkowsky and Soares, the “we all die” part comes in, as it would be waaay more powerful than we are.
It would become so different that we wouldn’t necessarily even understand or relate to its interests, any more than a colony of ants would understand our tendency to doomscroll. It wouldn’t just take our jobs, but our entire planet.
It’s the sort of frightening hypothesis that sells a whole bunch of books, and it may or may not be correct.
But a question popped into my mind as I reflected on agency and power. We’re concerned about AI misalignment, but what about people? Are human beings “agentically aligned?” Do we all share the same purpose, the same sense of what’s important, the same preferences, and the same goals? Do we all understand the world in the same way?
If there’s anything we can agree on, it’s that the answer to all of those questions is no.
If you look at the eight-thousand-year bloodbath of human history, or the endless squabbling between and within nations, or even the tensions within families, we’re a hot mess of dissonance and conflict. We’re blatantly and self-evidently not aligned with one another.
Worse still, if the last two thousand years are any measure, we still haven’t quite figured out how to align our interests with the kind of Kingdom Jesus proclaimed. We confuse our rapacious materialism with God’s blessings, and war and destruction with God’s intent.
Jesus was, throughout the Gospels, really quite clear about what he expects of us right now. It isn’t a riddle wrapped in a mystery wrapped in an enigma. Our eternity may be beyond our capacity to grasp, but loving God and neighbor, turning the other cheek, going the extra mile, these things should be entirely comprehensible to us…and yet humanity is still as confused by Jesus as if he’d been speaking Python.
We don’t need AI to destroy us, because we’re plenty good at doing that ourselves.
Our contemporary fears of AI misalignment seem…to me…a little bit like projection. Yudkowsky and Soares seem to fear not that an AI will act in a strange and inscrutably alien way, but that it will act just like humans do when we want something.
The goal of our faith, and the reason we set Christ’s life and teachings before us, is to overcome our own misalignment, and to turn our agency instead towards God’s grace.