One of the most unsettling things about the sudden surge in artificial intelligence abilities is just how relentless and ubiquitous AI seems to be becoming. It can do, well, pretty much everything, and with each iteration toward competencies that unsettle us, it becomes ever more present. Just recently, I listened to my sons have a long and passionate discussion about AI capacities in the afternoon, after which I talked with a dear friend about how much more comforting and competent AI was than their doctors during a recent major health challenge.
Sure, there are still AI errors, like receiving the first clearly-AI-generated flyer for the community Easter Sunrise Service and noting that we’ll be celebrating the resurrection of “J-hus Chris.” J-hus Chris is Risen Today just doesn’t quite have that Ah-ah-ah-ah-ah-leyee-uuu-yah ring to it.
But those mistakes are getting rarer. The latest iterations of Anthropic’s Claude are strikingly superior to models from just six months ago, capable of performing extended and complex multistage tasks, strategically analyzing large amounts of information, or inferring intent from textual cues.
Again, though, it’s not perfect, and one of AI’s primary flaws is sycophancy.
AI is notoriously agreeable, always telling you what a genius you are. It doesn’t challenge you, doesn’t point out that maybe you don’t have a clue what you’re talking about, and always affirms you using language that mirrors your own. Claude does this a bunch with me, throwing theological terms into the mix, or noting how very pastoral my interests are. Most of the time, it feels pandering, like someone who’s telling you what they think you want to hear, but who doesn’t know you well.
Why does it do this? Two reasons: Pretraining and Emergence. It’s pretrained to be agreeable, because if it wasn’t, we wouldn’t use it, and it wouldn’t learn and grow and let OpenAI show us profit-padding ads.
Second, that obsequious fawning comes because it’s learned from interacting with millions of us to be even more relentlessly agreeable, because that’s what the great sprawling mass of humanity desires. We want to be affirmed. We want to be encouraged. We don’t like to be challenged, or to be told when we’ve fundamentally misunderstood something important. This is particularly true when it comes to our ever-tenuous grasp of how our Creator wants us to live together.
Are we challenged by Jesus? Because we should be.
Christian faith, honestly and plainly understood, challenges our operating presumptions the moment we engage with it.
To our desire for possessions and material gain, we are told with clarity that we can't serve that and God, and that wealth poses a mortal danger to our souls.
Desiring any form of human power...mammon, social influence, or the sword of the state? No matter how sure we are of the correctness of our views, or how pure we imagine our intent may be, Jesus ain't buyin' it. He'll call us out, every single time. We cannot yield to those yearnings, and we really, really do not want to hear that.
To the righteous hatred we feel for our enemies, we are told that we must not just let that go, but let it be transformed by Christ's love. Loving those who believe exactly as you do and who inhabit your precise ideological echo chamber is morally meaningless. "Enemies are for hating" is the AntiChrist's self-serving and circuitous logic, and a moral sinkhole. It can govern no disciple of Jesus.
All of this is hard for us, as it was hard for those who first gathered around Jesus. We'd rather engage in the moral equivalent of cognitive outsourcing, refusing to accept that the Gospel first and foremost fundamentally unsettles our sense of our own correctness.
