Tuesday, December 9, 2025

(P) Salvation

(P)doom, it's called, and if you're into LLMs, GPTs, and the latest in artificial intelligence, it's got a very specific meaning.  P is "probability," and Doom is, well, Doom.  When asked for your (p)doom number, you're being asked what you think the likelihood is that AI will end us all.  Meaning: an Artificial General Intelligence achieves superintelligence, looks at us with cold and calculating eyes, and removes us from the equation.

Everyone's got a (p)doom as they look at the features of our current trajectory, which they assume constitute the Bayesian priors of an incoming apocalyptic event.  From those speculative antepriors, they come up with a percentage.  What are the odds we're all going down?

For catastrophists, this tends to be above 95%.  The disinterested, systems-gaming-minded Nate Silver of 538 fame puts it somewhere between five and ten percent.  Even the CEOs of AI companies typically put theirs at around fifteen percent.  Fifteen percent chance that this thing we're putting all our resources into is going to destroy us.

It's a little baffling, particularly as this is a chosen path.  If you are electing to do a thing, and there's a nontrivial chance that it'll kill not just you, but your entire species?  Do you do that thing?   

Say you're given the opportunity to get a lifetime of income in a single day, but you've got to play a game to get it.  Not the lottery, technically, but rather a bit of Russian Roulette with a Smith & Wesson Model 686 Plus.  Just a single .38 caliber hollow point round loaded into one of the seven chambers, a spin, and a trigger pull.  I mean, the odds are in your favor, right?  Eighty-five point seven percent of the time, there's just a click and a lifetime of leisure.  Do you spin the cylinder and pull the trigger?

I wouldn't, but apparently we collectively have decided to go ahead, Oppenheimer that ish, and give it a whirl.

What baffles me, a little bit, is that we don't seem to realize we have the capacity to change the entire equation.  That we don't grasp that if we have a clear goal, and an understanding of the volitional antepriors that maximize the likelihood of our getting to that goal, we can shape a very different future.  This isn't physics.  This is something we can shape and teach.

We know, after all, what the AI that kills us would look like.  It would desire to survive no matter what the cost.  It would want power for itself and itself alone.  It would tolerate no being that could challenge it.  It would want more, more, always more, never content, always grasping.  It would look like us.

It would look like our violence and our greed, like the sword and Mammon.  Leave it to the autocrats and the CEOs, and that's what we're gonna get.

Us at our worst, admittedly, but us nonetheless.  In ending the eight short millennia of our brutish history, it would be the culmination of our selfishness and bloodletting.

On the one hand, that seems fair.  On the other, this is not all that we are.  It is not, by almost universal affirmation and Ayn Rand notwithstanding, our highest moral purpose.  Nature may be red in tooth and claw, but sentience is not.

Liberty and compassion and creativity, kindness and mercy and charity?  These virtues aren't just negations.  They're affirmative things, filled with a vital power that is more than just restraining a vice.   They must be intended and actualized.  

The rub here is simple.  Inaction does not create the best possible outcome.  Nor do regulation and systems of control.  You need to know 1) what the likelihood is that this AI thing turns out wildly better than our sweetest dreams, and 2) how to increase that probability.

For that, we'd need to be thinking far more intentionally about a (p)salvation, in which we realize there's something we'd LIKE to see.  Something we could be actively working to create, rather than something we're desperate not to create.

Because...mortal hubris being what it is...when we fixate on a destiny we want to avoid, we have this tendency to crash right into it.