Monday, May 6, 2024

When Our Science Fiction Becomes Our Reality


"At the front of the room, Chang had told the Waterbaby to watch his right hand, which now held a small green cube. Its head tracked the movement, glass eyes tracking in perfect time. Then, Chang wrapped his hand around the cube, hiding it from sight.

“What is in my left hand,” said Chang.

Pause. “There. Is. Nothing in. Your. Left hand.” Chang closed his left hand, and then held left and right hands together in front of him, the green cube neatly hidden.

“Follow the hand with the green cube,” Chang continued. He moved both hands in opposite circles. Waterbaby diligently tracked the hand with the cube.

“Like magic for really stupid people,” Jim snarked.

“Shut up,” Jo muttered back, with a tired smile.

From the table, Chang picked up a mirror, a flat thirty by forty centimeter rectangle, with a hard black plastic backing. He held it up to the crude face, in front of the glazed lenses.

“What do you see in my hand,” Chang asked.

Pause. Pause. “A. Rectangle.” A longer pause.

“And what do you see in the rectangle?”

Silence. Then, “I see. Nothing. In the. Rectangle.”

Jo shook her head. It was more right than it knew.

FROM THE WATER, p. 35

-----

The recently circulated video of Figure's new OpenAI-powered bot stirred a memory of that snippet yesterday, for obvious reasons.

I wrote that ten years ago, in the first of a trilogy of A.I. novels that never found a publisher.  In FROM THE WATER, I explored two ideas.  First, that AGI (Artificial General Intelligence) would only arrive once we moved beyond language models and into A.I. systems that could connect their semiotics to the material world.  Meaning, simply, that words had meaning.

When we think the word "water," for example, it doesn't simply inhabit a web of linguistic interrelation.  It is "wet," and we know what that means because we can touch it, and taste it, and see it.  We can hear it dripping and splashing and flowing.  

In order to achieve sentience, or so I argued on the basis of my then-current reading of early-2000s A.I. theorists, a system must be able to perceive itself.  Sentience requires the capacity for self-awareness: not simulated, not virtual, but actual.

Second, such a neural network wouldn't be physical.  It wouldn't be a matter of interlaced hardware and chipsets, but a software construct.  In FROM THE WATER, I'd envisioned a virtual network in which a complex neural structure was simulated.  But as it turns out, you don't need that.  The complex and probabilistic interconnections within language itself can be pressed into service for that purpose.  They're already neural.
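(A minimal sketch of that last idea, purely my own illustration and nothing from the novel: even crude co-occurrence counts place every word in a web of relations to every other word, and a similarity structure falls out of usage patterns alone.  An LLM does this at vastly greater scale and subtlety, but the principle is the same.)

from collections import Counter, defaultdict
from math import sqrt

# A toy corpus; real systems learn from billions of sentences.
corpus = [
    "water is wet and cold",
    "rain is wet water falling",
    "the sun is hot and dry",
    "sand is dry and hot",
]

# Count how often each word appears alongside every other word in a sentence.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for other in words:
            if other != w:
                cooc[w][other] += 1

def cosine(a, b):
    """Cosine similarity between two sparse co-occurrence vectors."""
    shared = set(cooc[a]) & set(cooc[b])
    dot = sum(cooc[a][w] * cooc[b][w] for w in shared)
    norm = lambda v: sqrt(sum(c * c for c in v.values()))
    return dot / (norm(cooc[a]) * norm(cooc[b]))

# "water" lands closer to "wet" than to "dry", purely from patterns of use.
print(cosine("water", "wet"), cosine("water", "dry"))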

The advances in A.I. we're seeing right now have met the terms and conditions of the science fiction of the recent past. 

We're at functional Turing compliance with our LLMs.  We're starting to see those constrained intelligences connect to the real world.  There's no reason to believe we're not on the edge of an epochal shift, one brought to us by the same earnestly blindered quants who were convinced that the internet would bring about world peace, and that smartphones were a great idea.

It's peculiar watching the fiction you've written become reality.