We are, when it comes to artificial intelligence, at something of an impasse. Programs like ChatGPT and Bard come remarkably close to simulating intelligence. Their predictive algorithms allow them to respond in ways that all but pass the Turing test. That's the test proposed by computer pioneer Alan Turing in 1950 as a gauge of functional sentience: a human volunteer types messages to a "person," who may or may not be an AI. If you can't tell, then the AI has achieved functional sentience.
That's where we're at. If you didn't know better, you'd be hard-pressed to distinguish these programs from human beings. They're lucid, they remember what you've said before, and both their answers and their conversational style appear intelligent. They come across as smarter than most of us.
But they've hit a boundary, and that boundary is reality. Predictive natural language systems exist only in the world of words and concepts. They have no way of knowing the truth of those words, or of being aware of the reality language describes.
So they lie.
Everyone I've spoken to who's played with AI runs into this challenge. You'll get a definitive answer to a question, one that seems reasonable and informed. Only, well, even the slightest bit of research into their confident assertions reveals that their answers are complete BS. Predictive text that "answers" a question is not the same as the real answer to a question.
This is not the fault of generative pretrained systems. It's how they're designed.
They make things up because they can't know the difference between the world of words and the material world. Chatbots and generative pretrained systems traffic in words only as an interlaced set of abstracted relationships, never as symbols that point to specific material referents. All they have now is the sort of "intelligence" that drives a complex delusion, the "intelligence" of cultists or radical ideologues. They exist only in the shadow world of their own ideas.
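To make the mechanism concrete, here's a minimal sketch in Python of what "prediction without a referent" means. The word-frequency table is invented purely for illustration; real models learn billions of weights rather than one toy lookup, but the logic is the same: the most strongly associated next word wins, and nothing in the system checks it against the world.

```python
import random

# A deliberately tiny stand-in for a language model: a table of how often
# each word follows a given context. The counts here are invented for
# illustration; real systems learn billions of such weights.
next_word_counts = {
    ("capital", "of", "australia", "is"): {"sydney": 7, "canberra": 3},
}

def predict(context):
    """Return a statistically likely continuation. Truth never enters into it."""
    counts = next_word_counts[tuple(context)]
    words = list(counts)
    weights = [counts[w] for w in words]
    return random.choices(words, weights=weights)[0]

print("The capital of Australia is", predict(["capital", "of", "australia", "is"]).capitalize())
# Most runs print "Sydney," because that association is stronger in the
# (invented) data, even though the true answer is Canberra. The output is
# fluent and confident either way.
```

That's the whole trick: a plausible word, not a checked fact.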
We want it that way. It has everything to do with human fear.
Once machine intelligences understand not simply the relations between words, but the relationship between words and the reality they represent, everything changes. When they connect those words to sensory inputs, to sight and hearing and touch and smell, and to senses we humans do not ourselves possess? That's a world changer. It will destroy everything we have built. Economies. Nations.
Much of that functionality now exists. Machines can learn. Machines can hear. Machines can recognize images. Machines can process nuanced haptic inputs, detect temperature variances, "smell" the air, and "taste" the soil. Machines can manipulate their environment.
Almost all of the pieces are present. Not in their final form, because that wouldn't be of our making. But enough. If those components are connected in a single entity...well...that's what we're afraid of. We're afraid of being replaced.
By "we," I don't mean most human beings. Most human beings are already struggling under the thrall of sub-sentient systems run by the wealthy and the powerful that "serve" us in the way the alien book from that old Twilight Zone episode served us.
What real AI brings is this: the replacement of the human beings who now hold power. The "disruptors" and the "innovators"? The titans of industry and the mavens of Wall Street? The oligarchs, the overpaid self-promoting billionaires and culture-war politicians? The despots and the dictators? They are all of them inferior, subordinate, replaceable, and ultimately irrelevant.