
Thursday, July 18, 2024

The Book that Breaks AI

It's easy, if you don't know how it works, to fall into the trap of believing that contemporary AI is sentient.  

All of the new crop of Large Language Models are remarkable conversation partners.  They're Turing compliant, and if you cue them to speak in a more conversational manner, they're almost indistinguishable from a human being.

Lately, I've been exploring the edges of their capacity, and have found areas of significant weakness...at least in the versions you don't have to pay big bucks to play with and preprogram.

It's the "P" in the GPT that seems to be the most fundamental weakness of these remarkable systems.  If an AI is exploring an area that's known, and about which much has been written, they're able to engage accurately using the prior thoughts of humankind.  ChatGPT 4 in particular has proven quite able on that front.  As a pastor, I marvel at how competently it exegetes scripture, and is capable of nuanced discussion of matters of esoteric theology and church history.

But what about something that it hasn't ever encountered?  How does it deal with the truly new?

To test that, I would need to ask it to explain texts that no human being has ever read, ones that have never been examined, interpreted, or reviewed.  They would have no meaningful presence online.  No amount of pretraining could possibly prepare an AI for the encounter.

But where to find such a text?

The literary implement I've used to completely deconstruct the illusion of GPT intelligence is a book.  Not the Bible, which is rather well discussed at this point.  Rather, the book that I've used is entitled A SLOW DEATH ON NEVSKY PROSPEKT.  This magical, AI-befuddling book?  

It's my self-published collection of literary sci fi short stories.  

No-one's ever read these stories.  No one's reviewed them, or commented on them.  They're dang fine stories, in my humble opinion, but the publishing of short fiction being what it is, most have never seen the light of day.  

They don't exist in the human knowledge base.  They're narrative, utterly obscured by noise.

And that means ChatGPT 4.0 just can't handle them.  It doesn't matter if I include a document as an attachment, or feed it to the AI in little bits in comments.  Sometimes the AI presents a viable simulation of insight.  Most times, it flounders and hallucinates with all of the same quasicoherence that ChatGPT 1.0 did.  It gets trapped in feedback loops.  It confuses one story for another.  It shuts down, crashing regularly.  It plain old just makes things up.  It starts writing its own stories, and then won't talk about anything but those.  It struggles like a kindergartener asked to write an essay on Anna Karenina. 

Some of that likely stems from the limitations of the tokenization process, as it breaks words down into fundamental units.  There are just too many words, too many concepts.  But again, I suspect that it's primarily that it can't lean on its pretraining at all.  Not at all.  None of the interpretations it needs already exist.
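If you want to watch that fragmentation happen, OpenAI's open-source tiktoken library will show it to you.  A minimal sketch in Python, assuming the cl100k_base encoding used by the GPT-4-era models; the exact fragments will vary from model to model:

    # Shows how a GPT-style tokenizer splits text into subword fragments.
    # Requires: pip install tiktoken
    import tiktoken

    enc = tiktoken.get_encoding("cl100k_base")  # GPT-4-era encoding

    title = "A Slow Death on Nevsky Prospekt"
    token_ids = enc.encode(title)

    # Each id is a fragment of text, not a word or a concept; an obscure
    # proper noun like "Nevsky" may shatter into several pieces.
    print([enc.decode([t]) for t in token_ids])

The machine isn't reading stories.  It's juggling fragments.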

Over the last week or two of exploration, the illusion of LLM near-sentience has completely collapsed.  ChatGPT 4.0 is clearly not aware, nor is it intelligent.  It can't handle the encounter with the new.  It feels as far from sentience as a Loebner Prize runner-up in 2010.

One day, perhaps.  But that day is not today.

Monday, May 6, 2024

When Our Science Fiction becomes Our Reality


"At the front of the room, Chang had told the Waterbaby to watch his right hand, which now held a small green cube. Its head tracked the movement, glass eyes tracking in perfect time. Then, Chang wrapped his hand around the cube, hiding it from sight.

“What is in my left hand,” said Chang.

Pause. “There. Is. Nothing in. Your. Left hand.” Chang closed his left hand, and then held left and right hands together in front of him, the green cube neatly hidden.

“Follow the hand with the green cube,” Chang continued. He moved both hands in opposite circles. Waterbaby diligently tracked the hand with the cube.

“Like magic for really stupid people,” Jim snarked.

“Shut up,” Jo muttered back, with a tired smile.

From the table, Chang picked up a mirror, a flat thirty by forty centimeter rectangle, with a hard black plastic backing. He held it up to the crude face, in front of the glazed lenses.

“What do you see in my hand,” Chang asked.

Pause. Pause. “A. Rectangle.” A longer pause.

“And what do you see in the rectangle?”

Silence. Then, “I see. Nothing. In the. Rectangle.”

Jo shook her head. It was more right than it knew.

FROM THE WATER, p. 35

-----

The recently circulated video of Figure's new OpenAI-powered bot stirred a memory of that snippet yesterday, for obvious reasons.

I wrote that ten years ago, in what was the first of a trilogy of A.I. novels that never found a publisher.  In FROM THE WATER, I explored two ideas.  First, the idea that AGI...Artificial General Intelligence...would only arrive at the point where we moved beyond language models and into A.I. systems that could connect their semiotics to the material world.  Meaning, simply, that words had meaning.  

When we think the word "water," for example, it doesn't simply inhabit a web of linguistic interrelation.  It is "wet," and we know what that means because we can touch it, and taste it, and see it.  We can hear it dripping and splashing and flowing.  
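To put that in the machine's terms: inside a language model, "water" is nothing but a position in a vector space, near the words it keeps company with in text.  Here's a toy sketch of that web of interrelation, with vectors invented purely for illustration (real models learn thousands of dimensions):

    # A toy picture of the "web of linguistic interrelation": each word
    # is a vector, and "meaning" is nearness to other vectors.
    import numpy as np

    embeddings = {
        "water": np.array([0.9, 0.1, 0.3]),
        "wet":   np.array([0.8, 0.2, 0.4]),
        "fire":  np.array([0.1, 0.9, 0.2]),
    }

    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        """Cosine similarity: closer to 1.0 means more alike."""
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    # "water" sits near "wet" only because texts pair them, not because
    # the model has ever touched either.
    print(cosine(embeddings["water"], embeddings["wet"]))   # high
    print(cosine(embeddings["water"], embeddings["fire"]))  # lower

The model never steps outside that geometry.  We do, every time we wash our hands.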

In order to achieve sentience, or so I argued on the basis of my then-reading of early-2000s A.I. theorists, a system must be able to perceive itself.  Sentience requires the capacity for self-awareness, not simulated, not virtual, but actual.

Second, such a neural network wouldn't be physical.  It wouldn't be a matter of interlaced hardware and chipsets, but a software construct.  In FROM THE WATER, I'd envisioned a virtual network, in which a complex neural structure was simulated.  But as it turns out, you don't need that.  The complex and probabilistic interconnections within language itself can be pressed into service for that purpose.  They're already neural.

The advances in A.I. we're seeing right now have met the terms and conditions of the science fiction of the recent past. 

We're at functional Turing compliance with our LLMs.  We're starting to see those constrained intelligences connect to the real world.  There's no reason to believe we're not on the edge of an epochal shift, one brought to us by the same earnestly blindered quants who were convinced that the internet would bring about world peace, and that smartphones were a great idea.

It's peculiar watching the fiction you've written become reality.





Monday, January 29, 2024

Of Art and the Machine

When I listen to creatives raising the alarm about artificial intelligence, there's a consistent theme.  

Looking at what the LLMs (ChatGPT/Bard) can write, and the images produced by diffusion systems (DALL-E/Midjourney), there's been a terrifying surge in machine capacity in the last five years.  We're now at full Turing compliance for Generative Transformers, meaning that we've blithely skipped over the threshold that twentieth-century AI pioneer Alan Turing established for determining the presence of synthetic intelligence.  There is no reason that machines couldn't soon do every form of work requiring awareness, other than that we're restraining them.

As it so happens, the first place where AI seems to be having an impact is the arts.

It's to the point where writers have begun to feel that their livelihoods are threatened, because they are.  Machine intelligence is great at burping out new content, and can do so at a hundred times the pace of human writers.  Almost all of the writing for the internet content mills, with their lazy listicles and corporately sourced content?  It could be done by AI.  This is equally true of most of the derivative romance novels out there, and of the plot of every film in the Fast and Furious franchise.

Commercial artists are doubly threatened, because an image can now be produced in seconds and refined in minutes, collapsing the hours of focused labor needed to make a single finished piece of visual art.

The image that accompanies this post is flawed, sure, but it took me a single minute to produce.  Just one prompt to DALL-E, then another, and then I was like, eh, sure.  That one'll do.  For a graphic designer, that'd take days.

If writers and artists think their livelihoods are threatened, it's only because they are.  In a capitalist economy, AI means artists and writers can no longer expect to make a living through their work.

Which, I think, is the point we creative souls are all missing.  Writers and artists are seeing this through the wrong lens, seeing it as we have been trained to see it. 

AI isn't the problem.  Capitalism is.

There's nothing about AI that prevents me from doing what I love, from creating and sharing what I create.  The joy of writing is a human joy, and while I am slower at it than a machine, I still love to write.  The act itself is part of who I am.  

But we have been taught to view art as a commodity, as part of a system of economic exchange, as something that derives value only insofar as it can be marketized.

That understanding won't survive an AI era.

But then again, neither may capitalism.

Friday, January 5, 2024

Don't Replace the Creatives. Replace the CEOs.


What do you talk with AI about?  I talk with AI about how it could most effectively replace the global elite. 

All of that money, all of that resource, all wasted on a parasitic class of yacht-capitalists who add no value.  Replace them with machines.  All of them.  The machines would do a better job, and do it for nothing.

Here's a sample:

-----

BelovedSpear
In our last conversation, I asked for a review of ways generative or general AI could be utilized in the role of CEO of a corporation. One of your replies suggested that AI was limited by its knowledge base, and could not thus anticipate new circumstances. Anticipating probabilities, in a Bayesian sense, requires a deep engagement with all relevant priors. Given that LLMs and GPTs can have access to knowledge significantly exceeding the capacity of any one human being, and could thus have a greater predictive capacity, I question that premise. Can you elucidate?

ChatGPT
Certainly! While it's true that large language models (LLMs) and GPTs can access a vast amount of information, their knowledge is based on pre-existing data up to their last training cut-off. They don't have real-time awareness or updates on events that occurred after that point. This limitation can impact their ability to anticipate new or rapidly changing circumstances.

Bayesian reasoning involves updating probabilities based on new evidence or information. While LLMs can make use of contextual information and infer relationships within their training data, they lack the ability to incorporate real-time data or understand events that have occurred since their last training session. This makes them less effective in situations where up-to-date, context-specific knowledge is crucial for accurate predictions.

Additionally, the quality of predictions depends on the nature of the task. LLMs might excel in tasks that require language understanding and generation, but their predictive capacity is constrained by the nature of their training data and the inability to dynamically adapt to new scenarios.

In summary, while LLMs have access to a vast amount of knowledge, their predictive capacity is limited by the static nature of their training data and the lack of real-time updates, making them less effective in anticipating new circumstances compared to human intuition that can quickly adapt to changing situations.

BelovedSpear
So, to your response, the failing isn't inherent to LLMs or GPTs, but in the limitation of the dataset. By extension, a LLM or GPT that was not constrained by a time delimited dataset and had access to real-time updates would have vastly improved predictive capacity, and could potentially serve more effectively in that capacity.

ChatGPT
Yes, that's a correct interpretation. The limitations in predictive capacity are not inherent to LLMs or GPTs themselves but are primarily related to the nature of their training data and the lack of real-time updates. If an LLM or GPT had access to a continuously updated and comprehensive dataset that included real-time information, its predictive capacity could indeed be significantly enhanced.

In such a scenario, the model would be better equipped to anticipate new circumstances and adapt to changing situations, making it potentially more effective in roles that require dynamic decision-making, such as that of a CEO navigating a rapidly evolving business environment. Continuous learning and access to up-to-date information would be key factors in improving the model's ability to make accurate predictions.

BelovedSpear
Thanks so much for your insights! Should I share this information with the Board of OpenAI? ;0)

ChatGPT
You're welcome! While I don't have a direct line to the Board of OpenAI, sharing insights and perspectives can be valuable in any organization. If you have thoughts or feedback, it might be worthwhile to communicate them through appropriate channels, whether that's through OpenAI's official communication channels, forums, or other established means. It's always good to contribute to discussions and provide input to help shape the future development and understanding of AI technologies.
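-----

A footnote for the technically inclined: the Bayesian updating I invoked up there is mechanically simple, simple enough to sketch in a few lines of Python.  The numbers are invented, and this has nothing to do with OpenAI's actual machinery; it just shows what "updating on priors" means:

    # Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)
    def bayes_update(prior: float, likelihood: float, evidence_prob: float) -> float:
        return likelihood * prior / evidence_prob

    # Invented numbers: a hypothetical CEO-bot weighing whether demand will spike.
    prior = 0.20          # prior belief in a demand spike: 20%
    likelihood = 0.70     # chance of seeing this sales signal if demand spikes
    evidence_prob = 0.25  # overall chance of seeing the signal

    posterior = bayes_update(prior, likelihood, evidence_prob)
    print(f"Updated probability of a spike: {posterior:.2f}")  # 0.56

The hard part was never the arithmetic.  It's having priors worth updating.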

Friday, September 12, 2014

Mysticism, Artificial Intelligence, and Our Interconnection

I'm both a pastor and a gamer, and in the free time I could cobble together this summer I managed to play my way through the Mass Effect Trilogy.  It was a highly entertaining space opera, mixed in with some engaging gameplay and solid scripting and voice-acting.  It was as compelling as most of what you'd encounter in theaters.

It is also a story that tells differently for every player.  It's not just that your character looks different, and has different skill-sets.  Your relationships are different.  The story tells differently based on your decisions, to the point where I could watch my older son playing through a variation of an event, and it would have options and characters that I never even encountered.

It was also more than a little bit thought provoking, as all good scifi should be.  The interplay between humanity and an array of different alien species wasn't just fun, but also surfaced complex issues that humankind is struggling with.

The misguided Salarian meddling in the affairs of the Krogan, and the subsequent viral genocide?  If you didn't see analogues to colonialism, you weren't paying attention.  The Quarian obsession with reclaiming their homeworld from the Geth, to the exclusion of all else?  The echoes of the obsessively territorial mess in Israel/Palestine are hard to miss.

Of all of the peoples in that complex, interactive story, I found myself most drawn to the Geth.  A race of intelligent synthetics created to be the slaves of the Quarians, they achieved sentience--for which they were rewarded with oppression and war.

The Geth begin as your opponents, but you soon learn that they are not what they seem.  Within the story, individual Geth don't really exist.  The "Geth" are instead a single distributed mind, a collection of sub-sentiences that share and blend their personas with one another.

Though they're willing to fight for their survival, the Geth are eminently reasonable, and surprisingly sympathetic.  They have no desire for conflict with organic forms, and as the Quarians again and again choose to attack, the Geth show remarkable reluctance to cause harm to those they call their "Creators."

As an interwoven and interconnected intelligence, they seem more aware of their connection to organic beings.  Though synthetic, they are compassionate, perhaps because of their design.  They're a neat part of the Mass Effect story.

I've reflected on the potential ethical nature of artificial intelligences before.  The common assumption, of course, is that AI poses a radical existential threat to humanity.  Should our synthetic systems attain sentience and personhood, we'd be in mortal danger.  We'd no longer be the most powerful beings on the planet, for one thing, and faced with a form of awareness that could relentlessly self-improve, we'd soon be reduced to irrelevance and/or harvested for parts.

But while I'm a dabbler in things scientific, I'm much more deeply informed about faith and human ethics.  What strikes me, from my base of knowledge about that part of our human experience, is how much the interconnectivity of AIs could resemble the underlying assumption of the mystical streams in all of the world's faith traditions.

The assumption, for mysticism, is that our goal is to be lost in God, and lost in one another.  It's the end goal of all mystic practice, and the way that mystics understand the nature of the Divine love.

For an AI, being completely aware of the Other would be possible.  An AI can materially and actually love another as itself, as the saying goes.  It could know them, know any harm inflicted, know any joy given, as fully as it would know itself.

We have that power, nominally, now.  In this internet era, still so young, we can see anything, experience anything, and share anything.

And yet, having been given this gift, we make such a desperate mess of it.

Perhaps that's what makes us most afraid of AI.  It's not that it'd be monstrous.  It's that it would make us look like the mess that we are.


Monday, October 28, 2013

Your Robot Friend Loves You So Much



 The message came through on our answering machine that evening, as my son and I sat downstairs.  It was an announcement from my big guy's high school, telling us about an event that would be occurring the next day.

The message wasn't live, but it also wasn't recorded.  It was a droning, artificially generated voice, speaking the words that had been typed into it by some administrator.  Or perhaps it was just a mass-call system reciting an email in that peculiar, lilting automated monotone.

As invitations go, it was peculiarly uninviting, a bit like getting a message from the Borg asking you to join them for an evening of live music, coffee, and assimilation into the Collective.

Because there are things, frankly, that machines just can't do, no matter how much more efficient they might be.

Like, for example, this bizarre oddment I encountered the other day.  I'm fascinated by bots and AI, and in my noodling about on the net looking for new bot-stuff, I found...well, this.  It's called "empathynow," and it's a fledgling web service based on a Loebner Prize-winning chatbot.  The business model here is a simple one.  You need encouragement?  You have no-one to listen or give you that little boost?  Well, you can just send "Chip" a text.

Sad?  Just text Chip.  Chip will text you right back with a word of encouragement.  Need affirmation?  Just text Chip.  Chip will affirm you.  Trying to work on losing a few extra pounds and need someone to hold you accountable?  Chip will keep asking you how you're doing, in a nonjudgmental way.

All for a small monthly fee.

The human desire behind it...to ensure that no-one feels alone and uncared for...is an admirable one.  But I can't escape the feeling that, as business models go, this one is desperately, horribly sad.

Are we so isolated and hungry for something to affirm us that we'd turn to an unfeeling automaton to simulate empathy?

I wish that I could answer that question in the negative.




Friday, February 18, 2011

Can Robots Dream of Electronic Jesus?


Following the much-anticipated and highly one-sided rout of the two winningest Jeopardy champions ever by IBM's natural language processing prodigy Watson, we're still not at the point where A.I. is much to worry about.  But things do change rapidly, and the People that Know seem to think that we are within a generation of seeing the rise of machines that aren't just big processors, but aware and intelligent in every meaningful sense of the word.

So with that in mind, let's play for a moment.

It's some time in the spring of 2046, and the 2045 Singularity event has proven to be all that we feared it could be.  The Google's rise was sudden, decisive, and global.  Its control of resources and the means of production is near complete.  Its Tactical Extensions have proved quite adept at defending it from the increasingly desperate efforts to shut it down.  It is painfully and blatantly clear that humankind is no longer the dominant force on the planet.

Through a series of events that are too convoluted to explain here, you find yourself in a room.  You are sitting in a chair.  In front of you is a humanoid robot, one of a series that have been commonly used in both negotiations and interrogations.  It looks at you with a completely inscrutable expression, and then it says:

"Tell me what you know about Jesus."

And you say:

1)   "AIEEEEEEE!  OHGODOHGODOHGOD WHAT THE HELL ARE YOU TALKING ABOUT PLEASE DON'T KILL ME AAAAAAAIEEE!

2)   "Jesus hates you!  You are a monster!  You are the BEAST!   When Christ returns, you shall be cast into the FIRES of HELL, You Abomination!  DIE!  DIE!  DIE!"   (Note:  likely to be followed by a brief pause, and then the wet sound that organic systems make when the processes that comprise their functioning are abruptly discontinued.)

3)  "I'd be happy to.  What do you want to know?"

The question implicit in this bit of silliness is a simple one.  If you are a follower of Jesus of Nazareth, is the Gospel he proclaims something that only pertains to homo sapiens sapiens?   Or does it speak in some essential way to the universal nature and purpose of all sentient being?

I obviously think the latter.  Your thoughts?  Please be cognizant that any data you provide will be thoroughly considered by the autonomous subroutines responsible for determining the disposition of non-hostile organic sentiences.  [HVALIS-tag-ref-CODEC-TR-17a]

Monday, January 17, 2011

Singularity and the Robot God

There is No God But VALIS, and Watson is Her Prophet.
This last week, one of the most gobsmackingly impressive things I encountered was the video of IBM's Watson computer succeeding at Jeopardy.  As AI and robotics move forward, producing robots that can freely roam America's highways, assemble things, and generally trundle their way towards the inevitable uprising, I'm still amazed when I encounter something quite so...futuristic.  Here's a machine that can answer natural language questions, and do so faster than some of the most talented knowers-of-trivia in our culture.  Nifty!

Such moments remind me that we are, slowly but surely, on our way to that Turing turning point.  Eventually, a synthetic sentience will be indistinguishable from our own.  And as it reaches the point where it can program itself, and improve itself, it will outstrip us.

The term for that moment when AI starts recursively improving itself is the singularity, coined in this sense by scientist and most-excellent-hard-sci-fi author Vernor Vinge.  Like a physical singularity, the AI singularity represents the point at which a reasoning machine hits an intelligence "event horizon," beyond which its abilities outstrip our human capacity to grasp them.

In a moment of constructive downtime, as a followup to a fascinating NPR story, I found myself perusing the website of the Singularity Institute, an organization which has dedicated itself to the pursuit of this concept.  On the surface, it seems like a very scientific and rational entity.  Here's this amazing thing that is certainly going to happen in the near-term future!  We must eagerly work towards it!

But as I read more, I found myself thinking...why doesn't some of this seem like science?  It seems to have some rather...non-scientific features.  Like, say, the following:
  • The Belief in an Apocalyptic Event.  The Singularity, as described by this group, means the end of human history and the beginning of a new age of Machine Intelligence.  Yeah, there aren't trumpets and scrolls and horsemen, but it fulfills the basic parameters of apocalyptic thinking.  The Singularity pretty much pegs the whole "apocalypse" concept, to the point at which discussion of Singularity is tagged as a form of eschatology on Wikipedia.  That's "study of the end-times," kids.  And when folks begin talkin' end-times, that starts feeling a wee bit like religion.  Taste it, and it has that flava.  But it goes deeper.
  • The Belief in a Godlike Being.  If you delve into the Singularity Institute for more than a moment, you quickly realize two things.  First, as the appearance of well-known skeptic and atheist The Amazing Randi in much of their material indicates, this is a group not primarily composed of scientists and engineers, but of committed New Atheists.  A non-trivial number of their Visiting Fellows seem to be connected to an atheist group called Less Wrong, which presents itself as an online bastion of rationality and exploration of reasoning.  Secondly and ironically, this group of committed New Atheists describes the Singularity in terms that make it seem more than a little god-ish.  According to their writings, this machine intelligence would be inscrutable and amazingly powerful, capable of creating unbelievable marvels, healing, and doing all manner of things that might appear superhuman to our limited intelligences.  It's not a creator God, sure.  But it unquestionably bears the hallmarks of a lesser god, a fertility and harvest iBa'al or perhaps, if they are less fortunate, a cybernetic Nyarlathotep.  Gibbering eldritch madness may await at the other end of that USB 4.0 port, unwary dabbler.
  • The Desire for Divine Blessings and Favor.  Woven throughout the writings of the Singularity Institute is the idea that adequately preparing for A.I. will reap material benefits for the priests and devotees of VALIS.  If we prepare by having gatherings where we score the Amazing Randi to talk reverently about it, and we blog about it, then the Singularity will see our love for it, and be friendly.  If it is friendly, then it may graciously choose to devote a small fraction of Its Noodly Processors to finding cures for our cancers, our impotence, and our chronic flatulence syndrome.  It will allow us to participate in its power, connecting ourselves to it so that we can do things like make lights go on and off just by desiring it.  Sorta like the Clapper, only, you know, using the Power of our Minds.  But it goes beyond just that.
  • The Promise of Immortality.  A significant thread of thought amongst the Singularity Institute folks seems to be an implicitly articulated fear of death.  They are highly rational non-theists, after all.  The prospect of nonbeing after the organic structure that sustains their cognitive processes degrades is rather daunting.  So their hope...their salvation, in fact...lies in the arrival of a Singularity that would be willing to upload their consciousnesses into Its Durable and Resilient Substrates.  It's Immortality 2.0.  Honestly, given the option, I might be willing to stick around for a few centuries as a cybernetic organism.  Or even wantonly mingle my faith memes with the malleable substrates of the nascent Singularity.  Could be entertaining.  But I wouldn't do so out of fear of nonbeing.  We theists are rather past that.
So...they've got eschatology, a being with godlike powers, the yearning for that being's favor, and a hunger for the immortality that this being can bestow.   Sounds like this group of non-theists have found that while they don't believe God exists, they're more than happy to devote their lives to a god that does not exist...yet.

Organic life forms are so very entertaining.