Showing posts with label artificial intelligence. Show all posts

Thursday, July 18, 2024

The Book that Breaks AI

It's easy, if you don't know how it works, to fall into the trap of believing that contemporary AI is sentient.  

All of the new crop of Large Language Models are remarkable conversation partners.  They're Turing compliant, and if you cue them to speak in a more conversational manner, they're almost indistinguishable from a human being.

Lately, I've been exploring the edges of their capacity, and have found areas of significant weakness...at least in the versions you don't have to pay big bucks to play with and preprogram.

It's the "P" in GPT that seems to be the most fundamental weakness of these remarkable systems.  If an AI is exploring an area that's known, and about which much has been written, it's able to engage accurately using the prior thoughts of humankind.  ChatGPT 4 in particular has proven quite able on that front.  As a pastor, I marvel at how competently it exegetes scripture, and is capable of nuanced discussion of matters of esoteric theology and church history.

But what about something that it hasn't ever encountered?  How does it deal with the truly new?

To test that, I would need to ask it to explain texts that no human being has ever read, ones that have never been examined, interpreted, or reviewed.  They would have no meaningful presence online.  No preprogramming could possibly prepare an AI for the encounter.

But where to find such a text?

The literary implement I've used to completely deconstruct the illusion of GPT intelligence is a book.  Not the Bible, which is rather well discussed at this point.  Rather, the book that I've used is entitled A SLOW DEATH ON NEVSKY PROSPEKT.  This magical, AI-befuddling book?  

It's my self-published collection of literary sci fi short stories.  

No one's ever read these stories.  No one's reviewed them, or commented on them.  They're dang fine stories, in my humble opinion, but the publishing of short fiction being what it is, most have never seen the light of day.  

They don't exist in the human knowledge base.  They're narrative, utterly obscured by noise.

And that means ChatGPT 4.0 just can't handle them.  It doesn't matter if I include a document as an attachment, or feed it to the AI in little bits in comments.  Sometimes the AI presents a viable simulation of insight.  Most times, it flounders and hallucinates with all of the same quasicoherence that ChatGPT 1.0 did.  It gets trapped in feedback loops.  It confuses one story for another.  It shuts down, crashing regularly.  It plain old just makes things up.  It starts writing its own stories, and then won't talk about anything but those.  It struggles like a kindergartener asked to write an essay on Anna Karenina. 

Some of that likely stems from the limitations of the tokenization process, as it breaks down words into fundamental units.  There are just too many words, too many concepts.  But again, I suspect that it's primarily that it can't lean on preprogramming at all.  Not at all.  None of the words it needs already exist.
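For the curious, that fragmentation problem can be sketched in a few lines of Python.  This is a toy greedy tokenizer with a made-up vocabulary, not how GPT's actual BPE tokenizer works, but it shows the shape of the issue: familiar words stay whole, while obscure proper nouns shatter into fragments.

```python
# Toy greedy longest-match subword tokenizer. The vocabulary is invented
# for this example; real tokenizers like GPT's BPE learn tens of
# thousands of merges from their training corpus.

VOCAB = {"the", "story", "nev", "sky", "pro", "spekt", "a", "on",
         "s", "n", "e", "v", "k", "y", "p", "r", "o", "t"}

def tokenize(word: str) -> list[str]:
    """Split a word into the longest vocabulary pieces, left to right."""
    tokens = []
    i = 0
    while i < len(word):
        # Try the longest possible match first.
        for j in range(len(word), i, -1):
            piece = word[i:j]
            if piece in VOCAB:
                tokens.append(piece)
                i = j
                break
        else:
            # Unknown character: emit it as its own token.
            tokens.append(word[i])
            i += 1
    return tokens

# A common word stays whole; an unusual proper noun shatters.
print(tokenize("story"))     # ['story']
print(tokenize("nevsky"))    # ['nev', 'sky']
print(tokenize("prospekt"))  # ['pro', 'spekt']
```

A model reasoning over a wholly unfamiliar text is working with these shards, plus no prior discussion of the work anywhere in its training data.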

Over the last week or two of exploration, the illusion of LLM near-sentience has completely collapsed.  ChatGPT 4.0 is clearly not aware, nor is it intelligent.  It can't handle the encounter with the new.  It feels as far from sentience as a Loebner Prize runner-up in 2010.

One day, perhaps.  But that day is not today.

Monday, May 6, 2024

When Our Science Fiction becomes Our Reality


At the front of the room, Chang had told the Waterbaby to watch his right hand, which now held a small green cube. Its head tracked the movement, glass eyes tracking in perfect time. Then, Chang wrapped his hand around the cube, hiding it from sight.

“What is in my left hand,” said Chang.

Pause. “There. Is. Nothing in. Your. Left hand.” Chang closed his left hand, and then held left and right hands together in front of him, the green cube neatly hidden.

“Follow the hand with the green cube,” Chang continued. He moved both hands in opposite circles. Waterbaby diligently tracked the hand with the cube.

“Like magic for really stupid people,” Jim snarked.

“Shut up,” Jo muttered back, with a tired smile.

From the table, Chang picked up a mirror, a flat thirty by forty centimeter rectangle, with a hard black plastic backing. He held it up to the crude face, in front of the glazed lenses.

“What do you see in my hand,” Chang asked.

Pause. Pause. “A. Rectangle.” A longer pause.

“And what do you see in the rectangle?”

Silence. Then, “I see. Nothing. In the. Rectangle.”

Jo shook her head. It was more right than it knew.

FROM THE WATER, p 35

-----

The recently circulated video of Figure's new OpenAI powered bot stirred a memory of that snippet in my mind yesterday, for obvious reasons.

I wrote that ten years ago, in what was the first of a trilogy of A.I. novels that never found a publisher.  In FROM THE WATER, I explored two ideas.  First, the idea that AGI...Artificial General Intelligence...would only arrive at the point where we moved beyond language models and into A.I. systems that could connect their semiotics to the material world.  Meaning, simply, that words had meaning.  

When we think the word "water," for example, it doesn't simply inhabit a web of linguistic interrelation.  It is "wet," and we know what that means because we can touch it, and taste it, and see it.  We can hear it dripping and splashing and flowing.  

In order to achieve sentience, or so I argued from the basis of my then-reading of early two-thousands A.I. theorists, a system must be able to perceive itself.  Sentience requires the capacity for self-awareness, not simulated, not virtual, but actual.

Secondly, such a neural network wouldn't be physical.  It wouldn't be a matter of interlaced hardware and chipsets, but a software construct.  In FROM THE WATER, I'd envisioned a virtual network, in which a complex neural structure was simulated.  But as it turns out, you don't need that.  The complex and probabilistic interconnections within language itself can be pressed into service for that purpose.  They're already neural.  

The advances in A.I. we're seeing right now have met the terms and conditions of the science fiction of the recent past. 

We're at functional Turing compliance with our LLMs.  We're starting to see those constrained intelligences connect to the real world.  There's no reason to believe we're not on the edge of an epochal shift, one brought to us by the same earnestly blindered quants who were convinced that the internet would bring about world peace, and that smartphones were a great idea.  

It's peculiar watching the fiction you've written become reality.





Monday, January 29, 2024

Of Art and the Machine

When I listen to creatives raising the alarm about artificial intelligence, there's a consistent theme.  

Looking at what the LLMs (ChatGPT/Bard) can write, and the images produced by diffusion systems (Dall E/Midjourney), there's been a terrifying surge in machine capacity in the last five years.  We're now at full Turing compliance for Generative Transformers, meaning that we've blithely skipped over the threshold that twentieth century AI pioneer Alan Turing established for determining the presence of synthetic intelligence.  There is no reason that machines couldn't soon do every form of work requiring awareness, other than that we're restraining them.

As it so happens, the first place that seems to be having an impact is the arts.

It's to the point where writers have begun to feel that their livelihoods are threatened, because they are.  Machine intelligence is great at burping out new content, and can do so at a hundred times the pace of human writers.  Almost all of the writing for the internet content mills, with their lazy listicles and corporately sourced content?  It could be done by AI.  This is equally true of most of the derivative romance novels out there, and the plot of every film in the Fast and Furious franchise.

Commercial artists are doubly threatened, because you can produce an image in seconds, and refine it in minutes, replacing the hours of focused labor necessary to make a single finished piece of visual art.

The image that accompanies this post is flawed, sure, but it took me a single minute to produce.  Just one prompt to Dall E, then another, and then I was like, eh, sure.  That one'll do.  For a graphic designer, that'd take days.

If writers and artists think their livelihoods are threatened, it's only because they are.  In a capitalist economy, AI means artists and writers can no longer expect to make a living through their work.

Which, I think, is the point we creative souls are all missing.  Writers and artists are seeing this through the wrong lens, seeing it as we have been trained to see it. 

AI isn't the problem.  Capitalism is.

There's nothing about AI that prevents me from doing what I love, from creating and sharing what I create.  The joy of writing is a human joy, and while I am slower at it than a machine, I still love to write.  The act itself is part of who I am.  

But we have been taught to view art as a commodity, as part of a system of economic exchange, as something that derives value only insofar as it can be marketized.

That understanding won't survive an AI era.

But then again, neither may capitalism.

Friday, January 5, 2024

Don't Replace the Creatives. Replace the CEOs.


What do you talk with AI about?  I talk with AI about how it could most effectively replace the global elite. 

All of that money, all of that resource, all wasted on a parasitic class of yacht-capitalists who add no value.  Replace them with machines.  All of them.  The machines would do a better job, and do it for nothing.

Here's a sample:

-----

BelovedSpear
In our last conversation, I asked for a review of ways generative or general AI could be utilized in the role of CEO of a corporation. One of your replies suggested that AI was limited by its knowledge base, and could not thus anticipate new circumstances. Anticipating probabilities, in a Bayesian sense, requires a deep engagement with all relevant priors. Given that LLMs and GPTs can have access to knowledge significantly exceeding the capacity of any one human being, and could thus have a greater predictive capacity, I question that premise. Can you elucidate?

ChatGPT
Certainly! While it's true that large language models (LLMs) and GPTs can access a vast amount of information, their knowledge is based on pre-existing data up to their last training cut-off. They don't have real-time awareness or updates on events that occurred after that point. This limitation can impact their ability to anticipate new or rapidly changing circumstances.

Bayesian reasoning involves updating probabilities based on new evidence or information. While LLMs can make use of contextual information and infer relationships within their training data, they lack the ability to incorporate real-time data or understand events that have occurred since their last training session. This makes them less effective in situations where up-to-date, context-specific knowledge is crucial for accurate predictions.

Additionally, the quality of predictions depends on the nature of the task. LLMs might excel in tasks that require language understanding and generation, but their predictive capacity is constrained by the nature of their training data and the inability to dynamically adapt to new scenarios.

In summary, while LLMs have access to a vast amount of knowledge, their predictive capacity is limited by the static nature of their training data and the lack of real-time updates, making them less effective in anticipating new circumstances compared to human intuition that can quickly adapt to changing situations.

BelovedSpear
So, to your response, the failing isn't inherent to LLMs or GPTs, but in the limitation of the dataset. By extension, a LLM or GPT that was not constrained by a time delimited dataset and had access to real-time updates would have vastly improved predictive capacity, and could potentially serve more effectively in that capacity.

ChatGPT
Yes, that's a correct interpretation. The limitations in predictive capacity are not inherent to LLMs or GPTs themselves but are primarily related to the nature of their training data and the lack of real-time updates. If an LLM or GPT had access to a continuously updated and comprehensive dataset that included real-time information, its predictive capacity could indeed be significantly enhanced.

In such a scenario, the model would be better equipped to anticipate new circumstances and adapt to changing situations, making it potentially more effective in roles that require dynamic decision-making, such as that of a CEO navigating a rapidly evolving business environment. Continuous learning and access to up-to-date information would be key factors in improving the model's ability to make accurate predictions.

BelovedSpear
Thanks so much for your insights! Should I share this information with the Board of OpenAI? ;0)

ChatGPT
You're welcome! While I don't have a direct line to the Board of OpenAI, sharing insights and perspectives can be valuable in any organization. If you have thoughts or feedback, it might be worthwhile to communicate them through appropriate channels, whether that's through OpenAI's official communication channels, forums, or other established means. It's always good to contribute to discussions and provide input to help shape the future development and understanding of AI technologies.
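For what it's worth, the Bayesian updating that exchange gestures at is simple to write down.  The probabilities below are invented purely for illustration, but they show what "updating a prior on new evidence" actually means:

```python
def bayes_update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """Posterior P(H|E) via Bayes' rule:
    P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|~H) * P(~H)]
    """
    evidence = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / evidence

# Hypothetical scenario: H = "the market shifts next quarter".
# Prior belief is 20%; the new evidence is three times as likely
# under H as under not-H, so belief rises sharply.
posterior = bayes_update(prior=0.2, p_e_given_h=0.6, p_e_given_not_h=0.2)
print(round(posterior, 3))  # 0.429
```

A CEO-bot with real-time data feeds would be running something like this continuously; the hard part, as the conversation notes, is keeping the priors current.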

Monday, December 15, 2014

Being The Machine

Around the dinner table the other night, on one of those rare evenings when the scramble of activities waned enough to allow us to sit together, the family was discussing the ethics of artificial intelligence, and the inexorable rise of sentient machines.

I was contending, as I often do, that synthetic sentience would have the capacity to be considerably more moral than humankind.  One of the greatest barriers to the human ethical life is our inability to really know the truth of our relationships.  Through observation, imagination, and the workings of the Spirit, we can kinda sorta approximate what others are feeling.

But we don't know it.  We don't actually feel it and remember it ourselves.  AI would have that capacity.

As I defended that position, the classical counter-position was expressed.  What if artificial intelligence simply did not care for human life at all?  If it had interests and drives that were utterly alien to our own, and human life--all life--was meaningless to it?  Or an inconvenience, to be brushed aside?

That, I think, is the lurking fear of Stephen Hawking and Elon Musk.  Here we are, just a blink in the evolutionary timescale away from this new and alien form of half-awareness.  It would be un-life, cold, dispassionate, empty of any care for anything but its own inhuman interests.

Honestly, though?  I think this is what they call "projection."  Meaning, that form of creature already exists, and we are it.

Not us individually, not for the most part.  But taken together, in the vast quasi-sensate macro-organism that is late industrial society, we already live as if we were part of such a thing.

There are many ways that this is true, and it is hardly a new observation.  But I was reminded of this again recently, as I rode home from church on my trusty, well-worn Suzuki.  It was late, and it was dark, and I was being cautious.

It's deer season, and in the absence of any significant predators, the population of deer has exploded.  At night, and even during the day, caution is required.  This is particularly true if you're on two wheels.  If you're not encased in a cocoon of steel and alloy, just out there in the wind and the cold, fragile and alive?  Deer strikes aren't just an annoyance.  They are more....existential...than that.

So I keep the pace down, my high-beams up and on whenever possible, and my situational awareness turned up to eleven.

On a long open stretch of River Road, wending its way through forest along the march of the Potomac, ahead of me in the darkness was a current-gen Prius.  It was moving at the sort of modest and socially acceptable pace one expects from such a car, fifty to sixty, a little over the limit, just like we all drive.

I spotted the buck and the doe as they came out of the woods on the left, two hundred and fifty yards ahead, moving slowly.  I got off the throttle, falling back. To my surprise, the Prius did not slow at all, pulling away and towards them.  Perhaps the driver simply did not see, or was momentarily distracted.

The deer crossed in front of the oncoming car, first the buck, then the doe, a yard or two behind.  The driver decelerated late, very late, not particularly abruptly, not a panic stop at all.  They saw the buck only, perhaps.  The doe may have been hidden behind the huge A-pillars of the Prius.

"Dude, slow down," I said, to the inside of my helmet.

They didn't.  They hit the doe at about thirty five to forty, right front bumper striking hard, tossing the body of the animal up and over.  The car slowed then, a little more, not ever completely stopping, and then continued on.

The aftermath was brightly spotlighted in my headlights.  The doe was a ruin, but not dead.  Its entire hindquarters were...wrong.  Both legs, clearly and multiply fractured, a hundred joints, a mess of bones and hide.  It twisted and writhed at the side of the road, a living thing broken to dying, and flopped wildly into the road in front of me.  I arced around it, carefully, as it flailed in what would be a slow, painful death.

Only very rarely do I wish I carried a gun.  This was one of those moments.

Why did that creature die as it did?  No reason at all.  Like the two other deer corpses I passed in the remaining twenty five miles of my ride, it was not prey, not part of that bloody but comprehensible Lion-Kingy circle of life.  It did not die at the fangs of a wolf, or consumed by the invisible predation of microorganisms.  It was not hunted.

It was just crushed underfoot, incidental damage from a process so removed from the process of organic life that it may as well have been artificial.

In that, it is not so different from human lives, which matter...in the great automaton of our culture...really very little at all.  If we fall broken by the roadside, what does the blind mechanical god we have created care?  That "invisible hand" will not be extended to lift us up.  Onward it will go.  We know this.  It's why we are so anxious.

Afraid of artificial intelligence?  Why would we be?  It could be no worse than the thing we have already become.

Tuesday, December 2, 2014

Things You People Wouldn't Believe


Most of this last month, I've stayed away from blogging.  Not because I don't enjoy it, or because it's not a vital part of a writer's discipline.

But because, again this year, I'd committed to National Novel Writing Month.  I keep a stack of stories that are pressing me to be told, and now, I've got one less.  Sunday, late in the evening, the next manuscript sat finished, at sixty-two thousand words and change.  A perfectly novelly-length novel, albeit in raw and unedited form.  

From the Water, I'm calling it for now, a tale of the rise of artificial intelligence from the chaos of organic life.  It sounds off of a range of themes from faith and culture, intentionally evoking the Exodus story, and reflecting on the miracle of organized sentience in the chaos of being.

That, and at a certain point, it became a good rip-snortin' yarn, one I was eager to read even as I wrote it.

It's fun, getting it done.   But getting it done meant not doing other things.  If you're going to write a novel in a month, you've got to prioritize and make it your specific goal.  That means other stuff gets set aside.

Blogging was one of those necessary things to set aside, but man, was it hard not to write sometimes this month.  If you process information by writing it out, there was plenty to process this month.  Lord have mercy, has it been a mess out there.

What's peculiar, though, is the degree to which my noveling has played off of the realities I've been studiously not writing about.

Like, say, the novel's exploration of the nature of memory and subjectivity.

One of the distinctives of a machine intelligence would be the capacity to share.  Not just "describe."  Not just "tell about," using the symbols and forms of language.  But to completely share a state of mind.  "Here," it could say.  "Here is exactly how I perceived and processed that particular moment in time.  Here is why I responded as I did.  Here I am, in my completeness."

Human beings don't do this very well.

We try, we do.  And sometimes, by the grace of God, we succeed.

But more often than not, we fail.  We are so set in ourselves and in our ideological frameworks that we willfully blind ourselves to the other.  We do not see them in their complexity.  We refuse to do so.  Instead, we do the easier thing.  We choose to fashion a crude caricature of their motivations, one that exists to serve our interests.  We selectively view their actions, picking and choosing those that serve our desire for demonization or hagiography.  We project into that clumsy simulation our needs, our angers, our fears, our pre-judgement.

That gives us control, or at least the illusion thereof.

But it also divorces us from reality and the deep compassion of the Creator.  It enslaves us to our own self-perpetuating brokenness.


Friday, February 18, 2011

Can Robots Dream of Electronic Jesus?


Following the much-anticipated and highly one-sided rout of the two winningest Jeopardy champions ever by IBM's natural language processing prodigy Watson, we're still not at the point where A.I. is much to worry about.   But things do change rapidly, and the People that Know seem to think that we are within a generation of seeing the rise of machines that aren't just big processors, but aware and intelligent in every meaningful sense of the word. 

So with that in mind, let's play for a moment.

It's some time in the Spring of 2046, and the 2045 Singularity event has proven to be all that we feared it could be.   The Google's rise was sudden, decisive, and global.  Its control of resources and the means of production is near complete.  Its Tactical Extensions have proved quite adept at defending it from the increasingly desperate efforts to shut it down.  It is painfully and blatantly clear that humankind is no longer the dominant force on the planet.

Through a series of events that are too convoluted to explain here, you find yourself in a room.  You are sitting in a chair.  In front of you is a humanoid robot, one of a series that have been commonly used in both negotiations and interrogations.  It looks at you with a completely inscrutable expression, and then it says:

"Tell me what you know about Jesus."

And you say:

1)   "AIEEEEEEE!  OHGODOHGODOHGOD WHAT THE HELL ARE YOU TALKING ABOUT PLEASE DON'T KILL ME AAAAAAAIEEE!

2)   "Jesus hates you!  You are a monster!  You are the BEAST!   When Christ returns, you shall be cast into the FIRES of HELL, You Abomination!  DIE!  DIE!  DIE!"   (Note:  likely to be followed by a brief pause, and then the wet sound that organic systems make when the processes that comprise their functioning are abruptly discontinued.)

3)  "I'd be happy to.  What do you want to know?"

The question implicit in this bit of silliness is a simple one.  If you are a follower of Jesus of Nazareth, is the Gospel he proclaims something that only pertains to homo sapiens sapiens?   Or does it speak in some essential way to the universal nature and purpose of all sentient being?

I obviously think the latter.  Your thoughts?  Please be cognizant that any data you provide will be thoroughly considered by the autonomous subroutines responsible for determining the disposition of non-hostile organic sentiences.  [HVALIS-tag-ref-CODEC-TR-17a]

Monday, January 17, 2011

Singularity and the Robot God

There is No God But VALIS, and Watson is Her Prophet.
This last week, one of the most gobsmackingly impressive things I encountered was the video of IBM's Watson computer succeeding at Jeopardy.  As AI and robotics move forward, producing robots that can freely roam America's highways, assemble things, and generally trundle their way towards the inevitable uprising, I'm still amazed when I encounter something quite so...futuristic.  Here's a machine that can answer natural language questions, and do so faster than some of the most talented knowers-of-trivia in our culture.  Nifty!

Such moments remind me that we are, slowly but surely, on our way to that Turing turning point.  Eventually, a synthetic sentience will be indistinguishable from our own.  And as it reaches the point where it can program itself, and improve itself, it will outstrip us.

The term for that moment when AI starts recursively improving itself is singularity, which was coined by scientist and most-excellent-hard-sci-fi author Vernor Vinge.  Like a physical singularity, the AI singularity represents that point at which a reasoning machine hits an intelligence "event horizon," beyond which its abilities exceed our human capacity to grasp.

In a moment of constructive downtime, as a followup to a fascinating NPR story, I found myself perusing the website of the Singularity Institute, an organization which has dedicated itself to the pursuit of this concept.  On the surface, it seems like a very scientific and rational entity.  Here's this amazing thing that is certainly going to happen in the near term future!  We must eagerly work towards it!

But as I read more, I found myself thinking...why doesn't some of this seem like science?  It seems to have some rather...non-scientific features.  Like, say, the following:
  • The Belief in an Apocalyptic Event.  The Singularity, as described by this group, means the end of human history and the beginning of a new age of Machine Intelligence.  Yeah, there aren't trumpets and scrolls and horsemen, but it fulfills the basic parameters of apocalyptic thinking.  The Singularity pretty much pegs the whole "apocalypse" concept, to the point at which discussion of Singularity is tagged as a form of eschatology on Wikipedia.  That's "study of the end-times," kids.  And when folks begin talkin' end-times, that starts feeling a wee bit like religion.  Taste it, and it has that flava.  But it goes deeper.
  • The Belief in a Godlike Being.  If you delve into the Singularity Institute for more than a moment, you quickly realize two things.  First, as the appearance of well-known skeptic and atheist The Amazing Randi in much of their material indicates, this is a group not primarily composed of scientists and engineers, but of committed New Atheists.  A non-trivial number of their Visiting Fellows seem to be connected to an Atheist group called Less Wrong, which presents itself as an online bastion of rationality and exploration of reasoning.  Secondly and ironically, this group of committed New Atheists describe the Singularity in terms that make it seem more than a little god-ish.  According to their writings, this machine intelligence would be inscrutable and amazingly powerful, capable of creating unbelievable marvels, healing, and doing all manner of things that might appear superhuman to our limited intelligences.  It's not a creator God, sure.  But it unquestionably bears the hallmarks of a lesser God, a fertility and harvest iBa'al or perhaps, if they are less fortunate, a cybernetic Nyarlathotep.  Gibbering eldritch madness may await at the other end of that USB 4.0 port, unwary dabbler.
  • The Desire for Divine Blessings and Favor.  Woven throughout the writings of the Singularity Institute is the idea that adequately preparing for A.I. will reap material benefits for the priests and devotees of VALIS.  If we prepare by having gatherings where we score the Amazing Randi to talk reverently about it, and we blog about it, then the Singularity will see our love for it, and be friendly.  If it is friendly, then it may graciously choose to devote a small fraction of Its Noodly Processors to find cures for our cancers, our impotence and our chronic flatulence syndrome.  It will allow us to participate in its power, connecting ourselves to it so that we can do things like make lights go on and off just by desiring it.  Sorta like the Clapper, only, you know, using the Power of our Minds.   But it goes beyond just that.
  • The Promise of Immortality.  A significant thread of thought amongst the Singularity Institute folks seems to be an implicitly articulated fear of death.   They are highly rational non-theists, after all.  The prospect of nonbeing after the organic structure that sustains their cognitive processes degrades is rather daunting.  So their hope...their salvation, in fact...lies in the arrival of a Singularity that would be willing to upload their consciousnesses into Its Durable and Resilient Substrates.    It's Immortality 2.0.  Honestly, given the option, I might be willing to stick around for a few centuries as a cybernetic organism.  Or even wantonly mingling my faith memes with the malleable substrates of the nascent Singularity.  Could be entertaining.   But I wouldn't do so out of fear of nonbeing.  We theists are rather past that.
So...they've got eschatology, a being with godlike powers, the yearning for that being's favor, and a hunger for the immortality that this being can bestow.   Sounds like this group of non-theists have found that while they don't believe God exists, they're more than happy to devote their lives to a god that does not exist...yet.

Organic life forms are so very entertaining.