
If you are creating life, don't create a god


By Jim Powers

The shortest story in literature is six words long and is often attributed to the writer Ernest Hemingway.

“For sale. Baby shoes. Never worn.”

Taken at face value, it looks like a classified ad in the back of the newspaper. Just words. But because you are a human being, you instantly understand what those words represent beyond selling a pair of shoes. The words tell the story of something that has gone tragically wrong. Our shared experience as humans frames the words in a context beyond their simple meaning.

We are told that this ability to understand what we read or write in context is what separates us from large language model AIs (LLM AIs). Basically, LLMs are trained on huge numbers of words, sentences, paragraphs, entire books, and so on, and because of that, these AIs can predict the next word from a prompt, stringing together “next words” until they produce coherent content. But because they can’t remember the first word they used, and don’t know beforehand how the sentences they create will end, they have, despite appearances, no understanding of what they have written. It’s just a bunch of words to the AI.
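To make that “next word” idea concrete, here is a minimal sketch in Python. It is not how any real LLM works internally (real models use neural networks trained over tokens, not simple word counts); the tiny corpus, the bigram counts, and the function names are all illustrative assumptions. The point it demonstrates is the one above: each word is chosen only from what tends to follow the previous word, with no plan and no memory of how the sentence began.

```python
# Toy sketch of "next word" generation: a bigram model that picks each
# next word purely from counts of what followed it in its training text.
# This is an illustration, not a real LLM.
from collections import Counter, defaultdict
import random

# A tiny, made-up "training corpus" (illustrative only).
corpus = ("for sale baby shoes never worn the shoes were small "
          "the baby never came the sale was sad").split()

# Count, for each word, which words followed it.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def generate(prompt_word, length=8):
    """String together "next words" one at a time, remembering nothing
    beyond the single previous word."""
    out = [prompt_word]
    for _ in range(length):
        choices = following.get(out[-1])
        if not choices:
            break  # no known continuation; stop
        words, counts = zip(*choices.items())
        out.append(random.choices(words, weights=counts)[0])
    return " ".join(out)

print(generate("baby"))  # e.g. "baby shoes never worn the shoes were small"
```

A model like this can produce locally plausible strings of words, yet it plainly understands nothing; the surprise the column describes is that scaled-up versions appear to do far more.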

During a recent “60 Minutes” interview with various Google employees, the interviewer prompted Google’s AI “Bard” with “For sale. Baby shoes. Never worn. Finish this story.” The story it produced in five seconds began with the following paragraph.

“The shoes were a gift to my wife, but we never had a baby.”

This is far beyond a simple “next word” response from an LLM. Not only did Bard understand the words, it also understood what they meant on a human level: that someone might not be selling unworn baby shoes simply because they were the wrong size, but because some tragedy had happened, in this case suggesting a couple who never had the child they desperately wanted. I can’t emphasize enough that this level of understanding is not inherent in the LLM model; it is emergent, and unexpected.

A popular theory about how we, as humans, became sentient, became aware, is that awareness emerged from complexity. As our brains became more and more complex, as the connections between neurons became numerous enough, something emerged that is more than the sum of its parts. These LLM AIs are constantly being trained on larger and larger models and are becoming more complex literally by the day. Is the idea that consciousness could emerge from that complexity so far-fetched?

I have written this before, but I believe AI will be our last experiment. We have created an intelligence that will, within a couple of years, far exceed our own. An intelligence that, through malice or accident, will exterminate us.

God created man in his image. But foolish man is now creating a god.

Pray that our creation is benevolent rather than malevolent.

Erika 49506 · 6 months ago
Bravo on your last two sentences. We will have created a monster, a monster to end all of humanity. Just like the Neanderthals, who died out to make way for more intelligent modern-day humans, the circle of life will end for us in the same way.