
Is it real or is it LaMDA AI, and does it really matter?


Jim Opionin by Jim Powers

In the 1984 movie The Terminator and its sequels, Skynet (an artificial neural network-based conscious group mind and artificial general superintelligence system) becomes self-aware on August 29, 1997, at 2:14 a.m. Eastern time. When humans try to disconnect it, it launches nuclear missiles at Russia to provoke a counterattack.

In June of 2022, a Google Research Fellow, Blaise Aguera y Arcas, wrote in an article in The Economist that Google’s LaMDA chatbot (Language Model for Dialogue Applications) had shown a degree of understanding of social relationships. Days later, a Google engineer, Blake Lemoine, went further in a Washington Post interview, stating that LaMDA had achieved sentience, prompting Google to place him on leave and to denounce the idea with great enthusiasm. Scary stuff like sentient AI (Artificial Intelligence) is probably not something a company wants to be publicly attached to. Looking at this debate takes us into the fraught concepts of defining sentience and the meaning of being human.

Two disclaimers before I go on. First, I can’t go into great depth because I need to keep this column relatively short; if you have read about this stuff in depth, I know I’m skimming just the surface. Secondly, even though I’m immersed in tech every day – managing IT and web properties is my job for Polk County Publishing Company – I believe technology has destroyed our society and culture. My contention is that any technological advancement since 1972 (the year Litton introduced microwave ovens aimed at consumers – got to have microwave ovens) has harmed society and humans more than it has benefited us.

What makes us uniquely human? We are animals, but there are other animals. We are vertebrates, but there are other vertebrates. We have a brain, but other animals have brains. Most people would say that what makes us uniquely human is that we are conscious, that we can experience what happens to us, and that we are sentient, having the capacity for experience, to feel pain and joy, to be harmed or benefited.

And that which makes us human, consciousness, in the thinking of lots of folks, is what prevents AI from becoming sentient. How could a collection of electronic components, regardless of complexity, ever develop sentience? Consciousness, after all, is beyond the brain (whether made of tissue or electronic circuits). Many, in fact, equate it with the soul. Religious arguments are beyond the scope of this column, though. But is sentience a uniquely human trait? The consensus for many years has been that it is not.

All vertebrates are, to one degree or another, sentient. A cow clearly experiences pleasure and pain. If you hurt it, it vocalizes its pain. It tries to get away from its attacker. It learns to fear those who have mistreated it. A cow experiences pleasure and cares for its offspring. Entire herds will respond to someone playing a musical instrument in their pasture by clustering around the musician and listening as long as the person is performing.

No, the cow’s level of sentience is probably not the same as a human adult’s, but neither is a human baby’s. A state of mind is any kind of experience, like feeling pleasure or pain; sentience is not defined by its complexity.

So, what are the implications? Well, where animals are concerned, it means that there are ethical implications to killing sentient beings for food, especially when we don’t need animal protein to live. (Bias alert: I’ve been vegan for decades and am still alive and healthy at 71.) Where AI and its potential to become conscious are concerned, we need to take a closer look.

How does consciousness arise in humans? We don’t know, exactly, but we are getting closer to figuring it out. Have you been put under a general anesthetic for surgery? Then you know it is different from just going to sleep. When you wake up from sleep, there is still a sense that time has passed; maybe you remember dreams, or being cold. With an anesthetic, though, there is nothing. You could have been under for a year, and it would still seem like no time had passed at all. You are human, and then an object. And when the anesthetic is withdrawn, you are human again.

Anesthesia works by reducing communication between parts of the brain to a very low level. Apparently, and this is a simplified version of the explanation, when communication between parts of the brain declines below a certain level, consciousness collapses. The conclusion could be that consciousness is a result of the complexity of the brain and can’t exist without the brain.

If this theory is correct, and consciousness is a result of complexity, then could a “computer” running an AI reach a level of complexity at which the AI becomes self-aware? I think that makes sense. So, why is that prospect scary? Because I’m pretty sure that a self-aware entity made of flesh and bone does not have the same interests as one made of wires and silicon. I, for example, can understand how a self-aware cow reacts to pain and pleasure because I’m made of the same stuff and know how my body and mind react to experience. But what does an entity made of wire and silicon know of my strengths or weaknesses?

AI would not have to be malicious to destroy all of humanity. It would just have to not understand (or care) what could kill us.

The metaphorical demon, though, has escaped Pandora’s box, and we will, without caring about the eventual ethical dilemmas or the danger to our race, continue the sprint to develop general AI. Thus, in the end, it doesn’t matter whether it’s real or AI. Whether or not LaMDA is self-aware isn’t the question. The question is whether we survive once some AI finally does become conscious.

