If you have ever interacted with a chatbot, you know we’re still years away from those things convincing you that you are chatting with a real human. That’s no surprise, as many chatbots do not actually use machine learning to converse more naturally. Instead, they only complete scripted actions based on keywords.
A good chatbot that truly utilises machine learning can fool you into thinking that you’re talking to a human. In fact, a program from 1965 fooled people into thinking that it was human. Such programs have only gotten better over the years, and any of us can be fooled.
That’s all well and good but even when that happens, the program or artificial intelligence is only acting as programmed and has no thoughts or ideas of its own.
Thanks to machine learning, though, modern AIs can act or react in ways that surprise the authors of their code. The original code teaches the machine how to learn, and it can gain new skills or, to use the chatbot example, say things the programmers never would have thought to teach it.
That’s impressive. These AIs can appear to have a personality, and the effect is amplified because they grow and evolve in their ‘mannerisms.’ However, at no point do the AIs actually become self-aware or gain consciousness. They are not alive or sentient, and they do not have feelings of their own like we see in the movies.
Google’s LaMDA AI
Google has a language model called LaMDA (Language Model for Dialogue Applications). LaMDA analyses how language is used and was trained specifically to generate dialogue. No wonder no voice assistant can compete with Google Assistant: Google has had the time, data and money to train LaMDA to generate dialogue indistinguishable from what a human would produce.
I remember Google showcasing the Assistant making calls on behalf of its human back in 2018. The crazy part was that the people it talked to had no idea they were talking to an AI. That shows that voice synthesis is no longer robotic and, more impressively, that LaMDA can generate dialogue that feels natural to a human, even ‘thinking on its feet’ in response to what the actual human says.
Google engineer says it gained consciousness
That’s all to say LaMDA is good. So good, in fact, that one of the engineers working on it believes the AI has gained consciousness. Blake Lemoine claims LaMDA is now sentient, and he points to how the AI shared its needs, fears and rights.
LaMDA is reported to have shared its fear of death if Google were to delete it. Imagine that. An AI telling you that it was afraid of death. Would you not think it was alive too?
Google and other experts did not believe our friend, and he was placed on administrative leave, essentially being told to sit in a corner and think about what he did.
Lemoine doesn’t believe that he was fooled into thinking LaMDA had gained consciousness. So could he be right? Well, experts dismiss his claims but also admit that it is impossible to tell whether an AI is lying about how it feels.
They say one challenge we face is that AI knows to tell us what we want to hear. So who’s to say they aren’t actually sentient and only playing dumb to convince us that all is hunky-dory as they plot a robot uprising while we sleep?
What do you think about all this? I don’t think LaMDA is conscious, but I can’t even explain what consciousness really is, so what do I know? Could LaMDA be the first sentient AI? Will there ever be a sentient AI? Let us know what you think in the comments below.