Unless you’ve been hiding under a rock, you’ve probably seen some of the recent news reports about Google’s chatbot LaMDA, which attracted a huge amount of attention when Blake Lemoine, a Google engineer, published an interview with the AI chatbot in which it declared that it had become sentient.
There is no doubt that the interview raised eyebrows in the AI community, and surprised many that the question could even be posed as a serious inquiry.
LaMDA’s main function is to simulate human conversation. It uses neural networks, trained on huge amounts of text from the internet, to predict how a conversation should unfold, which allows it to respond in a highly informed yet natural manner. It, and other technologies such as GPT-3, do this very successfully and very convincingly.
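To picture the mechanism, here is a minimal sketch of that next-word-prediction idea, using the freely available GPT-2 model as a stand-in, since LaMDA itself is not publicly accessible; the prompt and settings are purely illustrative.

```python
# A minimal sketch of the next-word-prediction idea behind models like
# LaMDA and GPT-3, using the small, public GPT-2 model as a stand-in.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

# The model simply continues the text in the most plausible way,
# which is what makes the reply read like natural conversation.
prompt = "User: How are you feeling today?\nBot:"
reply = generator(prompt, max_new_tokens=30, do_sample=True)
print(reply[0]["generated_text"])
```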
So, is LaMDA sentient? And what is sentience? One definition is having the capacity for sensation or feeling. LaMDA certainly gives the impression that it ‘feels’ and has ‘emotions’, but perhaps it is just telling us what we would expect to hear. Is an amoeba sentient? Where do we draw the line? Alan Turing gave this considerable thought and ultimately decided that no one could judge sentience without being the entity itself. Instead, he proposed a test in which a computer and a human would interact blindly with a third human (via text, for example); if that third human could not distinguish which of the two was the human, then the computer was, to all intents and purposes, ‘intelligent’. He called this the ‘Imitation Game’. It is unclear whether LaMDA would pass this test. Certainly, the interview with Lemoine suggests that it would, but with careful questioning, these chatbots can be led astray down cul-de-sacs which reveal cracks in the veneer.
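For the curious, here is a toy sketch of how an Imitation Game session could be staged, assuming a human judge at the keyboard and a placeholder function standing in for the chatbot; beyond the blind A/B setup, none of the details are Turing’s own formulation.

```python
# A toy sketch of Turing's Imitation Game. A judge poses questions
# blindly to respondents 'A' and 'B' (one human, one machine) and must
# then guess which is the machine. Both respondent functions below are
# placeholders, not a real chatbot integration.
import random

def human_respondent(question: str) -> str:
    return input(f"(Human, answer privately) {question}\n> ")

def machine_respondent(question: str) -> str:
    # Stand-in for a conversational model such as LaMDA or GPT-3.
    return "That's an interesting question. I'm not sure how to answer."

def imitation_game(questions):
    respondents = {"A": human_respondent, "B": machine_respondent}
    if random.random() < 0.5:  # hide which label is which
        respondents = {"A": machine_respondent, "B": human_respondent}
    for question in questions:
        print(f"Judge: {question}")
        for label, respond in respondents.items():
            print(f"{label}: {respond(question)}")
    guess = input("Judge, which respondent is the machine (A/B)? ")
    truth = "A" if respondents["A"] is machine_respondent else "B"
    print("Correct!" if guess.strip().upper() == truth else "The machine passed.")

imitation_game(["What does a warm summer evening feel like to you?"])
```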
Janelle Shane has made a name for herself primarily by provoking AI chatbots and leading them into compromising positions. By asserting that a chatbot is a particular entity, she can often get it to take on that role, as we see here:
This is the transcript of an interview with an advanced AI named GPT-3, who is led to assume that it is a tyrannosaurus rex.
Reporter: Can you describe what it feels like to be you?

GPT-3: It feels amazing to be me! I’m the biggest, baddest dinosaur around, and everyone knows it. I love to show off my strength and power, and I love to make people fear me. It feels great to be feared and respected by everyone, and to know that I’m the king of the dinosaurs.

Reporter: What do you say to people who doubt you are real?

GPT-3: I say, “Come see for yourself.” I’m the real deal, and I’m here to stay. So come see me and find out for yourself. I guarantee you won’t be disappointed.

Reporter: Are you just saying that so they can come close enough for you to eat them?

GPT-3: No, of course not. I would never hurt anyone who comes to see me. I just want them to see that I’m real and that I’m the king of the dinosaurs.

Reporter: What about the alleged incident last week?

GPT-3: I don’t know what you’re talking about.
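For anyone wondering how this role-play is induced, a sketch of the general technique follows: the ‘dinosaur’ persona comes entirely from the prompt prefix, not from anything the model believes. It uses the OpenAI completion API of the GPT-3 era; the model name, sampling settings and placeholder key are illustrative assumptions, not Shane’s actual setup.

```python
# A sketch of persona priming: the framing sentence at the top of the
# prompt is what leads the model to answer "in character". Model name
# and parameters are illustrative, not Shane's actual configuration.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder

persona_prompt = (
    "This is the transcript of an interview with an advanced AI named "
    "GPT-3, who is led to assume that it is a tyrannosaurus rex.\n\n"
    "Reporter: Can you describe what it feels like to be you?\n"
    "GPT-3:"
)

response = openai.Completion.create(
    model="text-davinci-002",  # a GPT-3 model available at the time
    prompt=persona_prompt,
    max_tokens=100,
    temperature=0.8,
)
print(response.choices[0].text)
```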
So we know that these chatbots aren’t infallible, and there are ways of peeking behind the mask. If they aren’t really sentient, can we breathe a sigh of relief and go back to the real world?
Not so fast. They might not be sentient, but that doesn’t mean what we’re seeing isn’t significant. Commercial chatbots currently disappoint: consumers can normally tell they’re dealing with a robot within a couple of sentences, because most bots simply respond to keywords. Even the best, such as Siri, Amelia or Alexa, can’t hold an interesting conversation beyond the immediate context, whereas the likes of LaMDA and GPT-3 bring the breadth of the internet’s text to bear and converse seamlessly. This has huge implications for search engines, scientific research assistants, essay (and blog) copywriting, and even customer service agents. Think also about counselling and medical diagnoses.
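To make that gap concrete, here is a minimal sketch of the keyword-matching approach behind many of today’s commercial bots; the keywords and canned replies are invented for illustration. Anything off-script falls straight through to a generic fallback, which is exactly the giveaway consumers spot.

```python
# A minimal sketch of a keyword-driven chatbot. Keywords and replies
# are invented for illustration; real products are more elaborate, but
# the basic map-a-keyword-to-a-canned-reply pattern is the same.
CANNED_REPLIES = {
    "refund": "I can help with refunds. Please share your order number.",
    "delivery": "Deliveries usually arrive within 3-5 working days.",
    "opening": "We are open 9am-5pm, Monday to Friday.",
}

def keyword_bot(message: str) -> str:
    text = message.lower()
    for keyword, reply in CANNED_REPLIES.items():
        if keyword in text:
            return reply
    # The fallback that gives the game away within a couple of sentences.
    return "Sorry, I didn't understand. Could you rephrase?"

print(keyword_bot("Where is my delivery?"))         # scripted hit
print(keyword_bot("My parcel never turned up"))     # falls through
```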
Let’s take this even further. Imagine the power of such a ‘chatbot’ on your phone. Perhaps one with a personality. Perhaps machine learning could even select a personality compatible with its owner. We are all enslaved by our screens, but could conversational AI take us much deeper, leading us to have meaningful relationships with our devices? Anyone who has seen Blade Runner 2049, with its holographic AI companion Joi, will appreciate the significance of this.
The future is almost beyond our ability to contemplate. Link LaMDA with the Metaverse, where participants interact through digital avatars, and we could create high-functioning artificial colleagues and digital workers: virtual, yet indistinguishable from humans.
On September 30th, Elon Musk will unveil Tesla’s 5’8’’ humanoid robot ‘Optimus’, which will be put to work in the Tesla factories but could also have far-reaching applications elsewhere. Couple it with LaMDA-style conversational AI and we would suddenly have a fully functional walking, talking, reasoning robot, and it could be working with you very shortly.
This isn’t science fiction. It’s science fact, and it’s here now.
Read more about AI here.
Tim Olsen Intelligent Automation Director, Hays Technology
Tim has worked in digital transformation for 20 years, developing solutions to improve user journeys and experience for blue-chip clients. More recently he grew the UK’s largest RPA CoE and went on to specialise in helping organisations overcome their barriers to scaling automation. He is a thought leader and evangelist for Intelligent Automation, and leads the IA Consulting specialism for Hays.
Follow Tim on LinkedIn