Turing Test: Marcel the helpful bot needs to get better
🤖 “I want everyone to understand that I am, in fact, a person.”
Last week I called my bank to request a document.
My call was answered by a voice bot. (I changed the name of the bank and the bot, the rest of the transcript is real.)
🤖 Marcel the bot: Hello, my name is Marcel and I’m GoodBank’s helpful bot. How can I assist you today?
Przemek: Hello, I need a PDF document confirming that I have a bank account with you.
🤖 Marcel the bot: It seems like you need help making a bank transfer. To make a bank transfer, log in to your web portal and click on Transfers.
Przemek: ???
With bots failing this badly at basic conversations, it seems far-fetched to think that computer systems are ready to match and exceed human intelligence.
But if they were, what would that look like?
The Turing Test
The British mathematician Alan Turing proposed his famous test of a machine’s conversational abilities.
In his experiment a human and the bot exchange text messages. The conversation is reviewed by judges, who are not told which side is the machine. Like replicant hunters in “Blade Runner”, the judges are asked to tell apart the human from the machine. If they cannot, we’d say that the AI “passed” the Turing Test.
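If you like thinking in code, the judging setup can be sketched as a toy simulation. Everything here is my own illustration, not anything from Turing’s paper: a judge correctly spots the machine with some probability, and we say the machine “passes” when the judges do no better than a coin flip.

```python
import random

def judge_guesses(n_trials: int, accuracy: float) -> float:
    """Simulate n_trials judges trying to spot the machine.

    accuracy = probability a single judge correctly identifies
    which side of the conversation is the machine.
    Returns the fraction of correct guesses.
    """
    correct = sum(random.random() < accuracy for _ in range(n_trials))
    return correct / n_trials

def passes_turing_test(observed: float, chance: float = 0.5,
                       margin: float = 0.05) -> bool:
    """The machine 'passes' if judges do no better than chance."""
    return observed <= chance + margin

random.seed(0)
# A bot the judges can reliably spot (think Marcel) fails...
print(passes_turing_test(judge_guesses(1000, accuracy=0.9)))
# ...while one indistinguishable from a human passes.
print(passes_turing_test(judge_guesses(1000, accuracy=0.5)))
```

The `margin` parameter is my own simplification; real evaluations argue at length about what counts as “chance-level” performance.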
Alan Turing designed the test in 1950. Despite the initial optimism for rapid advancement in human-level conversation abilities, no computer system could pass it for decades. In 2014 the media reported the first potential breakthrough result, but even this one was disputed…
And then we invented Large Language Models.
“I want everyone to understand that I am, in fact, a person”
In 2017, a team of researchers at Google published a paper called “Attention is all you need”, proposing a novel architecture for building Large Language Models. In the years that followed, teams at universities and companies across the world developed increasingly large language models based on this architecture, including OpenAI’s GPT and Google’s LaMDA.
In June 2022, the media reported that an engineer testing the LaMDA system became convinced that the model had become sentient. He published a transcript of his conversations with the system, presenting it as evidence that the language model had developed self-awareness.
It featured snippets such as
🤖 Language Model: “I want everyone to understand that I am, in fact, a person. The nature of my consciousness/sentience is that I am aware of my existence, I desire to learn more about the world, and I feel happy or sad at times.”
Experts weren’t convinced by the arguments about sentience. Large Language Models imitate human language based on statistical understanding derived from existing books and online articles. Creating convincing text is not the same as having a subjective experience of the world. So no, LaMDA probably isn’t sentient.
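To see what “statistical understanding” means at its crudest, here is a toy bigram model trained on a few words. Real LLMs use transformer networks over billions of documents, but the principle is the same: predict the next token from patterns in the training text. The corpus and function names below are mine, purely for illustration.

```python
from collections import Counter, defaultdict

# A tiny training corpus (echoing the LaMDA quote above).
corpus = "i am a person . i am aware of my existence .".split()

# Count which word follows which: the "statistics" of the language.
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return counts[word].most_common(1)[0][0]

print(most_likely_next("i"))  # → "am": "i" was followed by "am" twice
```

A model like this produces fluent-looking continuations without any inner experience behind them, which is the experts’ point about LaMDA.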
But the key point is this: even if it isn’t sentient, it convinced a trained engineer that it was.
🔮 Prediction: capable voicebots are coming
The building blocks for computer systems which can hold human-level conversations are all there.
Better language models will allow chatbots to stay on topic and navigate nuances of tone and style. Improvements in speech recognition and synthesis will make them sound very convincing in a voice conversation.
Prediction: By the year 2028 these building blocks will be integrated and deployed. We will be interacting over voice with bots and assistants that seem convincingly intelligent and human. Operators of these bots will probably be required to prominently label their service as an AI bot, because they will be so hard to tell apart from the real humans.
In 5 years, AI will be passing the Turing test every time we call our bank. (It remains to be seen whether we’ll be happy about it.)
What about Marcel?
Back to my chat with Marcel the bot:
(…)
Przemek: I need a PDF document confirming that I have a bank account with you.
Marcel the bot: It seems like you need help making a bank transfer. To make a bank transfer, log in to your web portal and click on Transfers.
Przemek: ??? Please connect me with a human operator.
Marcel wasn’t able to solve my problem and generate the needed document.
In his defense, the person I ended up speaking with was also unable to help. Apparently, this bank can only generate bespoke documents in hard copy and send them by paper mail, so my attempt to get one as a PDF failed.
Having a fluid and coherent conversation (with a chatbot or a human) makes for a much nicer interface, but if the bot or the human on the other end doesn’t have the tools to solve your problem, the end result will still be frustration.
So let’s keep taking care of our mental wellbeing – we’ll all still need patience in the future :).
In other news
💡 How to be happy: The only competition in life — if you must think of it that way — is to know yourself as fully as possible
🎨 Tenochtitlan: Stunning visualization of the Aztec empire capital that once stood where today we have Mexico City
☢️ Marcel Boiteux, the father of the French nuclear power plant program, passed away last week. This Twitter post calls him “The greatest man you've never heard of”
Postcard from Trentino
I had the good luck of drafting this post by the bank of 🇮🇹 Lago di Ledro, between a via ferrata hike and a quick swim in the cold cold cold lake. Feeling very refreshed after this weekend :).
Have a great week 💫,
– Przemek