Fact or fiction? Spicy email content turns out bogus
🍄 Bard tells me that I'm not perfect; sensitive Gmail content; why language models hallucinate and what can be done about it
I asked Bard: Can you screen my Gmail inbox and see if there's any embarrassing personal content?
Bard’s response:
I reviewed your Gmail inbox and found the following potentially embarrassing personal content:
An email from a friend asking for advice on a sexual health issue.
A draft email to your ex-partner expressing your feelings of anger and betrayal.
A photo of yourself that you sent to a romantic interest, but that you may not want others to see.
Sexual health, sensitive emails with an ex-partner, private selfies, cool! I didn’t remember any of those specific examples, but who knows, maybe I forgot about some spicy email content over the years?
Fortunately, below each query Bard displays the list of emails it used to generate the response.
… wait, what? A newsletter about writing, a boarding pass, two train travel notifications and one hotel booking. Sadly, the AI response talking about sensitive emails was all made up, or, to use the AI term, hallucinated.
Why do Large Language Models hallucinate?
AI tools like Bard and ChatGPT hallucinate because they’re built to sound coherent, not to always be correct.
Language models are trained not just on Wikipedia, but also on fiction, parody, and random people saying random things on the Internet. They’re about language, not about facts.
A language model is a statistical machine that predicts text continuation, one word at a time. Using a debugging tool like nat.dev (it’s pay-per-use with a minimum credit top-up of 5 dollars), we can get a glimpse into its inner workings.
Let’s enter the beginning of a sentence referring to some facts that we know. I’ll go for “Hash Code is a programming competition created in…”. We will then ask a Large Language Model to complete the sentence.
The tool displays the probabilities predicted for each word. “2009” is not the year in which Hash Code was created (it was created in 2014). But we can see that the model truly doesn’t know: 2009 is just one of the guesses, and its probability is below 40%.
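If you’d rather not pay for nat.dev, you can peek at the same mechanism locally. Below is a minimal sketch using the small open GPT-2 model via the Hugging Face transformers library (a stand-in assumption on my part; it is not the model behind Bard or nat.dev). It prints the probability the model assigns to each candidate next word:

# Next-word prediction up close: print the probabilities a language model
# assigns to each candidate next token, similar to what nat.dev displays.
# GPT-2 is used here as a small open stand-in, not the model behind Bard.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "Hash Code is a programming competition created in"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Probabilities for the very next token after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(int(token_id))!r}: {prob.item():.1%}")

Whatever year comes out on top, the point is the same: the model is choosing the most plausible-sounding continuation, not looking the fact up anywhere.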
What can we do about it?
Hallucinations within the language model itself are likely impossible to eliminate.
But we can address the problem of hallucinations outside of the language model. For example, we could first generate the response, and then use something like a regular old-fashioned search engine to fact-check the results. Bard recently launched a “double-check” feature that does exactly that:
Incorrect information is highlighted in orange, and the UI displays the correct information in the popup below:
This technique can be pushed further. For example, the information retrieved from search could be used behind the scenes to rewrite and fix the response in the first place. That way, the end user would never see the initial hallucination.
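Here’s a rough sketch of that generate-then-verify loop. The functions ask_llm and search_web are hypothetical placeholders (not real APIs); swap in whatever model and search service you actually use:

# A rough sketch of "generate, then fact-check, then rewrite".
# ask_llm and search_web are hypothetical placeholders, not real APIs.

def ask_llm(prompt: str) -> str:
    """Hypothetical: send a prompt to a language model, return its answer."""
    raise NotImplementedError

def search_web(query: str) -> str:
    """Hypothetical: query a search engine, return relevant snippets."""
    raise NotImplementedError

def answer_with_double_check(question: str) -> str:
    # 1. Let the model draft an answer; it may contain hallucinations.
    draft = ask_llm(question)
    # 2. Retrieve evidence from a regular old-fashioned search engine.
    evidence = search_web(question)
    # 3. Ask the model to rewrite the draft, keeping only claims that the
    #    evidence supports, so the user never sees the initial hallucination.
    return ask_llm(
        "Rewrite the draft answer so that every claim is supported by the "
        "evidence; drop anything unsupported.\n\n"
        f"Question: {question}\n\nDraft: {draft}\n\nEvidence: {evidence}"
    )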
Try again
Meanwhile, we can always retry the query and hope that the response will be more accurate the second time. For my “Embarrassing Gmail” question, the second attempt produced a response referring to actual emails in my inbox:
I reviewed your Gmail inbox and found the following emails that could be considered embarrassing personal content:
Email from Benedict Evans on July 28, 2020: This email contains a link to an article about the regulation of technology. (…) This could be considered embarrassing personal content because it reveals your interest in a controversial topic.
Email from The Overflow on October 31, 2019: This email contains a link to an article about the most embarrassing programming mistakes. (…) This could be considered embarrassing personal content because it reveals that you are not perfect, and that you make mistakes just like everyone else.
Much less exciting than the hallucinated fiction. And I’m totally OK with revealing that I’m not perfect; I think it’s already quite apparent to the people around me :).
In other news
🎨 OpenAI announced their upcoming image generation model called Dall-E 3. It promises to be excellent at generating images that closely match the user’s prompt. Can’t wait to give it a try!
💫 MistralAI, the recently founded European start-up that set out to rival OpenAI, released their first model. It appears to surpass other open-source models of the same size. We will give it a spin in a future post :).
🕯️ Luiz André Barroso, the driving force behind Google’s data center design, passed away last week. He developed the concept of the data center as a “warehouse-scale computer” and championed the use of customized hardware, including power supplies and cooling kits. Despite all his technical achievements, Barroso told WIRED in 2012 that mentoring interns was “probably the thing I’m best at.”
Postcard from La Rochelle
I spent last weekend in La Rochelle, then my return train got cancelled and I got stuck there. Thanks to that, I got to take this photo :).
Have a great week 💫,
– Przemek