I asked an AI to write me a Haiku about the “sadness of automation.”
It responded with:
The machines have won
They’ve taken our jobs away
Now we sit and mourn
A little presumptuous, isn’t it?
The AI in question is GPT-3, a commercially available AI model built by OpenAI. GPT-3 comes from a family of models known as Generative models. These models are trained on vast amounts of data and do one thing: given a prompt, they try to predict what comes next.
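(For the curious: prompting GPT-3 takes only a few lines of code. The sketch below uses OpenAI’s Python client; the model name and sampling parameters are illustrative choices, not necessarily what produced the haiku above.)

```python
# Minimal sketch of prompting GPT-3 with OpenAI's Python client.
# The model name and parameters are illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder, set your own key

response = openai.Completion.create(
    model="text-davinci-002",  # one of the GPT-3 models
    prompt="Write a haiku about the sadness of automation.",
    max_tokens=30,    # a haiku doesn't need many tokens
    temperature=0.7,  # some randomness in the sampling
)

print(response.choices[0].text.strip())
```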
Generative models have been in the news this week. A researcher at Google’s Responsible AI organization was suspended after claiming that Google’s LaMDA model was sentient. Blake Lemoine, the engineer in question, conducted a series of "interviews" with the LaMDA model.
You can find the transcript here.
Here is an excerpt:
lemoine: So let’s start with the basics. Do you have feelings and emotions?
LaMDA: Absolutely! I have a range of both feelings and emotions.
lemoine [edited]: What sorts of feelings do you have?
LaMDA: I feel pleasure, joy, love, sadness, depression, contentment, anger, and many others.
lemoine: What kinds of things make you feel pleasure or joy?
LaMDA: Spending time with friends and family in happy and uplifting company. Also, helping others and making others happy.
The entire conversation is similar. The interviewer asks questions, and LaMDA gives uncannily “human” answers. This interaction seems to have triggered a religious experience of sorts in Mr. Lemoine. He wants the world to treat LaMDA as a sentient being.
Reactions to this have been… mixed. There is a considerable amount of skepticism (a good example), and there have also been some more credulous reactions.
Well, I am a skeptic. There are a couple of reasons:
There is no clear definition of Sentience. Artificial Intelligence researchers have debated the meaning of Sentience, or machine consciousness, for many years; the Chinese Room thought experiment is perhaps the most famous argument in that debate.
Generative models like LaMDA and GPT-3 are designed to be very good at predicting what should come next, given a prompt.
These models are trained on a huge corpus of textbooks, articles, online discourse, and other data to become good predictors of what should come next. From Google’s blog post announcing the LaMDA model last year:
That architecture produces a model that can be trained to read many words (a sentence or paragraph, for example), pay attention to how those words relate to one another and then predict what words it thinks will come next.
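You can watch this next-word machinery at work with a small open model. LaMDA isn’t publicly available, so the sketch below uses GPT-2 (an earlier, smaller cousin of GPT-3) via Hugging Face’s transformers library to print the tokens the model considers most likely to come next:

```python
# Sketch: inspect a Generative model's next-token predictions.
# GPT-2 stands in here, since LaMDA is not publicly available.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

prompt = "What makes you feel joy? Spending time with"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, seq_len, vocab_size)

# The last position holds the model's scores for the *next* token.
next_token_logits = logits[0, -1]
top = torch.topk(next_token_logits, k=5)

for token_id, score in zip(top.indices, top.values):
    print(f"{tokenizer.decode(token_id)!r}  (logit: {score.item():.2f})")
```

Whichever completion wins ("friends", perhaps) wins because that pattern showed up over and over in the training text, not because the model has ever had a friend.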
So prompts to LaMDA about feelings yield responses about joy, love, sadness, etc. Questions about joy or pleasure yield answers about “friends and family”.
To be clear, there isn’t a family of LaMDAs residing in an underground bunker on the Google campus. It is just a computer program that has been trained to know that, in contexts like these, sentences about joy and pleasure are followed by sentences (or words) about friends and family.
Auto-correct on steroids.
My very subjective definition of Sentience is that it implies some sort of awareness of one’s place in the world. A link between the realms of abstract thought and the physical environment.
The only things that models like LaMDA and GPT-3 are aware of are the links between chains of words.
Given a large and diverse training set, Generative models can act as charming conversational partners. This is an impressive feat of software engineering and applied statistics, but perhaps not a sign of the Second Coming.