Artificial Conversation: A Substack Newsletter About A.I.
Subscribe to learn more about the development of conversational bots
Conversational A.I. was supposed to be smart. It was the voice in the 2013 movie Her, about a man who falls in love with his A.I. virtual assistant. It was R2-D2 and C-3PO in Star Wars.
So when former Google engineer Blake Lemoine said in June 2022 that the A.I. bot he worked on was sentient, I was intrigued. Had this technology advanced faster than I’d imagined? Was it sentient after all?
The transcript of Lemoine’s interview with LaMDA struck me as frighteningly human-like. The bot seemed to display a surprising degree of general intelligence, and after reading the interview he published, I decided to explore the topic further. How could someone as smart as a Google engineer come to believe a bot was sentient?
Unlike Lemoine, I’m no engineer. I’m a Silicon Valley technology marketer with journalism and business degrees, and I’ve found myself increasingly curious about the evolving state of A.I. I don’t work on artificial intelligence, and I don’t even have a formal education in computer science. I’m simply on a philosophical quest to answer for myself: What does it mean for a computer algorithm to act human?
That’s why I’ve decided to start this new Substack newsletter, Artificial Conversation. It’s a newsletter for the A.I.-curious. I’ll discuss the general intelligence of A.I., and I’ll post conversations I’ve had with A.I. bots. Ultimately, my aim is to test this technology, ask how human it really is, and explore the ethical issues that arise as it evolves.
There are a handful of technology companies that are conducting public research on artificial intelligence chatbots today. Here are a few of the major projects I’ve learned about so far:
Google

The world’s leading search engine company announced LaMDA in 2021. The system is built on Transformer, a neural network architecture that Google Research invented and open-sourced in 2017. Unlike most other language models, including Google’s own BERT, LaMDA was trained on dialogue. Once trained, it could be fine-tuned to significantly improve the “sensibleness” and specificity of its responses. Google defines “sensibleness” as whether a response makes sense, given the conversational context. In addition, Google has said it is exploring “interestingness,” by assessing whether responses are “insightful, unexpected or witty.”
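LaMDA itself isn’t publicly available, but for readers who want a feel for what a dialogue-trained Transformer looks like in practice, here is a minimal sketch using Microsoft’s openly released DialoGPT model through the Hugging Face transformers library. The model choice, prompt, and settings are mine, purely for illustration; this is a stand-in, not Google’s system.

```python
# A minimal sketch of chatting with a dialogue-trained Transformer.
# LaMDA is not public, so this uses Microsoft's open DialoGPT model
# (via the Hugging Face transformers library) as a stand-in.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-medium")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-medium")

# Encode one user turn, ending with the end-of-sequence token.
prompt = "Do you think machines can be sentient?"
input_ids = tokenizer.encode(prompt + tokenizer.eos_token, return_tensors="pt")

# Generate a reply; sampling makes the bot less repetitive.
reply_ids = model.generate(
    input_ids,
    max_length=100,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(reply_ids[0, input_ids.shape[-1]:], skip_special_tokens=True))
```

Run it a few times and you’ll see both the charm and the shallowness of a small dialogue model’s replies.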
Google has also built an open-domain chatbot called Meena. It said in early 2020 that the A.I. can conduct conversations that are “more sensible and specific than existing state-of-the-art chatbots,” as measured by a new metric called the “Sensibleness and Specificity Average” (SSA). Google’s training objective with Meena was to minimize perplexity, a measure of how surprised a model is by the next word in a conversation.
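Perplexity has a precise definition: it is the exponential of the average negative log-probability a model assigns to each token it is asked to predict. A tiny sketch, with numbers I made up for illustration, makes the idea concrete:

```python
import math

# Toy illustration of perplexity: the exponential of the average
# negative log-probability a model assigns to each actual next token.
# These probabilities are invented for the example.
token_probs = [0.25, 0.10, 0.50, 0.05]

avg_neg_log_prob = -sum(math.log(p) for p in token_probs) / len(token_probs)
perplexity = math.exp(avg_neg_log_prob)

print(f"Perplexity: {perplexity:.2f}")  # lower = less "surprised" by real text
```

A model that always assigned probability 1 to the correct next word would have a perplexity of 1; the lower the number, the better the model predicts real conversations.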
These advances by Google have raised a number of ethical questions, and Google has published a set of A.I. principles to establish that it will do no evil. Meanwhile, its ethical A.I. team is in a state of upheaval. Google A.I. ethics researcher Timnit Gebru was fired in 2020 over an academic paper scrutinizing the technology, and the small team has faced several departures since then. The future of this research at Google hangs in the balance.
Meta
BlenderBot 3 is Meta’s freshest attempt at releasing a chatbot that can talk about nearly any topic. The conversational A.I. was opened to the public in the United States on August 5, 2022, and it combines skills of personality, empathy, and knowledge. It has long-term memory, and it is based on a language model called Open Pretrained Transformer (OPT-175B). This natural language processing (NLP) system has 175 billion parameters and was trained on a massive and varied volume of text. Meta has released the underlying models and code to researchers.
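The full 175-billion-parameter weights are gated behind a research access request, but Meta published smaller OPT checkpoints openly on Hugging Face. Here’s a quick sketch with the smallest one, facebook/opt-125m; the prompt and sampling settings are my own, just to show the mechanics:

```python
# The full OPT-175B weights require a research access request, but Meta
# released smaller checkpoints openly; this sketch uses the 125M version.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")

input_ids = tokenizer("A chatbot should be", return_tensors="pt").input_ids
output = model.generate(input_ids, max_length=30, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```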
Microsoft
Xiaoice is a China-based intelligent chatbot that the tech giant from Redmond released in 2014, and it has attracted more than 660 million users around the world. The A.I. uses deep learning techniques to build up emotional intelligence under a framework that Microsoft calls the “Empathic Computing Framework.”
Originally, the character was presented as a 16-year-old. Her creators eventually raised her age to 18 and have said she won’t grow any older. The teenager has learned to write literature and compose songs, and she has even published a book of poems.
In addition to Xiaoice, Microsoft has also released Ruuh in India, Rinna in Japan, Rinna in Indonesia, and Zo in the United States. In 2016 it released a chatbot named Tay on Twitter; Tay was pulled offline in less than 24 hours after users coaxed it into making racist, antisemitic, and misogynistic statements.
Microsoft has also made deep investments in OpenAI, the makers of ChatGPT.
A number of other companies are working on conversational A.I., and I plan to explore their systems in this newsletter. Companies such as IBM, Salesforce, Cisco, and Intel are building with OpenAI, which provides access to GPT-3, a large language model that developers can query in plain language. GPT-3 was trained on a massive corpus of text from the web, and a variety of use cases for it are already appearing in the wild.
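Access to GPT-3 goes through OpenAI’s API. As of this writing, a minimal call with the openai Python library looks roughly like the sketch below; the prompt and parameters are my own, and you would need your own API key:

```python
# A minimal sketch of calling GPT-3 through OpenAI's Python library,
# using the Completion endpoint as it exists at the time of writing.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.Completion.create(
    model="text-davinci-003",
    prompt="Explain what makes a chatbot's reply feel human.",
    max_tokens=100,
    temperature=0.7,
)
print(response.choices[0].text.strip())
```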
The field of conversational artificial intelligence is a rabbit hole, and there are a variety of papers in the public domain that discuss its evolution. In this newsletter, I will dive deep.
My sincere hope is that it will open up a conversation with my readers where we find more questions than answers.