
Language Models, Conversational AI, and Chatbots Explained

Ziv Gidron Head of Content, Hyro

The Power of Language 

 

If there’s one superpower that humans possess over all other living creatures we share this planet with, it’s language. 

 

For centuries, inventors, scientists, and entrepreneurs have dreamed of building machines that can not only understand human language but use it in the same way that humans do. For the longest time this ambition was but a dream firmly in the realm of science fiction. 

 

Today, that dream is becoming a reality. With advancements in machine learning, neural networks, natural language processing, and other disciplines within artificial intelligence, human language is no longer just for humans. 

 

In the data-driven and customer-centric world we live in today, AI-based engagement solutions are the key to helping modern businesses thrive. Chatbots and conversational AI allow organizations to converse easily with customers, leading to more fulfilling interactions that cause conversions to skyrocket. 

 

And yet, despite monumental advancements and widespread adoption of these technologies, many people still aren’t clear on the differences between language models, conversational AI, and chatbots. Knowing the differences is crucial and can help you determine the right approach for your business. 

 

Let’s get started!

The Complex World of Language Models

Language is complex and even the smallest variation in spelling, punctuation, sentence structure, or vocabulary can dramatically alter meaning. For example, many people can attest to feeling a tad uneasy if they receive a reply to a text message they sent that simply says “ok” or worse, if they receive the dreaded “k.” But, are “ok” or “k.” really so different from a response like “okay :)”?

 

After all, when you break it down, the meaning is essentially the same. But, in spite of this, many people would agree that there is something unnerving about the full stop that communicates a more serious or final tone, and further shortening an already short word could be interpreted as reluctance to communicate at all. 

 

We’re not here to debate whether it’s reasonable to spiral into an anxious mess if you receive a “k.” in response to a message you sent. However, this tiny, relatable example of language in action raises an important point: human beings, the natural experts of human language, often struggle to decipher the meaning of text accurately. If we ourselves have trouble with this, it’s presumably even harder for a machine to pull this feat off. 

 

So, are language models the same as conversational AI and chatbots? 

 

No, and here’s why. 

 

A language model is a statistical tool that learns to predict the probability of a sequence of words. You can think of language models as the technology underpinning other tech-based language disciplines like chatbots and conversational AI. Chatbots and conversational AI may make use of some of the same technologies, like machine learning, Natural Language Processing (NLP), advanced dialog management, and more, but they are not synonymous with one another. 
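The word-prediction idea at the heart of a language model can be illustrated with a toy bigram model. This is a deliberately simplified sketch built on a nine-word corpus, not how modern neural language models work, but the core task is the same: given the words so far, estimate the probability of the next one.

```python
from collections import Counter, defaultdict

# Toy corpus; a real language model trains on billions of words.
corpus = "the cat sat on the mat the cat ate".split()

# Count which word follows which (a "bigram" model).
counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word_probs(word):
    """Relative frequency of each word observed after `word`."""
    total = sum(counts[word].values())
    return {w: c / total for w, c in counts[word].items()}

# After "the", the corpus makes "cat" twice as likely as "mat".
print(next_word_probs("the"))
```

Swapping the word counts for a neural network with billions of parameters is, at a very high level, the jump from this sketch to models like GPT-3.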

 

It may sound counterintuitive at first glance, but consider the following analogy: all men born in the U.S. are American citizens, but not all American citizens are men, and not all American citizens were born in the U.S. The point is, it’s possible to share the same qualities without arriving there in an identical way. 

 

The main takeaway is that language models predict words, but they’re not required to have identifiable goals beyond this. They are broad in scope and can be applied to many different fields. 

Language Models Hitting the Headlines

With all the work going into language models over the last few years, it should come as no surprise that some exciting examples have garnered significant public attention, particularly GPT-3 and Google’s latest trillion parameter language model.

Google’s New Language Model

In January of 2021, researchers at Google Brain announced that they had developed new techniques to train a language model with no less than 1.6 trillion parameters that is also four times faster than their previous model. In the realm of machine learning algorithms, a parameter refers to parts of the model that are learned from historical data. This was exciting news because there has historically been a correlation between the number of parameters and how sophisticated the model is.  

 

The excitement doesn’t end there. While large-scale training is invaluable for developing powerful language models, it also comes at a cost as the process is incredibly computationally intensive. However, Google solved this issue with something researchers call a Switch Transformer, a technique that only uses a small subset of parameters to transform the data input within the model. It essentially leverages ‘experts’ (i.e. smaller, specialized models) that exist within the larger model. 
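Based on that published description, the routing idea can be sketched roughly as follows. The names (`router_w`, `switch_layer`) and the tiny dimensions are purely illustrative, not Google's actual code: a learned router scores the experts and only the single best one runs for a given token.

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts = 8, 4

# Hypothetical parameters: one routing matrix, plus one small
# feed-forward "expert" per slot.
router_w = rng.normal(size=(d_model, n_experts))
experts = [rng.normal(size=(d_model, d_model)) for _ in range(n_experts)]

def switch_layer(token):
    logits = token @ router_w
    probs = np.exp(logits) / np.exp(logits).sum()  # softmax over experts
    k = int(np.argmax(probs))                      # top-1: pick ONE expert
    return probs[k] * (token @ experts[k])         # gate-scaled expert output

token = rng.normal(size=d_model)
out = switch_layer(token)
print(out.shape)  # (8,)
```

Because only one expert's weights are touched per token, total parameter count can grow enormously while the compute per input stays roughly constant.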

What is GPT-3?

Generative Pre-trained Transformer 3 (GPT-3) is an autoregressive language model developed by the artificial intelligence laboratory OpenAI, co-founded by everyone’s favorite space enthusiast Elon Musk. Put simply, GPT-3 generates text using pre-trained algorithms. It’s been fed a colossal amount of textual information from all around the public web, as well as other datasets from the OpenAI lab. As a result, it’s capable of generating various types of text, including scripts, poems, guitar tabs, essays, and even computer code.
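"Autoregressive" means the model generates one token at a time, each conditioned on everything generated so far. That loop can be sketched in miniature; the `model` function below is an invented stand-in returning fixed toy probabilities (and ignoring context for brevity), standing in for the billions of learned parameters of the real network.

```python
import random

VOCAB = ["the", "cat", "sat", "<eos>"]

def model(context):
    # Stand-in next-token distribution; a real LM computes this
    # from the context using its learned parameters.
    return [0.2, 0.3, 0.3, 0.2]

def generate(prompt, max_tokens=10, seed=42):
    random.seed(seed)  # seeded so the sketch is reproducible
    tokens = list(prompt)
    for _ in range(max_tokens):
        nxt = random.choices(VOCAB, weights=model(tokens))[0]
        if nxt == "<eos>":  # model signals it is done
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate(["the"]))
```

Everything impressive about GPT-3's output comes from replacing that stand-in `model` with a transformer trained on web-scale text.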

 

With an impressive 175 billion parameters, GPT-3 has been widely praised for generating amazingly human-like text on an even grander scale than its predecessor GPT-2. When it comes to language models, parameter size matters, and it’s for this reason that the GPT-3 neural network has been described as the largest ever created at the time of its release. What is a neural network, you may ask? It’s a set of algorithms that aims to recognize underlying relationships in data through a process that mimics the way the human brain operates. 

 

OpenAI and Google’s language modeling capabilities are certainly exciting and clearly show the incredible advancements that have been made in the field, but what do they mean for businesses on the ground? Are models of this behemoth scope appropriate for companies, or do they exist in their own domain? To answer that question, we first have to look at the tools businesses are utilizing today: conversational AI and chatbots. 

A Primer on Conversational AI

Before we get started, here’s a quick refresher on some of the terms you may want to familiarize yourself with before diving in:

  • Natural Language Processing (NLP): A field of artificial intelligence aimed at reading, deciphering, and understanding human languages in a manner that adds value. NLP draws on computer science and computational linguistics. 
  • Natural Language Understanding (NLU): A sub-field of NLP. While both NLP and NLU aim to make sense of unstructured data, NLP focuses more on enabling a “natural” back-and-forth between humans and computers, while NLU is focused on the machine’s ability to understand the meaning of human language. 
  • Automatic Speech Recognition (ASR): Software that recognizes human speech and transcribes it into text. The technology has recently become popular in customer service operations, particularly in call centers. 
  • Spoken Language Understanding (SLU): Interpreting the semantic meaning conveyed in spoken utterances. SLU can be complex because spoken language doesn’t always follow the grammatical structure of written language; people often hesitate, repeat themselves, use slang, or self-correct when they speak.

Conversational AI leverages all of the above technologies to comprehend and engage in a contextual dialog. It uses NLU to understand what the human is trying to say, or their intent. A sophisticated conversational AI system should be able to understand intent, even with grammatical errors, mistakes, or idiosyncrasies. Through machine learning algorithms, the AI gets better over time at understanding meaning and learning how to respond. Conversational AI can be text-based, speech-based (using ASR and SLU), or both. 
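As a rough illustration of intent detection (not any vendor's actual NLU pipeline, which would use trained models rather than keyword lists), free-form text can be mapped to an intent by scoring word overlap against example phrases. The intent names and keywords here are invented:

```python
# Hypothetical intents with associated keywords, for illustration only.
INTENTS = {
    "book_appointment": {"book", "schedule", "appointment", "visit"},
    "opening_hours": {"open", "hours", "close", "when"},
}

def detect_intent(text):
    """Return the intent whose keywords best overlap the input, or None."""
    words = set(text.lower().replace("?", "").split())
    scores = {intent: len(words & kw) for intent, kw in INTENTS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(detect_intent("When do you open?"))       # opening_hours
print(detect_intent("Can I schedule a visit"))  # book_appointment
```

Note how two differently worded requests still land on the right intent; real NLU extends this tolerance to typos, slang, and paraphrase.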

In many ways, conversational AI is much more like talking to a human when compared to its chatbot cousin. When we read text or listen to speech, we take in a lot of data and our brains leverage several different sources to deduce the meaning, allowing us to comprehend and respond appropriately. While conversational AI is flexible in a similar manner to the human brain, chatbots, by contrast, rely on a pre-written script rendering them much more rigid and prone to error. 

The Architecture of Conversational AI

Conversational AI can be broken down into three distinct areas:

  • Understanding: Because conversational AI utilizes NLP and NLU, it comprehends the meaning and intent behind text or speech inputs. 
  • Learning at scale: Conversational AI is scalable because it feeds off of various sources like websites, databases, APIs, etc. If the original source is amended (updated or added to), these modifications are automatically applied to the conversational AI interface, allowing the system to adapt on the fly.  
  • Omnichannel: Conversational AI can operate across mediums. It can power a voice assistant like Siri, Google Assistant, or Cortana, run on a smart speaker, act as a virtual call agent, and more. It’s not limited to any one channel and can be applied across all digital channels.

What Are Chatbots?

Many people can recall an incident when they felt the urge to hurl their keyboard at the wall in frustration when talking to a chatbot. And while bots have improved over time, they are ultimately limited in their potential to communicate like human beings. 

 

Why? It all comes down to their design. 

 

Chatbots follow a predetermined script based on strict rules. When a user inputs a word or phrase, the chatbot will analyze these words and respond based on how it has been trained to respond to those specific inputs. Herein lies the main fault: If the user inputs something that the chatbot doesn’t have any rules for, it will be unable to respond. In other words, it’s rigid and not adaptable. 
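A minimal sketch of such a scripted bot makes the failure mode concrete; the rules below are invented for illustration. Anything outside the script falls straight through to a canned fallback:

```python
# Exact-match rules: the entire "intelligence" of a scripted chatbot.
RULES = {
    "hi": "Hello! How can I help?",
    "what are your hours": "We are open 9am-5pm, Monday to Friday.",
}

def chatbot_reply(user_input):
    # Normalize lightly, then look the input up in the script.
    key = user_input.lower().strip("?!. ")
    return RULES.get(key, "Sorry, I didn't understand that.")

print(chatbot_reply("Hi"))                 # Hello! How can I help?
print(chatbot_reply("When do you open?"))  # Sorry, I didn't understand that.
```

"When do you open?" means the same thing as "what are your hours," but because it isn't in the script, the bot gives up; a conversational AI system would map both phrasings to the same intent.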

 

While conversational AI is broad in scope and promotes dynamic interactions, chatbots can only perform narrow tasks and provide canned responses. Another major limitation of chatbots is that they are single-channel. They can only be used as a chat interface, which limits their functionality and scalability, particularly for organizations with a diverse customer base or companies experiencing rapid growth. 

 

Although it’s possible to update chatbots to allow them to perform at a higher level, this is something that must be done routinely and is incredibly time-consuming and labor-intensive. In contrast to conversational AI systems that improve or become more relevant each time their sources are updated, chatbots are only capable of improving through dedicated manual intervention. Most companies struggle to keep up with updating chatbots to reflect constant changes or lack the personnel entirely to be able to do so. 

Flashy Language Models, Conversational AI, and Chatbots: What's the Verdict?

The models that generate a lot of hype, like GPT-3, absolutely have their place in the history of artificial intelligence. While they are capable of doing incredible things, they’re not suited for every purpose, particularly when it comes to real-world business needs. 

 

Firstly, pricing for GPT-3 is through the roof because of the sheer amount of computing power it requires. Not only is GPT-3 beyond the budget of most organizations, it’s also too grand in its scope. Using GPT-3 to better converse with customers would be like undergoing general anesthesia because you’re having trouble sleeping. Sure, it’ll do the trick, but it’s doing more than you need, and you’ll have a better experience by choosing a tool that’s more closely aligned with your goals. 

 

Secondly, the colossal amount of text these models have consumed means they are inherently biased. A model is only as good as the data it’s trained on, and with datasets at that scale, it stands to reason that some negative biases will be present. Lastly, and most importantly, these models were never designed to be customer-facing. Rather, they are geared towards academic study, and that’s what they excel at. 

 

When it comes to day-to-day customer interactions, conversational AI is the way to go. According to Gartner’s 2021 report, the COVID-19 pandemic has dramatically accelerated the adoption of conversational AI solutions. In some industries, the increase in conversational AI has been as high as 250%, according to Deloitte. It’s clear that conversational AI has become the go-to choice for companies faced with increased volumes of customer traffic, particularly in a world that has switched to remote forms of communication.

About the author
Ziv Gidron Head of Content, Hyro

Ziv is Hyro’s Head of Content, a conversational AI expert, and a passionate storyteller devoted to delivering insights that matter to his audiences when they matter most. When he’s not obsessively consuming or creating content on digital health and AI, you can find him rocking out to Fleetwood Mac with his four-year-old son.