The next lens we’ll use to think about bots is the bot’s intelligence. Some bots use elements of machine learning (ML) and artificial intelligence (AI) to understand language, process complex requests, and manage dynamic outputs. And while it’s true that some bots rely heavily on AI and ML, other bots are far simpler.
The media seems to spend a fair amount of time talking about bots in the context of AI. As a result, my suspicion is that many people are conflating the two concepts (bots + AI = all bots are intelligent agents). That’s not the case at all.
Bots exist along a continuum. At the simple end there are Script Bots, and at the complex end are Intelligent Agents (called “Cutting edge bots” in the graphic):
The simplest bots are script bots. The entire interaction is based on a pre-determined model (the “script”) that determines what the bot can and cannot do. The “script” is a decision tree: responding to one question takes you down a specific path, which opens up a new, pre-determined set of possibilities. It’s basically a Choose Your Own Adventure (for those old enough to remember Choose Your Own Adventure, or books).
The important thing to recognize with a script bot is that the bot’s domain is necessarily limited. If a customer service bot allows you to select from red, blue, or green, and you try to select magenta, the interaction fails. Limiting the interaction with a bot by defining a narrow set of acceptable inputs might feel restrictive, but there are strong arguments for it. By being very explicit about the limits of the bot’s domain (and the grammar of acceptable responses), you keep the interaction directed and the quality of the user experience high.
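The decision-tree model above can be sketched in a few lines of code. This is a hypothetical illustration (the node names, prompts, and color options are made up), but it shows the defining property of a script bot: every acceptable input is enumerated up front, and anything off-script is rejected.

```python
# A minimal script bot: the "script" is a tree of nodes, each with a
# prompt and an explicit set of acceptable replies. Each reply opens
# up a new, pre-determined node, Choose-Your-Own-Adventure style.
SCRIPT = {
    "start": {
        "prompt": "Pick a color: red, blue, or green",
        "options": {"red": "warm", "blue": "cool", "green": "cool"},
    },
    "warm": {"prompt": "A warm color! Matte or glossy?",
             "options": {"matte": None, "glossy": None}},
    "cool": {"prompt": "A cool color! Matte or glossy?",
             "options": {"matte": None, "glossy": None}},
}

def respond(node, user_input):
    """Return (reply, next_node); off-script input keeps us at the same node."""
    options = SCRIPT[node]["options"]
    choice = user_input.strip().lower()
    if choice not in options:
        # "magenta" lands here: the bot's domain is necessarily limited.
        return ("Sorry, I can only accept: " + ", ".join(options), node)
    nxt = options[choice]
    return (SCRIPT[nxt]["prompt"] if nxt else "All done!", nxt)
```

Trying `respond("start", "magenta")` fails gracefully and stays at the same node, while `respond("start", "red")` advances down the pre-determined path.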
Sometimes a script bot may use natural language processing (NLP) on the front end of the interaction to parse out words that match an answer in its script. This is enticing, but kinda dangerous from a user experience perspective. Language is a really hard problem. If you give people the impression that they can talk with the bot the way they would talk with a human, the bot may have a hard time understanding the inputs. This leads to aggravating error-recovery behavior, as in this example with the Poncho weather bot:
Script bots that want to use NLP as a “chatty” front end need to think very carefully about this. People will be people, and they will go off script. How does your bot handle these unplanned-for interactions?
One method is to fail over to a human customer service agent, which brings us up the Bot Intelligence Continuum to Smart Bots.
Much of the excitement around bots focuses on the *possibilities* of bots, given the massive advances in ML and AI in recent years. And some of this excitement is well-founded. Many bots have a heavy server-side processing component, which gives them access to massive computing power for understanding and responding to queries. Couple that with the open-sourcing of AI software libraries like Theano and TensorFlow, and you have the ingredients for some amazing human-bot interactions.
Many of the bots getting the most media coverage leverage AI for the first response mechanism. If the interaction takes a turn that the AI can’t handle, the system falls back on a human agent to sort things out. Examples of this are Clara, Fin, and Facebook M.
When you think about the AI + Human Agent model, it seems like a natural fit for customer service applications. Maybe you just want to know how much your next bill is, or when it’s due, which is easy enough for a bot to handle. If the query gets more complex (“Why didn’t I get the bill credit I expected?”), the interaction is transferred to a human agent.
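The AI + Human Agent pattern boils down to a confidence-gated dispatcher. Here is a hypothetical sketch: the intents, keyword scoring, and threshold are all made up for illustration (a real Smart Bot would use a trained intent classifier rather than keyword counting), but the shape is the same: answer only when confident, otherwise escalate to a human.

```python
# Sketch of the Smart Bot fallback pattern: the bot answers only when
# its intent match clears a confidence threshold; anything else is
# escalated to a human agent. Intents and answers are illustrative.
INTENTS = {
    "billing_amount": (["how much", "bill", "amount"], "Your next bill is $42."),
    "billing_due":    (["when", "due", "bill"],        "Your bill is due on the 1st."),
}
THRESHOLD = 0.5

def handle(query):
    query = query.lower()
    best_answer, best_score = None, 0.0
    for keywords, answer in INTENTS.values():
        # Crude confidence: fraction of the intent's keywords present.
        score = sum(k in query for k in keywords) / len(keywords)
        if score > best_score:
            best_answer, best_score = answer, score
    if best_score >= THRESHOLD:
        return best_answer
    # The AI can't handle it: fall back to a human agent.
    return "Let me connect you with a human agent."
```

A simple billing question clears the threshold and gets an automated answer; the bill-credit question matches nothing well and is handed off.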
Intelligent Agents is a deliberately broad catch-all for the rest of customer-facing AI technology, ranging from DeepMind’s AlphaGo to Tesla’s self-driving cars. This is a very diverse, rapidly accelerating space.
The main differentiator between Intelligent Agents and Smart Bots is that Intelligent Agents are designed to be autonomous. If operating correctly, they should require no human intervention to perform their tasks. Google’s self-driving cars are designed without steering wheels for humans, because they shouldn’t be necessary. x.ai’s scheduling bot, Amy Ingram, manages all the back-and-forth of setting up meetings with zero oversight.
Because research in artificial intelligence and machine learning is accelerating so rapidly, this area is the most difficult to make predictions about, and the most difficult to encapsulate. But the public also needs to have its expectations set correctly. We’re still at least 50 years away from true Artificial General Intelligence (AGI). So we can marvel at self-driving cars while realizing it’ll be 2066 before we get to fall in love with Her.