Picture this: It’s 11 p.m. and you and your friends are hungry for McDonald’s. When you pull up to the drive-thru, instead of a human voice over the intercom, you hear a robot asking what you’d like to order. A company called Clinc is working to make this hypothetical situation a reality.
Four University of Michigan professors founded Clinc in Ann Arbor in 2016. Their website claims that they’re the “world leader in conversational AI research and its application for the enterprise.” Their mission is “to push the boundaries of conversational AI, empowering enterprises to deliver a new and revolutionary AI experience for their customers.”
Clinc has traditionally worked in the financial technology, or fintech, sector. Its roster of clients includes such giants as Barclays, USAA and S&P Global. Now, however, the company is venturing into the world of quick-service restaurants (QSR) with a drive-thru voice assistant. Clinc hopes to use this technology to enhance the drive-thru experience and to eventually drive sales via mobile apps associated with those restaurants.
AI has been around for a while, so why has it taken so long for someone to attempt this? It seems fairly simple, right?
Think about the way you order when you go through a drive-thru, especially when compared to the way your friends and family members order. We all have verbal idiosyncrasies that can sometimes make it difficult for human drive-thru workers to understand what we're trying to tell them. Now imagine the difficulty of programming a robot to recognize that the many different ways people might phrase the same meal order all mean the same thing.
When you place an order, you’re giving the robot a lot of information at once. What meal would you like? What size? Any condiments? Do you want lettuce, tomato and cheese on your burger? These drive-thru robots need to process every aspect of this order to prepare the meal to every customer’s exact specifications. That’s no small feat.
Clinc says they’re accomplishing this task by “decoupling the dialogue management from the response logic.” This essentially means that, rather than having the AI respond to each individual piece of information, it can take in the entire order at once, contextualize it and formulate a single response.
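To make that idea concrete, here is a minimal sketch of what "taking in the entire order at once, contextualizing it and formulating a single response" might look like. Everything here, including the menu, the slot handling and the parsing logic, is invented for illustration and is in no way Clinc's actual implementation, which would use trained language models rather than keyword matching:

```python
import re

# A toy menu with prices; a real system would load this from the restaurant.
MENU = {"burger": 4.99, "fries": 2.49, "shake": 3.29}
SIZES = {"small", "medium", "large"}

def parse_order(utterance):
    """Extract every item and size from the whole utterance in one pass,
    instead of prompting the customer for each piece of information."""
    words = re.findall(r"[a-z]+", utterance.lower())
    items, pending_size = [], None
    for word in words:
        if word in SIZES:
            pending_size = word  # remember the size until its item appears
        elif word in MENU:
            items.append({"item": word, "size": pending_size or "medium"})
            pending_size = None
    return items

def respond(items):
    """The response logic runs once, after the full order is understood."""
    if not items:
        return "Sorry, I didn't catch any menu items."
    summary = ", ".join(f"{i['size']} {i['item']}" for i in items)
    total = sum(MENU[i["item"]] for i in items)
    return f"So that's a {summary}. Your total is ${total:.2f}. Anything else?"

order = parse_order("Can I get a large burger and a small shake?")
print(respond(order))
```

The point of the separation is visible in the structure: `parse_order` (dialogue understanding) and `respond` (response logic) never interleave, so the customer is never forced through a question-by-question menu.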
This also means that you can place your order just as you would if you were talking to a person. You won’t need to order via a branching voice menu or reframe your phrasing to be understood.
This AI platform will allow programmers to add menu items to the service simply by dragging and dropping elements in a graphical interface. Customers will also be able to change their order just by asking the robot to change or add an item.
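A conversational change like "actually, remove the fries" can be sketched as a simple update to the order state. Again, this is a hypothetical illustration with an invented menu and keyword logic, not Clinc's platform:

```python
import re

# Toy set of recognizable menu items, invented for this example.
MENU = {"burger", "fries", "shake"}

def change_order(order, utterance):
    """Add or remove items based on a natural-language change request."""
    words = re.findall(r"[a-z]+", utterance.lower())
    removing = any(w in {"remove", "cancel", "without", "no"} for w in words)
    for word in words:
        if word in MENU:
            if removing:
                order = [item for item in order if item != word]
            else:
                order = order + [word]
    return order

order = ["burger", "fries"]
order = change_order(order, "Actually, remove the fries")
order = change_order(order, "And add a shake, please")
print(order)  # ['burger', 'shake']
```

Because the order is just mutable state on the restaurant's side, the customer never has to start over; each request is interpreted against what they have already said.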
Restaurants that integrate this type of system into their drive-thru operations will need to decide between a cloud-based deployment and an on-premise implementation. On-premise would take longer because you'd need to set up a complete system at a physical location (or multiple locations, if that's what you're going for). A cloud-based deployment, on the other hand, would take only a few days.
This kind of automation could prove controversial, as many people believe it's an example of robots taking human jobs. Supporters counter that adopting it could open up work in other, less menial positions.
Earlier this year, Creator designed a robot that could make a burger on its own in five minutes, most of which is simply spent waiting for the burger to cook. Do you think we'll ever see a day when fast food restaurants are run entirely by AI? What other industries do you think could benefit from this type of voice service technology?