Expedite

Voice AI + restaurants: a Q&A

Writer and investor M.G. Siegler on the future of voice AI in real life (and restaurants)

Kristen Hawley
Jan 29, 2026
∙ Paid

Expedite’s peeking into the (near) future this month with a series of interviews covering hot topics changing the business of hospitality. This week: an exchange with tech writer and investor M.G. Siegler.


Voice computing is (finally) having its moment.

It wasn’t that long ago that talking to your phone (or your car, or your television, or your speaker, all things I’ve done in the last 24 hours) felt strange or impossible or just… wrong. Thanks to AI, voice technology is changing fast, with potentially huge implications for the restaurant industry.

I asked fellow technology journalist and voice computing enthusiast M.G. Siegler about the history of voice in tech and its potential to transform everyday life. We’re now at an inflection point, he said.

So why now? And what does this mean for restaurants?

Voice AI has already shown up (and, in some cases, crashed out) in restaurant drive-thrus and on restaurant phone lines. But as the tech evolves, so does the opportunity. OpenAI, maker of ChatGPT, has promised its first piece of hardware later this year; rumors suggest it’ll be a kind of screen-free AI ‘companion.’ Apple is almost certainly working on an AI wearable, too. (Though, as M.G. notes, “Apple never confirms anything.”) New devices working inside restaurants might unlock an entirely new style of service.

I asked M.G. about this and more in a recent email exchange, which is reprinted below for paid subscribers. He answers with a blend of clarity, context, and history that makes big tech’s hype make sense to the rest of us.

This interview has been lightly edited for length and clarity.


Expedite: You’ve been excited about voice AI for some time, and while progress has come in fits and starts, you’ve recently written that we are at an inflection point for voice and computing. Why is now the right time?

M.G. Siegler: “I think it’s a combination of factors finally coming together. Most people undoubtedly don’t recall how bad even just voice dictation/transcription used to be. I recall using software in the 1990s that seemed futuristic at the time but in hindsight was horrible. Obviously, that technology is good enough now to operate in real time on your phone.

“Related to that, computer systems became able to understand what you’re saying and take some simple actions — Siri and Alexa finally made this viable. But those systems lacked the ‘smarts’ to do much beyond playing music, setting timers, and the like. With the rise of LLM-based AI systems, actual conversations — back-and-forths — are possible. And now with ‘agents’ we’re getting the first wave of AI that can take a voice command and act on more elaborate requests. It’s still early, but progressing rapidly.

“Next will be the rise of a new range of devices that marry voice and AI together. Obviously, most people will do this on their phones, but purpose-built devices could help make it more ubiquitous.”

You covered Siri, Apple’s voice assistant, when it came to iOS 5 some 15 years ago. I feel like the consensus then was that Siri was going to change the way we interacted with technology. In hindsight… it did not, at least not profoundly. What did tech companies learn from these early days of voice experimentation? (Did they learn?)

© 2026 KHCreative, LLC