
Voice interfaces explained in plain English

Conference

Mind the Geek

Ever since I first encountered voice user interfaces (which was about a decade ago), they have always been the future of human-computer interaction. They are the future now. And they will remain the future. We like to believe we talk like Dave and HAL, but what we actually do is give orders - very simple ones.

In this session I’ll look behind the scenes of voice user interfaces. I won’t talk about Alexa Skills or Google Assistant. It’ll be a deep-dive explanation of how computers recognize what is being spoken, not necessarily how they understand what has been said.

Expect some insights into speech recognition theory, a bit of physics and statistics, and some more or less sensible applications of voice user interfaces. A futuristic vision of interconnected voice / virtual reality interfaces? A different, hands-free way of ordering car insurance? Or improving security and user experience with voice biometrics?
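
To give a flavour of the “physics and statistics” behind recognition, here is a minimal, purely illustrative sketch (not taken from the talk; it assumes Python with numpy and librosa): turn the raw audio wave into mel-frequency cepstral coefficients, the compact spectral features that a statistical acoustic model typically scores.

import numpy as np
import librosa

# Physics: speech is a pressure wave, here sampled at 16 kHz.
sr = 16000
t = np.linspace(0, 1.0, sr, endpoint=False)
# Synthetic tone as a stand-in for a spoken utterance.
audio = (0.5 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)

# Statistics: summarize the short-time spectrum as 13 MFCCs per frame,
# the classic input to an acoustic model.
mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=13)
print(mfcc.shape)  # (13, number of frames)

# A recognizer then searches for the word sequence W that maximizes
# P(W | features), i.e. P(features | W) * P(W).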

After all, voice interfaces have never changed our lives. Not with automatic speech recognition in telecoms (in 2006), not with Siri (in 2011), not with Amazon Echo (in 2014). Still, we find voice interfaces mysterious. We will try to change that during this session.

 Voice Recognition    Voice UI interface    user experience    Alexa Voice Service    Google Voice  
Jakub Marchwicki

Jakub is a software craftsman with over a decade of commercial experience in programming, wearing multiple hats and getting his hands dirty in multiple environments. Some languages, some frameworks, blah blah blah - it doesn’t really matter. Architect, programmer, manager, technical trainer, tech lead, wannabe entrepreneur, JUG leader. There is a fair chance he does none of those right.