How does Siri hear you?

22 November 2019, 4.00 PM - 22 November 2019, 5.00 PM

John Bridle, Machine Learning Research Engineer, Apple

Psychology Common Room, Social Sciences Complex, 12a Priory Road

Abstract
Voice assistants such as Siri are most conveniently activated by saying a "wake-up word" such as "Hey Siri!". A smart music loudspeaker needs to be able to hear voice commands whilst it is playing music, and to hear those commands even when there is interfering speech and sound from elsewhere in the room. We review the adaptive signal processing and machine-learned pattern processing that go into making a high-performance voice-first device such as HomePod. The talk will take us through techniques for cancelling outgoing signals, directional listening and noise cancellation, keyword detection and checking, plus some of the techniques used to make the main speech recognition work well.
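
To give a flavour of the first of these techniques, the sketch below shows a normalised least-mean-squares (NLMS) adaptive filter, a standard textbook approach to cancelling a device's own outgoing loudspeaker signal from its microphone. This is an illustrative toy, not Apple's implementation: the filter length, step size, and signal names are assumptions chosen for the demo.

```python
# Illustrative sketch only: NLMS acoustic echo cancellation, i.e. estimating
# the loudspeaker-to-microphone echo path and subtracting the predicted echo,
# so that local ("near-end") speech survives. All parameters are assumptions.
import numpy as np

def nlms_echo_canceller(far_end, mic, num_taps=256, step=0.5, eps=1e-8):
    """far_end: samples sent to the loudspeaker (the known outgoing signal).
    mic: samples captured by the microphone (echo plus near-end speech).
    Returns the residual signal, i.e. the microphone with the echo removed."""
    w = np.zeros(num_taps)                  # current estimate of the echo path
    out = np.zeros(len(mic))
    for n in range(num_taps, len(mic)):
        x = far_end[n - num_taps:n][::-1]   # most recent far-end samples
        echo_hat = w @ x                    # predicted echo at time n
        e = mic[n] - echo_hat               # residual = near-end speech
        # NLMS update: step size normalised by input power keeps it stable
        w += step * e * x / (x @ x + eps)
        out[n] = e
    return out

# Toy usage: the "echo path" is just a delayed, attenuated copy of the output.
rng = np.random.default_rng(0)
far = rng.standard_normal(16000)
echo = 0.6 * np.concatenate([np.zeros(40), far[:-40]])
near = 0.1 * rng.standard_normal(16000)     # stand-in for local speech
cleaned = nlms_echo_canceller(far, echo + near)
```

In practice a device like HomePod must contend with nonlinear loudspeaker distortion and multiple drivers and microphones, which is why the talk also covers directional listening and noise cancellation rather than a single linear filter.
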

Biography
John Bridle has worked on pattern processing principles and their application to automatic speech recognition in industry and government research labs since 1967. In the 1970s he was one of the first to apply dynamic programming to automatic speech recognition, for both word spotting and continuous speech transcription. In the 1980s he explored links between a variety of stochastic models and connectionist approaches to pattern processing, and introduced the SoftMax nonlinearity to the "neural network" community.
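
For reference, the softmax nonlinearity in its now-standard form maps a vector of real-valued scores $z = (z_1, \dots, z_K)$ to a probability distribution over $K$ classes:

$$\mathrm{softmax}(z)_i = \frac{e^{z_i}}{\sum_{j=1}^{K} e^{z_j}}, \qquad i = 1, \dots, K.$$

It generalises the logistic sigmoid to more than two classes, which is what makes it the usual output layer for neural-network classifiers.
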

In the 1990s he moved from government research to industry, and applied himself to practical commercial automatic speech recognition on PCs, over the telephone and in cars. In the 2000s he was a founder of a small speech technology company specialising in voice access to large databases. Since joining Apple in 2013 he has been working on Siri, and particularly on voice activation using keywords such as "Hey Siri!".

Contact information

For any queries, please contact bvi-enquiries@bristol.ac.uk

John Bridle, Machine Learning Research Engineer, Apple, Cheltenham
