Supervisors: Irina Illina, MdC, Dominique Fohr, CR CNRS
Team: Multispeech, LORIA-INRIA
Contact: email@example.com, firstname.lastname@example.org
Duration of post-doc or research engineer: 12-18 months
Duration of PhD thesis: 3 years
Deadline to apply: May 15th, 2019
Required skills: background in statistics, natural language processing, and programming skills (Perl, Python). Candidates should email a detailed CV together with a copy of their diploma.
Under noisy conditions, audio acquisition is one of the toughest challenges for successful automatic speech recognition (ASR). Much of the success relies on the ability to attenuate ambient noise in the signal and to take that noise into account in the acoustic model used by the ASR system. Our DNN (Deep Neural Network) denoising system and our approach to exploiting uncertainty have shown their combined effectiveness on noisy speech [Nathwani et al., 2017; Nathwani et al., 2018; Nugraha et al., 2016].
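As a toy illustration of the uncertainty idea (a minimal sketch, not the actual system described in the cited papers), uncertainty decoding can be summarized as follows: the denoiser outputs both an enhanced feature and an estimate of its own error variance, and that variance is added to the acoustic-model variance before scoring. All function names and numbers below are hypothetical.

```python
import math

def gaussian_loglik(x, mean, var):
    """Log-likelihood of x under a univariate Gaussian."""
    return -0.5 * (math.log(2 * math.pi * var) + (x - mean) ** 2 / var)

def loglik_with_uncertainty(x_hat, mean, var, uncertainty_var):
    """Uncertainty decoding (sketch): the enhanced feature x_hat is an
    imperfect estimate of the clean feature, so the estimation variance
    reported by the denoiser is added to the model variance."""
    return gaussian_loglik(x_hat, mean, var + uncertainty_var)

# An enhanced feature far from the model mean is penalized less when the
# denoiser reports high uncertainty about its estimate.
confident = loglik_with_uncertainty(3.0, 0.0, 1.0, 0.0)
uncertain = loglik_with_uncertainty(3.0, 0.0, 1.0, 2.0)
print(uncertain > confident)  # True
```

In the actual system the uncertainty is propagated through the DNN acoustic model rather than through a single Gaussian, but the principle is the same: unreliable enhanced features should contribute less to the recognition decision.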
The ASR stage will be supplemented by a semantic analysis. Predictive representations using continuous vectors have been shown to capture the semantic characteristics of words and their context, and to outperform representations based on word counts [Peters et al., 2017; Peters et al., 2018]. Semantic analysis will be performed by combining these predictive continuous-vector representations with the uncertainty from denoising. This combination will be carried out in a rescoring component. All our models will be based on DNN technology.
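The rescoring idea can be sketched with a toy example (an assumption-laden illustration, not the project's actual model): each hypothesis in an ASR N-best list is rescored by interpolating its acoustic score with a semantic coherence score computed from word vectors. The embeddings, scores, and interpolation weight below are all invented for illustration; in practice the vectors would come from a trained predictive model.

```python
import math

# Toy 2-dimensional word vectors (hypothetical; a real system would use
# vectors from a trained embedding or contextual language model).
EMBEDDINGS = {
    "recognize": [0.9, 0.1],
    "speech":    [0.8, 0.2],
    "beach":     [0.1, 0.9],
}

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def semantic_coherence(words):
    """Mean pairwise cosine similarity between the words' vectors."""
    vecs = [EMBEDDINGS[w] for w in words if w in EMBEDDINGS]
    if len(vecs) < 2:
        return 0.0
    sims = [cosine(vecs[i], vecs[j])
            for i in range(len(vecs)) for j in range(i + 1, len(vecs))]
    return sum(sims) / len(sims)

def rescore(nbest, weight=0.5):
    """Pick the hypothesis maximizing an interpolation of the ASR score
    and the semantic coherence score."""
    return max(nbest,
               key=lambda h: (1 - weight) * h["asr_score"]
                             + weight * semantic_coherence(h["words"]))

nbest = [
    {"words": ["recognize", "beach"], "asr_score": 0.52},
    {"words": ["recognize", "speech"], "asr_score": 0.48},
]
best = rescore(nbest)
print(best["words"])  # the semantically coherent hypothesis wins
```

Here the acoustically preferred but semantically odd hypothesis ("recognize beach") loses to the coherent one ("recognize speech") after rescoring; the project would additionally weight this combination by the denoising uncertainty.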
- study and implementation of a noisy speech enhancement module and an uncertainty propagation module;
- design of a semantic analysis module;
- design of a module combining the semantic and uncertainty information.
A strong background in mathematics, machine learning (DNN), and statistics is required.
Candidates with either of the following profiles are welcome:
- strong background in signal processing;
- strong experience with natural language processing.
Excellent English writing and speaking skills are required in any case.
[Nathwani et al., 2018] Nathwani, K., Vincent, E., and Illina, I. "DNN uncertainty propagation using GMM-derived uncertainty features for noise robust ASR." IEEE Signal Processing Letters, 2018.
[Nathwani et al., 2017] Nathwani, K., Vincent, E., and Illina, I. "Consistent DNN uncertainty training and decoding for robust ASR." In Proc. IEEE Automatic Speech Recognition and Understanding Workshop, 2017.
[Nugraha et al., 2016] Nugraha, A., Liutkus, A., and Vincent, E. "Multichannel audio source separation with deep neural networks." IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2016.
[Sheikh, 2016] Sheikh, I. "Exploitation du contexte sémantique pour améliorer la reconnaissance des noms propres dans les documents audio diachroniques." PhD thesis in Computer Science, Université de Lorraine, 2016.
[Peters et al., 2017] Peters, M., Ammar, W., Bhagavatula, C., and Power, R. "Semi-supervised sequence tagging with bidirectional language models." In Proc. ACL, 2017.
[Peters et al., 2018] Peters, M., Neumann, M., Iyyer, M., Gardner, M., Clark, C., Lee, K., and Zettlemoyer, L. "Deep contextualized word representations." In Proc. NAACL, 2018.