MULTISPEECH is a joint research team between the Université de Lorraine, Inria, and CNRS. It is part of department D4 “Natural language and knowledge processing” of LORIA.
Its research focuses on speech processing, with particular emphasis on multisource (source separation, robust speech recognition), multilingual (computer-assisted language learning), and multimodal (audiovisual synthesis) aspects.
The research program is organized along the following three axes:
- explicit speech modeling, which exploits the physical properties of speech,
- statistical speech modeling, which relies on machine learning tools such as Bayesian models (e.g., GMM-HMM) and deep neural networks (DNN),
- modeling of the uncertainties due to the strong variability of the speech signal and to model imperfections.