Methodology and technology for the polymodal allophonic speech transcription
Andrzej Czyzewski, Tomasz Ciszewski, Bożena Kostek
Abstract: A method for the automatic audiovisual transcription of speech is developed, employing acoustic, electromagnetic articulography and visual speech representations. It combines the audio and visual modalities, which provides a synergistic effect in terms of speech recognition accuracy. To establish a robust solution, basic research is carried out concerning the relation between the allophonic variation of speech, i.e., the changes in the articulatory setting of the speech organs for the same phoneme produced in different phonetic environments, and the objective signal parameters (both audio and video). The method is sensitive to minute allophonic detail as well as to accentual differences. It is shown that by analyzing the video signal together with the acoustic signal, speech transcription can be performed more accurately and robustly than by using the acoustic modality alone. In particular, various features extracted from the visual signal are tested for their ability to encode allophonic variations in pronunciation. New methods for modeling the accentual and allophonic variation of speech are developed.
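The combination of audio and visual modalities described in the abstract can be illustrated with a decision-level (late) fusion scheme, in which per-class scores from the two streams are merged. The sketch below is purely illustrative: the function name, the log-linear weighting, the weight value, and the allophone labels are all assumptions for demonstration, not the authors' actual implementation.

```python
import math

def fuse_scores(audio_probs, video_probs, audio_weight=0.7):
    """Weighted log-linear fusion of per-class posterior probabilities
    from an acoustic and a visual classifier (illustrative sketch)."""
    eps = 1e-12  # avoid log(0)
    fused = {}
    for phone in audio_probs:
        fused[phone] = (audio_weight * math.log(audio_probs[phone] + eps)
                        + (1 - audio_weight) * math.log(video_probs[phone] + eps))
    # renormalize back to a probability distribution
    total = sum(math.exp(v) for v in fused.values())
    return {p: math.exp(v) / total for p, v in fused.items()}

# Hypothetical example: two allophonic variants that the acoustic model
# finds hard to separate, while visual (lip/jaw) features are more decisive
audio = {"t": 0.55, "t_aspirated": 0.45}
video = {"t": 0.20, "t_aspirated": 0.80}
fused = fuse_scores(audio, video)
best = max(fused, key=fused.get)  # the video evidence tips the decision
```

In this toy case the fused score selects the aspirated variant even though the acoustic stream alone slightly favoured the plain one, which is the kind of synergy effect the abstract refers to.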
Journal series: Journal of the Acoustical Society of America, ISSN 0001-4966
Score: 25.0, 20-12-2017, ArticleFromJournal; 35.0, 20-12-2017, ArticleFromJournal
Publication indicators: 2016 = 1.547 (2); 2016 = 1.85 (5)
* The presented citation count is obtained through Internet information analysis and is close to the number calculated by the Publish or Perish system.