Tuesday, 15 March 2016

Audiovisual perceptual learning with multiple speakers


Publication date: May 2016
Source: Journal of Phonetics, Volume 56
Author(s): Aaron D. Mitchel, Chip Gerfen, Daniel J. Weiss
One challenge for speech perception is between-speaker variability in the acoustic parameters of speech. For example, the same phoneme (e.g. the vowel in "cat") may have substantially different acoustic properties when produced by two different speakers, and yet the listener must be able to interpret these disparate stimuli as equivalent. Perceptual tuning, the use of contextual information to adjust phonemic representations, may be one mechanism that helps listeners overcome obstacles posed by this variability during speech perception. Here we test whether visual contextual cues to speaker identity facilitate the formation and maintenance of distributional representations for individual speakers, allowing listeners to adjust phoneme boundaries in a speaker-specific manner. We familiarized participants with an audiovisual continuum between /aba/ and /ada/. During familiarization, the "b-face" mouthed /aba/ when an ambiguous token was played, while the "d-face" mouthed /ada/. At test, the same ambiguous token was more likely to be identified as /aba/ when paired with a still image of the "b-face" than with an image of the "d-face." This was not the case in the control condition, in which the two faces were paired equally with the ambiguous token. Together, these results suggest that listeners may form speaker-specific phonemic representations using facial identity cues.
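To make the idea of speaker-specific phoneme boundaries concrete, here is a minimal sketch in Python. It is an illustration only, not the authors' model: the logistic categorization function, the boundary values, and the continuum positions are assumptions chosen for this example.

import math

def p_ada(stimulus, boundary, slope=8.0):
    # Probability of labeling a token on the /aba/-/ada/ continuum as /ada/,
    # given a logistic categorization function with a speaker-specific boundary.
    # stimulus: position on the continuum, 0.0 (/aba/) to 1.0 (/ada/)
    # boundary: continuum position of the /b/-/d/ category boundary
    # slope: steepness of the categorization function
    return 1.0 / (1.0 + math.exp(-slope * (stimulus - boundary)))

# Hypothetical boundaries after familiarization: the "b-face" speaker's
# boundary shifts toward /ada/ (more tokens heard as /aba/), while the
# "d-face" speaker's boundary shifts toward /aba/ (more tokens heard as /ada/).
boundaries = {"b-face": 0.6, "d-face": 0.4}

ambiguous_token = 0.5  # the same ambiguous token presented with both faces
for face, boundary in boundaries.items():
    print(f"{face}: P(/ada/) = {p_ada(ambiguous_token, boundary):.2f}")

Run as written, the identical token yields a lower /ada/ probability for the "b-face" than for the "d-face", which is the qualitative pattern the abstract reports at test.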



from Speech via a.lsfakia on Inoreader http://ift.tt/22iAqp1
via IFTTT
