Lip reading

Contents

  1. Process
     • Phonemes and visemes
     • Co-articulation
     • How can it 'work' with so few visemes?
     • Variation in readability and skill
  2. Lipreading and language learning in hearing infants and children
     • The first few months
     • The next six months; a role in learning a native language
     • Early language production: one to two years
     • In childhood
  3. In hearing adults: lifespan considerations
     • In specific (hearing) populations
  4. Deafness
  5. Teaching and training
     • Tests
  6. Lipreading and lip-speaking by machine
  7. The brain
  8. References
     • Bibliography
  9. External links

Lip reading, also known as lipreading or speechreading, is a technique of understanding speech by visually interpreting the movements of the lips, face and tongue when normal sound is not available. It also relies on information provided by the context, knowledge of the language, and any residual hearing. Although most strongly associated with deaf and hard-of-hearing people, most people with normal hearing process some speech information from sight of the moving mouth.[1]

Process

Although speech perception is considered to be an auditory skill, it is intrinsically multimodal, since producing speech requires the speaker to make movements of the lips, teeth and tongue which are often visible in face-to-face communication. Information from the lips and face supports aural comprehension[2] and most fluent listeners of a language are sensitive to seen speech actions (see McGurk effect). The extent to which people make use of seen speech actions varies with the visibility of the speech action and the knowledge and skill of the perceiver.

Phonemes and visemes

The phoneme is the smallest detectable unit of sound in a language that serves to distinguish words from one another. /pit/ and /pik/ differ by one phoneme and refer to different concepts. Spoken English has about 44 phonemes. For lip reading, the number of visually distinctive units - visemes - is much smaller, thus several phonemes map onto a few visemes. This is because many phonemes are produced within the mouth and throat, and cannot be seen. These include glottal consonants and most gestures of the tongue. Voiced and unvoiced pairs look identical, such as [p] and [b], [k] and [g], [t] and [d], [f] and [v], and [s] and [z]; likewise for nasalisation (e.g. [m] vs. [b]). [https://en.wiktionary.org/wiki/homophene Homophenes] are words that look similar when lip read, but which contain different phonemes. Because there are about three times as many phonemes as visemes in English, it is often claimed that only 30% of speech can be lip read. Homophenes are a crucial source of mis-lip reading.
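The many-to-one character of the phoneme-to-viseme mapping, and the homophenes it produces, can be illustrated with a short sketch. The viseme groupings and phonemic spellings below are simplified assumptions chosen for illustration, not a standard viseme inventory.

<syntaxhighlight lang="python">
# Minimal sketch: collapse phonemes into (assumed) viseme classes and show
# that words built from different phonemes can share one viseme string.
VISEME_OF = {
    "p": "P", "b": "P", "m": "P",        # bilabials look alike on the lips
    "f": "F", "v": "F",                  # labiodentals look alike
    "t": "T", "d": "T", "n": "T", "s": "T", "z": "T",
    "k": "K", "g": "K",
    "a": "A", "i": "I",
}

def to_visemes(phonemes):
    """Map a phoneme string to the viseme string a lipreader could see."""
    return "".join(VISEME_OF.get(p, "?") for p in phonemes)

# 'pat', 'bat' and 'mat' contain different phonemes but identical visemes:
# they are homophenes and cannot be told apart from the mouth alone.
for word in ["pat", "bat", "mat"]:
    print(word, "->", to_visemes(word))   # all three print 'PAT'
</syntaxhighlight>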

Co-articulation

Visemes can be captured as still images, but speech unfolds in time. The smooth articulation of speech sounds in sequence can mean that mouth patterns may be ‘shaped’ by an adjacent phoneme: the ‘th’ sound in ‘tooth’ and in ‘teeth’ appears very different because of the vocalic context. This feature of dynamic speech-reading affects lip-reading 'beyond the viseme'.[4]

How can it 'work' with so few visemes?

The statistical distribution of phonemes within the lexicon of a language is uneven. While there are clusters of words which are phonemically similar to each other ('lexical neighbors', such as spit/sip/sit/stick...etc.), others are unlike all other words: they are 'unique' in terms of the distribution of their phonemes ('umbrella' may be an example). Skilled users of the language bring this knowledge to bear when interpreting speech, so it is generally harder to identify a heard word with many lexical neighbors than one with few neighbors. Applying this insight to seen speech, some words in the language can be unambiguously lip-read even when they contain few visemes - simply because no other words could possibly 'fit'.[5]
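A sketch along the same lines shows why uniqueness in the lexicon matters. Grouping a toy word list by its viseme transcription (the mapping and word list are again illustrative assumptions) yields equivalence classes: words sharing a class are mutually ambiguous to the eye, while a word alone in its class could in principle be lip-read without ambiguity.

<syntaxhighlight lang="python">
from collections import defaultdict

# Assumed, simplified phoneme-to-viseme classes (as in the sketch above).
VISEME_OF = {"p": "P", "b": "P", "m": "P", "t": "T", "d": "T", "n": "T",
             "s": "T", "z": "T", "k": "K", "g": "K", "a": "A", "i": "I"}

# Toy lexicon, spelled phonemically for convenience.
LEXICON = ["pat", "bat", "mat", "pad", "kit", "gaps"]

classes = defaultdict(list)
for word in LEXICON:
    key = "".join(VISEME_OF.get(p, "?") for p in word)
    classes[key].append(word)

for key, words in sorted(classes.items()):
    status = "visually unique" if len(words) == 1 else "ambiguous"
    print(f"{key}: {words} ({status})")
# 'kit' and 'gaps' each sit alone in their class; 'pat'/'bat'/'mat'/'pad' collide.
</syntaxhighlight>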

Variation in readability and skill

Many factors affect the visibility of a speaking face, including illumination, movement of the head/camera, frame-rate of the moving image and distance from the viewer (see e.g.[6]). Head movement that accompanies normal speech can also improve lip-reading, independently of oral actions.[7] However, when lip-reading connected speech, the viewer's knowledge of the spoken language, familiarity with the speaker and style of speech, and the context of the lip-read material[8] are as important as the visibility of the speaker. While most hearing people are sensitive to seen speech, there is great variability in individual speechreading skill. Good lipreaders are often more accurate than poor lipreaders at identifying phonemes from visual speech.

A simple visemic measure of 'lipreadability' has been questioned by some researchers. The 'phoneme equivalence class' measure takes into account the statistical structure of the lexicon[9] and can also accommodate individual differences in lip-reading ability.[10][11] In line with this, excellent lipreading is often associated with more broad-based cognitive skills including general language proficiency, executive function and working memory.[12][13]

Lipreading and language learning in hearing infants and children

The first few months

Seeing the mouth plays a role in the very young infant's early sensitivity to speech, and prepares them to become speakers at 1 – 2 years. In order to imitate, a baby must learn to shape their lips in accordance with the sounds they hear; seeing the speaker may help them to do this.[14] Newborns imitate adult mouth movements such as sticking out the tongue or opening the mouth, which could be a precursor to further imitation and later language learning.[15] Infants are disturbed when audiovisual speech of a familiar speaker is desynchronized [16] and tend to show different looking patterns for familiar than for unfamiliar faces when matched to (recorded) voices.[17] Infants are sensitive to McGurk illusions months before they have learned to speak.[18][19] These studies and many more point to a role for vision in the development of sensitivity to (auditory) speech in the first half-year of life.

The next six months; a role in learning a native language

Until around six months of age, most hearing infants are sensitive to a wide range of speech gestures - including ones that can be seen on the mouth - which may or may not later be part of the phonology of their native language. But in the second six months of life, the hearing infant shows perceptual narrowing for the phonetic structure of their own language - and may lose the early sensitivity to mouth patterns that are not useful. The speech sounds /v/ and /b/, which are visemically distinctive in English but not in Castilian Spanish, are accurately distinguished in Spanish-exposed and English-exposed babies up to the age of around six months. However, older Spanish-exposed infants lose the ability to 'see' this distinction, while it is retained for English-exposed infants.[20] Such studies suggest that, rather than hearing and vision developing in independent ways in infancy, multimodal processing is the rule, not the exception, in (language) development of the infant brain.[21]

Early language production: one to two years

Given the many studies indicating a role for vision in the development of language in the pre-lingual infant, the effects of congenital blindness on language development are surprisingly small. 18-month-olds learn new words more readily when they hear them, and do not learn them when they are shown the speech movements without hearing.[22] However, children blind from birth can confuse /m/ and /n/ in their own early production of English words – a confusion rarely seen in sighted hearing children, since /m/ and /n/ are visibly distinctive but auditorily confusable.[23] The role of vision in children aged 1–2 years may be less critical to the production of their native language, since, by that age, they have attained the skills they need to identify and imitate speech sounds. However, hearing a non-native language can shift the child's attention to visual and auditory engagement by way of lipreading and listening in order to process, understand and produce speech.[24]

In childhood

Studies with pre-lingual infants and children use indirect, non-verbal measures to indicate sensitivity to seen speech. Explicit lip-reading can be reliably tested in hearing preschoolers by asking them to 'say aloud what I say silently'.[25] In school-age children, lipreading of familiar closed-set words such as number words can be readily elicited.[26] Individual differences in lip-reading skill, as tested by asking the child to 'speak the word that you lip-read', or by matching a lip-read utterance to a picture,[27] show a relationship between lip-reading skill and age.[28][29]

In hearing adults: lifespan considerations

While lip-reading silent speech poses a challenge for most hearing people, adding sight of the speaker to heard speech improves speech processing under many conditions. The mechanisms for this, and the precise ways in which lip-reading helps, are topics of current research.[30]

Seeing the speaker helps at all levels of speech processing from phonetic feature discrimination to interpretation of pragmatic utterances.[31] The positive effects of adding vision to heard speech are greater in noisy than quiet environments,[32] where, by making speech perception easier, seeing the speaker can free up cognitive resources, enabling deeper processing of speech content.

As hearing becomes less reliable in old age, people may tend to rely more on lip-reading, and are encouraged to do so. However, greater reliance on lip-reading may not always offset the effects of age-related hearing loss. Cognitive decline in aging may be preceded by and/or associated with measurable hearing loss.[33][34] Thus lipreading may not always be able to fully compensate for the combined hearing and cognitive age-related decrements.

In specific (hearing) populations

A number of studies report anomalies of lipreading in populations with distinctive developmental disorders.

  • Autism: people with autism may show reduced lipreading abilities and reduced reliance on vision in audiovisual speech perception.[35][36] This may be associated with gaze-to-the-face anomalies in these people.[37]
  • Williams syndrome: people with Williams syndrome show some deficits in speechreading which may be independent of their visuo-spatial difficulties.[38]
  • Specific language impairment: children with SLI are also reported to show reduced lipreading sensitivity,[39] as are people with dyslexia.[40]

Deafness

"When you are deaf you live inside a well-corked glass bottle. You see the entrancing outside world, but it does not reach you. After learning to lip read, you are still inside the bottle, but the cork has come out and the outside world slowly but surely comes in to you.".[41][42] Debate has raged for hundreds of years over the role of lip-reading ('oralism') compared with other communication methods (most recently, total communication) in the education of deaf people. The extent to which one or other approach is beneficial depends on a range of factors, including level of hearing loss of the deaf person, age of hearing loss, parental involvement and parental language(s). Then there is a question concerning the aims of the deaf person and her community and carers. Is the aim of education to enhance communication generally, to develop sign language as a first language, or to develop skills in the spoken language of the hearing community? Researchers now focus on which aspects of language and communication may be best delivered by what means and in which contexts, given the hearing status of the child and her family, and their educational plans.[43] Bimodal bilingualism (proficiency in both speech and sign language) is one dominant current approach in language education for the deaf child.[44]

Deaf people are often better lip-readers than people with normal hearing.[45] Some deaf people practice as professional lipreaders, for instance in forensic lipreading. In deaf people who have a cochlear implant, pre-implant lip-reading skill can predict post-implant (auditory or audiovisual) speech processing.[46] For many deaf people, access to spoken communication can be helped when a spoken message is relayed via a trained, professional lip-speaker.[47][48]

In connection with lipreading and literacy development, children born deaf typically show delayed development of literacy skills[49] which can reflect difficulties in acquiring elements of the spoken language.[50] In particular, reliable phoneme-grapheme mapping may be more difficult for deaf children, who need to be skilled speech-readers in order to master this necessary step in literacy acquisition. Lip-reading skill is associated with literacy abilities in deaf adults and children[51][52] and training in lipreading may help to develop literacy skills.[53]

Cued Speech uses lipreading with accompanying hand shapes that disambiguate the visemic (consonant) lipshape. Cued speech is said to be easier for hearing parents to learn than a sign language, and studies, primarily from Belgium, show that a deaf child exposed to cued speech in infancy can make more efficient progress in learning a spoken language than from lipreading alone.[54] The use of cued speech in cochlear implantation for deafness is likely to be positive.[55] A similar approach, involving the use of handshapes accompanying seen speech, is Visual Phonics, which is used by some educators to support the learning of written and spoken language.

Teaching and training

The aim of teaching and training in lipreading is to develop awareness of the nature of lipreading, and to practise ways of improving the ability to perceive speech 'by eye'.[56] Lipreading classes, often called lipreading and managing hearing loss classes, are mainly aimed at adults who have hearing loss. The highest proportion of adults with hearing loss have an age-related or noise-related loss; with both of these, the high-frequency sounds are lost first. Since many of the consonants in speech are high-frequency sounds, speech becomes distorted. Hearing aids help, but may not cure this. Lipreading classes have been shown to be of benefit in UK studies commissioned in 2012 by the charity Action on Hearing Loss.[57]

Trainers recognise that lipreading is an inexact art. Students are taught to watch the lips, tongue and jaw movements, to follow the stress and rhythm of language, to use their residual hearing (with or without hearing aids), to watch expression and body language, and to use their ability to put two and two together. They are taught the lipreaders' alphabet: groups of sounds that look alike on the lips (visemes), such as p, b, m or f, v. The aim is to get the gist, so as to have the confidence to join in conversation and avoid the damaging social isolation that often accompanies hearing loss. Lipreading classes are recommended for anyone who struggles to hear in noise, and they help with adjusting to hearing loss.

ATLA (the Association for Teaching Lipreading to Adults) is the professional association in the UK for qualified lipreading tutors.

Tests

Most tests of lipreading were devised to measure individual differences in performing specific speech processing tasks, and to detect changes in performance following training. Lipreading tests have been used with relatively small groups in experimental settings, or as clinical indicators with individual patients and clients. That is, lipreading tests to date have limited validity as markers of lipreading skill in the general population.

Lipreading and lip-speaking by machine

Automated lip-reading has been a topic of interest in computational engineering, as well as in science fiction movies. The computational engineer Steve Omohundro, among others, pioneered its development. In facial animation, the aim is to generate realistic facial actions, especially mouth movements, that simulate human speech actions. Computer algorithms to deform or manipulate images of faces can be driven by heard or written language. Systems may be based on detailed models derived from facial movements (motion capture); on anatomical modelling of actions of the jaw, mouth and tongue; or on mapping of known viseme-phoneme properties.[58][59] Facial animation has been used in speechreading training (demonstrating how different sounds 'look').[60] These systems are a subset of speech synthesis modelling which aim to deliver reliable 'text-to-(seen)-speech' outputs.

A complementary aim, the reverse of making faces move in speech, is to develop computer algorithms that can deliver realistic interpretations of speech (i.e. a written transcript or audio record) from natural video data of a face in action: this is facial speech recognition. These models too can be sourced from a variety of data.[61] Automatic visual speech recognition from video has been quite successful in distinguishing different languages (from a corpus of spoken language data).[62] Demonstration models, using machine-learning algorithms, have had some success in lipreading speech elements, such as specific words, from video[63] and in identifying hard-to-lipread phonemes from visemically similar seen mouth actions.[64] Machine-based speechreading is now making successful use of [https://www.technologyreview.com/s/602949/ai-has-beaten-humans-at-lip-reading/ neural-net based algorithms] which use large databases of speakers and speech material (following the successful model for auditory automatic speech recognition).[65]
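As a rough illustration of the recognition direction, the sketch below defines a small video-to-viseme classifier of the kind used in machine-learning demonstrations. It is a minimal, untrained PyTorch model; the input shape, the number of viseme classes and the architecture are assumptions for illustration, not a description of any published system.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class VisemeClassifier(nn.Module):
    """Toy 3D-convolutional classifier: a clip of mouth crops -> viseme logits."""
    def __init__(self, num_visemes: int = 12):     # 12 viseme classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d((1, 2, 2)),                # pool spatially, keep the time axis
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                # average over time and space
        )
        self.classify = nn.Linear(32, num_visemes)

    def forward(self, clips: torch.Tensor) -> torch.Tensor:
        # clips: (batch, channels=1, frames, height, width) grayscale mouth crops
        return self.classify(self.features(clips).flatten(1))

model = VisemeClassifier()
dummy = torch.randn(2, 1, 25, 48, 48)   # two 1-second clips at 25 fps, 48x48 pixels
print(model(dummy).shape)                # torch.Size([2, 12])
</syntaxhighlight>

Systems of the kind cited above go much further, adding large training corpora, sequence models over the frame features and language-model decoding of word sequences, but the core step is the same: mapping short windows of mouth movement onto a small set of visual speech classes.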

Uses for machine lipreading could include automated lipreading of video-only records, automated lipreading of speakers with damaged vocal tracts, and speech processing in face-to-face video (i.e. from videophone data). Automated lipreading may help in processing noisy or unfamiliar speech.[66] Automated lipreading may contribute to biometric person identification, replacing password-based identification.[67][68]

The brain

Following the discovery that auditory brain regions, including Heschl's gyrus, were activated by seen speech,[69] the neural circuitry for speechreading was shown to include supra-modal processing regions, especially superior temporal sulcus (all parts) as well as posterior inferior occipital-temporal regions including regions specialised for the processing of faces and biological motion.[70] In some but not all studies, activation of Broca's area is reported for speechreading,[71][72] suggesting that articulatory mechanisms can be activated in speechreading.[73] Studies of the time course of audiovisual speech processing showed that sight of speech can prime auditory processing regions in advance of the acoustic signal.[74][75] Better lipreading skill is associated with greater activation in (left) superior temporal sulcus and adjacent inferior temporal (visual) regions in hearing people.[76][77] In deaf people, the circuitry devoted to speechreading appears to be very similar to that in hearing people, with similar associations of (left) superior temporal activation and lipreading skill.[78]

References

1. ^{{cite journal | last1 = Woodhouse | first1 = L | last2 = Hickson | first2 = L | last3 = Dodd | first3 = B | year = 2009 | title = Review of visual speech perception by hearing and hearing-impaired people: clinical implications | url = | journal = International Journal of Language and Communication Disorders | volume = 44 | issue = 3| pages = 253–70 | doi = 10.1080/13682820802090281 | pmid = 18821117 }}
2. ^{{cite journal | pmid = 5808871 | volume=12 | issue=2 | title=Interaction of audition and vision in the recognition of oral speech stimuli | year=1969 | journal=J Speech Hear Res | pages=423–5 | last1 = Erber | first1 = NP | doi=10.1044/jshr.1202.423}}
3. ^Sam Loyd's Cyclopedia of Puzzles, 1914
4. ^{{cite journal | pmid = 7162162 | volume=25 | issue=4 | title=Coarticulation effects in lipreading | year=1982 | journal=J Speech Hear Res | pages=600–7 | last1 = Benguerel | first1 = AP | last2 = Pichora-Fuller | first2 = MK | doi=10.1044/jshr.2504.600}}
5. ^{{cite journal | last1 = Auer | first1 = ET | year = 2010 | title = Investigating speechreading and deafness | url = | journal = Journal of the American Academy of Audiology | volume = 21 | issue = 3| pages = 163–8 | doi = 10.3766/jaaa.21.3.4 | pmid = 20211120 | pmc = 3715375 }}
6. ^{{cite journal | last1 = Jordan | first1 = TR | last2 = Thomas | first2 = SM | year = 2011 | title = When half a face is as good as a whole: effects of simple substantial occlusion on visual and audiovisual speech perception | url = | journal = Atten Percept Psychophys | volume = 73 | issue = 7| pages = 2270–85 | doi = 10.3758/s13414-011-0152-4 | pmid = 21842332 }}
7. ^{{cite journal | last1 = Thomas | first1 = SM | last2 = Jordan | first2 = TR | year = 2004 | title = Contributions of oral and extraoral facial movement to visual and audiovisual speech perception | url = | journal = J Exp Psychol Hum Percept Perform | volume = 30 | issue = 5| pages = 873–88 | doi = 10.1037/0096-1523.30.5.873 | pmid = 15462626 }}
8. ^{{cite journal | last1 = Spehar | first1 = B | last2 = Goebel | first2 = S | last3 = Tye-Murray | first3 = N | year = 2015 | title = Effects of Context Type on Lipreading and Listening Performance and Implications for Sentence Processing | url = | journal = J Speech Lang Hear Res | volume = 58 | issue = 3| pages = 1093–102 | doi = 10.1044/2015_JSLHR-H-14-0360 | pmid = 25863923 | pmc=4610295}}
9. ^{{cite journal | last1 = Files | first1 = BT | last2 = Tjan | first2 = BS | last3 = Jiang | first3 = J | last4 = Bernstein | first4 = LE | year = 2015 | title = Visual speech discrimination and identification of natural and synthetic consonant stimuli | url = | journal = Front Psychol | volume = 6 | issue = | page = 878 | doi = 10.3389/fpsyg.2015.00878 | pmid = 26217249 | pmc=4499841}}
10. ^{{cite journal | pmid = 9407662 | volume=102 | issue=6 | title=Speechreading and the structure of the lexicon: computationally modeling the effects of reduced phonetic distinctiveness on lexical uniqueness | year=1997 | journal=J Acoust Soc Am | pages=3704–10 | last1 = Auer | first1 = ET | last2 = Bernstein | first2 = LE | doi=10.1121/1.420402}}
11. ^Feld J, Sommers M 2011 There Goes the Neighborhood: Lipreading and the Structure of the Mental Lexicon. Speech Commun. Feb;53(2):220-228
12. ^{{cite journal | last1 = Tye-Murray | first1 = N | last2 = Hale | first2 = S | last3 = Spehar | first3 = B | last4 = Myerson | first4 = J | last5 = Sommers | first5 = MS | year = 2014 | title = Lipreading in school-age children: the roles of age, hearing status, and cognitive ability | url = | journal = J Speech Lang Hear Res | volume = 57 | issue = 2| pages = 556–65 | doi = 10.1044/2013_JSLHR-H-12-0273 | pmid=24129010| pmc = 5736322 }}
13. ^{{cite journal | last1 = Feld | first1 = JE | last2 = Sommers | first2 = MS | year = 2009 | title = Lipreading, processing speed, and working memory in younger and older adults | url = | journal = J Speech Lang Hear Res | volume = 52 | issue = 6| pages = 1555–65 | doi = 10.1044/1092-4388(2009/08-0137) | pmid = 19717657 | pmc=3119632}}
14. ^http://www.huffingtonpost.com/2012/01/16/babies-learning-to-talk_n_1209219.html
15. ^{{cite journal | pmid = 897687 | volume=198 | issue=4312 | title=Imitation of facial and manual gestures by human neonates | year=1977 | journal=Science | pages=74–8 | last1 = Meltzoff | first1 = AN | last2 = Moore | first2 = MK | doi=10.1126/science.897687}}
16. ^Dodd B.1976 Lip reading in infants: attention to speech presented in- and out-of-synchrony. Cognitive Psychology Oct;11(4):478-84
17. ^{{cite journal | last1 = Spelke | first1 = E | year = 1976 | title = Infants intermodal perception of events | url = | journal = Cognitive Psychology | volume = 8 | issue = 4| pages = 553–560 | doi=10.1016/0010-0285(76)90018-9}}
18. ^{{cite journal | last1 = Burnham | first1 = D | last2 = Dodd | first2 = B | year = 2004 | title = Auditory-visual speech integration by prelinguistic infants: perception of an emergent consonant in the McGurk effect | url = | journal = Developmental Psychobiology | volume = 45 | issue = 4| pages = 204–20 | doi = 10.1002/dev.20032 | pmid = 15549685 }}
19. ^{{cite journal | pmid = 9136265 | volume=59 | issue=3 | title=The McGurk effect in infants | year=1997 | journal=Percept Psychophys | pages=347–57 | last1 = Rosenblum | first1 = LD | last2 = Schmuckler | first2 = MA | last3 = Johnson | first3 = JA| doi=10.3758/BF03211902 }}
20. ^{{cite journal | last1 = Pons | first1 = F |display-authors=etal | year = 2009 | title = Narrowing of intersensory speech perception in infancy | url = | journal = Proceedings of the National Academy of Sciences | volume = 106 | issue = 26| pages = 10598–602 | doi = 10.1073/pnas.0904134106 | pmid = 19541648 | pmc=2705579}}
21. ^{{cite journal | last1 = Lewkowicz | first1 = DJ | last2 = Ghazanfar | first2 = AA | year = 2009 | title = The emergence of multisensory systems through perceptual narrowing | url = | journal = Trends in Cognitive Sciences | volume = 13 | issue = 11| pages = 470–8 | doi = 10.1016/j.tics.2009.08.004 | pmid=19748305| citeseerx = 10.1.1.554.4323 }}
22. ^Havy, M., Foroud, A., Fais, L., & Werker, J.F. (in press; online January 26, 2017). The role of auditory and visual speech in word-learning at 18 months and in adulthood. Child Development. (Pre-print version)
23. ^Mills, A.E. 1987 The development of phonology in the blind child. In B.Dodd & R.Campbell(Eds) Hearing by Eye: the psychology of lipreading, Hove UK, Lawrence Erlbaum Associates
24. ^{{cite journal | last1 = Lewkowicz | first1 = DJ | last2 = Hansen-Tift | first2 = AM | date = Jan 2012 | title = Infants deploy selective attention to the mouth of a talking face when learning speech | url = | journal = Proceedings of the National Academy of Sciences | volume = 109 | issue = 5| pages = 1431–6 | doi = 10.1073/pnas.1114783109 | pmid = 22307596 | pmc=3277111}}
25. ^{{cite journal | last1 = Davies | first1 = R | last2 = Kidd | first2 = E | last3 = Lander | first3 = K | year = 2009 | title = Investigating the psycholinguistic correlates of speechreading in preschool age children | url = | journal = International Journal of Language and Communication Disorders | volume = 44 | issue = 2| pages = 164–74 | doi = 10.1080/13682820801997189 | pmid = 18608607 }}
26. ^Dodd B. 1987 The acquisition of lipreading skills by normally hearing children. In B.Dodd & R.Campbell (Eds) Hearing by Eye, Erlbaum NJ pp163-176
27. ^{{cite journal | last1 = Jerger | first1 = S |display-authors=etal | year = 2009 | title = Developmental shifts in children's sensitivity to visual speech: a new multimodal picture-word task | url = | journal = J Exp Child Psychol | volume = 102 | issue = 1| pages = 40–59 | doi = 10.1016/j.jecp.2008.08.002 | pmid = 18829049 | pmc=2612128}}
28. ^{{cite journal | last1 = Kyle | first1 = FE | last2 = Campbell | first2 = R | last3 = Mohammed | first3 = T | last4 = Coleman | first4 = M | last5 = MacSweeney | first5 = M | year = 2013 | title = Speechreading development in deaf and hearing children: introducing the test of child speechreading | url = | journal = Journal of Speech, Language, and Hearing Research | volume = 56 | issue = 2| pages = 416–26 | doi = 10.1044/1092-4388(2012/12-0039) | pmid = 23275416 | pmc=4920223}}
29. ^{{cite journal | last1 = Tye-Murray | first1 = N | last2 = Hale | first2 = S | last3 = Spehar | first3 = B | last4 = Myerson | first4 = J | last5 = Sommers | first5 = MS | year = 2014 | title = Lipreading in school-age children: the roles of age, hearing status, and cognitive ability | url = | journal = J Speech Lang Hear Res | volume = 57 | issue = 2| pages = 556–65 | doi = 10.1044/2013_JSLHR-H-12-0273 | pmid = 24129010 }}
30. ^{{cite journal | last1 = Peelle | first1 = JE | last2 = Sommers | first2 = MS | year = 2015 | title = Prediction and constraint in audiovisual speech perception | journal = Cortex | volume = 68 | issue = | pages = 169–81 | doi = 10.1016/j.cortex.2015.03.006 | pmid=25890390 | pmc=4475441}}
31. ^{{cite journal | last1 = Campbell | first1 = R | year = 2008 | title = The processing of audio-visual speech: empirical and neural bases | journal = Philosophical Transactions of the Royal Society B | volume = 363 | issue = 1493| pages = 1001–1010 | doi = 10.1098/rstb.2007.2155 | pmid=17827105 | pmc=2606792}}
32. ^{{cite journal | last1 = Sumby | first1 = WH | last2 = Pollack | first2 = I | year = 1954 | title = Visual contribution to speech intelligibility in noise | doi = 10.1121/1.1907309 | journal = Journal of the Acoustical Society of America | volume = 26 | issue = 2| pages = 212–215 }}
33. ^{{cite journal | last1 = Taljaard | first1 = Schmulian |display-authors=etal | year = 2015 | title = The relationship between hearing impairment and cognitive function: A meta-analysis in adults | url = | journal = Clin Otolaryngol | volume = 41| issue = 6| pages = 718–729 | doi = 10.1111/coa.12607 | pmid = 26670203 }}
34. ^{{cite journal | last1 = Hung | first1 = SC |display-authors=etal | year = 2015 | title = Hearing Loss is Associated With Risk of Alzheimer's Disease: A Case-Control Study in Older People | url = | journal = J Epidemiol | volume = 25 | issue = 8| pages = 517–21 | doi = 10.2188/jea.JE20140147 | pmid = 25986155 }}
35. ^{{cite journal | last1 = Smith | first1 = EG | last2 = Bennetto | first2 = L.J | year = 2007 | title = Audiovisual speech integration and lipreading in autism | url = | journal = Child Psychol Psychiatry | volume = 48 | issue = 8| pages = 813–21 | doi = 10.1111/j.1469-7610.2007.01766.x | pmid = 17683453 }}
36. ^{{cite journal | last1 = Irwin | first1 = JR | last2 = Tornatore | first2 = LA | last3 = Brancazio | first3 = L | last4 = Whalen | first4 = DH | year = 2011 | title = Can children with autism spectrum disorders "hear" a speaking face? | url = | journal = Child Dev | volume = 82 | issue = 5| pages = 1397–403 | doi = 10.1111/j.1467-8624.2011.01619.x | pmid = 21790542 | pmc=3169706}}
37. ^{{cite journal | last1 = Irwin | first1 = JR | last2 = Brancazio | first2 = L | year = 2014 | title = Seeing to hear? Patterns of gaze to speaking faces in children with autism spectrum disorders | url = | journal = Front Psychol | volume = 5| page = 397 | doi = 10.3389/fpsyg.2014.00397 | pmid = 24847297 | pmc=4021198}}
38. ^{{cite journal | last1 = Böhning | first1 = M | last2 = Campbell | first2 = R | last3 = Karmiloff-Smith | first3 = A | year = 2002 | title = Audiovisual speech perception in Williams syndrome | url = | journal = Neuropsychologia | volume = 40 | issue = 8| pages = 1396–406 | pmid = 11931944 | doi=10.1016/s0028-3932(01)00208-1}}
39. ^{{cite journal | last1 = Leybaert | first1 = J | last2 = Macchi | first2 = L | last3 = Huyse | first3 = A | last4 = Champoux | first4 = F | last5 = Bayard | first5 = C | last6 = Colin | first6 = C | last7 = Berthommier | first7 = F | year = 2014 | title = Atypical audio-visual speech perception and McGurk effects in children with specific language impairment | url = | journal = Front Psychol | volume = 5 | issue = | page = 422 | doi = 10.3389/fpsyg.2014.00422 | pmid = 24904454 | pmc=4033223}}
40. ^{{cite journal | last1 = Mohammed | first1 = T | last2 = Campbell | first2 = R | last3 = MacSweeney | first3 = M | last4 = Barry | first4 = F | last5 = Coleman | first5 = M | year = 2006 | title = Speechreading and its association with reading among deaf, hearing and dyslexic individuals | url = | journal = Clinical Linguistics and Phonetics | volume = 20 | issue = 7–8| pages = 621–30 | doi=10.1080/02699200500266745| pmid = 17056494 }}
41. ^{{Citation| last = Clegg| first = Dorothy| year = 1953| title = The Listening Eye: A Simple Introduction to the Art of Lip-reading| publisher = Methuen & Company}}
42. ^{{Cite web | url=https://www.huffingtonpost.com/lydia-l-callis/lip-reading-is-no-simple-task_b_9526300.html | title=Lip Reading is No Simple Task| date=2016-03-23}}
43. ^{{Cite web | url=http://www.handsandvoices.org/articles/research/v9-2_marschark.htm | title=Hands & Voices :: Articles}}
44. ^{{cite journal | last1 = Swanwick | first1 = R | year = 2016 | title = Deaf Children's bimodal bilingualism and education | url = | journal = Language Teaching | volume = 49 | issue = 1| pages = 1–34 | doi = 10.1017/S0261444815000348 }}
45. ^{{cite journal | last1 = Bernstein | first1 = LE | last2 = Demorest | first2 = ME | last3 = Tucker | first3 = PE | year = 2000 | title = Speech perception without hearing | url = | journal = Perception & Psychophysics | volume = 62 | issue = 2| pages = 233–52 | doi=10.3758/bf03205546}}
46. ^{{cite journal | last1 = Bergeson | first1 = TR | last2 = Pisoni | first2 = DB | last3 = Davis | first3 = RA | year = 2005 | title = Development of audiovisual comprehension skills in prelingually deaf children with cochlear implants | url = | journal = Ear & Hearing | volume = 26 | issue = 2| pages = 149–64 | doi=10.1097/00003446-200504000-00004}}
47. ^{{Cite web | url=http://www.nidirect.gov.uk/communication-support-for-deaf-people | title=Communication support for deaf people| date=2015-11-24}}
48. ^{{Cite web | url=http://www.lipspeaker.co.uk | title=Lipspeaker UK - Communication services for deaf & hard of hearing people}}
49. ^{{Cite web | url=http://www.nuffieldfoundation.org/reading-and-dyslexia-deaf-children | title=Reading and dyslexia in deaf children | Nuffield Foundation}}
50. ^{{cite journal | year = 2007 | title = What really matters in the early literacy development of deaf children | url = | journal = J Deaf Stud Deaf Educ | volume = 12 | issue = 4| pages = 411–31 | doi=10.1093/deafed/enm020| last1 = Mayer | first1 = C. }}
51. ^{{cite journal|doi=10.1080/02699200500266745 | volume=20 | issue=7–8 | title=Speechreading and its association with reading among deaf, hearing and dyslexic individuals | journal=Clinical Linguistics & Phonetics | pages=621–630| year=2006 | last1=Mohammed | first1=Tara | last2=Campbell | first2=Ruth | last3=MacSweeney | first3=Mairéad | last4=Barry | first4=Fiona | last5=Coleman | first5=Michael }}
52. ^{{cite journal | last1 = Kyle | first1 = F. E. | last2 = Harris | first2 = M. | year = 2010 | title = Predictors of reading development in deaf children: a 3-year longitudinal study | url = | journal = J Exp Child Psychol | volume = 107 | issue = 3| pages = 229–243 | doi = 10.1016/j.jecp.2010.04.011 }}
53. ^{{Cite journal | doi=10.1044/1092-4388(2012/12-0039)|pmc = 4920223| title=Speechreading Development in Deaf and Hearing Children: Introducing the Test of Child Speechreading| journal=Journal of Speech, Language, and Hearing Research| volume=56| issue=2| pages=416–426| year=2013| last1=Kyle| first1=Fiona E.| last2=Campbell| first2=Ruth| last3=Mohammed| first3=Tara| last4=Coleman| first4=Mike| last5=MacSweeney| first5=Mairéad}}
54. ^{{cite journal | last1 = Nicholls | first1 = GH | last2 = Ling | first2 = D | year = 1982 | title = Cued Speech and the reception of spoken language | url = | journal = J Speech Hear Res | volume = 25 | issue = 2| pages = 262–9 | doi=10.1044/jshr.2502.262}}
55. ^{{cite journal | last1 = Leybaert | first1 = J | last2 = LaSasso | first2 = CJ | year = 2010 | title = Cued speech for enhancing speech perception and first language development of children with cochlear implants | url = | journal = Trends in Amplification | volume = 14 | issue = 2| pages = 96–112 | doi = 10.1177/1084713810375567 | pmid = 20724357 | pmc = 4111351 }}
56. ^https://www.lipreading.org/lipreading-
57. ^{{Cite web | url=https://www.actiononhearingloss.org.uk/notjustlipservice.aspx | title=Campaigns and influencing}}
58. ^http://www-bcf.usc.edu/~rwalker/Walker/Publications_files/1996_CohenWalker%26Massaro_VisualSpeech.pdf
59. ^{{Cite web | url=http://www.diva-portal.org/smash/record.jsf?pid=diva2%3A318099&dswid=2041 | title=Rule-Based Visual Speech Synthesis| pages=299–302| year=1995}}
60. ^{{cite journal | doi=10.1023/B:JADD.0000006002.82367.4f | volume=33 | issue=6 | title=Development and Evaluation of a Computer-Animated Tutor for Vocabulary and Language Learning in Children with Autism | journal=Journal of Autism and Developmental Disorders | pages=653–672|year = 2003|last1 = Bosseler|first1 = Alexis| last2=Massaro | first2=Dominic W. }}
61. ^{{Cite web | url=https://www.uea.ac.uk/computing/visual-speech-synthesis | title=Visual Speech Synthesis - UEA}}
62. ^{{Cite web | url=http://www.cnet.com/news/lip-reading-computer-can-distinguish-languages/ | title=Lip-reading computer can distinguish languages}}
63. ^https://www.youtube.com/watch?v=Tu2vInqqHX8
64. ^{{Cite news | url=https://www.theguardian.com/business/2016/apr/24/the-innovators-can-computers-be-taught-to-lip-read-artificial-intelligence | title=The innovators: Can computers be taught to lip-read?| newspaper=The Guardian| date=2016-04-24| last1=Hickey| first1=Shane}}
65. ^{{Cite web | url=https://www.newscientist.com/article/2113299-googles-deepmind-ai-can-lip-read-tv-shows-better-than-a-pro/ | title=Google's DeepMind AI can lip-read TV shows better than a pro}}
66. ^{{Cite book | chapter-url=https://www.researchgate.net/publication/234819242 | doi=10.1145/57167.57170| chapter=An improved automatic lipreading system to enhance speech recognition| title=Proceedings of the SIGCHI conference on Human factors in computing systems - CHI '88| pages=19–25| year=1988| last1=Petajan| first1=E.| last2=Bischoff| first2=B.| last3=Bodoff| first3=D.| last4=Brooke| first4=N. M.| isbn=978-0201142372}}
67. ^http://www.asel.udel.edu/icslp/cdrom/vol1/954/a954.pdf
68. ^http://www.planetbiometrics.com-article-details-i-2250
69. ^{{cite journal | last1 = Calvert | first1 = GA | last2 = Bullmore | first2 = ET | last3 = Brammer | first3 = MJ |display-authors=etal | year = 1997 | title = Activation of auditory cortex during silent lipreading | url = | journal = Science | volume = 276 | issue = 5312| pages = 593–6 | pmid = 9110978 | doi=10.1126/science.276.5312.593}}
70. ^{{cite journal | last1 = Bernstein | first1 = LE | last2 = Liebenthal | first2 = E | year = 2014 | title = Neural pathways for visual speech perception | url = | journal = Front Neurosci | volume = 8 | issue = | page = 386 | doi = 10.3389/fnins.2014.00386 | pmid = 25520611 | pmc=4248808}}
71. ^{{cite journal | last1 = Skipper | first1 = JI | last2 = van Wassenhove | first2 = V | last3 = Nusbaum | first3 = HC | last4 = Small | first4 = SL | year = 2007 | title = Hearing Lips and Seeing Voices: How Cortical Areas Supporting Speech Production Mediate Audiovisual Speech Perception | journal = Cerebral Cortex | volume = 17 | issue = 10| pages = 2387–2399 | doi = 10.1093/cercor/bhl147 | pmid=17218482 | pmc=2896890}}
72. ^{{cite journal | pmid = 11587893 | volume=12 | issue=2 | title=Cortical substrates for the perception of face actions: an fMRI study of the specificity of activation for seen speech and for meaningless lower-face acts (gurning) | year=2001 | journal=Brain Res Cogn Brain Res | pages=233–43 | last1 = Campbell | first1 = R | last2 = MacSweeney | first2 = M | last3 = Surguladze | first3 = S | last4 = Calvert | first4 = G | last5 = McGuire | first5 = P | last6 = Suckling | first6 = J | last7 = Brammer | first7 = MJ | last8 = David | first8 = AS | doi=10.1016/s0926-6410(01)00054-4}}
73. ^{{cite journal | last1 = Swaminathan | first1 = S. | last2 = MacSweeney | first2 = M. | last3 = Boyles | first3 = R. | last4 = Waters | first4 = D. | last5 = Watkins | first5 = K. E. | last6 = Möttönen | first6 = R. | year = 2013 | title = Motor excitability during visual perception of known and unknown spoken languages | journal = Brain and Language | volume = 126 | issue = 1| pages = 1–7 | doi=10.1016/j.bandl.2013.03.002| pmid = 23644583 | pmc = 3682190 }}
74. ^{{cite journal | last1 = Sams | first1 = M |display-authors=etal | year = 1991| title = Seeing speech: visual information from lip movements modifies activity in the human auditory cortex | url = | journal = Neuroscience Letters | volume = 127 | issue = | pages = 141–145 | doi=10.1016/0304-3940(91)90914-f}}
75. ^{{cite journal | last1 = Van Wassenhove | first1 = V | last2 = Grant | first2 = KW | last3 = Poeppel | first3 = D | date = Jan 2005 | title = Visual speech speeds up the neural processing of auditory speech | url = | journal = Proceedings of the National Academy of Sciences | volume = 102 | issue = 4| pages = 1181–6 | doi=10.1073/pnas.0408949102| pmid = 15647358 | pmc = 545853 }}
76. ^Hall DA, Fussell C, Summerfield AQ. 2005. Reading fluent speech from talking faces: typical brain networks and individual differences. J Cogn Neurosci. 17(6):939-53.
77. ^{{cite journal | last1 = Bernstein | first1 = LE | last2 = Jiang | first2 = J | last3 = Pantazis | first3 = D | last4 = Lu | first4 = ZL | last5 = Joshi | first5 = A | year = 2011 | title = Visual phonetic processing localized using speech and nonspeech face gestures in video and point-light displays | url = | journal = Hum Brain Mapp | volume = 32 | issue = 10| pages = 1660–76 | doi = 10.1002/hbm.21139 | pmid = 20853377 | pmc=3120928}}
78. ^{{cite journal | last1 = Capek | first1 = CM | last2 = Macsweeney | first2 = M | last3 = Woll | first3 = B | last4 = Waters | first4 = D | last5 = McGuire | first5 = PK | last6 = David | first6 = AS | last7 = Brammer | first7 = MJ | last8 = Campbell | first8 = R | year = 2008 | title = Cortical circuits for silent speechreading in deaf and hearing people | url = | journal = Neuropsychologia | volume = 46 | issue = 5| pages = 1233–41 | doi = 10.1016/j.neuropsychologia.2007.11.026 | pmid = 18249420 | pmc=2394569}}

Bibliography

  • D. Stork and M. Hennecke (Eds) (1996) Speechreading by Humans and Machines: Models, Systems and Applications. NATO ASI Series F: Computer and Systems Sciences, Vol 150. Springer, Berlin, Germany
  • G. Bailly, P. Perrier and E. Vatikiotis-Bateson (Eds) (2012) Audiovisual Speech Processing, Cambridge University Press, Cambridge, UK
  • [https://books.google.com/books/about/Hearing_by_Eye.html?id=styYQgAACAAJ&redir_esc=y Hearing By Eye (1987)], B. Dodd and R. Campbell (Eds), Lawrence Erlbaum Associates, Hillsdale NJ, USA; [https://books.google.com/books/about/Hearing_by_Eye_II.html?id=SOAQe97Fo04C&redir_esc=y Hearing by Eye II] (1997), R. Campbell, B. Dodd and D. Burnham (Eds), Psychology Press, Hove, UK
  • D. W. Massaro (1987, reprinted 2014) [https://www.questia.com/library/78558929/speech-perception-by-ear-and-eye-a-paradigm-for-psychological Speech perception by ear and by eye], Lawrence Erlbaum Associates, Hillsdale NJ

External links

  • Laura Ringham (2012) Why it’s time to recognise the value of lipreading and managing hearing loss support (Action on Hearing Loss, full report) [https://www.actiononhearingloss.org.uk/~/media/Documents/Policy%20research%20and%20influencing/Research/Not%20just%20lip%20service/Not%20just%20lip%20service_Full%20report_FINAL_A0639_9.ashx]
  • Scottish Sensory Centre 2005: workshop on lipreading  
  • Lipreading Classes in Scotland: the way forward. 2015 Report
  • AVISA; International Speech Communication Association special interest group focussed on lip-reading and audiovisual speech
  • Speechreading for information gathering: a survey of scientific sources [https://www.ucl.ac.uk/dcal/projects/tabs/images/speechreading ]
{{Deaf education}}

Categories: Deaf culture | Human communication | Perception | Audiology | Education for the deaf
