- Validation of Spanish Speech Recognition Tests
- Corpus of Deaf Speech for Acoustic and Speech Production Research
- Algorithms for Unsupervised and Online Learning of Hierarchy of Features for Tuning Cochlear Implants for the Hearing-Impaired
- Speech Understanding Using Surgical Masks
- Confidence Intervals for the Maryland CNC Test
- A Study of Recorded versus Live Voice Word Recognition
- Bilingualism and Its Effects on Speech Perception in Noise
- Speech Perception in Noise for Bilingual Listeners with Normal Hearing
- Subjective and Objective Assessment of Hearing Aid Outcomes
- Speech Intelligibility and Hearing Function in Navy Divers
Validation of Spanish Speech Recognition Tests
In this series of studies, we have developed the Spanish Pediatric Speech Recognition Threshold (SPSRT) test and the Spanish Pediatric Picture Identification Test (SPPIT). Both tests are now available from Auditec, Inc. Please click on the links below if you are interested in purchasing these tests.
- Spanish Pediatric Speech Recognition Threshold (SPSRT)
- Spanish Pediatric Picture Identification Test (SPPIT)
Corpus of Deaf Speech for Acoustic and Speech Production Research
In this project, a corpus of recordings of deaf speech is introduced. Adults who were pre- or post-lingually deafened, as well as adults with normal hearing, read standardized speech passages totaling 11 hours of .wav recordings. Preliminary acoustic analyses are included to provide a glimpse of the kinds of analyses that can be conducted with this corpus. Long-term average speech spectra and spectral moment analyses provide considerable insight into differences observed in the speech of talkers judged to have low, medium, or high speech intelligibility (Mendel et al., 2017). If you are interested in obtaining access to the corpus of deaf speech recordings, please email Dr. Lisa Lucks Mendel or Monique Pousson.
Fig. 3. Four spectral moments for the NH and HI groups (H, M, and L) for all five passages combined. (A) Average spectral mean; (B) SD; (C) skewness; and (D) kurtosis. Error bars represent ±1 standard deviation.
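For readers who want to try this kind of analysis themselves, the four spectral moments can be computed by treating a magnitude spectrum as a probability distribution over frequency. The sketch below is a generic Python illustration (our own windowing and normalization choices), not the analysis code used in the study:

```python
import numpy as np

def spectral_moments(signal, sample_rate):
    """Return (mean, sd, skewness, kurtosis) of a signal's magnitude
    spectrum, treating the spectrum as a distribution over frequency."""
    # Magnitude spectrum of the Hann-windowed signal
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / sample_rate)
    p = spectrum / spectrum.sum()                        # normalize to sum to 1
    mean = np.sum(freqs * p)                             # 1st moment: centroid
    sd = np.sqrt(np.sum(((freqs - mean) ** 2) * p))      # 2nd: spread
    skew = np.sum(((freqs - mean) ** 3) * p) / sd ** 3   # 3rd: asymmetry
    kurt = np.sum(((freqs - mean) ** 4) * p) / sd ** 4   # 4th: peakedness
    return mean, sd, skew, kurt
```

In practice these moments are usually computed frame by frame over short windows of a passage and then averaged; the single-window version above shows only the core calculation.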
ALGORITHMS FOR UNSUPERVISED AND ONLINE LEARNING OF HIERARCHY OF FEATURES FOR TUNING COCHLEAR IMPLANTS FOR THE HEARING IMPAIRED
In this project we are developing machine learning algorithms to tune hearing instruments, particularly cochlear implants, based on each individual's hearing characteristics and speech production errors. The speech production capabilities of individuals with severe-to-profound sensorineural hearing loss are being analyzed under the assumption that deficiencies in their speech production output reflect their poor speech perception capabilities. The speech production analysis and the algorithms will help determine modifications that can be made to hearing instruments to improve speech perception. Ongoing samples of speech from talkers with normal hearing and with hearing impairment will be analyzed to document the speech characteristics and deficiencies of these two populations. The missing and distorted features in the hearing-impaired speech are being identified, and algorithms are being developed that will ultimately be used to improve the signal processing strategies in hearing instruments to enhance the audibility of speech features for hearing-impaired individuals.
This award is through NSF's Smart Health and Wellbeing (SHB) Program, which seeks to address fundamental technical and scientific issues that support the transformation of healthcare from reactive and hospital-centered to preventive, proactive, evidence-based, person-centered, and focused on wellbeing rather than disease.
SPEECH UNDERSTANDING USING SURGICAL MASKS
In this project we evaluated whether surgical masks have an effect on speech understanding in listeners with normal hearing and listeners with hearing impairment. In Phase One, speech perception was assessed for individuals with normal hearing and with hearing loss using a traditional paper surgical mask, with speech stimuli administered in the presence and absence of dental office noise (Mendel, Gardino, & Atcherson, 2008).
A total of 31 adults participated in the first study (1 talker, 15 listeners with normal hearing, and 15 with hearing impairment). The normal hearing group had thresholds of 25 dB HL or better at the octave frequencies from 250 through 8000 Hz while the hearing loss group had varying degrees and configurations of hearing loss with thresholds equal to or poorer than 25 dB HL for the same octave frequencies.
Selected lists from the Connected Speech Test (CST) were digitally recorded with and without a surgical mask present and then presented to the listeners in four conditions: without a mask in quiet, without a mask in noise, with a mask in quiet, and with a mask in noise. A significant difference was found in the spectral analyses of the speech stimuli recorded with and without the mask. The presence of the mask, however, did not have a detrimental effect on speech understanding in either the normal-hearing or the hearing-impaired group, whereas the dental office noise significantly reduced speech understanding for both groups. These findings suggest that although a surgical mask does not negatively affect speech understanding, background noise has a deleterious effect on speech perception and warrants further attention in health-care environments.
Phase Two of this study focused on assessing the effect of three masks (a traditional paper mask and two different masks that allow some visual cues) with three groups of listeners: normal hearing, moderately hearing impaired, and severe-to-profoundly hearing impaired (Atcherson et al., 2016).
A total of 31 adults participated in the study: 1 talker, 10 listeners with normal hearing, 10 listeners with moderate sensorineural hearing loss, and 10 listeners with severe-to-profound hearing loss. A significant difference was found in the spectral analyses of the speech stimuli with and without the masks; however, the difference was no more than approximately 2 dB (RMS). Listeners with normal hearing performed consistently well across all conditions. Both groups of listeners with hearing impairment benefitted from the visual input provided by the transparent mask, and the magnitude of improvement in speech perception in noise was greatest for the severe-to-profound group. These findings confirm improved speech perception performance in noise for listeners with hearing impairment when visual input is provided by a transparent surgical mask. Most importantly, the transparent mask did not negatively affect speech perception performance in noise.
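To illustrate how an overall level difference like the ~2 dB (RMS) reported above can be quantified, the sketch below compares the RMS levels of two signals in dB. The signals here are simulated placeholders (a mask is crudely modeled as a fixed attenuation), not the study's stimuli:

```python
import numpy as np

def rms_level_db(signal):
    """Overall RMS level of a signal in dB (re: full scale = 1.0)."""
    return 20 * np.log10(np.sqrt(np.mean(signal ** 2)))

# Hypothetical comparison: the same utterance with and without a mask.
rng = np.random.default_rng(0)
no_mask = rng.standard_normal(16000) * 0.1      # stand-in for a recording
with_mask = no_mask * 10 ** (-2 / 20)           # simulate ~2 dB attenuation

difference = rms_level_db(no_mask) - rms_level_db(with_mask)
print(f"RMS difference: {difference:.1f} dB")   # prints "RMS difference: 2.0 dB"
```

A fuller analysis would compare long-term average spectra band by band rather than a single broadband RMS value, but the dB arithmetic is the same.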
Figure 1. Mean percent correct performance on the Connected Speech Test following arcsine transformation for listeners with normal hearing (blue), moderate SNHL (green), and severe SNHL (yellow) in the following conditions: no mask audio-only (NMA), no mask audio-visual (NMAV), transparent mask audio-only (TMA), transparent mask audio-visual (TMAV), and paper mask audio-only (PMA).
CONFIDENCE INTERVALS FOR THE MARYLAND CNC TEST
In this retrospective study, records of veterans who had audiological compensation and pension examinations (hearing evaluations) at the Veterans Administration Medical Center (VAMC) in Jackson, Mississippi between 1992 and 2001 were reviewed. Audiologists are often called upon to decide whether a given word recognition score is in line with what is expected from a patient with a given degree of hearing loss. Comparing actual scores with expected or predicted scores has diagnostic and rehabilitative implications and provides a basis for judging the validity of the obtained score and the accompanying pure tone thresholds. However, there is currently no objective and quantitative methodology in widespread use for evaluating word recognition scores. The purpose of this study was to establish an objective method to assist the audiologist in assessing the word recognition score obtained as part of a hearing evaluation. Over 2,000 clinical records from the VAMC were reviewed, and confidence limits were established for representative scores.
Scatterplot of PB Max (rau) scores as a function of pure tone average at 1000, 2000, 3000, and 4000 Hz for all ears with hearing loss.
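The scores in the scatterplot are expressed in rationalized arcsine units (rau), a transform commonly used to stabilize the variance of percent-correct scores before building confidence intervals. For illustration, Studebaker's (1985) rationalized arcsine transform can be sketched as follows (a standard formulation offered as an example, not necessarily the exact implementation used in this study):

```python
import math

def rau(correct, total):
    """Rationalized arcsine transform (Studebaker, 1985).
    Maps a word-recognition score of `correct` items out of `total`
    onto a roughly equal-variance scale spanning about -23 to 123."""
    theta = (math.asin(math.sqrt(correct / (total + 1)))
             + math.asin(math.sqrt((correct + 1) / (total + 1))))
    return (146 / math.pi) * theta - 23
```

On this scale a 50% score maps to 50 rau, while scores near the floor and ceiling are stretched outward, which is what makes rau values better behaved than raw percentages for statistical comparison.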
A STUDY OF RECORDED VERSUS LIVE VOICE WORD RECOGNITION
In this study, we examined administration times for monitored-live-voice (MLV) versus recorded presentation of NU-6 word lists for listeners with normal hearing and hearing loss. Test administration time for MLV presentation of monosyllabic word lists was significantly shorter than that for recorded presentations of the same stimuli in both groups. However, this difference was just over one minute for listeners with normal hearing (1 min, 9 sec) and just under one minute for listeners with hearing loss (49 sec). The listeners with hearing loss took longer to respond to the stimuli than the listeners with normal hearing, which reduced the difference in administration time between MLV and recorded lists for this population. Given that the majority of patients audiologists test have hearing loss, the average difference in test administration time between MLV and recorded presentation was less than one minute. Thus, although this difference is statistically significant, we believe it is not clinically significant. Given these findings, we suggest that clinicians should be willing to sacrifice less than one minute per word list for the greater reliability of recorded presentation.
Portions of this study were presented at the American Speech-Language-Hearing Association (ASHA) Annual Convention in November 2010 and at AudiologyNOW! in April 2011. The manuscript was published in the International Journal of Audiology in 2011.
Administration time in minutes across the three presentation conditions and the two groups. CD track lengths are also plotted for the long and short ISIs (interstimulus intervals).
BILINGUALISM AND ITS EFFECTS ON SPEECH PERCEPTION IN NOISE
In Phase I of this study, signal-to-noise ratio (SNR) loss was measured in two groups of participants with normal hearing: (a) those with English as their native language and (b) those with English as a second language (i.e., Asian languages). Results indicated that participants for whom English is a second language performed significantly worse than native English speakers in a background of multi-talker babble. This difficulty with speech perception in noise reflects poor processing of English stimuli in noise, causing these individuals with normal hearing to function as though they were hearing impaired.
Non-native English speakers had significantly greater SNR loss than their native English-speaking counterparts.
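SNR loss can be understood as the difference between a listener's SNR-50 (the signal-to-noise ratio at which 50% of the speech is understood) and a comparison group's SNR-50. The sketch below interpolates SNR-50 from measured points on a psychometric function; the data are entirely hypothetical and the specific test materials and norms are not those of this study:

```python
import numpy as np

def snr50(snrs_db, percent_correct):
    """Estimate the SNR (dB) at which a listener scores 50% correct by
    linear interpolation along a measured psychometric function.
    `percent_correct` must increase with `snrs_db` for np.interp."""
    return float(np.interp(50.0, percent_correct, snrs_db))

# Hypothetical data: the non-native listener's function is shifted right,
# so a higher SNR is needed to reach 50% correct.
native_snr50 = snr50([-5, 0, 5, 10], [20, 45, 75, 95])      # ~0.83 dB
non_native_snr50 = snr50([-5, 0, 5, 10], [10, 30, 55, 85])  # 4.0 dB
extra_snr_needed = non_native_snr50 - native_snr50          # ~3.2 dB
```

Clinical speech-in-noise tests typically estimate this quantity with standardized sentence materials and published norms rather than ad hoc interpolation, but the underlying comparison is the same.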
SPEECH PERCEPTION IN NOISE FOR BILINGUAL LISTENERS WITH NORMAL HEARING
Phase II of this study compared similar groups of subjects except that the non-native speakers were all Hispanic. Results indicated that bilingual Spanish-speaking listeners with normal hearing who are proficient in English performed significantly more poorly in noise than their monolingual English-speaking controls. Because of this decreased performance in noise, an improved SNR is required for this population to reach a level of comprehension comparable to that of their monolingual English-speaking counterparts. It is recommended that speech-in-noise tests be used with bilingual patients as part of the audiometric test battery to provide additional insight into their speech perception capabilities.
SUBJECTIVE AND OBJECTIVE ASSESSMENT OF HEARING AID OUTCOMES
Procedures currently used for evaluating hearing aids have fallen short of the goal of accurately assessing a listener's speech perception capabilities. Neither verification methods, which determine whether hearing aids provide adequate gain according to prescriptive techniques, nor validation methods, which summarize patients' subjective perceptions of hearing aid benefit, are sufficient by themselves. In this project, selected objective and subjective outcome measures are being evaluated that have the best likelihood of providing the desired information about hearing aid benefit. Newer speech recognition tests that were developed with appropriate standardization are being evaluated along with subjective self-report measures that have been rigorously tested and validated for measuring hearing aid benefit. The effectiveness of these objective and subjective outcome measures will then be evaluated and compared to determine their accuracy in documenting speech perception capabilities. Based on pilot data (Mendel, 2007), it is anticipated that at least some of the newly developed speech recognition materials will be sensitive enough to demonstrate objective hearing aid benefit and that their results will correlate well with patients' subjective perceptions of that benefit on specific self-report measures. This project will verify that if both objective and subjective assessments are truly valid, then both types of outcomes will provide consistent information about hearing aid benefit.
The results of this project will more clearly define the relationship between objective and subjective outcome measures in an attempt to better characterize true hearing aid benefit. Thus, the long-term objective of this project is to make recommendations to clinicians regarding the inclusion of appropriate speech recognition tests and self-report measures as an integral part of the hearing aid evaluation process. The results will not only make the clinician's job easier, but will also provide the needed evidence that such outcomes are valid for assessing subjective and objective speech perception performance with hearing aids.
From Mendel, L.L. (2007). Objective and subjective hearing aid assessment outcomes. American Journal of Audiology, 16, 118-129.
SPEECH INTELLIGIBILITY AND HEARING FUNCTION IN NAVY DIVERS
Previously, Dr. Lucks Mendel served as Associate Director of the Center for Speech and Hearing Research in the National Center for Physical Acoustics at the University of Mississippi. During that time, she received over $500,000 in external funding to conduct research with Navy divers at the Navy Experimental Diving Unit in Panama City, Florida. These research projects focused primarily on studying changes in hearing physiology that occurred when Navy divers were at depth. Additional studies focused on developing ways to improve speech intelligibility and speech perception among divers who work in noisy environments under adverse conditions. Because Navy divers work at such depths and must breathe helium, the quality and intelligibility of their speech is affected. A related project analyzed the acoustic characteristics of the helium speech that was produced. The effects of helium and pressure changes on speech production and perception were studied in order to improve the communication systems used by these divers.