Algorithms for Unsupervised and Online Learning of Hierarchy of Features for Tuning Cochlear Implants for the Hearing Impaired
In this project we will develop and use machine learning algorithms to tune hearing
instruments, particularly cochlear implants, based on each individual’s hearing characteristics
and speech production errors. The speech production capabilities of individuals with
severe to profound sensorineural hearing loss will be analyzed with the assumption
that deficiencies in their speech production output are a reflection of their poor
speech perception capabilities. The speech production data will be analyzed, and algorithms
will be developed to determine modifications that can be made to hearing instruments
to improve speech perception. Ongoing samples of normal hearing and hearing-impaired
speech will be analyzed to document the speech characteristics and deficiencies from
these two populations. The missing and distorted features from the hearing-impaired
speech will be identified, and algorithms will be developed that will ultimately be
used to improve the signal processing strategies used in hearing instruments to enhance
the audibility of speech features for hearing-impaired individuals.
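As a minimal illustrative sketch of the kind of analysis described above (not the project's actual algorithm): band energies of a reference "normal-hearing" speech signal can be compared against those of a "hearing-impaired" production to flag frequency bands whose energy is missing or attenuated. All function names, thresholds, and signals here are hypothetical, chosen only for illustration.

```python
import numpy as np

def band_energies(signal, n_bands=8):
    """Split the magnitude spectrum into equal-width bands and return
    the mean energy in each band."""
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([band.mean() for band in bands])

def flag_deficient_bands(reference, test, ratio_threshold=0.5):
    """Return indices of frequency bands in which the test signal carries
    less than `ratio_threshold` of the reference signal's energy."""
    ref_e = band_energies(reference)
    test_e = band_energies(test)
    ratio = test_e / np.maximum(ref_e, 1e-12)
    return np.where(ratio < ratio_threshold)[0]

# Synthetic demonstration: the "hearing-impaired" production has its
# high-frequency component strongly attenuated, mimicking the way
# high-frequency perceptual loss can surface in speech production.
sr = 16000
t = np.arange(sr) / sr
reference = np.sin(2 * np.pi * 300 * t) + np.sin(2 * np.pi * 3000 * t)
test = np.sin(2 * np.pi * 300 * t) + 0.1 * np.sin(2 * np.pi * 3000 * t)
print(flag_deficient_bands(reference, test))
```

A real system would of course operate on recorded speech and on perceptually motivated features (e.g. formants or mel-scale bands) rather than uniform FFT bands, but the comparison-and-flagging structure is the same.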
This award is made through NSF's Smart Health and Wellbeing (SHB) Program, which seeks
to address fundamental technical and scientific issues that support the transformation
of healthcare from reactive and hospital-centered to preventive, proactive, evidence-based,
person-centered, and focused on wellbeing rather than disease.
The NSF Project webpage can be found here.