Improving hearing aid fitting: “Decoding” of naturally spoken speech from brain waves using machine-learning algorithms S&T77

  • School: School of Science and Technology
  • Study mode(s): Full-time / Part-time
  • Starting: 2022
  • Funding: UK student / EU student (non-UK) / International student (non-EU) / Fully-funded

Overview

NTU's Fully-funded PhD Studentship Scheme 2022

Project ID: S&T77

Hearing impairment affects 12 million people in the UK. They are less likely to engage in social activities and more likely to face early retirement, depression, and dementia. Hearing aids can alleviate this, yet only 20% of hearing-impaired subjects use a hearing aid regularly. A major complaint is that hearing aids do not improve speech understanding in noisy environments.

Currently, audiologists set up hearing aids by observing a subject’s behavioural response to quiet tones in silence (“Can you hear this sound?”). This is nothing like the real-life noisy environments in which users struggle. Behavioural methods are also tiring, especially for elderly listeners, and lengthy testing using realistic scenarios is not practical. We need better methods of setting up hearing aids.

Studies have shown that recordings of the electrical waves that emanate from the brain (electroencephalography, or EEG) can be used to evaluate hearing function [1]. Using computer algorithms, it is possible to “decode” how well a listener hears speech from their EEG waves [2]. However, the accuracy of current algorithms is limited, especially in realistic, noisy situations.
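To give a flavour of what such decoding involves, below is a minimal sketch of one commonly used linear approach: reconstructing the speech amplitude envelope from multi-channel EEG with ridge regression and scoring the reconstruction by its correlation with the true envelope. The sampling rate, lag window, data shapes, and regularisation strength are illustrative assumptions, not specifics of this project.

```python
# Minimal sketch of linear "backward" decoding: reconstruct the speech
# envelope from EEG, then score the reconstruction by correlation.
# Placeholder random data stands in for real recordings.
import numpy as np
from sklearn.linear_model import Ridge

def lagged_design(eeg, max_lag):
    """Row t holds EEG at times t .. t+max_lag (neural responses lag the stimulus)."""
    n_samples, n_channels = eeg.shape
    X = np.zeros((n_samples, n_channels * (max_lag + 1)))
    for lag in range(max_lag + 1):
        X[:n_samples - lag, lag * n_channels:(lag + 1) * n_channels] = eeg[lag:]
    return X

fs = 64                      # assumed sampling rate after down-sampling (Hz)
max_lag = int(0.25 * fs)     # use EEG up to ~250 ms after the stimulus
rng = np.random.default_rng(0)

# Placeholder data: a real experiment would load preprocessed EEG and the
# amplitude envelope of the speech the participant listened to.
eeg = rng.standard_normal((fs * 60, 32))   # 60 s of 32-channel EEG
envelope = rng.standard_normal(fs * 60)    # speech amplitude envelope

X = lagged_design(eeg, max_lag)
split = X.shape[0] // 2                    # first half: train, second half: test

decoder = Ridge(alpha=1.0)                 # regularisation strength is a free choice
decoder.fit(X[:split], envelope[:split])
reconstruction = decoder.predict(X[split:])

# Correlation between reconstructed and true envelope: a common proxy for
# how strongly the listener's brain tracks the speech.
r = np.corrcoef(reconstruction, envelope[split:])[0, 1]
print(f"reconstruction accuracy r = {r:.3f}")
```

A higher correlation with the envelope of the attended talker is typically taken as evidence of better neural tracking of that speech.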

Deep Neural Networks (DNNs) have the potential to make decoding speech from EEG more accurate, thereby resolving current issues with hearing aids and hearing function assessment [3]. In the future, DNN-based hearing aids might automatically optimise speech understanding for users in real time, adapting to changes in EEG activity as the speech-in-noise environment changes.
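As a hedged sketch of what a DNN-based alternative to the linear decoder might look like, one common formulation is a “match/mismatch” classifier: a small convolutional network that learns whether an EEG segment and a speech-envelope segment belong together. All architecture sizes, segment lengths, and the toy data below are illustrative assumptions.

```python
# Sketch of a DNN-based decoder: a 1D-CNN that classifies whether a 5 s EEG
# segment and a speech-envelope segment match, as a proxy for how well
# speech is represented in the EEG. Sizes are illustrative, not prescriptive.
import torch
import torch.nn as nn

class MatchMismatchNet(nn.Module):
    def __init__(self, n_channels=32):
        super().__init__()
        # Separate convolutional encoders for EEG and for the speech envelope.
        self.eeg_enc = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.env_enc = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(32),
        )
        self.head = nn.Linear(16 * 32 * 2, 1)   # match probability (logit)

    def forward(self, eeg, envelope):
        e = self.eeg_enc(eeg).flatten(1)
        s = self.env_enc(envelope).flatten(1)
        return self.head(torch.cat([e, s], dim=1)).squeeze(1)

# Toy batch: 8 segments of 5 s at 64 Hz. Real training would draw matched and
# time-shifted (mismatched) speech segments from recorded listening sessions.
eeg = torch.randn(8, 32, 320)
env = torch.randn(8, 1, 320)
labels = torch.randint(0, 2, (8,)).float()

model = MatchMismatchNet()
loss = nn.BCEWithLogitsLoss()(model(eeg, env), labels)
loss.backward()
print(f"toy loss: {loss.item():.3f}")
```

Training such a classifier on matched versus time-shifted segments yields a decoding accuracy that can be compared directly against a linear baseline like the one sketched above.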

The goals of this project are:

  1. to improve the accuracy and efficiency of EEG-based assessment of hearing
  2. to optimise hearing aid algorithms based on EEG activity.

The project will involve programming and testing new machine-learning algorithms for decoding EEG responses to sound in natural conditions.

In addition to gaining experience with leading AI methods, you will conduct EEG experiments with human participants, recording and analysing human brain responses to sound. The project would suit a student with a degree in maths, engineering, or computer science. You should also be interested in learning about neuroscience and hearing.

School strategic research priority

The project aligns with the strategic research priorities of the Imaging, Materials and Engineering Centre, as it will develop a new and optimised medical technology for hearing aids. It also aligns with the Centre for Public and Psychosocial Health, as it will develop tools for improved hearing in natural environments, and with the Centre for Computer Science and Informatics, as it will develop a machine-learning-based algorithm to optimise hearing aid settings.

Entry qualifications

For the eligibility criteria, visit our studentship application page.

How to apply

For guidance and to make an application, please visit our studentship application page. The application deadline is Friday 14 January 2022.

Fees and funding

This is part of NTU's 2022 fully-funded PhD Studentship Scheme.

Guidance and support

Download our full applicant guidance notes for more information.

Still need help?

+44 (0)115 941 8418