Influence of Acoustic Feedback on the Learning Strategies of Neural Network-Based Sound Classifiers in Digital Hearing Aids
Related papers
EURASIP Journal on Advances in Signal Processing, 2009
The feasible implementation of signal processing techniques on hearing aids is constrained by the finite precision available to represent numbers and by the limited number of instructions per second that the hearing aid's digital signal processor can execute. This adversely limits the design of a neural network-based classifier embedded in the hearing aid. Aiming to help the processor achieve sufficiently accurate results while reducing the number of instructions per second, this paper focuses on exploring (1) the most appropriate quantization scheme and (2) the most adequate approximations for the activation function. The experimental work proves that the quantized, approximated, neural network-based classifier achieves the same efficiency as that reached by "exact" networks (without these approximations) but, crucially, with the added advantage of drastically reducing the computational cost on the digital signal processor.
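As a rough illustration of the trade-off this abstract describes, the sketch below compares an "exact" sigmoid activation with a quantized, piecewise-linear approximation. The breakpoints (linear ramp on [-2, 2], clamped outside) and the 8-fractional-bit fixed-point grid are invented for illustration, not taken from the paper:

```python
import numpy as np

def sigmoid_pwl(x):
    """Piecewise-linear surrogate for the logistic sigmoid.

    Linear ramp with the true slope at the origin (0.25), clamped to
    0/1 outside [-2, 2]. Breakpoints here are illustrative only.
    """
    return np.clip(0.25 * x + 0.5, 0.0, 1.0)

def quantize(x, frac_bits=8):
    """Round to a fixed-point grid with `frac_bits` fractional bits."""
    scale = 1 << frac_bits
    return np.round(x * scale) / scale

x = np.linspace(-6, 6, 121)
exact = 1.0 / (1.0 + np.exp(-x))          # floating-point reference
approx = quantize(sigmoid_pwl(quantize(x)))  # quantized input and output
max_err = np.max(np.abs(exact - approx))
```

The point mirrored by the abstract: the approximation needs no exponential and only a multiply, add, and clamp per evaluation, at the cost of a bounded activation error that the classifier must tolerate.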
Sound Classification in Hearing Aids Inspired by Auditory Scene Analysis
EURASIP Journal on Advances in Signal Processing, 2005
A sound classification system for the automatic recognition of the acoustic environment in a hearing aid is discussed. The system distinguishes the four sound classes "clean speech," "speech in noise," "noise," and "music." A number of features inspired by auditory scene analysis are extracted from the sound signal. These features describe amplitude modulations, spectral profile, harmonicity, amplitude onsets, and rhythm. They are evaluated together with different pattern classifiers. Simple classifiers, such as rule-based and minimum-distance classifiers, are compared with more complex approaches, such as the Bayes classifier, neural networks, and hidden Markov models. Sounds from a large database are employed for both training and testing of the system. The achieved recognition rates are very high except for the class "speech in noise." Problems arise in the classification of compressed pop music, strongly reverberated speech, and tonal or fluctuating noises.
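To make the feature families concrete, here is a minimal sketch of two stand-in features of the kind the abstract lists (a spectral-profile feature and an amplitude-modulation feature). The specific definitions are assumptions for illustration, not the paper's feature set:

```python
import numpy as np

def frame_features(frame, fs):
    """Two illustrative ASA-style features for one audio frame.

    Stand-ins for the richer set in the paper (modulation, harmonicity,
    onsets, rhythm); the exact formulas here are invented.
    """
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / fs)
    # Spectral centroid: a crude summary of the spectral profile.
    centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
    # Modulation depth: variability of the rectified envelope.
    env = np.abs(frame)
    mod_depth = np.std(env) / (np.mean(env) + 1e-12)
    return centroid, mod_depth

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)                # steady tone
am_tone = (1 + np.sin(2 * np.pi * 4 * t)) * tone  # 4 Hz AM, a speech-like rate
c_tone, m_tone = frame_features(tone, fs)
c_am, m_am = frame_features(am_tone, fs)
```

A 4 Hz amplitude modulation (typical of the speech syllable rate) raises the modulation-depth feature relative to the steady tone, which is exactly the kind of cue that separates "speech" from "noise" classes.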
Evaluation of sound classification algorithms for hearing aid applications
2010 IEEE International Conference on Acoustics, Speech and Signal Processing, 2010
Automatic program switching has been shown to be greatly beneficial for hearing aid users. This feature is mediated by a sound classification system, which is traditionally implemented using simple features and heuristic classification schemes, resulting in unsatisfactory performance in complex auditory scenarios. In this study, a number of experiments are conducted to systematically assess the impact of more sophisticated classifiers and features on automatic acoustic environment classification performance. The results show that advanced classifiers, such as the hidden Markov model (HMM) or the Gaussian mixture model (GMM), greatly improve classification performance over simple classifiers. This change does not require a great increase in computational complexity, provided that a suitable number (5 to 7) of low-level features are carefully chosen. These findings indicate that advanced classifiers are feasible in hearing aid applications.
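The GMM route the abstract favors can be sketched compactly: each acoustic class gets its own diagonal-covariance mixture, and a frame is assigned to the class whose mixture gives the highest log-likelihood. The toy 2-D class models and parameters below are invented; a real system would train them on labeled audio features:

```python
import numpy as np

def diag_gmm_loglik(X, weights, means, variances):
    """Log-likelihood of feature rows X under a diagonal-covariance GMM."""
    comp = np.empty((len(X), len(weights)))
    for k, (w, mu, var) in enumerate(zip(weights, means, variances)):
        comp[:, k] = (np.log(w)
                      - 0.5 * np.sum(np.log(2 * np.pi * var))
                      - 0.5 * np.sum((X - mu) ** 2 / var, axis=1))
    m = comp.max(axis=1, keepdims=True)  # log-sum-exp for stability
    return (m + np.log(np.exp(comp - m).sum(axis=1, keepdims=True))).ravel()

def classify(X, class_models):
    """Per frame, pick the class whose GMM scores highest."""
    scores = np.column_stack([diag_gmm_loglik(X, *m) for m in class_models])
    return scores.argmax(axis=1)

# Invented 2-D models for two acoustic classes.
speech = ([0.5, 0.5], np.array([[0., 0.], [1., 1.]]), np.ones((2, 2)) * 0.25)
noise = ([1.0], np.array([[4., 4.]]), np.ones((1, 2)) * 0.25)
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(4, 0.5, (20, 2))])
labels = classify(X, [speech, noise])
```

Diagonal covariances are the usual embedded-system compromise: scoring is a handful of multiply-accumulates per feature per component, consistent with the abstract's claim that 5 to 7 features keep the cost manageable.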
Is hearing-aid signal processing ready for machine learning?
2013
In the hearing-aids community, machine-learning technology enjoys a reputation as a potential performance booster for signal-processing issues such as environmental steering, personalization, algorithm optimization, and speech detection. In particular in the area of in situ hearing aid personalization, the promise is steep but clear success stories are still hard to come by. In this contribution, we analyze the ‘personalizability’ of typical hearing-aid signal-processing circuits. We discuss a few salient properties of a very successful adaptable and personalized signal-processing system, namely the brain, and we discover that among some other issues, the lack of a probabilistic framework for hearing-aid algorithms hinders interaction with machine-learning techniques. Finally, the discussion leads to a set of challenges for the hearing-aid research community in the quest towards in situ personalizable hearing aids.
2008 16th European Signal Processing Conference, 2008
This paper focuses on the development of an automatic sound classifier for digital hearing aids that aims to enhance listening comprehension when the user moves from one sound environment to another. The implemented approach consists of dividing the classifying algorithm into two layers built from two-class algorithms that work more efficiently: the first layer discriminates the input signal into either speech or non-speech, and the second classifies the non-speech audio more specifically as either noise or music. The complete system thus has three classes, labeled “speech”, “noise” and “music”. The classification process is carried out by a mean squared error linear discriminant, which provides very good results along with a low computational complexity. This is a crucial issue because hearing aids have to work at a very low clock frequency. The paper explores the feasibility of this approach through a number of experiments that prove the adv...
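The mean squared error linear discriminant named in this abstract is simply a least-squares fit of a weight vector to ±1 class targets; at run time it costs one dot product per frame, which is why it suits a low-clock-rate device. A minimal sketch, with invented 2-D toy features standing in for the paper's audio features:

```python
import numpy as np

def mse_discriminant(X, y):
    """Fit w minimizing ||Xa w - y||^2, the MSE linear discriminant.

    Xa is X with a bias column appended; targets y are in {-1, +1}.
    """
    Xa = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xa, y, rcond=None)
    return w

def predict(w, X):
    """Classify by the sign of the linear score (one MAC pass per frame)."""
    Xa = np.hstack([X, np.ones((len(X), 1))])
    return np.sign(Xa @ w)

# Invented toy features for two classes (e.g. speech vs non-speech).
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 0.7, (30, 2)), rng.normal(2, 0.7, (30, 2))])
y = np.hstack([-np.ones(30), np.ones(30)])
w = mse_discriminant(X, y)
acc = np.mean(predict(w, X) == y)
```

Training (the least-squares solve) can happen offline; only the fitted weights need to live on the hearing aid.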
Two-layer automatic sound classification system for conversation enhancement in hearing aids
Integrated Computer-Aided Engineering, 2008
This paper focuses on the development of an automatic sound classifier for digital hearing aids that aims to enhance listening comprehension when the user moves from one sound environment to another. The approach consists of dividing the classifying algorithm into two layers built from two-class algorithms that work more efficiently: the first layer discriminates the input signal into either speech or non-speech, and the second determines whether the user is in a conversation (either in quiet or in the presence of background noise) or in a noisy ambience in the absence of speech. The system ends up with four classes, labeled speech in quiet, speech in noise, stationary noisy environments (for instance, an aircraft cabin), and non-stationary noisy environments. The combination of classifiers found to be most successful in terms of probability of correct classification uses multilayer perceptrons for the classification tasks in which speech is involved, and a Fisher linear discriminant for distinguishing stationary noisy environments from non-stationary ones. The system performance has been found to be higher than that of other, more classical approaches, and even superior to that of our preliminary work.
Neural Networks and Intelligibility Test for Sensory Neural Hearing Impairment
Digital technology has made an important contribution to the field of audiology. Digital signal processing methods offer great potential for designing a hearing aid, but today's digital hearing aids fall short of expectations for Sensory Neural Hearing Loss (SNHL) patients. Background noise is particularly damaging to speech intelligibility for SNHL persons. Transform-domain adaptive methods can be used for noise reduction, but they have high computational complexity, and the decorrelation efficiency is also low in most of the transform methods. Artificial neural networks provide an analytical alternative to conventional techniques, which are often limited by strict assumptions of normality, linearity, variable independence, etc. Hence this paper uses a neural network noise canceller to enhance the speech signal in a digital hearing aid for SNHL persons. Off-line implementations show that the SNR improvement of the direct time-domain neural network filtering approach with the backpropagation training algorithm is almost equal to that of non-neural methods such as transform-domain adaptive filters.
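The simplest instance of the time-domain neural canceller this abstract describes is a single linear neuron on a tapped delay line, trained with the delta rule; that special case coincides with the classical LMS adaptive filter, whereas the paper's network and backpropagation setup are more general. A sketch with a synthetic tone standing in for speech and a sinusoidal interferer (all signals and parameters invented):

```python
import numpy as np

def neural_canceller(primary, reference, taps=8, lr=0.01, epochs=5):
    """Adaptive noise cancellation with a single linear neuron.

    The neuron predicts the noise in `primary` from the correlated
    `reference`; the delta-rule update below is the LMS algorithm.
    Returns the enhanced signal (primary minus predicted noise).
    """
    w = np.zeros(taps)
    out = np.copy(primary)
    for _ in range(epochs):
        for i in range(taps, len(primary)):
            x = reference[i - taps:i][::-1]  # tapped-delay-line input
            noise_hat = w @ x                # neuron output (linear activation)
            e = primary[i] - noise_hat       # error = enhanced sample
            w += lr * e * x                  # delta-rule / LMS update
            out[i] = e
    return out

fs = 8000
t = np.arange(2 * fs) / fs
speech = np.sin(2 * np.pi * 300 * t)        # stand-in for the speech signal
noise_src = np.sin(2 * np.pi * 50 * t)      # interfering noise source
primary = speech + 0.8 * noise_src          # noisy microphone signal
reference = np.roll(noise_src, 3)           # delayed noise-only pickup
enhanced = neural_canceller(primary, reference)
snr_before = 10 * np.log10(np.mean(speech**2) / np.mean((primary - speech)**2))
snr_after = 10 * np.log10(np.mean(speech**2) / np.mean((enhanced - speech)**2))
```

Because the reference is correlated with the noise but not with the speech, the neuron converges toward cancelling only the noise component, raising the output SNR.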
Clinical and Experimental Otorhinolaryngology, 2016
In an effort to improve hearing aid users' satisfaction, recent studies on trainable hearing aids have attempted to incorporate one or two environmental factors into training. However, it would be more beneficial to train the device based on the owner's personal preferences over a wider range of environmental acoustic conditions. Our study aimed at developing a trainable hearing aid algorithm that can reflect the user's individual preferences over more extensive environmental acoustic conditions (ambient sound level, listening situation, and degree of noise suppression), and evaluated the perceptual benefit of the proposed algorithm. Ten normal-hearing subjects participated in this study. Each subject trained the algorithm to their personal preference, and the trained data were used to record test sounds in three different settings, which were then used to evaluate the perceptual benefit of the proposed algorithm through the Comparison Mean Opinion Score test. Statistical analys...