This year, scientists from HörTech, Hörzentrum Oldenburg, and the Cluster of Excellence Hearing4all in Oldenburg are once again represented with numerous contributions at the International Symposium on Auditory and Audiological Research (ISAAR).
ISAAR 2019 focuses on the theme "Auditory Learning in Biological and Artificial Systems". The concept is to examine this topic from several perspectives, including current physiological concepts, perceptual measurements and models, and implications for new technical applications.
Beyond the scientific presentations, one of ISAAR's goals is to foster networking and establish contacts between researchers from different institutions in the fields of audiology and hearing research.
Here you will find the abstracts of the contributions from HörTech, Hörzentrum Oldenburg, and the Cluster of Excellence Hearing4all:
SP.33 - Effect of binaural loudness summation on listening effort
Melanie Krueger*, Jörg-Hendrik Bach – Hörzentrum Oldenburg GmbH, Oldenburg, Germany; HörTech gGmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Germany
Dirk Oetting – HörTech gGmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Germany
Markus Meis – Hörzentrum Oldenburg GmbH, Oldenburg, Germany; HörTech gGmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Germany
Michael Schulte – Hörzentrum Oldenburg GmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Germany
Listeners with hearing loss often report increased listening effort in acoustically challenging situations. Analyses of subjective listening effort ratings showed that listeners with comparable hearing thresholds perceive different degrees of listening effort. A possible explanation could be differences in binaural broadband loudness summation. Oetting et al. (2016, 2018) revealed large individual differences in binaural broadband loudness summation among listeners with similar hearing thresholds. The aim of this study was to investigate the relation between perceived listening effort and binaural broadband loudness summation. 20 listeners with hearing loss participated. Perceived listening effort was measured unaided with the subjective scaling method ACALES (Krueger et al., 2017) using two different background noise levels (50 and 65 dB SPL). All listeners completed monaural and binaural categorical loudness scaling with narrow- and broadband stimuli to assess individual binaural broadband loudness summation. Pairs of listeners with matched thresholds were analyzed and showed a correlation between binaural broadband summation and listening effort at 50 dB SPL. No relation was found at 65 dB SPL. The results and conclusions of these findings will be discussed.
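The summation measure behind this abstract can be illustrated with a small sketch (not the ACALES or Oetting et al. procedure itself): given monaural and binaural categorical loudness scaling curves, the summation is the level difference at equal categorical loudness. All curve values below are fabricated for illustration.

```python
import numpy as np

def level_at_cu(levels_db, cu_values, target_cu):
    """Interpolate the presentation level (dB SPL) at a target
    categorical loudness unit (CU) from a scaling curve."""
    return float(np.interp(target_cu, cu_values, levels_db))

def loudness_summation(mon_levels, mon_cu, bin_levels, bin_cu, target_cu=25):
    """Binaural loudness summation: level difference (dB) between the
    monaural and binaural curves at the same categorical loudness."""
    return (level_at_cu(mon_levels, mon_cu, target_cu)
            - level_at_cu(bin_levels, bin_cu, target_cu))

# Fabricated scaling curves: CU grows with level, and the binaural
# curve reaches the same CU at a lower level (here 8 dB lower).
levels = np.arange(30, 95, 5)            # presentation levels in dB SPL
mon_cu = (levels - 30) * 0.6             # monaural loudness curve
bin_cu = (levels - 22) * 0.6             # binaural loudness curve

print(loudness_summation(levels, mon_cu, levels, bin_cu))
```

With these linear toy curves the summation comes out as the 8 dB shift between them; real curves are nonlinear, which is why the scaling is measured per listener.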
SP.03 - Subjective loudness ratings of vehicle noise with the hearing aid fitting methods NAL-NL2 and trueLOUDNESS
Dirk Oetting – HörTech, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
Jörg-Hendrik Bach*, Melanie Krueger – HörTech, Oldenburg, Germany; Hörzentrum Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
Matthias Vormann, Michael Schulte – Hörzentrum Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
Markus Meis – HörTech, Oldenburg, Germany; Hörzentrum Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, Oldenburg, Germany
Individual loudness perception plays a decisive role in fitting hearing aids and is one of the most important criteria for overall satisfaction with hearing aids. Listeners with similar hearing thresholds showed large differences in loudness summation of binaural broadband signals after narrow-band loudness compensation. The effect has been well described in Oetting et al. (Hearing Res. 2016, Ear Hearing 2018). Based on these findings, the fitting method trueLOUDNESS was developed to restore the individual binaural broadband loudness perception. The aim of this study was to show that lab measurements of loudness scaling for the trueLOUDNESS fitting correspond to real-world loudness perception with hearing aids. Loudness ratings of 19 listeners with hearing loss and hearing aids fitted according to NAL-NL2 and trueLOUDNESS were compared. The study was conducted on a closed road, and the subjects rated the perceived loudness of four different vehicles in various driving conditions. For eight listeners the gain predictions with trueLOUDNESS were lower than with NAL-NL2, and for 11 listeners they were higher. Results of loudness ratings for the trueLOUDNESS and NAL-NL2 fittings compared to normal-hearing listeners will be presented.
S1.03 - Towards precision medicine in audiology: Modelling unaided and aided speech recognition with clinical data
Birger Kollmeier*, Anna Warzybok, David Hülsmeier, Marc-René Schädler – Medizinische Physik & Cluster of Excellence Hearing4All, Universität Oldenburg
To assess speech recognition performance in a precise and multilingual way, the matrix sentence recognition test has been made available in more than 20 languages so far (see Kollmeier et al., Int. J. Audiol. 2015). For its modelling and individual performance prediction, the Framework for Auditory Discrimination Experiments (FADE, Schädler et al., JASA 2016) is employed using the audiogram and one suprathreshold performance parameter that reflects Plomp's D-factor. FADE predicts the average individual performance with different (binaural) noise reduction algorithms in cafeteria noise well, reaching an R² of about 0.9 against individual empirical data from Völker et al. (2015). In a set of 19 hearing-impaired listeners the average prediction error is reduced to 4.6 dB if an individual estimate of suprathreshold distortion is employed. The current contribution investigates the prediction performance for a large clinical data set (315 ears from Wardenga et al., 2015) with or without accounting for Plomp's D-factor in comparison to the SII: FADE clearly outperforms the SII by reducing the prediction error from 7 to 4 dB. The application of this approach to predict individual performance without and with a hearing device will be discussed for a variety of noise conditions, different degrees of hearing impairment, and various hearing instruments.
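Plomp's classical two-component description of hearing loss for speech, which the D-factor above refers to, can be summarized as follows (a textbook sketch, not the FADE formulation itself; A is the attenuation component, D the distortion component, both in dB, and NH denotes the normal-hearing reference):

```latex
% Plomp's two-component model: the attenuation component A raises the
% speech reception threshold (SRT) only in quiet, while the distortion
% component D raises it both in quiet and in noise.
\begin{align}
  \mathrm{SRT}_{\text{quiet}} &= \mathrm{SRT}_{\text{quiet}}^{\mathrm{NH}} + A + D \\
  \mathrm{SRT}_{\text{noise}} &= \mathrm{SRT}_{\text{noise}}^{\mathrm{NH}} + D
\end{align}
```

This is why a suprathreshold parameter beyond the audiogram is needed: at high noise levels, audibility (A) no longer limits performance, and only D shifts the SRT.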
SP.08 - Limitations of sound localization with linear hearing devices
Florian Denk*, Stephan Ewert, Birger Kollmeier – Medizinische Physik & Cluster of Excellence Hearing4All, University of Oldenburg, Germany
Limited abilities to localize sound sources and other reduced spatial hearing capabilities remain a largely unsolved issue in hearing aids and comparable devices. Hence, the impact of several linear factors that potentially disturb localization was assessed in a subjective listening experiment with normal-hearing listeners, including the microphone location, the frequency response, and processing delays in superposition with direct sound leaking through a vent. Using binaural synthesis and individually measured transfer functions for six device styles and microphone positions, it was possible to assess these aspects separately. Both horizontal- and median-plane localization were tested with short white noise bursts. In our data, the microphone location is the governing factor for localization abilities with linear hearing devices. Non-optimal microphone locations have a disruptive influence on localization in the vertical domain and a detrimental effect on lateral sound localization. Processing delays cause additional detrimental effects for lateral sound localization, and diffuse-field equalization to the open-ear response should be preferred over free-field equalization. The median-plane localization results are in line with predictions from computational models of vertical-plane localization and allow a comparison and refinement of concurrent model versions.
SP.14 - Evaluation of a gaze-controlled directional hearing device in an audiovisual virtual environment
Giso Grimm*, Frederike Kirschner, Hendrik Kayser, Volker Hohmann – Medizinische Physik and Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
The directional benefit of hearing devices interacts with the head motion behavior of the user: fixed beamformers only provide it when the user is facing the target, and steerable beamformers rely on DOA estimation that can become less accurate with head movements. In multi-talker situations it also remains unclear which of the sources are the desired ones. Taken together, current beamformers may not improve speech reception in everyday dynamic listening conditions involving movement of the listener as much as they do in standard lab conditions. To overcome this problem, we proposed an algorithm that estimates auditory attention based on the combination of gaze behavior and DOA estimation (Grimm et al., 2018). Its aim is to support communication and natural movement behavior in dynamic listening conditions better than fixed beamformers. Attention is modeled in two steps: First, the gaze-to-object probability is estimated by classification of gaze direction based on an acoustic scene model. Next, the temporal statistics of the gaze-to-object probability are analyzed. Finally, non-attended sources are attenuated by a spatial filter. In this study, extensions of the model are introduced and evaluated with normal-hearing subjects. Results show that a similar benefit was achieved as with conventional directional microphones. As a tendency, a positive effect of natural head movement was observed.
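The two-step attention model described above can be sketched roughly as follows (a toy illustration, not the Grimm et al. implementation; the Gaussian gaze kernel, the smoothing constant, and the gain rule are assumptions made here):

```python
import numpy as np

def gaze_to_object_probability(gaze_az, source_az, sigma=10.0):
    """Soft classification of the gaze direction against estimated
    source DOAs (azimuths in degrees): sources closer to the gaze get
    higher probability. The Gaussian kernel is assumed for illustration."""
    d = np.abs(np.asarray(source_az) - gaze_az)
    d = np.minimum(d, 360.0 - d)              # wrap angular distance
    w = np.exp(-0.5 * (d / sigma) ** 2)
    return w / w.sum()

def smoothed_attention(prob_history, alpha=0.9):
    """First-order recursive smoothing of the per-source probabilities
    over time (a stand-in for the temporal-statistics step)."""
    state = np.zeros_like(prob_history[0])
    for p in prob_history:
        state = alpha * state + (1.0 - alpha) * p
    return state

sources = [-60.0, 0.0, 45.0]                  # estimated DOAs in degrees
gaze_samples = (40.0, 44.0, 47.0)             # gaze hovering near 45 degrees
history = [gaze_to_object_probability(g, sources) for g in gaze_samples]
attention = smoothed_attention(history)
# Attenuate sources whose attention falls below half of the maximum.
gains = np.where(attention > attention.max() * 0.5, 1.0, 0.1)
print(gains)
```

With the gaze dwelling near 45°, only the third source keeps unity gain and the other two are attenuated; in the real system this gain pattern would drive a spatial filter rather than a per-source multiplier.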
SP.54 - Potential of self-conducted speech audiometry with smart speakers
Jasper Ooster* – Medizinische Physik, Carl von Ossietzky Universität, Oldenburg, Germany; Cluster of Excellence Hearing4all, Germany
Kirsten C. Wagener – Hörzentrum Oldenburg GmbH, Oldenburg, Germany
Jörg-Hendrik Bach – HörTech gGmbH, Oldenburg, Germany; Cluster of Excellence Hearing4all, Germany
Bernd T. Meyer – Medizinische Physik, Carl von Ossietzky Universität, Oldenburg, Germany; Cluster of Excellence Hearing4all, Germany
Speech audiometry in noise based on matrix sentence tests (Kollmeier et al., 2015) is an important diagnostic tool to assess the speech reception threshold (SRT) of a subject, i.e., the signal-to-noise ratio corresponding to 50% intelligibility. Although the matrix test format allows for self-conducted measurements by applying a visual, closed response format, these tests are mostly performed in an open response format with an experimenter entering the correct/incorrect responses (expert-conducted). Using automatic speech recognition (ASR) enables self-conducted measurements without the need for visual presentation of the response alternatives (Ooster et al., 2018). A combination of these self-conducted measurement procedures with signal presentation via smart speakers could be used to assess individual speech intelligibility in an individual listening environment. Therefore, this paper compares self-conducted SRT measurements using smart speakers with expert-conducted lab measurements. With smart speakers, the experimenter has no control over the absolute presentation level, the mode of presentation (headphones vs. loudspeaker), potential errors from the automated response logging, and the room acoustics. We present the differences between lab and Alexa settings for normal-hearing and hearing-impaired subjects and two different rooms (low/high reverberation).
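The SRT concept underlying these measurements can be illustrated with a toy adaptive track (a deliberate simplification: the actual matrix-test procedure adapts on the word level with shrinking step sizes, and the psychometric function below is hypothetical):

```python
import math
import random

def measure_srt(intelligibility, n_trials=200, start_snr=0.0,
                step_db=2.0, seed=0):
    """Toy 1-up/1-down adaptive track: lower the SNR after a correct
    response, raise it after an incorrect one, then average the last
    reversal points. This converges near 50% intelligibility."""
    rng = random.Random(seed)
    snr, last_correct, reversals = start_snr, None, []
    for _ in range(n_trials):
        correct = rng.random() < intelligibility(snr)
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)          # track direction changed
        snr += -step_db if correct else step_db
        last_correct = correct
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Hypothetical logistic psychometric function, 50% point at -7 dB SNR
psychometric = lambda snr: 1.0 / (1.0 + math.exp(-(snr + 7.0)))

print(round(measure_srt(psychometric), 1))
```

The estimate settles near the simulated listener's true SRT of -7 dB SNR; an ASR- or experimenter-scored matrix test replaces the simulated response with the subject's actual word responses.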
SP.67 - Auditory adaptation to spectral slope
Kai Siedenburg*, Henning Schepker – Dept. of Medical Physics and Acoustics, University of Oldenburg and Cluster of Excellence Hearing4all, Oldenburg, Germany
From the source to the eardrum, natural sounds undergo a cascade of filtering transformations due to factors such as reverberation or sound processing in hearing devices, all of which alter the spectral shape of sounds. The auditory system must hence learn to separate source and filter while facing the fundamental uncertainty as to whether the strength of components is due to source or filter properties. Little is known about the malleability of this aspect of auditory perception: in what way does the auditory system adapt to changes in the filter response? Here we present an experiment that tests adaptation to differences in spectral slope. Normal-hearing listeners are presented with a new (unheard) excerpt of natural speech or music per trial and judge whether the excerpt is brighter or duller than their internal reference. A filter modifies the spectrum in the range of 125-8000 Hz with linear slopes ranging from -2 to 2 dB per octave. Pilot data suggest that listeners are fairly accurate in identifying alterations of the spectral slope of unheard excerpts, but there is also rapid contrastive adaptation: excerpts are perceived as brighter when preceded by excerpts filtered with negative slopes (sounding dull), and vice versa. This work provides a window into how the auditory system separates source and filter and bears implications for the fitting of hearing devices.
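A spectral-slope manipulation like the one described can be sketched as an FFT-domain tilt (an illustrative implementation; the study's exact filter design, reference frequency, and out-of-band behavior are not specified in the abstract and are assumed here):

```python
import numpy as np

def apply_spectral_slope(signal, fs, slope_db_per_oct, f_ref=1000.0,
                         f_lo=125.0, f_hi=8000.0):
    """Tilt the spectrum by a linear slope in dB per octave (0 dB at
    f_ref), applied between f_lo and f_hi via an FFT-domain gain that
    is held constant outside the band. Zero-phase, illustrative only."""
    spec = np.fft.rfft(signal)
    f = np.fft.rfftfreq(len(signal), 1.0 / fs)
    f_band = np.clip(f, f_lo, f_hi)           # freeze gain outside band
    gain_db = slope_db_per_oct * np.log2(f_band / f_ref)
    spec *= 10.0 ** (gain_db / 20.0)
    return np.fft.irfft(spec, n=len(signal))

# A +2 dB/octave tilt brightens the sound: high-frequency energy rises
# relative to low-frequency energy.
fs = 16000
t = np.arange(fs) / fs
x = np.sin(2 * np.pi * 250 * t) + np.sin(2 * np.pi * 4000 * t)
y = apply_spectral_slope(x, fs, slope_db_per_oct=2.0)
```

For a +2 dB/octave slope, the 4 octaves between 250 Hz and 4000 Hz yield an 8 dB relative boost of the high component, which is exactly the kind of brightness change the listeners judged.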
SP.12 - Relationships between objective and behavioural markers of suprathreshold hearing and speech-in-noise intelligibility
Markus Garrett* – Department of Medical Physics and Acoustics; Cluster of Excellence Hearing4all, Oldenburg University, Germany
Viacheslav Vasilkov – Hearing Technology @ WAVES, Dept of Information Technology, Ghent University, Belgium
Stefan Uppenkamp, Manfred Mauermann – Department of Medical Physics and Acoustics; Cluster of Excellence Hearing4all, Oldenburg University, Germany
Sarah Verhulst – Hearing Technology @ WAVES, Dept of Information Technology, Ghent University, Belgium
Suprathreshold hearing is important for communication in challenging listening environments. Even people with normal audiometric thresholds can have degraded suprathreshold hearing as a consequence of aging or noise exposure. We investigate how behavioural and objective electrophysiological markers of suprathreshold hearing relate to speech-in-noise recognition in young and elderly normal-hearing listeners (yNH/oNH) and elderly listeners with sloping high-frequency audiograms (oHI). We report audiogram, distortion-product otoacoustic emission, and brainstem EEG measures aiming to quantify synaptopathy. We also assessed temporal fine-structure and envelope coding performance and compared these metrics to speech reception thresholds. Speech-in-noise material was filtered to only contain frequencies present in the temporal fine-structure/envelope tasks or the brainstem EEG measures. While low-frequency relationships between speech intelligibility, temporal coding, and brainstem measures were complex, a clearer pattern emerged at high frequencies: an EEG metric predicted high-pass-filtered speech intelligibility across listeners, suggesting that oNH and oHI listeners suffer from synaptopathy, in line with post-mortem temporal bone studies. These findings are important for the design of hearing-aid algorithms, which can only be effective once it is understood how suprathreshold hearing affects the encoding of speech in noise.
The full ISAAR programme is available at www.isaar.eu