
Study of word-in-noise perception scores at saccular acoustic sensitivity level: randomized clinical trial

Abstract

Background

Saccular acoustic sensitivity has been confirmed in humans. The aim of this study was to determine word-in-noise perception test scores at the saccular acoustic sensitivity level.

Methodology

In this randomized clinical trial, 101 participants aged 14 to 25 years with normal hearing and middle ear function, detectable vestibular evoked myogenic potentials (VEMP), and normal mental health and night sleep were investigated. The scores of the word-in-noise perception and word-in-noise discrimination tests were obtained for each person at two intensity levels: the most comfortable level (MCL) and the saccular acoustic sensitivity level. The Mann-Whitney U test was used for the multiple comparisons.

Results

There was a significant difference between the scores of the word-in-noise perception test at the MCL and the saccular acoustic sensitivity level (U = 3971.50, Z = −2.10, p = 0.04), and also between the scores of the word-in-noise discrimination test at the MCL compared to the saccular acoustic sensitivity level (U = 399.89, Z = −2.49, p = 0.04). Word-in-noise discrimination scores at the MCL (U = 3484.00, Z = −3.72, p = 0.00) and at the saccular acoustic sensitivity level (U = 705.50, Z = −3.78, p = 0.00) were higher than word-in-noise perception scores.

Conclusions

Word-in-noise perception and word-in-noise discrimination scores are higher at the saccular acoustic sensitivity level than at the MCL, suggesting that the vestibular system participates in the perception of loud speech. Also, in people with normal hearing thresholds, word-in-noise perception scores are lower than word-in-noise discrimination scores at both the loud and the usual conversational intensity levels.

Background

The vestibular system is sensitive to sound stimuli with frequencies below 1000 Hz (low frequencies) [1,2,3,4]. Air-conducted sounds with an intensity of ≥ 90 dBSPL [4,5,6,7] and bone-conducted sounds with an intensity of ≥ 25 dBHL can acoustically stimulate the saccule [8].

In contrast, the utricle and semicircular canals respond only to bone-conducted stimuli, and their sound sensitivity is lower than that of the saccule [1, 3, 9]. The highest sound sensitivity of the vestibular organs belongs to the saccule [10, 11]. The saccular nerve fibers also enter the cochlear nuclei in parallel with the cochlear fibers, ascend in the auditory pathway of the brainstem, and then project to several areas of the cerebral cortex [11]. As a result of sound stimulation of the vestibular system, signals are sent to the neural centers of the cerebral cortex that are responsible for speech-in-noise perception [12]. In this regard, Todd et al. (2014) reported that afferent neurons of the vestibular system extend to both the superior temporal lobe and the cingulate cortex. In people with normal saccular function, as confirmed by vestibular evoked myogenic potential (VEMP) testing, a new wave can be detected among the late auditory evoked potentials. When the sound intensity exceeds the saccular stimulation threshold, this new wave, termed N42/P52, appears at a latency after the N1 and P2 waves [13]. Miyamoto et al. (2007) also confirmed, using functional MRI, that saccular acoustic stimulation can activate large areas of the cerebral cortex, including the frontal lobe (prefrontal/premotor cortex and frontal eye fields), parietal lobe (peripheral area, intraparietal sulcus, temporal-parietal junction, and paracentral lobule), and the cingulate cortex [14].

Schlindwein et al. (2008) also reported that saccular afferent fibers send acoustic signals to the posterior insular cortex, the middle and superior temporal gyri, and the inferior parietal cortex. Saccular afferent fibers are predominantly ipsilateral and transmit signals to the brain through a non-crossing pathway, whereas auditory signal processing is contralateral and predominantly left-hemispheric [15]. McNerney et al. (2011) reported that the primary visual cortex, precuneus, precentral gyrus, middle temporal gyrus, and superior temporal gyrus are activated by saccular acoustic stimulation [16].

In the presence of noise, speech perception engages the entire cerebral cortex. The left insular cortex processes the fundamental frequency of speech sounds, which relates to the understanding of meanings, while the right insula processes phonetic information and speech tones [9]. The ventral premotor cortex is also activated in environments with substantial background noise and enhances the ability to detect vowels, a task mainly performed by the brainstem. Mirror neurons of the premotor cortex, together with the temporal lobe, also participate in speech perception in noise [11]. The cingulate cortex is active in processing speech sounds and understanding meanings that carry negative emotional content. The parietal cortex is likewise activated in crowded, competitive listening backgrounds to improve speech perception [17]. These findings show that human hearing is not monomodal but multimodal, and that the afferent signals of the cochleo-vestibular fibers reaching the cerebral cortex participate in the processing of all auditory functions [9, 11, 15]. Several studies have investigated the participation of the vestibular system in hearing/listening [7, 12, 13], phoneme and word discrimination [9], and interpreting or learning first-language words [11].

Until now, the role of saccular acoustic sensitivity and its participation in speech perception in people with normal hearing has not been investigated. The development of the word-in-noise perception test made this investigation possible. It is a new assessment that evaluates auditory brain function through the recognition of speech consonants. The brainstem is sensitive to the pitch and rhythm of the human voice, which are conveyed by vowels, whereas the auditory cortex relies on detecting consonants to extract meanings and concepts. Therefore, for the word-in-noise perception test, 6 lists of 25 words have been prepared (because there are 6 vowels in the Persian language); each list consists of homotonic monosyllabic words with a consonant/vowel/consonant pattern whose vowels are identical within the list. Consequently, the role of the brainstem in recognizing vowels is minimized and the function of the auditory cortex is evaluated [18].
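
To make the list construction concrete, the following sketch shows one way the six homotonic lists could be represented and checked. It is an illustrative assumption, not the authors' material: apart from the example items quoted in this article, the entries are placeholders (the full lists are in Appendix 1).

```python
# Minimal sketch (assumption, not the authors' software) of how the six
# homotonic word lists of the word-in-noise perception test could be organized.
# Each list shares one Persian vowel and contains 25 consonant/vowel/consonant
# (CVC) monosyllables; only a few example items quoted in the article are
# shown, the rest are placeholders ("...").

from dataclasses import dataclass


@dataclass
class WordList:
    vowel: str          # the shared vowel of the list (one of 6 Persian vowels)
    words: list[str]    # 25 homotonic CVC monosyllables


# Example items quoted in the article for the /e/ list; the full lists are in Appendix 1.
e_list = WordList(vowel="e", words=["Ʃen", "Sen", "ɡel", "Del", "Hel"] + ["..."] * 20)


def check_homotonic(word_list: WordList) -> bool:
    """All real words in a list should contain the same vowel (homotonic pattern)."""
    return all(word_list.vowel in w for w in word_list.words if w != "...")


print(check_homotonic(e_list))  # True for a well-formed list
```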

The participation of the vestibular system in speech perception is important in cases where the pure-tone audiogram shows normal hearing thresholds but the person complains of impaired speech perception in noise. To confirm such hidden hearing loss, a VEMP test with an air-conducted stimulus can be performed to evaluate saccular acoustic sensitivity [9]. In this study, we hypothesized that the participants' word-in-noise perception scores at the saccular acoustic sensitivity level (loud conversation) would be better than their scores at the MCL (the usual, common intensity of conversation), and that any improvement in test scores would be attributable to saccular acoustic stimulation and its contribution to speech perception. Therefore, the aim of this study was to determine the scores of the word-in-noise perception test at the saccular acoustic sensitivity level.

Methods

This research was a randomized clinical trial. The participants were 50 men and 51 women aged 14 to 25 years (mean age = 17.90 ± 2.55 years; men = 16.98 ± 1.21 years; women = 17.45 ± 3.41 years). All were within the normal range in terms of VEMP findings, mental health status, and night sleep. The scores of the word-in-noise perception and word-in-noise discrimination tests were then compared for each participant at two intensity levels, first at the MCL and then at the saccular acoustic sensitivity level.

Inclusion criteria

Secondary education level, age between 14 and 25 years, normal mental health and night sleep scores, normal hearing and speech reception thresholds, and normal middle ear function and acoustic reflexes.

Exclusion criteria

History of head trauma, exposure to noise and/or chemical pollution, auditory-vestibular disorders, metabolic and cardiovascular diseases, and cognitive and nervous system diseases; unwillingness to continue participating in the research, poor cooperation, inattention, or insufficient accuracy during testing.

Ethical considerations

In this research, the privacy and security of the participants were a priority, and all evaluations were provided free of charge. Because testing was performed at a high intensity (90 dB), sound-related pain and annoyance were assessed for all participants in both ears; whenever a participant expressed discomfort or no longer wished to continue, testing was terminated and that participant was excluded from the study.

Practical work

The study took place in the audiology department of Hamadan University of Medical Sciences, Hamadan, Iran. Initially, 136 female and male students were screened to select the participants of this research. Written informed consent to participate in the study was obtained from all participants aged 16 to 25 years and from the parents or legal guardians of those aged 14 to 16 years. The steps of the practical work were then explained to them. To determine mental health status and evaluate the quality of night sleep, the 28-item General Health Questionnaire and the Pittsburgh Sleep Quality Index were administered. Participants with normal results on these two instruments (n = 118) proceeded to the audiological assessments.

Normal hearing thresholds were defined by pure-tone audiometry as −10 to 15 dBHL across the frequency range of 250 to 8000 Hz [19]. Based on acoustic immittance and tympanometric evaluation, middle ear pressure no more negative than −100 daPa, compliance of 0.3–1.4 mmho, and tympanogram width of 50–110 daPa were considered normal. Acoustic reflex thresholds to pure-tone stimuli, both ipsilateral and contralateral, were considered normal in the range of 85 to 100 dBSPL (n = 105 with normal results) [15].
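
As a concrete illustration, these screening criteria can be written as a simple check. The sketch below is an assumption, not the authors' code; it takes "normal middle ear pressure" to mean no more negative than −100 daPa, and the function and parameter names are hypothetical.

```python
# Minimal sketch (assumption, not the authors' analysis code) of the audiometric
# and immittance screening criteria described above.

def passes_screening(pta_thresholds_dbhl, middle_ear_pressure_dapa,
                     compliance_mmho, tymp_width_dapa,
                     reflex_thresholds_dbspl):
    """Return True if a participant meets the normal-hearing criteria used in the study."""
    normal_pta = all(-10 <= t <= 15 for t in pta_thresholds_dbhl)           # 250-8000 Hz, dBHL
    normal_pressure = middle_ear_pressure_dapa >= -100                      # daPa (assumed reading)
    normal_compliance = 0.3 <= compliance_mmho <= 1.4                       # mmho
    normal_width = 50 <= tymp_width_dapa <= 110                             # daPa
    normal_reflexes = all(85 <= r <= 100 for r in reflex_thresholds_dbspl)  # dBSPL
    return all([normal_pta, normal_pressure, normal_compliance,
                normal_width, normal_reflexes])


# Example: thresholds of 5-10 dBHL across 250-8000 Hz, pressure -20 daPa,
# compliance 0.8 mmho, width 80 daPa, reflexes at 90 and 95 dBSPL.
print(passes_screening([5, 10, 5, 10, 5, 10], -20, 0.8, 80, [90, 95]))  # True
```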

The 105 participants with normal results on the above tests were then evaluated with an air-conducted VEMP test using insert earphones. For VEMP testing, the participant sat with the neck fully rotated toward the non-test ear. A manometer was used to monitor the level of sternocleidomastoid muscle contraction throughout the evaluation. Stimulus and recording parameters included a 500 Hz tone-burst stimulus with a 2-1-2 cycle envelope, a stimulation rate of 4.7/s, a filter of 20 to 2000 Hz, and an intensity of 90 dBHL. The non-inverting electrode was placed on the sternocleidomastoid muscle, the inverting electrode on the sternum, and the ground electrode in the middle of the forehead. The analysis was based on the latencies of the p13 and n23 waves [12, 13]. Of the 105 participants evaluated with the VEMP test, 4 did not have normal results and were excluded from the study; the remaining 101 participants entered the next practical stage of the research. Finally, participants with normal results in all assessments were evaluated according to the established standards and criteria of the word-in-noise perception and word-in-noise discrimination tests [18]. The tests were administered at two intensity levels, the MCL and the saccular acoustic sensitivity level (90 dBHL). The uncomfortable loudness level was determined in both ears before the speech tests, and none of the participants reported the sound as annoying at an intensity of 90 dBHL. The word-in-noise perception test uses 6 lists of 25 words (Appendix 1), which consist of homotonic monosyllabic words [18]. Words were presented 200 ms apart to minimize working memory demands, and participants were asked to respond quickly. The intertrial interval was 1000 ms; if no response was made, the next trial was automatically initiated 2000 ms after the last stimulus was played. All stimuli were presented in a female voice with the accompanying carrier phrase ("say the word…") through high-quality headphones using presentation software, while participants were seated comfortably in a soundproof room. The word-in-noise discrimination test was administered in exactly the same way, except that non-homotonic monosyllabic words were used. Thus, the words for the word-in-noise perception test followed a homotonic monosyllabic pattern, and at each evaluation 25 words from one of the 6 columns were presented from the top to the bottom of that column (for example Ʃen, Sen, en, ɡel, Del, Hel, …), whereas the words for the word-in-noise discrimination test followed a non-homotonic monosyllabic pattern, i.e., the words arranged in 25 rows of 6 columns (Appendix 1) were presented from left to right (for example Ʃol, Ʃen, Sb, Kɒːr, Sær, ɡuːʃ, …).
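
The trial timing described above can be sketched as a simple presentation loop. This is an assumed illustration, not the authors' presentation software; the play_word and wait_for_response callbacks are hypothetical placeholders for the audio playback and response-collection routines.

```python
# Minimal sketch (assumption, not the authors' presentation software) of the
# trial timing described above: a 1000 ms intertrial interval after a response,
# or an automatic advance 2000 ms after stimulus offset if no response is made.

import time

INTERTRIAL_INTERVAL_S = 1.0   # 1000 ms between trials after a response
NO_RESPONSE_TIMEOUT_S = 2.0   # 2000 ms before auto-advancing without a response


def run_list(words, play_word, wait_for_response):
    """Present one 25-word list and return the percent-correct score.

    play_word(word)       -> plays "say the word <word>" in speech noise (hypothetical callback)
    wait_for_response(t)  -> returns the response string, or None after t seconds (hypothetical callback)
    """
    responses = []
    for word in words:
        play_word(word)
        response = wait_for_response(NO_RESPONSE_TIMEOUT_S)
        responses.append(response)
        if response is not None:
            time.sleep(INTERTRIAL_INTERVAL_S)  # normal intertrial interval
        # if no response, the 2000 ms timeout itself delays the next trial
    correct = sum(r == w for r, w in zip(responses, words))
    return 100.0 * correct / len(words)
```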

An SNR of +5 dB was selected to give the test a moderate level of difficulty. At SNR = 0, the signal and noise intensities are equal and the participant's task is more difficult, whereas at SNR = +10 the task is easier because the speech signal is 10 dB more intense than the noise. All audiological tests were performed for each person on a single day. Between assessments, participants were given a short break, and testing resumed as soon as they declared their readiness.
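
Because the SNR was fixed at +5 dB, the noise level always tracked the speech level; a minimal worked example (illustrative presentation levels, not study data) is shown below.

```python
# Worked example (not from the article's software): at SNR = +5 dB the noise
# level is always 5 dB below the speech level, so raising the speech level
# raises the noise level by the same amount.

def noise_level(speech_level_db, snr_db=5):
    """Noise presentation level for a fixed signal-to-noise ratio."""
    return speech_level_db - snr_db


print(noise_level(65))  # illustrative MCL-like speech at 65 dBHL -> noise at 60 dBHL
print(noise_level(90))  # loud speech at 90 dBHL -> noise at 85 dBHL
```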

All analyses were performed with the statistical software SPSS 17. The Kolmogorov-Smirnov test was used to evaluate the normality of the data distribution. Values are expressed as mean ± standard deviation and as percentages. The Mann-Whitney U test was used for the multiple comparisons. The significance level was set at p < 0.05.
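
A minimal sketch of this statistical workflow in Python is shown below; the authors used SPSS 17, and the score arrays here are illustrative stand-ins, not the study data.

```python
# Minimal sketch (the authors used SPSS 17, not Python) of the statistical
# workflow: a Kolmogorov-Smirnov normality check followed by a Mann-Whitney U
# comparison of scores at the MCL vs. the saccular acoustic sensitivity level.
# The score arrays below are illustrative, not the study data.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
scores_mcl = rng.integers(60, 90, size=101).astype(float)       # word-in-noise scores at MCL (illustrative)
scores_saccular = rng.integers(65, 95, size=101).astype(float)  # scores at 90 dBHL (illustrative)

# Normality check (the study data were not normally distributed)
print(stats.kstest(scores_mcl, "norm", args=(scores_mcl.mean(), scores_mcl.std())))

# Non-parametric comparison, significance level p < 0.05
u_stat, p_value = stats.mannwhitneyu(scores_mcl, scores_saccular, alternative="two-sided")
print(f"U = {u_stat:.2f}, p = {p_value:.3f}")
```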

Results

Our data were not normally distributed, so we used the Mann-Whitney U test. The findings were as follows:

Word-in-noise perception test

There was a significant difference between the scores of the word-in-noise perception test at the MCL and the saccular acoustic sensitivity level (U = 3971.50, Z = −2.10, p = 0.04). The mean scores of the word-in-noise perception test at the saccular acoustic sensitivity level were higher than the mean scores at the MCL (Table 1).

Table 1 Mean and standard deviations of word-in-noise perception and word-in-noise discrimination scores in most comfortable level (MCL) and saccular acoustic sensitivity level (n = 101)

Word-in-noise discrimination test

There was a significant difference between the scores of the word-in-noise discrimination test at the MCL compared to the saccular acoustic sensitivity level (U = 399.89, Z = −2.49, p = 0.04). The mean scores of the word-in-noise discrimination test at the saccular acoustic sensitivity level were higher than at the MCL (Table 1).

Comparing the mean scores of two tests at MCL

There was a significant difference between word-in-noise perception and word-in-noise discrimination scores at the MCL (U = 3484.00, Z = −3.72, p = 0.00) (Table 2). Indeed, word-in-noise discrimination scores were higher than word-in-noise perception scores.

Table 2 Comparing the mean and standard deviations of word-in-noise perception and word-in-noise discrimination scores in the most comfortable level (MCL) and saccular acoustic sensitivity level (n = 101)

Comparing the mean scores of two tests at the saccular acoustic sensitivity level

There was a significant difference between word-in-noise perception and word-in-noise discrimination scores at the saccular acoustic sensitivity level (U = 705.50, Z = −3.78, p = 0.00) (Table 2). At the saccular acoustic sensitivity level, word-in-noise discrimination scores were higher than word-in-noise perception scores.

Word-in-noise perception test according to examined ear

There was a significant difference between the scores of the word-in-noise perception test at the MCL and the saccular acoustic sensitivity level for the right ear (U = 1865.50, Z = −0.35, p = 0.04) (Table 3) and also for the left ear (U = 1795.98, Z = −0.47, p = 0.04) (Table 4). Thus, for each ear, the mean scores of the word-in-noise perception test at the saccular acoustic sensitivity level were higher than the mean scores at the MCL.

Table 3 Mean and standard deviations of word-in-noise perception scores for the right ear in the most comfortable level (MCL) and saccular acoustic sensitivity level (men = 55, women = 45)
Table 4 Mean and standard deviations of word-in-noise perception scores for the left ear in the most comfortable level (MCL) and saccular acoustic sensitivity level (men = 55, women = 45)

Word-in-noise perception test according to gender

For the right ear, there was no significant difference between the scores of the word-in-noise perception test according to gender at the MCL (U = 97.50, Z = −0.03, p = 0.86) or at the saccular acoustic sensitivity level (U = 105.16, Z = −0.29, p = 0.73) (Table 3), and similarly for the left ear (U = 187.85, Z = −0.16, p = 0.59; U = 239.26, Z = −0.09, p = 0.63) (Table 4). Thus, the mean scores of the word-in-noise perception test at the MCL and the saccular acoustic sensitivity level were similar for men and women.

Word-in-noise discrimination test according to examined ear

For the right ear, there was no significant difference between the scores of the word-in-noise discrimination test at the MCL and the saccular acoustic sensitivity level (U = 780.50, Z = −0.35, p = 0.64) (Table 5), and similarly for the left ear (U = 776.98, Z = −0.47, p = 0.75) (Table 6). Thus, the ear side was not a significant variable in the findings of this study.

Table 5 Mean and standard deviations of word-in-noise discrimination scores for the right ear in the most comfortable level (MCL) and saccular acoustic sensitivity level (men = 55, women = 45)
Table 6 Mean and standard deviations of word-in-noise discrimination scores for the left ear in the most comfortable level (MCL) and saccular acoustic sensitivity level (men = 55, women = 45)

Word-in-noise discrimination test according to gender

Gender was not a significant factor in the mean scores of the word-in-noise discrimination test at the MCL for the right ear (U = 930.88, Z = −2.10, p = 0.52) or at the saccular acoustic sensitivity level (U = 1011.50, Z = 0.10, p = 0.33) (Table 5), and similarly for the left ear (U = 678.43, Z = −1.35, p = 0.48; U = 4011.50, Z = 0.05, p = 0.59) (Table 6).

Discussion

In this research, each participant served as his or her own control: each person was evaluated once at the MCL and once at 90 dBHL. Our aim was to compare the scores of the two tests for each individual in a population with normal peripheral hearing thresholds. We thus reached a general conclusion: even if central auditory processing disorder (CAPD) is possible in such a population, the scores of the word-in-noise discrimination and word-in-noise perception tests at 90 dBHL are still higher than at the MCL, even though the SNR was kept constant at the two intensity levels. CAPD was possible in some of our participants, but the probability that all of them had CAPD was very low; and even if we assume that they all had CAPD, we compared each person with him/herself and calculated the mean. The results showed that at an intensity of 90 dBHL, the scores of the word-in-noise discrimination and word-in-noise perception tests improved. It therefore seems that a factor other than the central auditory system contributed to these improvements. Since the saccule is activated at an intensity of 90 dBHL but not at the MCL, we attribute the improvement in the scores of these two tests to the possible participation of saccular acoustic sensitivity. This research was conducted at an SNR of +5 dB; in other words, the intensity of speech was 5 dB higher than the intensity of noise, and the SNR was +5 dB at both intensity levels, the MCL (the usual conversational intensity) and 90 dBHL (the intensity of loud conversation). Whenever the intensity of the speech signal was increased, the intensity of the noise was increased by the same amount, so the audible speech signals were not simply amplified at 90 dBHL, because the masking noise was still present.

Since all the participants of this study had detectable saccular acoustic sensitivity, it can be concluded that the improvement of their speech perception and discrimination scores was due to the participation of saccular signals in speech processing. Based on the results of this research, we report that in noisy situations where the speaker's voice is louder than usual, the listeners' saccular acoustic sensitivity is activated, which improves the recognition and perception of speech in noise. Speech processing for non-tonal languages includes four basic stages: hearing, discrimination, interpretation (creating an auditory object in the auditory brain), and finally speech perception. In the discrimination stage, one recognizes that each word has a unique sound; at this stage, the meaning of the words is not acquired and processing involves only the phonological forms of the words. The interpretation stage is specific to learning new words, and a person must use all of his or her senses to learn them. The perception stage does not require mental effort or attention to understand and produce words: as each word is heard, its meanings are recalled in one's mind, which involves the highest level of speech processing [18].

It was also observed that gender had no effect on speech recognition and perception scores for stimuli presented at the two intensity levels, loud and comfortable. In other words, these findings suggest that sex hormones do not affect the functioning of the saccule and its nerve fibers, and that neither the right nor the left ear is superior to the other.

The results of our research are consistent with the findings of previous studies. The role of saccular acoustic sensitivity in the discrimination of phonemes in noise as well as the discrimination of words in noise has been reported, and researchers observed that individuals with a detectable VEMP scored higher on these two tests than those without a detectable VEMP [9, 11].

Studies have also shown that the vestibular system sends signals to the neural centers of the brainstem [9, 11] and the cerebral cortex (occipital, parietal, temporal, insular, cingulate, and premotor regions) [12,13,14,15,16]. Thus, the saccular nerve fibers extend to several brain areas that participate in speech perception, and saccular acoustic sensitivity activates those areas [9]. It is evident that activity of the brain regions involved in speech understanding will improve speech perception performance. This is consistent with the result obtained in this research: the participants' scores on the word-in-noise perception test were higher than usual for high-intensity stimuli.

The left insular cortex processes fundamental-frequency sounds containing semantic information, while the right insular cortex processes fundamental-frequency sounds containing phonetic information [11]. When speech understanding in noise becomes more difficult because of a reduced SNR, the parietal cortex is activated to improve performance [9]. In fact, human hearing is multimodal, with multiple sensory inputs [19].

There are also reports that the saccular afferent fibers extend to the nuclei of the brainstem and enhance the auditory brainstem responses. Based on these findings, individuals with a detectable VEMP have shorter wave V latencies of the auditory brainstem response than those without [11]. Some researchers have also reported that saccular sound sensitivity, as confirmed by the VEMP test, increases the synchronization of the auditory nerve fibers of the brainstem, producing a larger wave V amplitude of the auditory brainstem response [9]. Thus, at high intensities, which fall within the range of acoustic stimulation of the saccule, the synchronization of neural activity with the first and second vowel formants increases [9, 11], wider parts of the auditory brain are activated, and auditory objects are created in less time and with larger spatial dimensions [19]. Our findings are consistent with these reports: our participants' scores on the two tests were higher for loud stimuli than for stimuli presented at a comfortable intensity.

Researchers have also confirmed the role of the sound sensitivity of the vestibular system in voice feedback and the correction of speech production patterns in deaf people. In fact, since hearing one's own voice occurs through the bone-conduction pathway and all three vestibular structures (saccule, utricle, and semicircular canals) respond to such stimulation, they believe that the vestibule plays an auxiliary role to the cochlea in hearing low-frequency sounds [20]. In addition, it participates in the neural circuit of speech production/perception in deaf people. They observed that in deaf individuals with a detectable VEMP response, the amplitude of this response increased and speech production improved after auditory training sessions. They reported that the vestibular system plays a phonetic role in the regulation of the human voice and underlies the development of phonetic and production skills [9, 21]. Notably, in the deaf individuals they studied, auditory brainstem waves were not detected either before or after the auditory training sessions. Since speech perception and production form a two-way cycle, Wernicke's area, the main center of speech perception, also participates in speech production, and Broca's area, whose main function is speech production, is also active in speech perception [17]. They concluded that the sound sensitivity of the vestibular system is of special importance in deaf people [11, 21].

The findings of our research, which was conducted on people with normal hearing, were consistent with their findings: our participants also obtained higher scores on the word-in-noise perception test for loud stimuli compared with a comfortable, usual speech intensity. The most important step in speech processing is perception [22]. Speech processing has phonetic and lexical-semantic aspects. Phonological processing (hearing and discrimination) is based on discovering the fundamental frequency (f0) of vowels and their harmonics [17, 22], whereas lexical-semantic processing (interpretation and perception) is based on the discovery of consonants [18, 22]. It has also been reported that people with functioning saccular sound sensitivity process native and foreign languages, and speech in noisy situations, better than people with impaired saccular function [9, 11]. Researchers have recommended that saccular acoustic sensitivity be estimated by VEMP testing when evaluating hearing in people who, despite having a normal audiogram, complain of the incomprehensibility of speech in background noise and may have hidden hearing loss [9]. It is hoped that this research can help to open a new field of research in vestibular hearing medicine.

Conclusions

Word-in-noise perception and word-in-noise discrimination scores are higher at the saccular acoustic sensitivity level than at the MCL, suggesting that the vestibular system participates in the perception of loud speech. Also, in people with normal hearing thresholds, word-in-noise perception scores are lower than word-in-noise discrimination scores at both the loud and the usual conversational intensity levels.

Availability of data and materials

The datasets generated and/or analyzed during the current study are not publicly available but are available from the corresponding author on reasonable request.

Abbreviations

VEMP: Vestibular evoked myogenic potentials

MCL: Most comfortable level

SNR: Signal-to-noise ratio

References

  1. Murofushi T, Kaga K (2009) Sound sensitivity of the vestibular end-organs and sound-evoked vestibulocollic reflexes in mammals. In: Murofushi T, Kaga K (eds) Issues in vestibular evoked myogenic potential. Springer, Japan, pp 20–25

  2. Sheykholeslami K, Schmerber S, Kermany MH (2004) Vestibular evoked myogenic potentials in three patients with large vestibular aqueduct. Hear Res. 190:161–16. https://doi.org/10.1016/S0378-5955(04)00018-8

  3. Sheykholeslami K, Habiby M, Kermany MH, Kaga K (2001) Frequency sensitivity range of the saccule to bone-conducted stimuli measured by vestibular evoked myogenic potentials. Hear Res 160(1–2):58–62. https://doi.org/10.1016/s0378-5955(01)00333-1

  4. Sheykholeslami K, Kaga K (2002) The otolithic organ as a receptor of vestibular hearing revealed by vestibular-evoked myogenic potentials in patients with inner ear anomalie. Hear Res 165:62–67. https://doi.org/10.1016/s0378-5955(02)00278-2

  5. Todd NPM (2001) Evidence for a behavioral significance of saccular acoustic sensitivity in humans. Acoust Soc Am 110(1):380–390. https://doi.org/10.1121/1.1373662

  6. Todd NPM, Frederick WCJ, Banks JR (2000) Saccular origin of frequency tuning in myogenic vestibular evoked potentials?: Implications for human responses to loud sounds. Hear Res 152(1–2)141:173–4. https://doi.org/10.1016/s0378-5955(99)00222-1

  7. Lenhardt ML (2006) Saccular hearing; turtle model for a human prosthesis. Acoust Soc Am 119(5):3433. https://doi.org/10.1121/1.4786900

  8. Welgampola MS, Rosengren SM, Halmagyi GM, Colebatch JG (2003) Vestibular activation by bone conducted sound. Neurol Neurosurg Psychiatry 74:771–778. https://doi.org/10.1136/jnnp.74.6.771

  9. Emami SF (2023) Sensitivity of vestibular system to sounds. Indian J Otol 29(3):141–145. https://doi.org/10.4103/indianjotol.indianjotol_19_23

  10. Fröhlich L, Wilke M, Plontke SK, Rahne T (2021) Bone conducted vibration is an effective stimulus for otolith testing in cochlear implant patients. Vestib Res 32(4):355–365. https://doi.org/10.3233/VES-210028

  11. Emami SF (2023) Central representation of cervical vestibular evoked myogenic potentials. Indian J Otolaryngol Head Neck Surg. https://doi.org/10.1007/s12070-023-03829-8

  12. Todd NPM, Paillard AC, Kluk K, Whittle E, Colebatch JG (2013) Vestibular receptors contribute to cortical auditory evoked potentials. Hear Res 309(100):63–74. https://doi.org/10.1016/j.heares.2013.11.008

  13. Todd NPM, Paillard AC, Kluk K, Whittle E, Colebatch JG (2014) Source analysis of short and long latency vestibular-evoked potentials (VsEPs) produced by left vs. right ear air-conducted 500 Hz tone pips. Hear Res. 312:91–102. https://doi.org/10.1016/j.heares.2014.03.006

  14. Miyamoto T, Fukushima K, Takada T, Waele CD, Vidal PP (2007) Saccular stimulation of the human cortex: a functional magnetic resonance imaging study. Neurosci Lett 423:68–72. https://doi.org/10.1016/j.neulet.2007.06.036

  15. Schlindwein P, Mueller M, Bauermann T, Brandt T, Stoeter P, Dieterich M (2008) Cortical representation of saccular vestibular stimulation: VEMPs in fMRI. Neuroimage 39(1):19–31. https://doi.org/10.1016/j.neuroimage.2007.08.016

  16. McNerney KM, Lockwood AH, Coad ML, Wack DS, Burkard RF (2011) Use of 64-channel electroencephalography to study neural otolith-evoked responses. Am Acad Audiol 22(3):143–155. https://doi.org/10.3766/jaaa.22.3.3

  17. Emami SF, Shariatpanahi E (2023) Central representation of speech-in-noise perception: a narrative review. Aud Vestib Res. Article in press. https://doi.org/10.18502/avr.v32i3.12932

  18. Emami SF (2024) The use of homotonic monosyllabic words in the Persian language for the word-in-noise perception test. Aud Vestib Res 33(1):28–33. https://doi.org/10.18502/avr.v33i1.14271

  19. Oh SY, Boegle R, Ertl M, Stephan T, Dieterich M (2018) Multisensory vestibular, vestibular-auditory, and auditory network effects revealed by parametric sound pressure stimulation. Neuroimage 176:354–363. https://doi.org/10.1016/j.neuroimage.2018.04.057

  20. Iannotti GR et al (2022) EEG spatiotemporal patterns underlying self-other voice discrimination. Cereb Cortex 32:1978–1992. https://doi.org/10.1093/cercor/bhab329

  21. Trivelli M, Potena M, Frari V, Petitti T, Deidda V, Salvinelli F (2013) Compensatory role of saccule in deaf children and adults: novel hypotheses. Med Hypotheses 80(1):43–46. https://doi.org/10.1016/j.mehy.2012.10.006

  22. Emami SF, Shariatpanahi E, Gohari N, Mehrabifard M (2024) Word-in-noise perception test in children. Egypt J Otolaryngol 40:64. https://doi.org/10.1186/s43163-024-00625-1

Acknowledgements

The authors wish to thank the esteemed participants who cooperated in this research.

Funding

The financial sponsor of this research was Hamadan University of Medical Sciences (registered number: 14020115326).

Author information

Contributions

SFE was the designer of the research project and the author of the article. EF and GN were the scientific advisors of the research and MM also collected the data. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Seyede Faranak Emami.

Ethics declarations

Ethics approval and consent to participate

The study was approved by the research ethics committee of Hamadan University of Medical Sciences (Code: IR.UMSHA.REC.1402.011).

Informed written consent to participate in the study was provided by all participants (or their parents or legal guardians in the case of children under 16).

Consent for publication

Not applicable. Available data were extracted based on written consent.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.

About this article

Cite this article

Emami, S.F., Gohari, N., Eghbalian, F. et al. Study of word-in-noise perception scores at saccular acoustic sensitivity level: randomized clinical trial. Egypt J Otolaryngol 40, 96 (2024). https://doi.org/10.1186/s43163-024-00684-4


Keywords