
Evaluation of perception of music, environmental sounds, and speech in right versus left cerebral stroke patients

Abstract

Background

Stroke is a leading cause of disability, and about a third of stroke survivors have aphasia. Stroke may also affect all levels of the auditory pathway and lead to deficits in hearing reception and/or perception for different sound types.

Aim of the work

The aim of the work is to evaluate the perception of music, environmental sounds, and speech in post-stroke patients in order to determine if there is a difference in the basic auditory perceptual abilities in right versus left cerebral stroke.

Subjects and methods

A group of 30 healthy adults and a group of 30 right and left cerebral stroke patients, with an age range from 35 to 75 years, were included. The participants were evaluated using a 10-item questionnaire of auditory abilities designed for this study and a test of auditory perceptual/recognition skills. The questionnaire addressed some of the basic auditory skills of attention, discrimination, and recognition of environmental sounds and the human voice. The test consisted of non-verbal and verbal domains. The non-verbal domain involved music recognition, discrimination, perception, and performance tasks and an environmental sound recognition task, while the verbal domain included recognition of sounds related to speech stimuli and a syllable/word discrimination task.

Results

Significantly better scores were found in right versus left cerebral stroke patients on the questionnaire items of attention to sound sources either near or far (p < 0.001) and discrimination of the prosodic intonation of statements versus questions (p < 0.001). There was no significant difference between the right and left cerebral stroke groups regarding the scores of the music perception and music performance tasks of the non-verbal domain of the auditory perceptual/recognition skills assessment. Significantly better scores were found in the right cerebral stroke patients than in the left cerebral stroke patients regarding the scores of both the non-verbal and verbal domains of the auditory perceptual/recognition skills assessment.

Conclusion

Stroke of both right and left cerebral hemispheres has a specific negative effect on some aspects of perception of music, environmental sounds, and speech that need to be addressed in both evaluation protocols and rehabilitation programs.

Background

Stroke is a leading cause of death and disability [1]. Stroke mortality rates are decreasing due to improved medical treatment of the complications caused by acute stroke; however, the number of individuals living with the residual effects of stroke is rising [2]. Approximately two thirds of stroke survivors have a communication disorder and about a third of stroke survivors have aphasia which significantly affects functional recovery and return to work [3].

Post-stroke aphasia is an acquired language impairment that affects speaking, listening, reading, and writing [4]. Impaired auditory processing after stroke could contribute to the linguistic difficulties experienced by people with aphasia [5, 6]. The majority of stroke survivors suffer from some type of hearing or auditory processing (AP) impairment [7].

Stroke may affect all levels of the auditory pathway and lead to hearing reception and/or perception deficits that may manifest with a variety of symptoms and with clinical presentations that start acutely before, during, or shortly after stroke [8].

Auditory cognitive deficits after stroke may affect language and/or perception of music and environmental sounds [9]. Aphasia is more frequent after lesions in the left hemisphere, whereas difficulty with music perception and processing can appear after lesions in either the left or right hemispheres [10].

Diagnosing aphasia typically involves tests of language comprehension and production, whereas diagnosing music processing disorders relies on testing the perception of various musical dimensions (e.g., pitch, rhythm) or musical emotions. To allow for a better understanding of auditory deficits after brain damage, music and language disorders need to be assessed together [11].

To our knowledge, there is limited research in Egypt addressing and differentiating the auditory abilities of right versus left cerebral stroke patients. This motivated the present investigation of basic auditory abilities at the level of perception of music, environmental sounds, and speech in this population. It will help draw the disability profile of both cerebral stroke groups, allow early identification of cases to be referred for thorough instrumental evaluation, and then address their deficits in suitable rehabilitation protocols.

The aim of this study is to evaluate the perception of music, environmental sounds, and speech in post-cerebral stroke patients in order to determine if there is a difference in the basic auditory perceptual abilities in right versus left cerebral stroke.

Methods

Study population

The patients were recruited from those seeking medical advice at the Phoniatric outpatient clinic in Kasr Al Ainy. The normal/control group was recruited from relatives of the patients. The patients were selected based on the following inclusion and exclusion criteria. The inclusion criterion was a duration of more than 12 weeks post-cerebrovascular insult, to ensure complete neurological stability. The exclusion criteria were the presence of other central neurological lesions or dementia before the current stroke; addiction to alcohol, drugs, or medication; insufficient physical fitness or the presence of physical weakness or disability in both upper limbs; psychiatric disturbances interfering with neuropsychological testing; aphasic/dysphasic patients with hearing impairment; severe aphasia; and patients in the first 3 months after the cerebrovascular insult. The study was approved by the ethical committee of Otolaryngology, Cairo University, and the Faculty of Medicine, Cairo University, under reference number N-67-2023.

Methodology in detail

All aphasic/dysphasic patients were subjected to:

  • History taking including personal data of name, age, sex, marital status, handedness, literacy level, and occupation.

  • History of present illness including the main complaint with its onset, course, and duration.

  • History of stroke including numbers of strokes, disabilities resulting, and transient ischemic attacks.

  • History of chronic medical illness such as hypertension and diabetes.

The subjects under study were asked to fill in the auditory perceptual questionnaire. It is a self-reported questionnaire designed in the current study in light of the difficulties and concerns faced by patients in common clinical practice and in the literature. The research team checked the questionnaire's face and content validity. Before any data collection, the questionnaire was checked by four experts in the field of Phoniatrics. To check its comprehensibility and relevance, it was also filled in by six aphasic patients or their caregivers as a pilot study. A few modifications were made based on the received feedback and recommendations.

The questionnaire was then filled in by the groups under study: the cerebral stroke patients and the control group. It included ten items related to the following abilities: to attend to loud and low sounds, attend to sound sources either far from or near to the subject, attend to the sounds of TV and radio, discriminate speakers' voices as male or female, discriminate the sound of the door from the phone, recognize common rhymes or verses of holy books when the TV or radio is on, discriminate the prosodic intonation of a statement versus a question, discriminate family members by their voices, recognize the emotional state of speakers through their voices, and recognize the voice of a speaker over the phone. Each statement was scored 1 for the presence of the ability and 0 for the presence of difficulty, giving a total score out of 10.
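
For illustration only, the 0/1 scoring of the questionnaire can be sketched as follows; the item labels and the helper function below are hypothetical and are not part of the published instrument.

```python
# Minimal sketch of the questionnaire scoring described above (illustrative only).
# Each of the 10 items is recorded as 1 (ability present) or 0 (difficulty present);
# the total is simply the sum of the item scores, out of 10.

ITEMS = [
    "attend_loud_low_sounds",
    "attend_far_near_sources",
    "attend_tv_radio",
    "discriminate_male_female_voice",
    "discriminate_door_phone",
    "recognize_rhymes_verses",
    "discriminate_statement_question",
    "discriminate_family_voices",
    "recognize_emotional_state",
    "recognize_voice_on_phone",
]

def questionnaire_total(responses: dict) -> int:
    """Sum the 0/1 responses over the 10 items (maximum score = 10)."""
    return sum(int(responses.get(item, 0)) for item in ITEMS)

example = {item: 1 for item in ITEMS}
example["discriminate_statement_question"] = 0  # one reported difficulty
print(questionnaire_total(example))  # -> 9
```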

For testing the perception of music, environmental sounds, and speech, the cerebral stroke patients (right and left side) were subjected to an assessment protocol of auditory perceptual/recognition skills designed by Abusenna et al. [12] in light of previous studies by Vignolo [13] and Maffei et al. [14]. The test was divided into non-verbal and verbal domains.

Non-verbal domain

This domain aims at evaluating the recognition and perception of sets of non-speech stimuli: the music and the environmental sounds. The stimuli were presented at a comfortable volume in a quiet sound-treated room.

Perception of music tasks

Melody recognition

The subjects were instructed to listen carefully to five familiar tunes played without lyrics (e.g., happy birthday, the national anthem of Egypt) and then to select, by pointing, the picture representing the aurally presented tune or song melody from a series of three pictures placed in front of him/her. The number of items in this task was five.

Two examples were played for each subject before the application of the melody recognition test to ensure comprehension of the task requirement.

Scoring:

The subject was scored on a 3-point scale: 2 points were given for each correct answer, 1 point if the subject showed any delay or hesitation while choosing the answer, and no points for an incorrect answer. The total score of this task was 10.
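
Because the same 2-1-0 rule is reused by every task of the test, a small hypothetical helper may illustrate it; the sketch assumes that the 1-point case refers to a correct answer given with delay or hesitation, which is how the description above reads.

```python
# Hypothetical helper for the 3-point scoring rule used throughout the test:
# 2 points for a prompt correct answer, 1 point for a correct answer given with
# delay or hesitation, and 0 points for an incorrect answer.

def score_item(correct: bool, hesitated: bool) -> int:
    if not correct:
        return 0
    return 1 if hesitated else 2

def score_task(item_results) -> int:
    """item_results: iterable of (correct, hesitated) pairs, one per item."""
    return sum(score_item(c, h) for c, h in item_results)

# Melody recognition has 5 items, so the maximum task score is 5 * 2 = 10.
melody_recognition = [(True, False), (True, True), (False, False), (True, False), (True, False)]
print(score_task(melody_recognition))  # -> 7 out of 10
```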

Melody discrimination

The subjects were instructed to listen carefully, each time, to a series of two versions of the same melody of a familiar tune without lyrics. One was considered "the target melody" while the second was considered the "comparison melody." The comparison melody may or may not differ from the target one in either pitch or rhythm. The subject had to judge whether the two melodies were the same or different. Six series were administered: two were identical in pitch and rhythm, two differed in pitch but were identical in rhythm, and the last two differed in rhythm but were identical in pitch.

Those subjects who were unable to verbally respond were instructed to point to one of the two cards presented in front of them. One card with the shape of two identical circles represented “the same” and the other card with the shape of a star and a circle represented “different.” The subjects were given two trials before the administration of the task to ensure their comprehension of the task requirement and to make sure they understood the idea of “same or different”.

Scoring:

The subject was scored on a 3-point scale: 2 points were given for each correct answer, 1 point if the subject showed any delay or hesitation while choosing the answer, and no points for an incorrect answer. The total score for this task was 12.

Melody performance task

The subjects were instructed to listen carefully to recorded sequences of simple rhythmic patterns produced on a percussion instrument (drum) and then asked to imitate the sequences on the drum using a stick held in the non-paretic hand. The simplest rhythmic pattern consisted of 3 taps, and the difficulty was increased up to a 7-tap rhythmic pattern. The subjects were given two trials before the administration of the task to ensure comprehension. The number of items in this task was 5.

Scoring:

The subject was scored on a 3-point scale: 2 points were given for each correct answer, 1 point if the subject showed any delay or hesitation while choosing the answer, and no points for an incorrect answer. The total score for this task was 10.

Melody perception task

The subjects were instructed to listen carefully to a 1-min and 30-s recording that included the sounds of five different well-known musical instruments, with a 5-s interval between the sounds of successive instruments. The subject was asked to remember the order in which the different timbres were presented. As visual support, a card representing a picture of each instrument was placed in random order on the desk in front of the subject. After listening to the audio file, the subjects were asked to place the cards in the opposite order to that in which the instruments were presented.

The sounds used were of the following musical instruments: the piano, the xylophone, the drum, the flute, and the violin. Before application of the test, the cards were displayed individually in front of the subjects and the recorded sound of each instrument was played to make sure that the subjects knew the instrument and its related sound. Two trials were administered before the application of the melody perception task to check the subjects' comprehension.

Scoring:

The subject was scored on a 3-point scale: 2 points were given for each correct answer, 1 point if the subject showed any delay or hesitation while choosing the answer, and no points for an incorrect answer.

Recognition of environmental sounds

Environmental sounds were divided into 4 sections: animal sounds, human sounds, vehicle and machinery sounds, and surrounding sounds.

After hearing a sound (e.g., a crying baby), the subject was shown an arranged set of pictures representing the actual source of the sound, an acoustic foil that would produce a similar sound but is semantically different (such as a cat), a semantic foil that would produce a different sound but belongs to the same semantic category (e.g., a baby laughing), and an unrelated foil (such as a car). The subject was requested to point to the picture showing the source of the sound heard. The participant was asked to recognize 3 sounds from each section, for a total of 12 items. Two examples were given to each participant before the application of the task to ensure comprehension of the task requirement.

Scoring:

The subject was scored on a 3-point scale: 2 points were given for each correct answer, 1 point if the subject showed delay or hesitation during the reply, and no points for an incorrect answer. The total score was 24.

For each subject, the total score of the non-verbal domain was calculated by summing the scores of the previous 5 tasks of music and environmental sound recognition, giving a score out of 56.

Verbal domain (speech stimuli)

The subjects were evaluated by testing their ability to recognize speech stimuli through the following two tasks.

Recognition of sound related to speech stimuli

Ten short phrases, such as "cow mooing," were administered verbally, and the subjects were asked to select the target sound out of three sounds played consecutively after each short phrase.

The three administered sounds included a target one, a related distractor, and an unrelated distractor. Two trials were given before the application of the task.

Scoring:

The subject was scored on a 3-point scale: 2 points were given for each correct answer, 1 point if the subject showed delay or hesitation during the reply, and no points for an incorrect answer. The total score was 20.

Syllable/word discrimination

The subjects were asked to listen carefully to a series of word pairs presented aurally. Each series consisted of two short words that were either identical or differed in only one phoneme. The subjects were then asked to judge whether the two words were the same or different.

Those subjects who were unable to speak were instructed to point to cards on which the words "same" and "different" were written beneath images (two identical circles for same, and one circle and one star for different). The total number of series was 17. The differing phonemes of the selected words varied in place or manner of articulation.

Two trials were administered before the application of the test to ensure comprehension of the task requirements.

Scoring:

The subject was scored on a 3-point scale: 2 points were given for each correct answer, 1 point if the subject showed delay or hesitation during the reply, and no points for an incorrect answer. The total score was 34. The total score of both tasks of the verbal domain was 54.

Finally, the total score of the test was calculated at the end of the assessment by summing the scores of the non-verbal and verbal domains. The subjects were given a score out of 120.
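
As an illustration of how the domain and overall totals are assembled, a short sketch with made-up subtask scores is given below; only the summation logic reflects the description above, and the variable names are ours.

```python
# Illustrative aggregation of the test score, assuming each subtask has already
# been scored with the 2/1/0 rule. The subtask scores below are made-up examples,
# not study data.

non_verbal = {
    "melody_recognition": 8,
    "melody_discrimination": 10,
    "melody_performance": 6,
    "melody_perception": 8,
    "environmental_sounds": 20,
}
verbal = {
    "speech_sound_recognition": 16,       # reported maximum: 20
    "syllable_word_discrimination": 28,   # reported maximum: 34
}

non_verbal_total = sum(non_verbal.values())
verbal_total = sum(verbal.values())                 # reported maximum: 54
overall_total = non_verbal_total + verbal_total     # reported maximum: 120

print(non_verbal_total, verbal_total, overall_total)
```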

Data management and statistical analysis

Data were fed to the computer and analyzed using IBM SPSS Statistics version 20.0 (IBM Corp., Armonk, NY). Qualitative data were described using numbers and percentages. The Shapiro–Wilk test was used to verify the normality of distribution. Quantitative data were described using range (minimum and maximum), mean, standard deviation, median, and interquartile range (IQR). The significance of the obtained results was judged at the 5% level. The chi-square test was used to compare categorical variables between groups, with Fisher's exact test or the Monte Carlo correction applied when more than 20% of the cells had an expected count less than 5. The Student t-test was used to compare normally distributed quantitative variables between two groups, and the Mann–Whitney test was used for non-normally distributed variables. One-way ANOVA (F-test) was used to compare normally distributed quantitative variables among more than two groups, with the Tukey post hoc test for pairwise comparisons; the Kruskal–Wallis test was used for non-normally distributed variables, with Dunn's multiple comparisons test for pairwise comparisons. The Pearson coefficient was used to correlate two normally distributed quantitative variables and the Spearman coefficient for non-normally distributed variables. Reliability was assessed using Cronbach's alpha. Receiver operating characteristic (ROC) curve analysis was used to denote the diagnostic performance of the test; an area of more than 50% indicates acceptable performance and an area close to 100% indicates the best performance. The ROC curve also allows comparison of performance between two tests.
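
The analysis itself was run in SPSS; purely to illustrate the test-selection logic described above (a Shapiro–Wilk normality check followed by a Student t-test or a Mann–Whitney test), a sketch in Python/SciPy with made-up scores might look as follows.

```python
# Sketch of the two-group comparison logic (not the authors' SPSS procedure):
# check normality with Shapiro-Wilk, then use a Student t-test if both groups
# look approximately normal, otherwise a Mann-Whitney U test.
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    _, p_a = stats.shapiro(a)
    _, p_b = stats.shapiro(b)
    if p_a > alpha and p_b > alpha:
        name, result = "Student t-test", stats.ttest_ind(a, b)
    else:
        name, result = "Mann-Whitney U", stats.mannwhitneyu(a, b, alternative="two-sided")
    return name, result.statistic, result.pvalue

# Made-up questionnaire totals for two groups (not study data).
right_stroke = [7, 8, 9, 6, 8, 9, 7, 8, 9, 8]
left_stroke = [5, 6, 4, 7, 5, 6, 5, 4, 6, 5]

name, statistic, p = compare_two_groups(right_stroke, left_stroke)
print(f"{name}: statistic = {statistic:.3f}, p-value = {p:.4f}")
```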

Results

Demographic data

In the normal (control) group and the right cerebral stroke group, males represented 73.3% and females 26.7%. In the left cerebral stroke group, males represented 66.7% and females 33.3% (p = 0.932). The mean ages of the normal (control), right cerebral stroke, and left cerebral stroke subjects were 55.53, 55.93, and 60.47 years, respectively (p = 0.260). The number of right cerebral stroke patients aged above 40 to 50 years was 4, while the number of left cerebral stroke patients in the same age range was 1. In the right cerebral stroke group, the mean duration post-stroke was 11.53 months, while in the left cerebral stroke group it was 13.8 months (p = 0.870). Regarding the site of the lesion, 6 patients of the right cerebral stroke group and 3 patients of the left cerebral stroke group had temporal lesions, and 5 patients of the right cerebral stroke group and 8 patients of the left cerebral stroke group had temporo-parietal lesions. Three patients of the right cerebral stroke group had a fronto-parietal lesion and only one patient of the right cerebral stroke group had a parietal lesion. Four patients of the left cerebral stroke group had a fronto-temporal lesion (p = 0.032).

Table 2 reveals a highly significant difference among the three groups regarding the total score of the auditory perceptual/recognition questionnaire, with highly significant differences between the control and right cerebral stroke groups (p < 0.001), the control and left cerebral stroke groups (p < 0.001), and the right and left cerebral stroke groups (p < 0.001).

Comparison among the three groups regarding the scores of the items of the auditory perceptual/recognition questionnaire showed a significant difference between the control and right cerebral stroke groups regarding discriminating speakers' voices as male or female (p = 0.032), discriminating the sound of the door from the phone (p = 0.005), recognizing rhymes or verses of holy books (p = 0.005), and recognizing the emotional state of speakers through their voices (p = 0.032). There was a significant difference between the control and left cerebral stroke groups regarding all items except attention to the sound of the radio or TV and discriminating family members by their voices (p = 0.211). There was a significant difference between the right and left cerebral stroke groups regarding attention to sound sources either near or far (p < 0.001) and discriminating the prosodic intonation of a statement versus a question (p < 0.001), with a higher percentage in the right cerebral stroke group (Table 3).

Table 4 shows a comparison among the three studied groups (the normal group, the right cerebral stroke group, and the left cerebral stroke group) regarding the subtest scores of the non-verbal items of the auditory perceptual/recognition skills assessment. There was a highly significant difference among the 3 groups, with higher scores in the control (normal) group, except for the score of music recognition, which was equal between the control (normal) and right cerebral stroke groups.

The pairwise comparisons between each pair of groups regarding the scores of the non-verbal auditory perceptual/recognition skills assessment revealed a highly significant difference among the 3 groups regarding the subtest scores of the non-verbal domain, except for the scores of music recognition and environmental sound recognition, which showed no significant difference between the control and right cerebral stroke groups (p = 1 and 0.084, respectively), and the scores of the music perception and music performance tasks, which showed no significant difference between the right and left cerebral stroke groups (p = 0.158 and 0.093, respectively).

Table 5 shows that there was a significant difference among the three studied groups regarding the subtest scores of the verbal domain of the auditory perceptual/recognition skills assessment and the total score (verbal and non-verbal) of the whole assessment, with higher scores in the control group (p < 0.001).

The pairwise comparisons between each pair of studied groups regarding the scores of the subtests (recognition of the sound related to a speech stimulus and syllable/word discrimination), the verbal domain of the auditory perceptual/recognition skills assessment, and the total score of the assessment show a significant difference among the three groups regarding all subitems of the verbal domain and the total score, with higher scores in the control group than in both stroke groups (p < 0.001) and higher scores in the right cerebral stroke group than in the left cerebral stroke group (p < 0.001, < 0.001, and 0.014, respectively).

In the right cerebral stroke group, the only positive correlation was found between the score of the questionnaire and the score of recognition of the sound related to speech stimuli (r = 0.900, p = 0.006), while in the left cerebral stroke group there were significant positive correlations between the total score of the questionnaire and the scores of music perception (r = 0.718, p = 0.045), the total score of the non-verbal auditory recognition/perceptual skills assessment (r = 0.746, p = 0.034), and the total score of the whole auditory perceptual/recognition skills assessment (r = 0.715, p = 0.046) (Table 6).

The questionnaire showed a cut-off level of ≤ 9 to discriminate the right cerebral stroke group from normal with 71.43% sensitivity and 100% specificity. It showed a cut-off level of ≤ 7 to discriminate the left cerebral stroke group from normal with 100% sensitivity and 100% specificity. It showed a cut-off level of ≤ 6 to discriminate the left cerebral stroke group from the right cerebral stroke group with 87.50% sensitivity and 100% specificity. It showed a cut-off level of ≤ 9 to discriminate cerebral stroke patients from the control group with 86.67% sensitivity and 100% specificity (Table 7).
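
To illustrate how such cut-off levels translate into sensitivity and specificity, the sketch below applies a ≤ 9 cut-off to made-up questionnaire totals; the score lists are hypothetical and do not reproduce the study data.

```python
# Illustrative computation of sensitivity and specificity at a questionnaire
# cut-off (scores at or below the cut-off are classified as "stroke").

def sensitivity_specificity(patient_scores, control_scores, cutoff):
    true_pos = sum(s <= cutoff for s in patient_scores)   # patients correctly flagged
    true_neg = sum(s > cutoff for s in control_scores)    # controls correctly passed
    return true_pos / len(patient_scores), true_neg / len(control_scores)

stroke_totals = [4, 5, 6, 7, 8, 9, 9, 10, 6, 5]           # hypothetical totals
control_totals = [10, 10, 9, 10, 10, 10, 10, 10, 10, 10]  # hypothetical totals

sens, spec = sensitivity_specificity(stroke_totals, control_totals, cutoff=9)
print(f"cut-off <= 9: sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```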

Discussion

There are three main types of sounds. Speech refers to sounds that are either produced by the human vocal tract or imitative of it and that carry linguistic content. Music is structured sound, organized (necessarily by humans) to convey some aesthetic intent. Conversely, environmental sounds, lacking a linguistic or aesthetic structure, are naturally occurring non-speech, non-music sounds. They are listened to largely for the purpose of identifying the source of the sound [15].

Speech perception and environmental sound perception are separable in the brain. A similar dissociation between speech and music has often been found; the term amusia denotes the music analog of aphasia. The third dissociation, between music and environmental sounds, has been rarely reported [16].

The current research is an attempt to study whether there are any differentiating marks between right and left cerebral stroke patients regarding their perceptual auditory skills. It addresses auditory skills across the perception of music, speech, and environmental sounds. This will help draw the auditory profile of these patients across the three types of sound, which may have a considerable effect on their quality of life.

The current study showed that males represented a higher proportion of the study sample than females (Table 1). This could be explained by the fact that cerebrovascular stroke is more common in males than females, as males are exposed to more risk factors such as hypertension, diabetes, and the hazardous effects of smoking. Another explanation, by Carandang et al. [17], is that females are protected by the effect of endogenous estrogen, which has anti-inflammatory effects. Regarding the age distribution, the mean ages of the cerebral stroke groups were 55.93 years for right cerebral stroke and 60.47 years for left cerebral stroke (Table 1). This could be interpreted by the fact that the incidence of stroke rises with age, which is one of the most important non-modifiable risk factors; atherosclerosis, diabetes, and cerebrovascular insults are more common in old age. This is consistent with many studies in the literature, such as Kelly-Hayes [18] and Roy-O'Reilly and McCullough [19].

Table 1 Description of the demographic data of the groups under study according to gender distribution, age, duration post-stroke, and site of the lesion

The questionnaire designed in the current study included items ranging from simple perception of and attention to sounds to recognition and discrimination of the prosodic and supra-segmental features of speech, such as pitch. Both cerebral stroke groups showed difficulty compared to normal in their auditory perceptual abilities (Table 2), with more difficulties found in the left than in the right cerebral hemisphere stroke group (Table 3).

Table 2 Comparison between the three studied groups according to the total questionnaire
Table 3 Comparison among the control, the right, and left cerebral groups regarding the scores of the subitems of the auditory perceptual/recognition skills questionnaire

The difficulties faced by the stroke groups are in line with the presence of auditory perceptual and processing difficulties in post-stroke patients reported in the literature [4]. There is good evidence that aphasic patients are poorer not only in semantic but also in verbal acoustic tasks requiring relatively fine discrimination [20]. This is supported by the current questionnaire findings of difficulty in both cerebral stroke groups in recognizing rhymes and verses of holy books, a task that requires perception of both the semantic and acoustic features of the auditory signal.

The findings of the questionnaire demonstrated that the affected auditory skills were related to discriminating acoustic factors more than linguistic/semantic factors in right cerebral hemisphere stroke, and to both linguistic/semantic and acoustic factors in left cerebral hemisphere stroke. This is supported in the current study by the questionnaire findings that the right cerebral stroke patients showed difficulty compared to normal in their ability to discriminate speakers' voices based on gender, discriminate the sound of the door from the phone, and recognize the emotional state of speakers through their voices. Typically, adults can accurately extract gender from acoustic information in voices based on pitch and formant frequencies [21]. Previous research by Belin et al. indicated that functional magnetic resonance imaging (fMRI) studies have highlighted regions located along the superior temporal sulcus (STS) responsible for the processing of voices, for both linguistic and extra-linguistic information in humans. The processing of extra-linguistic aspects of voices engaged primarily the anterior STS, the temporal pole, of the right hemisphere, as only this region discriminated vocal from non-vocal sounds in the absence of speech information [22, 23]. In addition, Lattner et al. [24] stated that female voices produced stronger bilateral responses than male voices, with right hemisphere dominance, in the superior temporal gyrus (STG), whereas Sokhi et al. [25] reported that female voice processing involved the STG while male voices produced a larger response in the right precuneus.

As regards the difficulty of both patient groups (right and left) in discriminating door from phone sounds, this can be attributed to the difficulty experienced by the stroke patients in extracting the features of the acoustic signal in order to identify the source of the sound. The listener may have to focus on short-term spectral-temporal properties in order to identify the source as rapidly as possible, as opposed to tracking over an extended period of time, which is required to extract the 'message' conveyed by speech or music [26].

Both cerebral stroke groups under study showed an affected ability to recognize the emotional state of speakers through their voices. Affective prosody conveys a speaker's emotional state largely through global changes in pitch height and loudness, although other acoustic features also serve to disambiguate emotional states [27]. Emotional expressions can take the form of "affect bursts" [28] that have emotional but not semantic meaning or can occur concurrently with normal speech. The literature clarifies that individuals with cerebral damage after stroke have impaired affective prosodic perception [29, 30].

Regarding the scores of the non-verbal items of the auditory perceptual/recognition skills assessment, there was a highly significant difference among the 3 groups, with higher scores in the control group, except for the scores of music and environmental sound recognition, which showed non-significant differences between the control and right cerebral hemisphere stroke groups, as shown in Table 4. This could be partially attributed to the specialization of the right cerebral hemisphere in processing music and speech prosody. Although the left hemisphere is specialized mainly for language, a left cerebral hemisphere stroke can affect cognition and comprehension of the verbal instructions given for non-verbal tasks. The good receptive and expressive linguistic ability of patients with a right hemisphere stroke might support their better response to some aspects of the presented non-verbal tasks. However, the finding that right cerebral stroke patients performed comparably to controls in the music and environmental sound recognition tasks raises an interesting point for investigation. Applying the study to a larger sample may clarify the subtle differences that are expected in the right cerebral stroke group, or more challenging items may need to be added to this part of the non-verbal auditory perceptual test to differentiate between the control and right cerebral stroke groups.

Table 4 Comparison between the control, right cerebral, and left cerebral stroke groups regarding the subtest scores of the non-verbal domain of the auditory perceptual/recognition skills assessment

These results are in partial agreement with the study of Vignolo [13], who reported that both left and right cerebral hemisphere stroke patients had difficulty in the perception and recognition of music, and who stated more precisely that right hemisphere lesions tended to disrupt the apperception of both environmental sounds and melody, while the difficulties faced by patients with left hemisphere lesions were mainly due to affected semantic identification of music and environmental sounds.

The findings of the current study were also in agreement with previous studies such as Doesborgh [31], whose study revealed higher scores in right than in left cerebral stroke on the non-verbal items of the recognition skills assessment. Gialanella [32] also found that right cerebral stroke patients with aphasia had higher scores in the non-verbal domain than left cerebral stroke patients, both at admission and discharge, and suggested that this may be attributed to the severe affection of comprehension in left cerebral stroke patients, which hindered their ability to perceive and perform the assessment tasks.

The findings of the current study were not in agreement with a study by Rosslau et al. [33], who applied tests of rhythm, melody, and pitch to patients with left-sided as well as right-sided cerebral insults. Their results showed significant impairments in the stroke groups compared to controls. The side of the lesion had a significant effect on the receptive melody interval test and the rhythm test, with left-hemispheric stroke patients performing better than right-hemispheric stroke patients in both tests. They attributed their results to the right hemisphere being more engaged in melodic (pitch and contour) and timbre perception and in the perception of spectral features, so that the right-sided stroke subjects had the worse results.

Regarding the verbal items of the auditory perceptual/recognition skills assessment and the total (verbal and non-verbal) scores of the assessment, there was a highly significant difference among the 3 groups, with better scores in the control (normal) group, as shown in Table 5. The lower scores of the right cerebral stroke group compared to the control group could be explained by the fact that patients with right cerebral hemisphere stroke have limitations in their verbal communication, as stated by Kirshner [34]. This finding agrees with Wambaugh and Ferguson [35], who stated that right hemisphere brain-damaged patients have subtle semantic difficulties and produce obscure responses on word-association tasks. The significantly better scores of the right compared to the left hemisphere stroke subjects were expected, as auditory comprehension and basic language skills are more severely affected in left than in right hemisphere stroke. This is consistent with the view of Maffei et al. [14], who suggested that the left posterior superior temporal gyrus is necessary for adequate processing of speech sounds, even though the right hemisphere may support auditory input perception and processing and compensate for other language impairments.

Table 5 Comparison between the three studied groups regarding the scores of the subitems of the verbal domain of auditory perceptual/recognition skills assessment and regarding the total score (verbal and non-verbal) of the assessment

The questionnaire correlated significantly with the total non-verbal score and the total score of the whole auditory perceptual/recognition skills assessment in the left cerebral stroke group (Table 6). However, it showed a significant correlation with only one aspect of the verbal domain of the auditory perceptual/recognition skills assessment in the right cerebral stroke group. In addition, the questionnaire found that the right cerebral stroke group showed significant difficulty compared to normal in discriminating the sound of a door from a phone, whereas the auditory perceptual test showed no significant difference between the control group and the right cerebral stroke group in their ability to recognize environmental sounds. These findings might indicate that subtle difficulties may be detected from the subjective perspective, but a thorough assessment is still needed to pinpoint the actual difficulties, especially in the right cerebral stroke group. On the other hand, the questionnaire and the auditory perceptual test can be combined to tackle the auditory perceptual/recognition difficulties found in the cerebral stroke groups, especially in the right cerebral stroke patients.

Table 6 Correlation between the score of the total questionnaire and assessment of auditory perceptual/recognition skills

Sensitivity is the ability of a test to correctly classify an individual as "diseased," while the ability of a test to correctly classify an individual as disease-free is called the test's specificity. The detected cut-off levels of the current questionnaire could discriminate between the control and cerebral stroke groups with 100% specificity and a sensitivity ranging between 71.43 and 100% (Table 7). This is a promising result indicating that the questionnaire designed in the current study is specific and shows good sensitivity based on this preliminary study. Applying the questionnaire to a larger sample size is warranted to confirm its diagnostic performance.

Table 7 ROC curve of the total questionnaire to discriminate patients from control

Conclusion and recommendation

The cerebral stroke groups showed auditory perceptual difficulties based on the questionnaire findings and the auditory perceptual test used. The left stroke group showed greater difficulty than the right cerebral stroke patients. There was a significant correlation between the questionnaire scores and the scores of the auditory perceptual test in left stroke patients. The questionnaire is specific and shows good sensitivity. Although the number of patients is relatively small, the findings of the current study clarified the presence of potential auditory perceptual difficulties through the questionnaire and testing findings. Correlating the findings of the questionnaire and the auditory perceptual test with instrumental measures of auditory processing deficits, and replicating the study on a larger sample, are warranted to determine the prevalence and extent of these difficulties in stroke patients. Routine testing of both peripheral and central auditory abilities will be beneficial in addressing stroke patients' difficulties during rehabilitation in order to achieve better linguistic outcomes in this population.

Availability of data and materials

The datasets used or analyzed during the current study are available from the corresponding author on reasonable request.

Abbreviations

AP:

Auditory processing

fMRI:

Functional magnetic resonance imaging

STG:

Superior temporal gyrus

References

  1. Lackland DT, Roccella EJ, Deutsch AF (2014) Factors influencing the decline in stroke mortality: a statement from the American Heart Association/American Stroke Association. Stroke 45(1):315–353


  2. Ovbiagele B, Goldstein LB, Higashida RT (2013) Forecasting the future of stroke in the United States: a policy statement from the American Heart Association and American Stroke Association. Stroke 44(8):2361–2375


  3. Engelter ST, Gostynski M, Papa S (2006) Epidemiology of aphasia attributable to first ischemic stroke: incidence, severity, fluency, etiology, and thrombolysis. Stroke 37(6):1379–1384


  4. Purdy SC, Wanigasekara I, Oscar M, Moore C, Clare M et al (2016) Aphasia and Auditory Processing after Stroke through an International Classification of Functioning, Disability and Health Lens. Semin Hear 37:233–246


  5. Szelag E, Lewandowska M, Wolak T (2014) Training in rapid auditory processing ameliorates auditory comprehension in aphasic patients: a randomized controlled pilot study. J Neurol Sci 338(1–2):77–86


  6. Shafik AN, Tawfik S, Sadek I, Shalaby A, Hassan DM, Moustafa RR (2013) Evaluation of Central Auditory Function in Ischemic Stroke Patients. MD Thesis in Audiology. Ain Shams University. http://www.eulc.edu.eg

  7. Bamiou DE, Werring D, Cox K (2012) Patient-reported auditory functions after stroke of the central auditory pathway. Stroke 43:1285–1289


  8. Bamiou DE (2015) Hearing disorders in stroke. Handb Clin Neurol 129:633–647


  9. Nicholson KG, Baum S, Kilgour A, Koh CK, Munhall KG, Cuddy LL (2003) Impaired processing of prosodic and musical patterns after right hemisphere damage. Brain Cogn 52(3):382–389. https://doi.org/10.1016/S0278-2626(03)00182-9. ISSN 0278-2626


  10. Särkämö T, Tervaniemi M, Soinila S, Autti T, Silvennoinen HM, Laine M, Hietanen M (2009) Cognitive deficits associated with acquired amusia after stroke: A neuropsychological follow-up study. Neuropsychologia 47(12):2642–2651. https://doi.org/10.1016/j.neuropsychologia.2009.05.015


  11. Hirel C, Nighoghossian N, Lévêque Y, Hannoun S, Fornoni L, Daligault S, Bouchet P, Jung J, Tillmann B, Caclin A (2017) Verbal and musical short-term memory: Variety of auditory disorders after stroke. Brain Cogn 113:10–22. https://doi.org/10.1016/j.bandc.2017.01.003. ISSN 0278-2626


  12. Abusenna MI, Saad SS, Shawky A, Abdel Hady AF (2023) Evaluation of non-verbal and verbal auditory perceptual/recognition skills in dysphasic patients. Thesis (MSc). Cairo University

  13. Vignolo L (2003) Music agnosia and auditory agnosia. Ann N Y Acad Sci 999:50–57


  14. Maffei C, Capasso R, Cazzolli G, Colosimo C, Dell’Acqua F, Piludu F, Catani M, Miceli G (2017) Pure word deafness following left temporal damage: behavioral and neuroanatomical evidence from a new case. Cortex 97:240–254


  15. Gygi B (2001) Factors in the identification of environmental sounds. Doctoral dissertation, Department of Psychology, Indiana University. https://www.researchgate.net

  16. Peretz I (1993) Auditory agnosia: a functional analysis. In: McAdams S, Bigand E (eds) Thinking in sound: the cognitive psychology of human audition. Clarendon Press, Oxford, pp 199–230


  17. Carandang R, Seshadri S, Beiser A, Kelly-Hayes M, Kase CS, Kannel WB (2006) Trends in incidence, lifetime risk, severity, and 30-day mortality of stroke over the past 50 years. J Am Med Assoc 296(24):2939–2946


  18. Kelly-Hayes M (2010) Influence of age and health behaviors on stroke risk: lessons from longitudinal studies. J Am Geriatr Soc 58(Suppl 2):325–328. https://doi.org/10.1111/j.1532-5415.2010.02915.x


  19. Roy-O’Reilly M, McCullough LD (2018) Age and sex are critical factors in ischemic stroke pathology. Endocrinology 159(8):3120–3131. https://doi.org/10.1210/en.2018-00465


  20. Brookshire RH (1974) Differences in responding to auditory verbal materials among aphasic patients. Acta Symbolica 5:118


  21. Latinus M, Taylor MJ (2011) Discriminating male and female voices: differentiating pitch and gender. Brain Topogr 25(2):194–204. https://doi.org/10.1007/s10548-011-0207-9


  22. Belin P, Zatorre RJ, Lafaille P, Ahad P, Pike B (2000) Voice-selective areas in human auditory cortex. Nature 403(6767):309–312


  23. Belin P, Zatorre RJ, Ahad P (2002) Human temporal-lobe response to vocal sounds. Brain Res Cogn Brain Res 13(1):17–26


  24. Lattner S, Meyer ME, Friederici AD (2005) Voice perception: sex, pitch, and the right hemisphere. Hum Brain Mapp 24(1):11–20


  25. Sokhi DS, Hunter MD, Wilkinson ID, Woodruff PW (2005) Male and female voices activate distinct regions in the male brain. Neuroimage 27(3):572–578


  26. Remez RE, Rubin PE, Pisoni DB, Carrell TD (1981) Speech perception without traditional speech cues. Science 212:947–950


  27. Banse R, Scherer KR (1996) Acoustic profiles in vocal emotion expression. J Pers Soc Psychol 70(3):614–636


  28. Schröder M (2003) Experimental study of affect bursts. Speech Commun 40:99–116


  29. Leung JH, Purdy SC, Tippett LJ, Leão SH (2017) Affective speech prosody perception and production in stroke patients with left-hemispheric damage and healthy controls. Brain Lang 166:19–28. https://doi.org/10.1016/j.bandl.2016.12.001. Epub PMID: 28013040


  30. Trauner D, Ballantyne S, Friedland S, Chase C (1996) Disorders of affective and linguistic prosody in children after early unilateral brain damage. Ann Neurol 39(3):361–367


  31. Doesborgh S (2004) Assessment and treatment of linguistic deficits in aphasic patients. Doctoral dissertation, Erasmus MC: University Medical Center Rotterdam. https://www.researchgate.net

  32. Gialanella B (2011) Aphasia assessment and functional outcome prediction in patients with aphasia after stroke. J Neurol 258(2):343–349


  33. Rosslau K, Steinwede D, Schröder C, Herholz SC, Lappe C, Dobel C, Altenmüller E (2015) Clinical investigations of receptive and expressive musical functions after stroke. Front Psychol 6:1–11. https://doi.org/10.3389/fpsyg.2015.00768


  34. Kirshner HS (2012) Aphasia. In: Ramachandran VS (ed) Encyclopedia of Human Behavior, 2nd edn. Academic Press, pp 177–186. https://doi.org/10.1016/B978-0-12-375000-6.00029-X. ISBN 9780080961804

  35. Wambaugh JL, Ferguson M (2007) Application of semantic feature analysis to retrieval of action names in aphasia. J Rehabil Res Dev 44(3):381–394



Acknowledgements

Not applicable.

Funding

No funding was obtained.

Author information

Authors and Affiliations

Authors

Contributions

AFA constructed the idea, interpreted the results, and wrote the manuscript; SSS and AMS shared in constructing the idea and revised the manuscript; and MIA collected the data and tabulated them. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Aisha Fawzy Abdel Hady.

Ethics declarations

Ethics approval and consent to participate

The study was ethically approved by the Research Ethical Committee of Cairo University (code: N-67-2023). Written consent was taken from all the participants.

Consent for publications

Not applicable.

Competing interests

The authors declare that they have no competing interests.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.


About this article


Cite this article

Abdel Hady, A.F., Shohdi, S.S., Shawky, A.M. et al. Evaluation of perception of music, environmental sounds, and speech in right versus left cerebral stroke patients. Egypt J Otolaryngol 39, 130 (2023). https://doi.org/10.1186/s43163-023-00486-0

