María Lucía - Academia.edu

Papers by María Lucía

Automatic music emotion classification model for movie soundtrack subtitling based on neuroscientific premises

Applied Intelligence, Aug 31, 2023

The ability of music to induce emotions has attracted growing interest in recent years, especially with the boom in music streaming platforms and automatic music recommenders. Music Emotion Recognition approaches typically combine multiple audio features extracted from digital audio samples with different machine learning techniques, but they do not consider neuroscience findings on musical emotion perception. The main goal of this research is to facilitate the automatic subtitling of music. The authors approach the problem of automatic musical emotion detection in movie soundtracks taking these characteristics into account and using scientific musical databases that have become a reference in neuroscience research. In the experiments, Constant-Q-Transform spectrograms, which best represent the relationships between musical tones from the point of view of human perception, are combined with Convolutional Neural Networks. Results show an efficient emotion classification model for 2-second musical audio fragments representative of the intense basic emotions of happiness, sadness, and fear, the most relevant emotions to identify for movie music captioning. Quality metrics demonstrate that the results of the different models differ significantly and are not homogeneous. Finally, these results pave the way for accessible, automatic captioning of music that could identify the emotional intent of each segment of a movie soundtrack.
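The key idea in the abstract is that constant-Q analysis spaces its frequency bins logarithmically, matching musical pitch perception, before the spectrograms are fed to a CNN. The following is a minimal NumPy sketch of a single-frame constant-Q magnitude spectrum, not the paper's actual pipeline; the function name and parameter defaults (C1 = 32.70 Hz, 12 bins per octave, 48 bins) are illustrative assumptions.

```python
import numpy as np

def cqt_magnitude(signal, sr, fmin=32.70, bins_per_octave=12, n_bins=48):
    """Constant-Q magnitude spectrum of one audio frame (illustrative sketch).

    Bin k is centered at fmin * 2**(k / bins_per_octave); each bin uses a
    window just long enough to keep the center-frequency/bandwidth ratio
    (the quality factor Q) constant, so bins align with musical intervals.
    """
    q = 1.0 / (2.0 ** (1.0 / bins_per_octave) - 1.0)  # constant quality factor
    mags = np.empty(n_bins)
    for k in range(n_bins):
        fk = fmin * 2.0 ** (k / bins_per_octave)       # log-spaced center freq
        n_k = min(int(np.ceil(q * sr / fk)), len(signal))  # window shrinks as fk grows
        window = np.hanning(n_k)
        kernel = window * np.exp(-2j * np.pi * fk * np.arange(n_k) / sr)
        mags[k] = np.abs(np.dot(signal[:n_k], kernel)) / n_k
    return mags

# A 2-second 440 Hz sine peaks in the bin whose center is closest to A4:
sr = 22050
t = np.arange(2 * sr) / sr
spectrum = cqt_magnitude(np.sin(2 * np.pi * 440.0 * t), sr)
```

In a full pipeline along the lines the abstract describes, such spectra would be computed over sliding frames of each 2-second fragment, stacked into a spectrogram image, and passed to a CNN classifier.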

Normative Values for Sport-Specific Left Ventricular Dimensions and Exercise-Induced Cardiac Remodeling in Elite Spanish Male and Female Athletes

Sports Medicine - Open

Background: There is debate about the magnitude of geometrical remodeling [i.e., left ventricle (LV) cavity enlargement vs. wall thickening] in the heart of elite athletes, and no limits of normality have yet been established for different sports. We aimed to determine sex- and sport-specific normative values of LV dimensions in elite white adult athletes. Methods: This was a single-center, retrospective study of Spanish elite athletes. Athletes were grouped by sport and its relative dynamic/static component (Mitchell's classification). LV dimensions were measured with two-dimensional-guided M-mode echocardiography to compute normative values. We also developed an online and app-based calculator (https://sites.google.com/lapolart.es/athlete-lv/welcome?authuser=0) that provides clinicians with sport- and Mitchell-category-specific Z-scores for different LV dimensions. Results: We studied 3282 athletes (46 different sports, 37.8% women, mean age 23 ± 6 years). The majority (85.4%...
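The calculator's core computation is a standard Z-score of a measured LV dimension against the reference mean and standard deviation for the athlete's group. A minimal sketch follows; the numbers used are hypothetical, since the real group-specific means and SDs come from the study's normative tables.

```python
def lv_z_score(measured_mm, ref_mean_mm, ref_sd_mm):
    """Z-score of an athlete's LV dimension relative to a reference group.

    ref_mean_mm and ref_sd_mm are hypothetical normative values for the
    athlete's sex, sport, and Mitchell category; the published calculator
    supplies the real ones.
    """
    return (measured_mm - ref_mean_mm) / ref_sd_mm

# e.g. a 56 mm LV end-diastolic diameter against a hypothetical
# reference of 50 ± 3 mm gives Z = 2.0
z = lv_z_score(56.0, 50.0, 3.0)
```

A Z-score near 0 places the athlete at the group mean, while values beyond roughly ±2 flag dimensions outside the normative range for that sport and sex.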

Vibrotactile Captioning of Musical Effects in Audio-Visual Media as an Alternative for Deaf and Hard of Hearing People: An EEG Study

IEEE Access, 2020

Standard captioning for deaf and hard of hearing people cannot transmit the emotional information that music provides in support of the narrative in audio-visual media. We explore an alternative method using vibrotactile stimulation as a possible channel to transmit the emotional information contained in an audio-visual soundtrack and thus elicit a greater emotional reaction in hearing-impaired people. To achieve this objective, we used two one-minute videos based on image sequences unassociated with dramatic action, maximizing the effect of the music and vibrotactile stimuli. While participants viewed the videos, we used EEG to record the brain activity of 9 female participants with normal hearing and 7 female participants with very severe to profound hearing loss. The results show that the same brain areas are activated in participants with normal hearing watching the video with the soundtrack, and in participants with hearing loss watching the same video with a ...

Limitations of Standard Accessible Captioning of Sounds and Music for Deaf and Hard of Hearing People: An EEG Study

Frontiers in Integrative Neuroscience

Captioning is the process of transcribing speech and acoustical information into text to help deaf and hard of hearing people access the auditory track of audiovisual media. In addition to the verbal transcription, it includes information such as sound effects, speaker identification, and music tagging. However, it covers only a limited part of the acoustic information available in the soundtrack, so a substantial amount of emotional information is lost when relying solely on standard-compliant captions. In this article, it is shown, by means of behavioral and EEG measurements, how emotional information related to the sounds and music used by the creator of the audiovisual work is perceived differently by a normal-hearing group and a hearing-impaired group when standard captioning is applied. Audio and captions activate similar processing areas in each group, although not with the same intensity. Moreover, captions require higher activation of voluntary attentional circuits, as well as language-related areas. Captions transcribing musical information increase attentional activity rather than emotional processing.
