Anders Friberg | KTH Royal Institute of Technology
Papers by Anders Friberg
Springer eBooks, 1995
Several music computers can now convert an input note file to a sounding performance. Listening to such performances demonstrates convincingly the significance of the musicians’ contribution to music performance; when the music score is accurately replicated as nominally written, the music sounds dull and nagging. It is the musicians’ contributions that make the performance interesting. In other words, by deviating slightly from what is nominally written in the music score, the musicians add expressivity to the music.
Journal of Voice, 1988
Analysis by synthesis is a method that has been successfully applied in many areas of scientific research. In speech research, it has proven to be an excellent tool for identifying perceptually relevant acoustical properties of sounds. This paper reports on some first attempts at synthesizing choir singing, the aim being to elucidate the importance of factors such as the frequency scatter in the fundamental and the formants. The presentation relies heavily on sound examples.
International Computer Music Conference, 1986
Applied Sciences, 6(12), 405 (Special Issue on Soundscapes), Dec 3, 2016
There have been few empirical investigations of how individual differences influence the perception of the sonic environment. The present study included the Big Five traits and noise sensitivity as personality factors in two listening experiments (n = 43, n = 45). Recordings of urban and restaurant soundscapes that had been selected based on their type were rated for Pleasantness and Eventfulness using the Swedish Soundscape Quality Protocol. Multivariate multiple regression analysis showed that ratings depended on the type and loudness of both kinds of sonic environments and that the personality factors made a small yet significant contribution. Univariate models explained 48% (cross-validated adjusted R²) of the variation in Pleasantness ratings of urban soundscapes, and 35% of the variation in Eventfulness. For restaurant soundscapes the percentages explained were 22% and 21%, respectively. Emotional stability and noise sensitivity were notable predictors whose contribution to explaining the variation in quality ratings was between one-tenth and nearly half of that of the soundscape indicators, as measured by squared semipartial correlation. Further analysis revealed that 36% of the variation in noise sensitivity could be predicted by broad personality dimensions, replicating previous research. Our study lends empirical support to the hypothesis that personality traits have a significant though comparatively small influence on the perceived quality of sonic environments.
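The squared semipartial correlation used above quantifies each predictor's unique contribution as the drop in R² when that predictor is removed from the full model. A minimal sketch of that computation is given below, assuming a simple OLS setup with illustrative variable names (loudness, emotional stability, noise sensitivity) and simulated data; it is not the study's actual analysis code.

```python
# Minimal sketch (not the study's code): the squared semipartial correlation of a
# predictor, computed as the drop in R^2 when that predictor is removed from the
# full regression model. All data and variable names are illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 45                                     # listeners in one experiment (illustrative)
loudness   = rng.normal(size=n)            # soundscape indicator
emo_stab   = rng.normal(size=n)            # Big Five: emotional stability
noise_sens = rng.normal(size=n)            # noise sensitivity
pleasant   = 0.6 * loudness - 0.3 * noise_sens + rng.normal(scale=0.8, size=n)

def r_squared(y, X):
    """Plain R^2 of an OLS fit with an intercept."""
    return sm.OLS(y, sm.add_constant(X)).fit().rsquared

X_full    = np.column_stack([loudness, emo_stab, noise_sens])
X_reduced = np.column_stack([loudness, emo_stab])          # drop noise sensitivity

sr2 = r_squared(pleasant, X_full) - r_squared(pleasant, X_reduced)
print(f"squared semipartial correlation of noise sensitivity: {sr2:.3f}")
```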
PLOS ONE, 10(12): e0144013, Dec 7, 2015
Crossmodal associations may arise at neurological, perceptual, cognitive, or emotional levels of brain processing. Higher-level modal correspondences between musical timbre and visual colour have been previously investigated, though with limited sets of colours. We developed a novel response method that employs a tablet interface to navigate the CIE Lab colour space. The method was used in an experiment where 27 film music excerpts were presented to participants (n = 22) who continuously manipulated the colour and size of an on-screen patch to match the music. Analysis of the data replicated and extended earlier research, for example, that happy music was associated with yellow, music expressing anger with large red colour patches, and sad music with smaller patches towards dark blue. Correlation analysis suggested patterns of relationships between audio features and colour patch parameters. Using partial least squares regression, we tested models for predicting colour patch responses from audio features and ratings of perceived emotion in the music. Parsimonious models that included emotion robustly explained between 60% and 75% of the variation in each of the colour patch parameters, as measured by cross-validated R². To illuminate the quantitative findings, we performed a content analysis of structured spoken interviews with the participants. This provided further evidence of a significant emotion mediation mechanism, whereby people tended to match colour associations with the perceived emotion in the music. The mixed-method approach of our study gives strong evidence that emotion can mediate crossmodal association between music and visual colour. The CIE Lab interface promises to be a useful tool in perceptual ratings of music and other sounds.
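As a rough illustration of the modelling step described above, the sketch below fits a partial least squares regression to predict a single colour patch parameter from a small feature matrix and scores it with cross-validated R². The feature layout, simulated data, and the choice of two latent components are placeholder assumptions, not the published model.

```python
# Illustrative sketch (assumed setup, not the published analysis): PLS regression
# predicting one colour patch parameter (e.g. CIE Lab lightness) from audio
# features plus emotion ratings, evaluated by cross-validated R^2.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import r2_score

rng = np.random.default_rng(1)
n_excerpts = 27                                   # film music excerpts
X = rng.normal(size=(n_excerpts, 6))              # placeholder audio features + emotion ratings
y = X @ np.array([0.8, 0.0, -0.5, 0.3, 0.0, 0.6]) + rng.normal(scale=0.5, size=n_excerpts)

pls = PLSRegression(n_components=2)               # parsimonious: few latent components
y_hat = cross_val_predict(pls, X, y, cv=5)        # predictions from 5-fold cross-validation
print(f"cross-validated R^2: {r2_score(y, np.ravel(y_hat)):.2f}")
```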
Music Perception, 1991
In an analysis-by-synthesis investigation of music performance, rules have been developed that describe when and how expressive deviations are made from the nominal music notation in the score. Two experiments that consider the magnitudes of such deviations are described. In Experiment 1, the musicians' and nonmusicians' sensitivities to expressive deviations generated by seven performance rules are compared. The musicians showed a clearly greater sensitivity. In Experiment 2, professional musicians adjusted to their satisfaction the quantity by which six rules affected the performance. For most rules, there was a reasonable agreement between the musicians regarding preference. The preferred quantities seemed close to the threshold of perceptibility.
Psychology of Music, Apr 1, 1989
Starting from a text-to-speech conversion programme (Carlson and Granstrom, 1975), a note-to-tone conversion programme has been developed (Sundberg and Fryden, 1985). It works with a set of ordered rules affecting the performance of melodies written into the computer. Depending on the musical context, each of these rules manipulates various tone parameters, such as intensity level, fundamental frequency, and duration. In the present study the musical effect of nine rules is tested. Ten melodies were played under several rule-implementation conditions, and musically trained listeners rated the musical quality of each performance. The results support the assumption that the musical quality of performances is improved by applying rules.
How to terminate a phrase. An analysis-by-synthesis experiment on the perceptual aspect of music performance
Music and locomotion. Perception of tones with level envelopes replicating force patterns of walking
This article presents an overview of long-term research work on a rule system for the automatic performance of music. The performance rules produce deviations from the durations, sound levels, and pitches nominally specified in the music score. They can be classified according to their apparent musical function: to help the listener (1) in the differentiation of different pitch and duration categories and (2) in the grouping of the tones. Apart from this, some rules serve the purpose of organizing tuning and synchronization in ensemble performance. The rules reveal striking similarities between music performance and speech; for instance, final lengthening occurs in both, and the acoustic codes used for marking emphasis are similar.
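To make the rule-as-deviation idea concrete, here is a toy sketch of two such rules applied to a short note list: a duration-contrast rule and a phrase-final lengthening rule, each scaled by a quantity parameter k. The rule names, thresholds, and percentages are invented for illustration and do not reproduce the actual KTH rule system.

```python
# Toy illustration (not the actual KTH rule set): performance rules applied as
# small deviations from the nominal values in the score. Each rule inspects the
# musical context and adjusts duration and sound level; a quantity parameter k
# scales how strongly the rule is applied. All numbers are invented.
from dataclasses import dataclass

@dataclass
class Note:
    pitch: int           # MIDI note number
    duration_ms: float   # nominal duration from the score
    level_db: float      # nominal sound level
    phrase_end: bool = False

def final_lengthening(notes, k=1.0):
    """Lengthen the last note of a phrase, analogous to final lengthening in speech."""
    for note in notes:
        if note.phrase_end:
            note.duration_ms *= 1.0 + 0.15 * k
    return notes

def duration_contrast(notes, k=1.0):
    """Make short notes slightly shorter and softer, sharpening duration categories."""
    for note in notes:
        if note.duration_ms < 250:
            note.duration_ms *= 1.0 - 0.05 * k
            note.level_db -= 1.0 * k
    return notes

melody = [Note(60, 500, 70), Note(62, 250, 70), Note(64, 125, 70), Note(65, 1000, 70, phrase_end=True)]
for rule in (duration_contrast, final_lengthening):   # rules are applied in order
    melody = rule(melody, k=1.0)
print([round(n.duration_ms) for n in melody])          # [500, 250, 119, 1150]
```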
Environment and Behavior, 2015
Music listening often produces associations to locomotion. This suggests that some patterns in music are similar to those perceived during locomotion. The present investigation tests the hypothesis ...
Starting from a text-to-speech conversion program (Carlson & Granstrom, 1975), a note-to-tone conversion program has been developed (Sundberg & Fryden, 1985). It works with a set of ordered rules affecting the performance of melodies written into the computer. Depending on the musical context, each of these rules manipulates various tone parameters, such as sound level, fundamental frequency, duration, etc. In the present study the effect of some of the rules developed so far on the musical quality of the performance is tested; various musical excerpts performed according to different combinations and versions of nine performance rules were played to musically trained listeners who rated the musical quality. The results support the assumption that the musical quality of the performance is improved by applying the rules.
Musicians often make gestures and move their bodies expressing their musical intention. This visual information provides a separate channel of communication to the listener. In order to explore to what extent emotional intentions can be conveyed through musicians' movements, video recordings were made of a marimba player performing the same piece with four different intentions: Happy, Sad, Angry, and Fearful. Twenty subjects were asked to rate the silent video clips with respect to perceived emotional content and movement qualities. The video clips were presented in different viewing conditions, showing different parts of the player. The results showed that the intentions Happiness, Sadness and Anger were well communicated, while Fear was not. The identification of the intended emotion was only slightly influenced by viewing condition. The movement ratings indicated that there were cues that the observers used to distinguish between intentions, similar to cues found for audio signals in music performance.