Musical Microtonal Scales Based on Sethares' Quantitative Measurement of Sensory Dissonance

Timbre as a Psychoacoustic Parameter for Harmonic Analysis and Composition

Timbre can affect our subjective experience of musical dissonance and harmonic progression. To this end, we have developed a set of algorithms to measure roughness (sensory dissonance) and pitch correlation between sonorities, taking into account the effects of timbre and microtonal inflection. We proceed from the work of Richard Parncutt and Ernst Terhardt, extending their algorithms for the psychoacoustic analysis of harmony to include spectral data from actual instrumental sounds. This allows for the study of a much wider variety of timbrally-rich acoustic or electronic sounds than was possible with the previous algorithms. Further, we generalize these algorithms by working directly with frequency rather than a tempered division of the octave, making them applicable to the full range of microtonal harmonies. The new algorithms, by yielding different roughness estimates depending on the orchestration of a sonority, confirm our intuitive understanding that orchestration affects sensory dissonance. This package of tools presents rich possibilities for composition and analysis of music that is timbrally-dynamic and microtonally-complex.
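The roughness model this line of work builds on (Sethares') sums a dissonance contribution over every pair of partials, with the interval of maximum roughness scaling with the critical bandwidth at the lower partial. A minimal sketch, using constants from Sethares' published curve; the sawtooth-like test timbre and the note frequencies are illustrative choices, not taken from the paper:

```python
import math
from itertools import combinations

# Model constants from Sethares' published dissonance curve.
D_STAR, S1, S2 = 0.24, 0.021, 19.0
B1, B2 = 3.5, 5.75

def pair_roughness(f1, a1, f2, a2):
    """Sensory dissonance contributed by two pure partials."""
    f_lo, f_hi = min(f1, f2), max(f1, f2)
    s = D_STAR / (S1 * f_lo + S2)   # scales with critical bandwidth
    d = f_hi - f_lo
    return min(a1, a2) * (math.exp(-B1 * s * d) - math.exp(-B2 * s * d))

def sonority_roughness(partials):
    """Total roughness of a sonority given as (freq_hz, amplitude) pairs."""
    return sum(pair_roughness(f1, a1, f2, a2)
               for (f1, a1), (f2, a2) in combinations(partials, 2))

def harmonic_timbre(f0, n_partials=6):
    """A sawtooth-like test timbre: harmonics with 1/n amplitudes."""
    return [(f0 * k, 1.0 / k) for k in range(1, n_partials + 1)]

# With harmonic timbres, a tritone (C-F#) comes out rougher than a
# perfect fifth (C-G), whose partials largely coincide.
fifth = sonority_roughness(harmonic_timbre(261.6) + harmonic_timbre(392.4))
tritone = sonority_roughness(harmonic_timbre(261.6) + harmonic_timbre(369.9))
```

Because the estimate depends on the partials rather than only the fundamentals, re-orchestrating the same chord with a different spectrum changes the result, which is the effect the abstract describes.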

An investigation of musical timbre

Journal de physique, 1994

Musical timbre is a particularly difficult attribute to measure and quantify. However, Pollard and Jansson (1982) [3] have successfully plotted the changing timbre of the starting transient and steady state of a single note. Following on from their work, the changing timbre of an ensemble piece of music is graphed using just two independent parameters, these being derived from Stevens (1971) [4], Mark VII.
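Pollard and Jansson's tristimulus method reduces a spectrum to three coordinates that sum to one, which is why two independent parameters suffice for plotting. A sketch of the basic bookkeeping; the published method weights partials by loudness via Stevens' Mark VII procedure, whereas plain amplitudes are used here purely for illustration, and the spectrum values are invented:

```python
def tristimulus(amplitudes):
    """Pollard-Jansson-style tristimulus coordinates from partial levels.

    T1 weights the fundamental, T2 partials 2-4, T3 partials 5 and up.
    The three values sum to one, so two independent parameters suffice
    to plot a note's evolving timbre as a path in a 2-D diagram.
    """
    total = sum(amplitudes)
    t1 = amplitudes[0] / total
    t2 = sum(amplitudes[1:4]) / total
    t3 = sum(amplitudes[4:]) / total
    return t1, t2, t3

# Illustrative steady-state spectrum: weak fundamental, strong mid partials.
t1, t2, t3 = tristimulus([0.2, 0.9, 0.8, 0.5, 0.3, 0.2, 0.1])
```

Tracking (T2, T3) frame by frame through a note's attack and steady state yields the kind of timbre trajectory the abstract refers to.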

Computational Approach to Musical Consonance and Dissonance

In the sixth century BC, Pythagoras discovered the mathematical foundation of musical consonance and dissonance. When auditory frequencies in small-integer ratios are combined, the result is a harmonious perception. In contrast, most frequency combinations result in audible, off-centered by-products labeled "beating" or "roughness"; these are reported by most listeners to sound dissonant. In this paper, we consider second-order beats, a kind of beating recognized as a product of neural processing, and demonstrate that the data-driven approach of Recurrence Quantification Analysis (RQA) allows for the reconstruction of the order in which interval ratios are ranked in music theory and harmony. We take advantage of computer-generated sounds containing all intervals over the span of an octave. To visualize second-order beats, we use a glissando from the unison to the octave. This procedure produces a profile of recurrence values that correspond to subsequent epochs along the original signal. We find that the higher recurrence peaks exactly match the epochs corresponding to just intonation frequency ratios. This result indicates a link between consonance and the dynamical features of the signal. Our findings integrate a new element into the existing theoretical models of consonance, thus providing a computational account of consonance in terms of dynamical systems theory. Finally, as it considers general features of acoustic signals, the present approach demonstrates a universal aspect of consonance and dissonance perception and provides a simple mathematical tool that could serve as a common framework for further neuro-psychological and music theory research.

Real-Time Analysis of Sensory Dissonance

2007

We describe a tool for real-time musical analysis based on a measure of roughness, the principal element of sensory dissonance. While most historical musical analysis is based on the notated score, our tool permits analysis of a recorded or live audio signal in its full complexity. We proceed from the work of Richard Parncutt and Ernst Terhardt, extending their algorithms for the psychoacoustic analysis of harmony to be used for the live analysis of spectral data. This allows for the study of a wider variety of timbrally-rich acoustic or electronic sounds than was possible with previous algorithms. Further, the direct treatment of audio signal facilitates a wide range of analytical applications, from the comparison of multiple recordings of the same musical work to the real-time analysis of a live performance. Our algorithm is programmed in C as an external object for the program Max/MSP. Taking musical examples by Arnold Schoenberg, Gérard Grisey and Iannis Xenakis, our algorithm yields varying roughness estimates depending on instrumental orchestration or electronic texture, confirming our intuitive understanding that timbre affects sensory dissonance. This is one of the many possibilities this tool presents for analysis and composition of music that is timbrally-dynamic and microtonally-complex.
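Working from the audio signal rather than the score begins with recovering partials from a spectral frame. A toy front end in Python rather than the authors' C/Max external: rectangular window, plain DFT, naive local-maximum picking. A real-time implementation would need an FFT, windowing, and peak interpolation; the frame here is synthetic, with partials placed on exact DFT bins so the peaks are clean:

```python
import math

def spectral_peaks(frame, fs, max_peaks=8):
    """Crude partial extraction: DFT magnitudes, then local maxima."""
    n = len(frame)
    mags = []
    for k in range(1, n // 2):  # skip the DC bin
        re = sum(frame[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        mags.append(math.hypot(re, im) / n)
    peaks = [(k + 1, m) for k, m in enumerate(mags)
             if 0 < k < len(mags) - 1 and m > mags[k - 1] and m > mags[k + 1]]
    peaks.sort(key=lambda p: -p[1])  # loudest first
    return [(b * fs / n, m) for b, m in peaks[:max_peaks]]

# Synthetic frame: two unit sinusoids on exact DFT bins 27 and 30,
# i.e. about 422 Hz and 469 Hz at fs = 8000.
fs, n = 8000, 512
frame = [sum(math.sin(2 * math.pi * b * t / n) for b in (27, 30))
         for t in range(n)]
partials = spectral_peaks(frame, fs)
```

The resulting (frequency, amplitude) list is exactly the input a pairwise roughness model needs, so chaining the two stages gives a frame-by-frame roughness trace of a recording.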

THE CONCEPT OF TIMBRE AND ITS NUMERICAL FOUNDATIONS

www.serdaryilmaz.uk

No natural sound consists of a single frequency. Likewise, a musical sound is not made up of one fixed frequency, but of many overlapping pure frequencies, each with its own frequency and pressure level. This audible cluster of frequencies can also be called the resultant sound. The overtones that make up the resultant sound, and their sound pressure levels, can be analysed and listed individually with computer software. The list showing the frequency and pressure level of each audible component is like the fingerprint of a sound. Considering a piece of music as a whole, when the lists for all of its pitches are brought together and evaluated, the result is the tone colour we call timbre, a perceptual concept in music. On this view, the quality of a timbre or sound is the combined evaluation of the fingerprint identities of all its audible components. Such analyses can certainly be carried out with computer software, but the ear and brain perform the same evaluation far more quickly, and so arrive at clear judgements about a sound. Accordingly, once the fingerprint identity of a Manol-made oud has been established, one can assess whether an unattributed oud was made by Manol. Competent musicians can, of course, make this judgement by ear alone, and in less time.
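The fingerprint comparison the passage describes can be made concrete as a similarity score between two overtone-level lists. A toy sketch using cosine similarity, one simple choice of measure; the amplitude values are invented for illustration and are not measurements of any instrument:

```python
import math

def fingerprint_similarity(a, b):
    """Cosine similarity between two harmonic-amplitude 'fingerprints'."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical relative overtone levels of a reference instrument and
# two unknowns (illustrative values, not measured data).
reference = [1.00, 0.62, 0.48, 0.30, 0.21, 0.12]
similar   = [1.00, 0.58, 0.51, 0.28, 0.19, 0.14]
different = [1.00, 0.15, 0.80, 0.05, 0.40, 0.02]

match_close = fingerprint_similarity(reference, similar)
match_far = fingerprint_similarity(reference, different)
```

A high score against the reference profile supports the attribution, mirroring the judgement a trained ear makes almost instantly.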

The perception of musical timbre (Chapter 7)

Timbre is a misleadingly simple and vague word encompassing a very complex set of auditory attributes, as well as a plethora of psychological and musical issues. It covers many parameters of perception that are not accounted for by pitch, loudness, spatial position, duration, and various environmental characteristics such as room reverberation. This leaves a wealth of possibilities that have been explored over the last 40 years or so. We now understand timbre to have two broad characteristics that contribute to the perception of music: (1) it is a multifarious set of abstract sensory attributes, some of which are continuously varying (e.g. attack sharpness, brightness, nasality, richness), others of which are discrete or categorical (e.g. the 'blatt' at the beginning of a sforzando trombone sound or the pinched offset of a harpsichord sound), and (2) it is one of the primary perceptual vehicles for the recognition, identification, and tracking over time of a sound source (singer's voice, clarinet, set of carillon bells), and thus involves the absolute categorization of a sound. The psychological approach to timbre has also included work on the musical implications of timbre as a set of form-bearing dimensions in music.

Timbre models of musical sounds

1999

This work involves the analysis of musical instrument sounds, the creation of timbre models, the estimation of the parameters of the timbre models and the analysis of the timbre model parameters.
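One elementary instance of estimating the parameters of a timbre model is fitting an exponential decay to a single partial's amplitude envelope, which becomes linear regression in the log domain. A minimal sketch; the model form and the synthetic data are illustrative, not those of the work summarized above:

```python
import math

def fit_decay(times, amps):
    """Fit a(t) = a0 * exp(-k * t) by least squares on log-amplitude.

    Returns the estimated initial amplitude a0 and decay rate k.
    """
    n = len(times)
    ys = [math.log(a) for a in amps]
    tm = sum(times) / n
    ym = sum(ys) / n
    num = sum((t - tm) * (y - ym) for t, y in zip(times, ys))
    den = sum((t - tm) ** 2 for t in times)
    k = -num / den                 # slope of log-envelope is -k
    a0 = math.exp(ym + k * tm)     # intercept back in linear amplitude
    return a0, k

# Synthetic envelope of one partial: a0 = 0.8, decay rate k = 3.0 per second.
ts = [i * 0.01 for i in range(100)]
env = [0.8 * math.exp(-3.0 * t) for t in ts]
a0, k = fit_decay(ts, env)
```

Repeating such fits across partials yields a compact parameter set from which the sound can be resynthesized, which is the general shape of a timbre-model workflow.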

Sonological models for timbre characterization

Journal of New Music Research, 1997

In research on timbre, two important variables have to be assigned at the outset: the instruments used to analyze and to model the physical sound, and the techniques employed to provide an efficient and manageable representation of the data. The experimental methodology which results from these choices defines a sonological model; several different psychoacoustical and analytical tools have been employed to this aim in the past. In this paper we will present a series of experiments conducted at the CSC-University of Padova, which attempted to define an experimental framework for the development of algorithmically-defined timbre spaces. Fundamental to our line of research has been the use of analysis methods borrowed from the speech processing community and of data-representation techniques such as neural networks and statistical tools. The results are very interesting and show several analogies to the classical timbre spaces defined in the literature; this has proved very important in order to explore the qualities of musical timbre in a purely analytical way which does not rely on subjective listeners' ratings.
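An algorithmically-defined timbre space, in the simplest statistical sense, projects one feature vector per sound into a low-dimensional plane. A sketch using PCA via power iteration; PCA stands in here for the paper's unspecified statistical tools, and the spectral-envelope feature values are invented for illustration:

```python
import math
import random

def pca_2d(data):
    """Project feature vectors onto their two leading principal axes.

    Pure-Python power iteration with deflation on the covariance matrix.
    """
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    x = [[row[j] - means[j] for j in range(d)] for row in data]
    cov = [[sum(r[a] * r[b] for r in x) / n for b in range(d)]
           for a in range(d)]

    def top_eigvec(m):
        random.seed(0)
        v = [random.random() for _ in range(d)]
        for _ in range(300):  # power iteration converges to top eigenvector
            w = [sum(m[a][b] * v[b] for b in range(d)) for a in range(d)]
            s = math.sqrt(sum(c * c for c in w)) or 1.0
            v = [c / s for c in w]
        return v

    v1 = top_eigvec(cov)
    lam1 = sum(v1[a] * sum(cov[a][b] * v1[b] for b in range(d))
               for a in range(d))
    defl = [[cov[a][b] - lam1 * v1[a] * v1[b] for b in range(d)]
            for a in range(d)]  # remove the first component, then repeat
    v2 = top_eigvec(defl)
    return [(sum(r[a] * v1[a] for a in range(d)),
             sum(r[a] * v2[a] for a in range(d))) for r in x]

# Hypothetical spectral-envelope features (relative band energies):
# two "bright" timbres and two "dull" ones.
features = [
    [0.20, 0.50, 0.90, 1.00, 0.80],   # bright A
    [0.25, 0.55, 0.85, 0.95, 0.75],   # bright B
    [1.00, 0.60, 0.20, 0.10, 0.05],   # dull A
    [0.95, 0.65, 0.25, 0.08, 0.04],   # dull B
]
space = pca_2d(features)
```

In the projected plane, perceptually similar timbres should cluster, which is the analytical analogue of the listener-rated timbre spaces of the classical literature.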