Computer Modelling of Sound for Transformation and Synthesis of Musical Signals

Sound modeling: signal-based approaches

2004

The sound produced by acoustic musical instruments is caused by the physical vibration of a resonating structure. This vibration can be described by signals that correspond to the time evolution of the associated acoustic pressure. The fact that sound can be characterized by a set of signals suggests quite naturally that computing equipment could be successfully employed for generating sounds, either for the imitation of acoustic instruments or for the creation of new sounds with novel timbral properties.

Modelling of natural sounds by time–frequency and wavelet representations

Organised Sound, 1997

… synthesis process, since it combines sound processing and algorithmic synthesis within the same formalism. Its aim is to make sound simulators by synthesis methods based on signal models or physical models, the parameters of which are directly extracted from the analysis of natural sounds. In this article the successive steps for making such systems are described. These are numerical synthesis and sound generation methods, analysis of natural sounds, particularly time-frequency and time-scale (wavelet) representations, extraction of pertinent parameters, and the determination of the correspondence between these parameters and those corresponding to the synthesis models. Additive synthesis, nonlinear synthesis, and waveguide synthesis are discussed.

… and/or the frequency of the partials can, for example, be manipulated through spectral analysis. Time-frequency analysis allows the separation of the time and frequency characteristics associated with the sound and is of great interest (Kronland-Martinet 1988). However, this approach conflicts with a very important mathematical principle which states that one cannot arbitrarily modify a time-frequency representation of a signal. This constraint is due to the existence of the so-called 'reproducing kernel', which takes into account the redundancy of such representations (Kronland-Martinet et al. 1987). It corre…
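The additive synthesis discussed in this abstract, in its simplest form, sums sinusoidal partials whose frequencies and amplitudes come from analysis. A minimal sketch (function name, parameter values, and sample rate are illustrative, not from the paper):

```python
import math

def additive_synth(partials, duration, sample_rate=8000):
    """Sum sinusoidal partials; each partial is a (frequency_hz, amplitude) pair.

    In an analysis/synthesis system these pairs would be extracted from a
    time-frequency analysis of a natural sound; here they are hand-picked.
    """
    n = int(duration * sample_rate)
    out = []
    for i in range(n):
        t = i / sample_rate
        out.append(sum(a * math.sin(2 * math.pi * f * t) for f, a in partials))
    return out

# A toy harmonic tone: 220 Hz fundamental plus two weaker harmonics.
tone = additive_synth([(220, 1.0), (440, 0.5), (660, 0.25)], duration=0.1)
```

Manipulating the frequency or amplitude lists before resynthesis is the kind of spectral transformation the article describes; time-varying amplitudes and frequencies would replace the constant pairs in practice.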

A source-filter model for musical instrument sound transformation

2012 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2012

The model used to represent musical instrument sounds plays a crucial role in the quality of sound transformations. Ideally, the representation should be compact and accurate, while its parameters should give flexibility to independently manipulate perceptually related features of the sounds. This work describes a source-filter model for musical instrument sounds based on the sinusoidal plus residual decomposition. The sinusoidal component is modeled as sinusoidal partial tracks (source) and a time-varying spectral envelope (filter), and the residual is represented as white noise (source) shaped by a time-varying spectral envelope (filter). This article presents estimation and representation techniques that give totally independent and intuitive control of the spectral envelope model and the frequencies of the partials to perform perceptually related sound transformations. The result of a listening test confirmed that, in general, the sounds resynthesized from the source-filter model are perceptually similar to the original recordings.
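The independent source/filter control described above can be sketched as follows: a pitch shift scales the partial frequencies (source) while the amplitudes are re-read from the unchanged spectral envelope (filter), so the timbre follows the envelope rather than the transposition. The envelope function and all names here are illustrative stand-ins, not the paper's estimation method:

```python
import math

def spectral_envelope(freq_hz):
    """Toy spectral envelope (the 'filter'): a smooth roll-off standing in
    for an envelope estimated from a real instrument recording."""
    return math.exp(-freq_hz / 2000.0)

def transpose_partials(partial_freqs, ratio):
    """Source transformation: scale partial frequencies (pitch shift), then
    sample amplitudes from the *unchanged* envelope, so filter (timbre) and
    source (pitch) are controlled independently."""
    new_freqs = [f * ratio for f in partial_freqs]
    return [(f, spectral_envelope(f)) for f in new_freqs]

shifted = transpose_partials([220.0, 440.0, 660.0], ratio=1.5)  # up a fifth
```

Because the envelope is fixed while the partials move, the transposed partials pick up the amplitudes a real instrument would impose at those frequencies, which is the perceptual point of the source-filter separation.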

Spectral modeling for higher-level sound transformations

2001

When designing audio effects for music processing, we are always aiming at providing higher-level representations that may somehow fill the gap between the signal-processing world and the end user. Spectral models in general, and the Sinusoidal plus Residual model in particular, can sometimes offer ways to implement such schemes.

On the Use of Time-Frequency Reassignment in Additive Sound Modeling

Advances in Engineering Software, 2000

We introduce the use of the method of reassignment in sound modeling to produce a sharper, more robust additive representation. The Reassigned Bandwidth-Enhanced Additive Model follows ridges in a time-frequency analysis to construct partials having both sinusoidal and noise characteristics. This model yields greater resolution in time and frequency than is possible using conventional additive techniques, and …
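The resolution gain from frequency reassignment can be illustrated with one common estimator: instead of reporting a bin's centre frequency, take the phase increment of the same bin between two analyses one sample apart. A self-contained sketch (rectangular-grid DFT with a Hann window; the constants are illustrative, not from the paper):

```python
import cmath
import math

SR, N = 8000, 64
TRUE_F = 1030.0  # deliberately between bins: bin width is SR/N = 125 Hz
x = [math.sin(2 * math.pi * TRUE_F * n / SR) for n in range(N + 1)]
w = [0.5 - 0.5 * math.cos(2 * math.pi * n / N) for n in range(N)]  # Hann window

def windowed_bin(sig, start, k):
    """One bin of a windowed DFT over sig[start : start + N]."""
    return sum(sig[start + n] * w[n] * cmath.exp(-2j * math.pi * k * n / N)
               for n in range(N))

k = round(TRUE_F * N / SR)       # nearest analysis bin
bin_center_hz = k * SR / N       # conventional estimate: 1000 Hz, 30 Hz off

# Reassignment in frequency: instantaneous frequency from the phase
# increment between two analyses offset by one sample.
dphi = cmath.phase(windowed_bin(x, 1, k)) - cmath.phase(windowed_bin(x, 0, k))
dphi = (dphi + math.pi) % (2 * math.pi) - math.pi  # wrap into (-pi, pi]
reassigned_hz = dphi * SR / (2 * math.pi)          # lands near 1030 Hz
```

For a clean sinusoid the reassigned estimate recovers the true frequency to well under a hertz, versus a worst-case half-bin (62.5 Hz) error for the bin centre; ridge-following over such estimates is the spirit of the model described above.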

New Applications of the Sound Description Interchange Format

1998

The Sound Description Interchange Format (SDIF) is a recently-adopted standard that can store a variety of sound representations: time domain, spectral, higher-level models, etc. SDIF's design strikes a balance between an overly strong standard, which would restrict innovative and creative uses, and an overly weak standard, in which it is so easy to modify the format that each institution has its own incompatible version of SDIF, even when using the same sound representation. SDIF's structure as a sequence of time-tagged IFF-style chunks or "frames" supports this balanced extensibility: extra information can be added to standard frame types, and entirely new representations can be defined by new frame types. This paper reviews early uses of SDIF. IUA, IRCAM, and CNMAT have added SDIF support to existing tools for sound analysis and synthesis, using standard frame types for STFT results, spectral peaks, tracked sinusoids (whether pseudo-harmonic or not), and fundamenta...
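The "sequence of time-tagged IFF-style chunks" idea can be sketched with a toy reader/writer. Note the hedge: the header layout below (4-byte type, float64 time tag, uint32 payload size) is a simplified illustration of the concept, not the actual SDIF byte format, and the type codes are merely suggestive of the frame types the paper mentions:

```python
import struct

_HDR = ">4sdI"  # illustrative header: type, time tag, payload size (big-endian)

def pack_frame(frame_type: bytes, time_s: float, payload: bytes) -> bytes:
    """Pack one time-tagged, IFF-style chunk (simplified, NOT real SDIF)."""
    assert len(frame_type) == 4
    return struct.pack(_HDR, frame_type, time_s, len(payload)) + payload

def unpack_frames(blob: bytes):
    """Yield (type, time, payload) for each chunk in a byte stream."""
    off = 0
    while off < len(blob):
        ftype, t, size = struct.unpack_from(_HDR, blob, off)
        off += struct.calcsize(_HDR)
        yield ftype, t, blob[off:off + size]
        off += size

# Two frames at different times; a parser that only knows some types can
# still skip unknown ones via the size field -- the extensibility point.
stream = pack_frame(b"1TRC", 0.0, b"\x01\x02") + pack_frame(b"1STF", 0.5, b"")
frames = list(unpack_frames(stream))
```

The size-prefixed layout is what lets tools skip frame types they do not understand, which is how a format can be extended without breaking existing readers.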

Physically based sound modelling

Organised Sound, 1998

In multimedia art and communication, sound models are needed which are versatile, responsive to users' expectations, and have high audio quality. Moreover, model flexibility for human-machine interaction is a major issue. Models based on the physics of actual or virtual objects can meet all of these requirements, thus allowing the user to rely on high-level descriptions of the sounding entities. As long as the sound description is based on the physics of the sounding objects and not only on the characteristics of human hearing, an integration with physics-based graphic models becomes possible.

… Schafer (1977), who also introduced a catalogue of sounds organised according to referential attributes. Nowadays, a common terminology is available for describing sound objects both from a phenomenological or a referential viewpoint, and for describing collections of such objects (i.e. soundscapes) (Risset 1969, Truax 1978, McAdams 1987). For effective generation and manipulation of sound objects it is necessary to define models for sound synthesis, processing and composition. Identi…
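The "high-level description of the sounding entity" that physical models offer can be illustrated with the simplest possible case: a plucked damped mass-spring oscillator, integrated by explicit finite differences, whose physical parameters (stiffness, damping) map directly onto perceptual ones (pitch, decay time). All names and constants below are illustrative, not from the paper:

```python
import math

def pluck(freq_hz, decay_per_s, duration, sr=8000):
    """Sketch of a physics-based sound model: a damped harmonic oscillator
    x'' = -w^2 x - 2*sigma*x', stepped with a simple explicit scheme.
    freq_hz sets the stiffness (pitch), decay_per_s the damping (decay)."""
    w = 2 * math.pi * freq_hz
    dt = 1.0 / sr
    x_prev = x = 1.0          # "pluck": released from rest at max displacement
    out = []
    for _ in range(int(duration * sr)):
        out.append(x)
        accel = -w * w * x - 2 * decay_per_s * (x - x_prev) / dt
        x_prev, x = x, 2 * x - x_prev + accel * dt * dt
    return out

tone = pluck(220.0, decay_per_s=5.0, duration=0.1)
```

The user-facing controls are physical and intuitive (how stiff, how damped, where struck), which is exactly the interaction advantage the abstract claims over purely signal-based descriptions.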