Max/MSP/Jitter Research Papers - Academia.edu
2025, International Computer Music Conference (August …
The Audio Scene Description Format (ASDF) is a collaboratively evolving format for the storage and interchange of static, dynamic and interactive spatial audio content. This position paper briefly describes its current status and raises a list of open questions to be addressed in the panel discussion.
2025, International Computer Music Conference
We present sndtools, a set of cross-platform, open-source tools for simultaneously displaying related audio and visual information in real time. The distribution includes tools to extract spectral information, perform linear predictive coding analysis and resynthesis, manipulate pitch and time using a phase vocoder, and map text to Morse code. Each tool has closely related audio and visual (graphical or text) components and can be used for instructive purposes or for experimentation with sound. We show that hardware-accelerated graphics APIs such as OpenGL can be used to enable real-time 3D visualization of DSP algorithms.
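The text-to-Morse tool mentioned above can be sketched in a few lines. This is a minimal illustration, not the sndtools implementation: the function names are invented here, and the timing scheme (dot = 1 unit, dash = 3 units, 1-unit gap after each element) follows standard Morse conventions.

```python
# Minimal text-to-Morse mapper in the spirit of sndtools' text tool.
# Names and timing parameters are illustrative assumptions only.

MORSE = {
    'A': '.-', 'B': '-...', 'C': '-.-.', 'D': '-..', 'E': '.',
    'F': '..-.', 'G': '--.', 'H': '....', 'I': '..', 'J': '.---',
    'K': '-.-', 'L': '.-..', 'M': '--', 'N': '-.', 'O': '---',
    'P': '.--.', 'Q': '--.-', 'R': '.-.', 'S': '...', 'T': '-',
    'U': '..-', 'V': '...-', 'W': '.--', 'X': '-..-', 'Y': '-.--',
    'Z': '--..', ' ': '/',
}

def text_to_morse(text: str) -> str:
    """Encode text as Morse symbols, letters separated by spaces."""
    return ' '.join(MORSE[c] for c in text.upper() if c in MORSE)

def morse_durations(code: str, unit=0.1):
    """Turn a Morse string into (tone_on, duration) pairs for scheduling."""
    out = []
    for sym in code:
        if sym == '.':
            out += [(True, unit), (False, unit)]
        elif sym == '-':
            out += [(True, 3 * unit), (False, unit)]
        elif sym == ' ':
            out += [(False, 2 * unit)]   # letter gap totals 3 units
        elif sym == '/':
            out += [(False, 4 * unit)]   # word gap totals 7 units
    return out
```

The `(tone_on, duration)` pairs could then drive an oscillator's gate, which is where the audio/visual pairing described in the abstract would come in.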
2025, DCAC 2024: 6th International Conference on Digital Culture & AudioVisual Challenges, Interdisciplinary Creativity in Arts and Technology, Corfu-Greece & Online May 24–25, 2024
This article examines the role of interpretative mapping in electroacoustic composition through an analysis of two works for flute and electronics: Machine Gun and 59 Winds (Forton Music). These scores function as dual frameworks, designed to (1) faithfully represent the flute's extended techniques, encompassing multiphonics, microtonal inflections, and narrative-driven expression, and (2) experiment with schematic notation as a subjective interface between composer, performer, and electronic soundscapes. Both works resist conventional pitch-centric notation, instead adopting a spectral lens to decompose timbre, texture, and temporal evolution into hybrid visual vocabularies. By reimagining notation as a dynamic cartography, these works exemplify how electroacoustic scores can transcend prescriptive documentation, fostering instead a collaborative semiosis in which visual abstraction and acoustic intentionality coalesce. The conclusions drawn from this practical research concern an original style of music representation which, while grounded in the tradition of contemporary music scoring, offers the performer more possibilities to render the work as accurately as possible. Technology is evolving rapidly and offers many possibilities for recording music. Nevertheless, the composer's subjective impression of the music must be preserved in every case in order to convey the symbolism and anthropogenic elements of the composition. The digital imprinting of music thus remains a creative process, a kind of visual art, where the medium, the digital image, functions as the canvas on which we paint the music.
2025, International Computer Music Conference
In popular music genres, typical songs are pre-composed and leave little or no space for improvisation during a live performance. This applies to the performing musicians as well as to the spectators, in terms of interactive audience participation. In this study we question these improvisational limits and try to identify strategies for involving the audience as an additional and unpredictable factor in a pre-composed rock song. To do so we composed "Experimence", guided by the standard practice of songwriting. The song was premiered at a public live concert where the audience could collaboratively participate in real time by playing with a balloon together throughout the song. Using a Wizard of Oz technique, the movements of the balloon influenced the live music played by the pianist. We reflect on this experience and present notable issues raised during the composition, rehearsals and the actual performance. We then classify these aspects as abstract variables of consideration for a composition meant to promote such audience participation. We propose this proof of concept as a starting point for further discussion, suggesting that a song such as Experimence can be a unique and individual piece of music every time it is played, although largely pre-composed.
2025
Interview with Christoph Franke, producer and sound engineer at the Berlin Philharmonic Orchestra.
2025
Hello and welcome! Introduce yourselves to our readers! Notturno formed in Catania around 2020, at the initiative of myself, Claudia Caudullo, guitar and vocals, and Riccardo Liberti, bass and drum programming. The intention was to create a band with an atmospheric and powerful sound, but one also open to more progressive solutions. The core line-up was completed when Francesco Cucinotta joined on synths. Together we wrote and recorded the tracks of our debut album "Iceblink", which we self-produced in 2023. We are currently working on our second album.
2025, Proceedings of the ACM on human-computer interaction
2025, Alto e bom som: uma história da arte sonora brasileira. Rui Chaves e Fernando Iazzetta (organizadores)
field recording) is commonly used to indicate the practice of capturing sounds outside the recording studio. In its specific uses, this practice has served diverse purposes throughout its history, being instrumental to fields such as ornithology, acoustic ecology and bioacoustics, with the influential work of Ludwig Karl Koch, Raymond Murray Schafer and Bernie Krause, respectively; or to ethnomusicology, with the recordings of traditional songs made by Alan Lomax and Béla Bartók, or the efforts of Mário de Andrade, who in 1938 organised an expedition through the North and Northeast of Brazil, producing a record of inestimable value of the rich musical production of popular origin in those regions. The uses of field recordings in artistic works, a topic of significant interest for the present chapter, go back to the work of (among others) John Cage and his ideas concerning the liberation of noise, and to Pierre Schaeffer and musique concrète,
2024
Iannis Xenakis’ first electroacoustic work, Diamorphoses (for magnetic tape, 1957), has been studied and analyzed by various scholars, usually with analysis approaches based on listening and perceptual features. Information concerning the production means is limited and uncertain. Some authors mention the use of the phonogène as well as of the morphophone, magnetophonic processing systems developed in the '50s at the GRM in Paris, where the piece was made. Here, based on the historical sources and clues gathered, we put forth the hypothesis that the morphophone was used to produce very peculiar sonorities sometimes described as "glissando swarms" or in other, similar ways. To test the hypothesis in concrete operational terms, we resort to the Digital Morphophone Environment, a recent digital audio application written in Max. By reconstructing the accumulation process of innumerable short glissando sounds, we show the hypothesis to be essentially correct. The paper also provides hints as to how Xenakis might have tried to manage the 'density' of his thick sound textures.
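The accumulation process described above can be imitated in a few lines of code: many short sine glissandi, each mixed in at a random offset, pile up into a dense "glissando swarm". This is a toy sketch only; the sample rate, frequency ranges and voice counts are invented here and do not come from the Digital Morphophone Environment.

```python
# Toy sketch of morphophone-style accumulation: overlapping short
# glissandi summed into one buffer. All parameter values are
# illustrative assumptions, not values from the actual application.
import math
import random

SR = 8000  # a low sample rate keeps the sketch fast

def glissando(f0, f1, dur):
    """One short sine sweep from f0 to f1 Hz (linear in frequency)."""
    n = int(dur * SR)
    out, phase = [], 0.0
    for i in range(n):
        f = f0 + (f1 - f0) * i / n
        phase += 2 * math.pi * f / SR
        out.append(math.sin(phase))
    return out

def swarm(n_voices=50, total=2.0, seed=1):
    """Accumulate many glissandi at random offsets, like tape re-injection."""
    random.seed(seed)
    buf = [0.0] * int(total * SR)
    for _ in range(n_voices):
        g = glissando(random.uniform(200, 800), random.uniform(200, 800), 0.3)
        start = random.randrange(len(buf) - len(g))
        for i, s in enumerate(g):
            buf[start + i] += s / n_voices  # scale to avoid clipping
    return buf
```

Raising `n_voices` is one plausible handle on the 'density' question the paper raises: more simultaneous re-injections produce a thicker texture from the same elementary material.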
2024, HAL (Le Centre pour la Communication Scientifique Directe)
This paper presents preliminary works exploring the use of machine learning in computer-aided composition processes. We propose a work direction using motion recognition and audio descriptors to learn abstract musical gestures.
2024, HAL (Le Centre pour la Communication Scientifique Directe)
We report preliminary results of an ongoing project on the automatic recognition and classification of musical "gestures" from audio extracts. We use a machine learning tool designed for motion tracking and recognition, applied to labeled vectors of audio descriptors, in order to recognize hypothetical gestures formed by these descriptors. Our hypothesis is that the classes detected in audio descriptors can be used to identify higher-level/abstract musical structures which might not be described easily using standard/symbolic representations.
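The data flow described above — labeled descriptor vectors in, gesture classes out — can be illustrated with a deliberately simple stand-in classifier. The real project uses a motion-recognition tool; here a nearest-centroid rule over invented 2-D descriptor vectors only shows the shape of the pipeline.

```python
# Toy version of the pipeline: labeled vectors of audio descriptors
# (invented [spectral centroid, flux] values) classified by nearest
# class centroid. A stand-in for the actual motion-recognition tool.
import math

def centroids(labeled):
    """Mean descriptor vector per gesture label."""
    sums = {}
    for label, vec in labeled:
        s, n = sums.get(label, ([0.0] * len(vec), 0))
        sums[label] = ([a + b for a, b in zip(s, vec)], n + 1)
    return {lab: [x / n for x in s] for lab, (s, n) in sums.items()}

def classify(vec, cents):
    """Assign the gesture class whose centroid is nearest (Euclidean)."""
    return min(cents, key=lambda lab: math.dist(vec, cents[lab]))
```

A real system would replace the centroid rule with the trained recognizer, but the interface — descriptor vector in, gesture label out — stays the same.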
2024, International Computer Music Conference
We consider the issue of how a flexible musical space can be manipulated by users of an active music system. The musical space is navigated by selecting transitions between its different sections. We take inspiration from pheromone trails in ant colonies to propose and investigate an approach that allows an artificial agent to navigate such musical spaces in accordance with the preferences of the user and a set of boundaries specified by the designer of the musical space.
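The pheromone metaphor above translates directly into code: transitions the user approves are reinforced, every trail evaporates a little each step, and the agent samples its next section in proportion to trail strength. The evaporation and deposit rates below are illustrative assumptions, not values from the paper.

```python
# Sketch of pheromone-guided navigation between sections of a musical
# space. Rates and the section graph are illustrative assumptions.
import random

def choose(trails, current, rng):
    """Pick the next section with probability proportional to pheromone."""
    options = trails[current]
    total = sum(options.values())
    r, acc = rng.random() * total, 0.0
    for nxt, level in options.items():
        acc += level
        if r <= acc:
            return nxt
    return nxt  # guard against floating-point shortfall

def update(trails, current, chosen, liked, evap=0.1, deposit=1.0):
    """Evaporate every trail, then reinforce the chosen one if approved."""
    for options in trails.values():
        for k in options:
            options[k] *= (1 - evap)
    if liked:
        trails[current][chosen] += deposit
```

Evaporation is what keeps the space flexible: a transition the user stops approving gradually loses its advantage instead of being locked in, while the designer's boundaries would simply be edges absent from the `trails` graph.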
2024
This paper discusses the evolution of the Max/MSP patch used in schismatics (2007, rev. 2010) for electric violin (Violectra) and computer, by composer Sam Hayden in collaboration with violinist Mieko Kanno. schismatics involves a standard performance paradigm of a fixed notated part for the e-violin with sonically unfixed live computer processing. Hayden was unsatisfied with the early version of the piece: the use of attack detection on the live e-violin playing to trigger stochastic processes led to an essentially reactive behaviour in the computer, resulting in a somewhat predictable one-to-one sonic relationship between them. It demonstrated little internal relationship between the two beyond an initial e-violin ‘action’ causing a computer ‘event’. The revisions in 2010, enabled by an AHRC Practice-Led research award, aimed to achieve 1) a more interactive performance situation and 2) a subtler and more ‘musical’ relationship between live and processed sounds. This was realised t...
2024
DisPatch is the title of a dance work that was partly the outcome of research into ways in which movement in space might successfully correlate to the parameters of sound. The process of making the work in workshops, rehearsals and performance both altered our view of, and added to, the background research. It became evident in this process that certain outcomes, whilst alien to musically conventional ears, were nevertheless valid and interesting sonic translations of the choreography. The performance demonstrated how issues within and between disciplines can be re-problematised through the creative and innovative use of technology. It is our contention that this work is evidence of a disseminable "knowledge increase" that is intrinsic to "The Work", a primary assessable criterion in traditional text-based research (Biggs 2002; Melrose 2002). The question that is central to any assessment of intrinsic knowledge in a work of art is how the work demonstrates or explains this knowledge, and this question is examined in light of our reading of DisPatch. Extrinsic to DisPatch were a number of other contextualising influences by which spectators were able to inform their interpretation of the work: program notes, an informal verbal explanation of some of the enabling technology, the venue and its attendant architecture, ritual, expectation, and the broader cultural context of performance/concert-going.
2024, 2021 24th International Conference on Digital Audio Effects (DAFx)
We release synth1B1, a multi-modal audio corpus consisting of 1 billion 4-second synthesized sounds, paired with the synthesis parameters used to generate them. The dataset is 100x larger than any audio dataset in the literature. We also introduce torchsynth, an open-source modular synthesizer that generates the synth1B1 samples on-the-fly at 16200x faster than real time (714MHz) on a single GPU. In addition, we release two new audio datasets: FM synth timbre and subtractive synth pitch. Using these datasets, we demonstrate new rank-based evaluation criteria for existing audio representations. Finally, we propose a novel approach to synthesizer hyperparameter optimization.
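The core idea of a corpus like synth1B1 — sample synthesizer parameters, render the sound, keep the (audio, parameters) pair — fits in a few lines. The tiny FM voice below is an illustration only and is unrelated to torchsynth's actual module graph; all names and ranges are assumptions.

```python
# Miniature paired audio/parameter generator: draw random FM parameters,
# render a short tone, return both. Not torchsynth; purely illustrative.
import math
import random

SR = 4000    # illustrative sample rate
DUR = 0.25   # seconds per example

def render(params):
    """Render a short FM tone from a parameter dict."""
    n = int(SR * DUR)
    c, m, idx = params['carrier'], params['mod'], params['index']
    return [math.sin(2 * math.pi * c * t / SR
                     + idx * math.sin(2 * math.pi * m * t / SR))
            for t in range(n)]

def sample_pair(rng):
    """Draw random parameters and return the paired (audio, params)."""
    params = {'carrier': rng.uniform(100, 1000),
              'mod': rng.uniform(50, 500),
              'index': rng.uniform(0, 5)}
    return render(params), params
```

Because the parameters are sampled rather than stored, such a corpus can be regenerated on-the-fly from seeds, which is what makes a billion-sound dataset practical.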
2024, Teorias estrangeiras no Brasil: migração, enculturação, aculturação
In Brazil, at the beginning of the 21st century, the field of research called sonology emerged from the need of researchers and musicians, above all composers of "electroacoustic" or "computer" music, to understand sound more deeply — its nature and its forms of manipulation and articulation — in the context of the production and diffusion of musical works.
2024, Assodolab
This article discusses the evolution and impact of the vibraphone in jazz music. It highlights the instrument's late entry into the jazz scene, its invention in 1916, and how musicians like Lionel Hampton and Red Norvo pioneered its use. The article also explores the technical and stylistic innovations, such as the use of two or four mallets, which expanded the vibraphone's role in genres like swing, bebop, cool jazz, and free jazz. Finally, it emphasizes the vibraphone's enduring influence and versatility in modern jazz.
2024, Springer eBooks
This paper describes an approach to the music performance situation as a laboratory for investigating interactivity. I present "Leçons pour un apprenti sourd-muet", whose basic idea is that of two improvisers, a saxophonist and a computer, engaged in a series of musical questions and responses. The situation is inspired by the Japanese shakuhachi tradition, where imitating the master performer is a prime element in the apprentice's learning process. Through listening and imitation, the computer's responses get closer to those of its master with each turn. In this sense, the computer's playing emanates from the saxophonist's phrases, and the interactivity in "Leçons" happens at the level of the composition.
COMPOSITION PROCESS FORMALISED
The architecture of "Leçons" takes as its point of departure the information exchange between two agents in a conversation-like situation, as described in linguistic theory. Communication between a musician and a machine may be considered to have aspects of both natural and formal languages.
2024
The article outlines a psychoacoustically founded method to describe the acoustic performance of earphones in two dimensions, Spectral Shape and Stereo Image Coherence. In a test set of 14 typical earphones, these dimensions explained 66.2% of the total variability in 11 acoustic features based on Bark-band energy distribution. We designed an interactive Earphone Simulator software that allows smooth interpolation between measured earphones, and employed it in a controlled experiment (N=30). Results showed that the preferred 'virtual earphone' sound differed between the two test conditions, silence and commuter noise, both in terms of gain level and spectral shape. We discuss possible development of the simulator design for use in perceptual research as well as in commercial applications.
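The simulator's core operation — a 'virtual earphone' obtained by smoothly interpolating between the measured responses of real earphones — reduces to a per-band blend. The sketch below assumes responses are stored as per-band level profiles (e.g. Bark-band dB values); the representation is an assumption of this illustration, not the paper's.

```python
# Sketch of virtual-earphone interpolation: blend two measured per-band
# level profiles. The profile representation is an assumed simplification.

def interpolate(profile_a, profile_b, alpha):
    """Blend two per-band dB profiles; alpha=0 gives A, alpha=1 gives B."""
    assert len(profile_a) == len(profile_b), "profiles must share bands"
    assert 0.0 <= alpha <= 1.0
    return [(1 - alpha) * a + alpha * b
            for a, b in zip(profile_a, profile_b)]
```

Sweeping `alpha` continuously is what lets an experiment search the space between earphones for a preferred sound rather than comparing only the measured endpoints.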
2024
The king of the pampa turns his shirt inside out, to displease the layman as well as the idiots; he bends down and then, to his great surprise, finds nothing so dry as a sack of old beans.
2024
This chapter examines computer-assisted analysis and composition (CAAC) techniques in relation to the composition of my piece TreeTorika for chamber orchestra. I describe methods for analysing the musical features of a recording of a speech by Mao Zedong, in order to extract compositional material such as global form, melody, harmony and rhythm, and for developing rhythmic material. The first part focuses on large-scale segmentation, melody transcription, quantification and quantization. Automatic transcription of the voice was discarded in favour of an aural method using tools in Amadeus and Max/MSP. The data were processed in OpenMusic to optimise the accuracy and readability of the notation. The harmonic context was derived from the transcribed melody and from AudioSculpt partial-tracking and chord-sequence analyses. The second part of this chapter describes one aspect of computer-assisted composition, namely the use of the rhythm constraint library in OpenMusic to develop polyrhythmic textures. The flexibility of these techniques allowed the computer to assist me in all but the final phases of the work. In addition, attention is given to the artistic and political implications of using recordings of such a disputed public figure as Mao.
2024
We report results from an investigation into the relationships between acoustic performance, price, and perceived quality of earphones. In Singapore today, the most common situation in which people listen to music is while commuting; however, such environments generally have high ambient noise levels. A survey (N=94) of listener habits on buses and trains was conducted. Results showed that people use a wide range of earphones, both in terms of price and of measurable acoustic performance. Five typical earphone models were identified and employed in a perceptual experiment (N=15). Volunteers rated various aspects of earphone quality while listening to music under two conditions: studio silence and a reproduced commuter environment. Results showed that participants displayed a strong preference for in-ear earphones, which can be attributed to their better acoustic isolation compared with on-ear earphones. People tend to describe their music-listening experiences in terms of sonic clarity and noise isolation. We believe that these results can inform the development of an ecologically valid model of how noisy environments affect people's perception of audio quality and, through that, their experience of music. Such a model could inform consumers as well as manufacturers.
2024
Introduction | Aspects of Music Interactivity | A Definition of Interactivity | The Score | Degrees of Music Interactivity | Where Lies the Creativity of Interactive Systems? | Bibliography | Author Biography
2024
The present study reports results from an experiment that is part of the Soundscape Emotion Responses (SSER) study. We investigated the interaction between psychological and acoustic features in the perception of soundscapes. Participant features were estimated with
2024, arXiv (Cornell University)
We address the issue of editing musical performance data, in particular MIDI files representing human musical performances. Editing such sequences raises specific issues due to the ambiguous nature of musical objects. The first source of ambiguity is that musicians naturally produce many deviations from the metrical frame. These deviations may be intentional or subconscious, but they play an important role in conveying the groove or feeling of a performance. Relations between musical elements are also usually implicit, creating even more ambiguity. A note is in relation with the surrounding notes in many possible ways: it can be part of a melodic pattern, it can also play a harmonic role with the simultaneous notes, or be a pedal-tone. All these aspects play an essential role that should be preserved, as much as possible, when editing musical sequences. In this paper, we contribute specifically to the problem of editing non-quantized, metrical musical sequences represented as MIDI files. We first list a number of problems caused by the use of naive edit operations applied to performance data, using a motivating example. We then introduce a model, called Dancing MIDI, based on 1) two desirable, well-defined properties for edit operations and 2) two well-defined edit operations, with an implementation. We show that our model formally satisfies the two properties, and that it prevents most of the problems that occur with naive edit operations on our motivating example, as well as on a real-world example using an automatic harmonizer.
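The core concern above — naive edits destroy performed micro-timing — can be shown with a toy example. The note representation (onset times in seconds), the beat length, and both operation names below are assumptions of this sketch, not the paper's model.

```python
# Illustration of deviation-preserving vs naive edits on performance data.
# A note slightly "behind the beat" should stay behind the beat when moved.
BEAT = 0.5  # assumed seconds per beat

def naive_move(onsets, from_beat, to_beat):
    """Snap moved notes exactly onto the target beat (loses the groove)."""
    return [to_beat * BEAT if abs(t - from_beat * BEAT) < BEAT / 2 else t
            for t in onsets]

def preserving_move(onsets, from_beat, to_beat):
    """Shift notes by whole beats so performed deviations survive."""
    delta = (to_beat - from_beat) * BEAT
    return [t + delta if abs(t - from_beat * BEAT) < BEAT / 2 else t
            for t in onsets]
```

With an onset at 0.52 s (20 ms behind beat 1), the naive move to beat 3 lands it exactly on 1.5 s, while the preserving move keeps the 20 ms lag at 1.52 s — precisely the kind of property a model like the one described would formalise.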
2024
I discovered you on "Museo del Black Metal Italiano" with "Painting My Soul Black" (https://www.youtube.com/watch?v=LxAPMrQz3Qc), and was enchanted by your sound from the very first seconds after clicking on it! Thank you for accepting the interview! Introduce yourselves to our readers! Hi, thank you for the kind words. Incanto Lunare is a project formed by me (Mirco) and Nekrist. I take care of composing the tracks and recording the guitars and keyboards, while Nekrist writes the lyrics and is the band's official singer. To be precise, Nekrist became a permanent member of Incanto Lunare from the EP "Medieval Winterspell" (https://incantolunare.bandcamp.com/album/medievalwinterspell)
2024
Interactive scores propose a model for writing and executing interactive multimedia scores. The software I-SCORE implements this model using Hierarchical Time Stream Petri Nets (HTSPN). However, this model is very static: modelling new features would require a complete redesign of the network, or they sometimes cannot be expressed at all. Another problem of I-SCORE is that it does not provide good visual feedback on the execution of the scenario. In this work, we define and implement an interpreter of interactive scores using the synchronous programming language REACTIVEML. Our work takes advantage of the expressiveness of the reactive model and the power of functional programming to develop a more dynamic and simpler interpreter. Contrary to the Petri Net model, our approach allows us to model the hierarchical behaviour precisely, and permits easy prototyping of new features. We also propose a visualization system using the INSCORE environment that provides a real-time visualization...
2024, DEBATES - Cadernos do Programa de Pós-Graduação em Música
This article aims to discuss the role of the performer of electroacoustic music from a technical point of view, listing and characterising the equipment required for the capture, processing and diffusion of sound objects. To this end, we map the path travelled by the audio signal (which may be acoustic, electrical, electronic or digital in nature) during a mixed-music concert situation, and also discuss hypothetical situations and the choices involved in interpretation.
2024
Time mapping is a common feature in many (commercial and/or open-source) Digital Audio Workstations, allowing the musician to automate tempo changes of a musical performance or work, as well as to visualize the relation between score time (beats) and real/performance time (seconds). Unfortunately, available music production, performance and remixing tools implemented with web technologies like JavaScript and the Web Audio API do not offer any mechanism for flexible and seamless tempo manipulation and automation. In this paper, we present BPMTimeline, a time-mapping library providing a seamless mapping between score and performance time. To achieve this, we model tempo changes as tempo functions (a well-documented subject in the literature) and realize the mappings through the integral, and the inverse of the integral, of the tempo functions.
2024
The rapid advance of handheld and head-mounted mixed reality technologies comes with many opportunities to integrate 3D visualizations on top of screen-based 2D technologies for immersive analytics, such as the ability to walk through particle data. However, while mixed reality devices can render virtual worlds into the three-dimensional space of the physical world, integrating these devices with 2D immersive technologies is fundamentally constrained by the limited computing and networking resources of energy-efficient mobile devices. Towards a better understanding of these limitations, we design and implement a distributed particle rendering framework to bridge mobile mixed-reality devices with an immersive screen-based stage environment, using a Microsoft HoloLens, an NVIDIA Shield Tablet, and a multi-projector immersive data visualization environment. Using our framework, we characterize the rendering and networking constraints of the devices involved in the hybrid data visualization. ...
2024
If we need to define the special library, one way would be to say that it is the library that is not national, public, academic or school. This is how Unesco distinguished it in 1958, defining it at the same time as the library designed to serve a limited number of specialists, scientists, researchers, etc., whose collection contains material from specific subject fields, and which usually belongs to bodies such as ministries, scientific societies, professional associations, chambers, museums, industries, etc. (Unesco 1958). Another definition is: "the library in the workplace: services to commercial, governmental and other bodies, providing information in order to serve the purposes of their organisations" (Spiller 1998). It is interesting that in England the term workplace libraries is used, and research has indeed been carried out to study this category of libraries (Spiller 1992). An interesting retrospective of the historical development of special libraries, which flourished in the 20th century although their seeds had already appeared in earlier ones, appears in a 1976 special issue of the journal Library Trends which, in initially defining the term "special library", applies it both to the specialized library and to the specialized collection (Christianson 1976). A characteristic feature of special libraries is that they do not constitute a single category of libraries with common characteristics, as do, for example, school or academic libraries, given that they belong to bodies that differ in subject matter, legal framework, size and mission. Special libraries may be technical, legal, medical, etc.; they may belong to a collective body or to a business, and serve collective or private interests.
This heterogeneity means that it is not possible, or at least not equally easy, to establish standards and rules for dealing with issues related to the organisation and management of their material, as well as with the work and administration of the people employed in them. There is a rich literature on this subject, and it is indeed interesting to look over time
2024, Proceedings of the 18th International Audio Mostly Conference
Tone Transfer is a novel deep-learning technique for interfacing a sound source with a synthesizer, transforming the timbre of audio excerpts while preserving their musical content. Thanks to its good audio quality and continuous controllability, it has recently been applied in several audio processing tools. Nevertheless, it still presents several shortcomings related to poor sound diversity and limited transient and dynamic rendering, which we believe hinder its possibilities for articulation and phrasing in a real-time performance context. In this work, we present a discussion of current Tone Transfer architectures for the task of controlling synthetic audio with musical instruments, and of the challenges they pose for expressive performance. Next, we introduce Envelope Learning, a novel method for designing Tone Transfer architectures that maps musical events using a training objective at the synthesis-parameter level. Our technique can render note beginnings and endings accurately and for a variety of sounds; these are essential steps for improving musical articulation, phrasing, and sound diversity with Tone Transfer. Finally, we implement a VST plugin for real-time live use and discuss possibilities for improvement. CCS CONCEPTS • Applied computing → Sound and music computing; • Computing methodologies → Neural networks; • Human-centered computing → Interaction techniques.
2024, Proceedings of the SMC Conferences
SpatDIF, the Spatial Sound Description Interchange Format, is a lightweight, human-readable syntax for storing and transmitting spatial sound scenes, serving as a cross-platform, host-independent solution for spatial sound composition. The recent update to version 0.4 of the specification introduces the ability to define and store continuous trajectories on the authoring layer in a human-readable way, as well as to describe groups and source spreading. As a result, SpatDIF provides a new way to exchange higher-level authoring data across authoring tools, helping to preserve the artistic intent in spatial music.
2024, Proceedings of the SMC Conferences
We propose a multi-layer structure to mediate essential components in sound spatialization. This approach will facilitate artistic work with spatialization systems, a process which currently lacks structure, flexibility, and interoperability.
2024, Computer Music Journal
SpatDIF, the Spatial Sound Description Interchange Format, is an ongoing collaborative effort offering a semantic and syntactic specification for storing and transmitting spatial audio scene descriptions. The SpatDIF core is a lightweight minimal solution providing the most essential set of descriptors for spatial sound scenes. Additional descriptors are introduced as extensions, expanding the namespace and scope with respect to authoring, scene description, rendering, and reproduction of spatial sound. A general overview presents the principles informing the specification, as well as the structure and the terminology of the SpatDIF syntax. Two use cases exemplify SpatDIF's potential for pre-composed pieces as well as interactive installations, and several prototype implementations that have been developed show its real-life utility.
2024, Oxford Handbooks Online
The chapter discusses how bringing music and computation together in the curriculum offers socially grounded contexts for the learning of digital expression and creativity. It explores how algorithms codify cultural knowledge, how programming can assist students in understanding and manipulating cultural norms, and how these can play a part in developing a student’s musicianship. In order to highlight how computational thinking extends music education and builds on interdisciplinary links, the chapter canvasses the challenges, and solutions, involved in learning through algorithmic music. Practical examples from informal and school-based educational contexts are included to illustrate how algorithmic music has been successfully integrated with established and emerging pedagogical approaches.
2024
We propose an approach to digital audio effects using recombinant spatialization for signal processing. This technique, which we call Spatio-Operational Spectral Synthesis (SOS), relies on recent theories of auditory perception. The perceptual spatial phenomenon of objecthood is explored as an expressive musical tool.
2024
DisPatch is the title of a dance work that was partly the outcome of research into ways in which movement in space might successfully correlate to the parameters of sound. The process of making the work in workshops, rehearsals and performance both altered our view of, and added to, the background research. It became evident in this process that certain outcomes, whilst alien to musically conventional ears, were nevertheless valid and interesting sonic translations of the choreography. The performance demonstrated how issues within and between disciplines can be re-problematised through the creative and innovative use of technology. It is our contention that this work offers evidence for a disseminable "knowledge increase" that is intrinsic to "The Work", a primary assessable criterion in traditional text-based research (Biggs 2002; Melrose 2002). The question that is central to any assessment of intrinsic knowledge in a work of art is how the work demonstrates or explains this knowledge, and this question is examined in light of our reading of DisPatch. Extrinsic to DisPatch were a number of other contextualising influences by which spectators were able to inform their interpretation of the work: program notes, an informal verbal explanation of some of the enabling technology, the venue and its attendant architecture, ritual, expectation, and the broader cultural context of performance- and concert-going.
2024, Rotura – Revista De Comunicação, Cultura E Artes, 4(1).
MAD Clarinet 3.0 is an article that discusses a Static Artificial Intelligence (SAI) system dialoguing with a clarinet performance through a Tangible Acoustic Interface (TAI) system, using the software Max/MSP. The TAI system, which works through solid-borne vibrations, enables the clarinet to be used as an interface that triggers the computer's sonic response, by parametrising the clarinet's sonic characteristics, such as pitch and duration. The dialogue takes place at both the harmonic and the melodic level: whereas the harmonic paths are chosen randomly by the computer from among several possibilities, the melodic paths are shaped by measuring the clarinettist's preferences in real time. The article discusses the concepts, the system, the patches, and their potentialities. It thus arises from the junction of a clarinet, under a TAI system, with an SAI system and, from an artistic point of view, from using the computer as a performative partner with an essential role.
2024, Proceedings of the 15th International Audio Mostly Conference
Despite 2-factor authentication and other modern approaches, authentication by password is still the most commonly used method on the Internet. Unfortunately, as analyses show, many users still choose weak and easy-to-guess passwords. To alleviate the significant effects of this problem, systems often employ textual or graphical feedback to make the user aware of it, which often falls short of engaging the user and achieving the intended reaction, i.e., choosing a stronger password. In this paper, we introduce auditory feedback as a complementary method to remedy this problem, using the advantages of sound as an affective medium. We investigate the conceptual space of creating usable auditory feedback on password strength, including functional and non-functional requirements, influences, and design constraints. We present web-based implementations of four sonification designs for evaluating different characteristics of the conceptual space and define a research roadmap for optimization, evaluation, and applications. CCS CONCEPTS • Human-centered computing → Auditory feedback; Sound-based input/output; • Security and privacy → Usability in security and privacy.
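The kind of strength-to-sound mapping this abstract describes can be sketched as follows. This is a minimal illustration, not the paper's actual sonification designs: the score scale, frequency range, tempo range, and `strength_to_sound` name are all our own assumptions.

```python
def strength_to_sound(score):
    """Map a password-strength score (0 = weakest, 4 = strongest)
    to illustrative auditory-feedback parameters.

    Returns (base_frequency_hz, tempo_bpm, consonant): a weak password
    yields a low, slow, dissonant cue; a strong one a bright, fast,
    consonant cue. All ranges are assumptions for the sketch.
    """
    if not 0 <= score <= 4:
        raise ValueError("score must be in 0..4")
    base_freq = 220.0 * 2.0 ** (score / 4.0)   # one-octave span: 220-440 Hz
    tempo_bpm = 60 + 20 * score                # 60-140 BPM
    consonant = score >= 3                     # consonant interval only when strong
    return base_freq, tempo_bpm, consonant

print(strength_to_sound(0))   # weakest:   (220.0, 60, False)
print(strength_to_sound(4))   # strongest: (440.0, 140, True)
```

Monotone mappings like this keep the feedback interpretable without the user having to read anything, which is the affective advantage the abstract attributes to sound.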
2024, Blucher Design Proceedings
This article discusses the relationship between sound and image in design projects that use programming languages. Knowledge of these projects contributes to graphic design in an expanded context, in which graphic-visual messages are transmitted across different media, environments, and hypermedia platforms where sound participates actively, such as applications for mobile devices, projections on walls, augmented-reality installations, websites, and vignettes. Another issue to consider is the possibility of other players being involved in real time, or acting in the co-editing of the message.
2024, Proceedings of the SMC Conferences
In the following paper we present an innovative approach to coupling gaming, telematics, machinima, and opera to produce a hybrid performance art form and an arts+technology education platform. To achieve this, we leverage a custom Minecraft video game and sandbox mod together with the pd-l2ork real-time digital signal processing environment. The result is a malleable, telematic-ready platform capable of supporting a broad array of artistic forms beyond its original intent, including theatre and cinema, as well as machinima and other experimental genres.
2024, HAL (Le Centre pour la Communication Scientifique Directe)
2024, Aural teaching e nuove tecnologie: nuovi paradigmi per l'educazione musicale
A talk given as part of the Eufonica 2023 event, held in Bologna at Palazzo Re Enzo, during the round table entitled "The challenges of new music teaching: between inclusion, creativity and music technologies".
2024, International Journal of Serious Games
By using sound to convey the real-world environment, audio games can offer a completely new game experience. In this paper, we study how audio cues can be rendered in audio games and their effects on players' navigation skills, gaming experience, and immersion, as well as the factors that affect system performance when navigating by audio components. To this end, we propose an audio game in which blindfolded players move through a physical maze with the help of audio cues. To realise this game, a sonar headset based on ultrasonic sensors was designed and built through an iterative design-and-research effort; it maps features of the external environment into musical sounds. The gaming environment is a physical maze, which the blindfolded players enter while wearing the sonar headset. A user study was conducted to evaluate the effect of different sound-mapping techniques on navigation performance and playing experience in the proposed game. The results show that producing musical sound can lead to better navigation performance, game experience, and immersion in players.
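A sonar headset of the kind described maps an ultrasonic distance reading to a musical pitch. The sketch below illustrates one such mapping; the sensor range, pitch range, and quantization to semitones are illustrative assumptions, not the parameters used in the paper.

```python
def distance_to_frequency(distance_cm,
                          d_min=20.0, d_max=400.0,     # assumed sensor range (cm)
                          midi_low=48, midi_high=84):  # assumed pitch range (C3-C6)
    """Map an ultrasonic distance reading to a musical frequency.

    Nearer obstacles produce higher pitches, quantized to semitones
    so the output stays 'musical' rather than a continuous sweep.
    """
    # Clamp the reading to the usable sensor range.
    d = max(d_min, min(d_max, distance_cm))
    # Normalize: 0.0 at the far limit, 1.0 at the near limit.
    nearness = (d_max - d) / (d_max - d_min)
    # Quantize to a semitone step within the chosen MIDI note range.
    note = round(midi_low + nearness * (midi_high - midi_low))
    # Standard equal-temperament conversion (A4 = MIDI 69 = 440 Hz).
    return 440.0 * 2.0 ** ((note - 69) / 12.0)

# An obstacle at the far limit maps to the lowest note (C3, about 130.8 Hz).
print(round(distance_to_frequency(400.0), 1))
```

Quantizing to scale degrees rather than sweeping frequency continuously is one plausible way to make the cues "musical", which the study links to better navigation and immersion.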
2024, Organised Sound
This paper addresses questions regarding the performance of interactive music compositions through an examination of the author's own works. The questions emerge from the compositional impetus and the subsequent technical design of each of the works. The paper also examines some of the forces impacting the performance, preservation and long-term viability of interactive works and non-interactive electroacoustic compositions.
2024, inescporto.pt
In this paper we present an algorithm for real-time full-body skeletonization and visualization, implemented as two external objects for Max/MSP/Jitter. These objects are intended to provide an accurate description of bodily motion as captured by a video camera, to be used as a musical rhythm controller in interactive music systems.
2024, Dance education in practice
enjoys using Canvas for two of her courses. The Canvas website indicates that this cloud-based LMS, used by more than 3,000 school districts, institutions, and universities around the world, can be customized, is easy to use, and integrates seamlessly with many apps. The Canvas Instructor Guide (https://community.canvaslms.com/docs/DOC-10460) provides extensive instructions for using all aspects of this digital tool, starting with getting an account as an instructor.
2024, Lecture Notes in Computer Science
This paper describes the temporal model of a scheduler geared towards show control and live music applications. This model relies on multiple interrelated temporal axes, called timescales. Timescales allow scheduling computations using abstract dates and delays, much like a score uses symbolic positions and durations (e.g. bars, beats, and note values) to describe musical time. Abstract time is ultimately mapped onto wall-clock time through time transformations, specified as tempo curves, for which we provide a formalism in terms of differential equations on symbolic position. In particular, our model allows tempo to be specified either as a function of time or as a function of symbolic position, and allows piecewise tempo curves to be built from parametric curves.
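The tempo-curve formalism the abstract alludes to can be illustrated as follows; the notation (T for tempo in beats per second, p for symbolic position, t for wall-clock time) is our own assumption, not taken from the paper.

```latex
% Tempo given as a function of wall-clock time: integrate tempo to get position.
% Tempo given as a function of symbolic position: integrate reciprocal tempo to get time.
\[
  \frac{dp}{dt} = T(t)
  \;\Longrightarrow\;
  p(t) = p_0 + \int_{t_0}^{t} T(\tau)\,d\tau,
  \qquad
  \frac{dp}{dt} = T(p)
  \;\Longrightarrow\;
  t(p_1) = t_0 + \int_{p_0}^{p_1} \frac{dp}{T(p)}.
\]
```

Both forms are differential equations on symbolic position, as the abstract states; a piecewise tempo curve simply splits these integrals at the curve's breakpoints.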