Melody Characterization by a Genetic Fuzzy System
Related papers
TOWARDS A HUMAN-FRIENDLY MELODY CHARACTERIZATION BY AUTOMATICALLY INDUCED RULES
There is increasing interest in music information retrieval in extracting a reference, motive, or thumbnail from a piece, so as to obtain a compact and representative summary of the information to be retrieved. One of the main references for a piece of music is its melody. In a practical setting, symbolic collections are often stored in the standard MIDI file format, structured as a number of tracks, usually one containing the melodic line while the others contain the accompaniment. The goal of this work is to analyse how statistical rules can be used to characterize a melody in such a way that one can understand the decisions of an automatic system that selects the track containing the melody in such files.
Fuzzy rules in computer-assisted music interpretation
Proceedings of the …, 2005
In this paper we describe fuzzy rules used in the developed prototype of a "fuzzy music interpretation system" [4]. The core of this system consists of two essential units, the rule base and the inference machine. The rule base contains general IF-THEN interpretation rules, formulated by an experienced pianist. The inference machine contains both conventional and advanced fuzzy information processing strategies. Once the system is fed with the information contained in the score of Beethoven's "Für Elise" (the notes and special signs such as "ppp" and "legato", coded in accordance with the MIDI format), it generates an interpretation of this piece of music and renders it in the form of a MIDI file. Certain refinement parameters allow us to modify the character of the interpretation.
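The rule-based interpretation idea above can be illustrated with a toy example. The rule, the membership function, and all parameter values below are assumptions made for illustration, not the paper's actual rule base: a single rule "IF the note is HIGH in pitch THEN play it LOUDER", with a ramp membership function whose firing degree blends the MIDI velocity.

```python
# Illustrative single fuzzy rule (assumed, not from the paper):
# "IF the note is HIGH in pitch THEN play it LOUDER".

def mu_high(pitch, lo=60, hi=84):
    """Degree in [0, 1] to which a MIDI pitch counts as 'high'
    (linear ramp from lo to hi; lo/hi are illustrative values)."""
    return max(0.0, min(1.0, (pitch - lo) / (hi - lo)))

def interpret_velocity(pitch, base_velocity=64, boost=20):
    """Blend the base velocity toward base_velocity + boost
    by the rule's firing degree."""
    w = mu_high(pitch)
    return round(base_velocity + w * boost)

print(interpret_velocity(72))  # pitch halfway up the ramp → 74
```

A real system would aggregate many such rules before defuzzifying; this sketch shows only the membership-and-blend mechanism.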
A fuzzy rule model for high level musical features on automated composition systems
2017
Algorithmic composition systems are now well understood. However, when they are used for specific tasks, such as creating material for a part of a piece, it is common to prefer, among all possible outputs, those exhibiting specific properties. Even though the number of valid outputs is huge, the selection is often performed manually, either using expertise in the algorithmic model, by means of sampling techniques, or sometimes even by chance. This process has traditionally been automated using machine learning techniques. However, whether these techniques are really capable of capturing, to a great degree, the human rationality through which the selection is made remains an open question. The present work discusses a possible approach that combines expert opinion with a fuzzy rule-extraction methodology to model high-level features. An early implementation able to explore the universe of outputs of a particular algorithm by means of the extract...
Melodic track identification in MIDI files
The objective of this work is to find the melodic line in MIDI files. Usually, the melodic line is stored in a single track, while the other tracks contain the accompaniment. Detecting the track that contains the melodic line can be very useful for a number of applications, such as melody matching when searching MIDI databases. The system was developed using WEKA. First, a set of descriptors is extracted from each track of the target file. These descriptors are the input to a random forest classifier that assigns to each track a probability of being the melodic line. Tracks with a probability under a given threshold are filtered out, and the one with the highest probability is selected as the melodic line of that file. Promising results were obtained testing different MIDI databases.
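The final selection step the abstract describes (filter by a threshold, then take the most probable track) can be sketched as follows. The classifier is assumed to have already supplied one melody probability per track, and the threshold value is an illustrative assumption:

```python
# Sketch of the selection step: drop tracks below a probability
# threshold, then pick the most probable remaining track.

def select_melody_track(track_probs, threshold=0.5):
    """track_probs: list of (track_index, melody_probability) pairs,
    e.g. as produced by a trained classifier (assumed available)."""
    candidates = [(i, p) for i, p in track_probs if p >= threshold]
    if not candidates:
        return None  # no track looks melodic (e.g. a drum-only file)
    return max(candidates, key=lambda c: c[1])[0]

# Track 2 has the highest melody probability.
print(select_melody_track([(0, 0.1), (1, 0.4), (2, 0.9), (3, 0.7)]))  # → 2
```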
A Pattern Recognition Approach for Melody Track Selection in MIDI Files
2006
Standard MIDI files contain data that can be considered a symbolic representation of music (a digital score), and most of them are structured as a number of tracks. One of them usually contains the melodic line of the piece, while the other tracks contain accompaniment music. The goal of this work is to identify the track that contains the melody using statistical properties of the musical content and pattern recognition techniques. Finding that track is very useful for a number of applications, like speeding up melody matching when searching in MIDI databases or motif extraction, among others. First, a set of descriptors is extracted from each track of the target file. These descriptors are the input to a random forest classifier that assigns the probability of being a melodic line to each track. The track with the highest probability is selected as the one containing the melodic line of that MIDI file. Promising results have been obtained testing a number of databases of different music styles.
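As an illustration of the kind of per-track statistical descriptors such a system might compute (the abstract does not list the actual descriptor set, so the particular statistics below are assumptions), consider simple summaries over a track's notes:

```python
# Hypothetical per-track descriptors: simple statistics over a track's
# notes, each note given as a (midi_pitch, duration_in_beats) pair.
# The track is assumed non-empty.

def track_descriptors(notes):
    pitches = [p for p, _ in notes]
    durations = [d for _, d in notes]
    n = len(notes)
    return {
        "note_count": n,
        "pitch_mean": sum(pitches) / n,
        "pitch_range": max(pitches) - min(pitches),
        "mean_duration": sum(durations) / n,
    }

print(track_descriptors([(60, 1.0), (64, 0.5), (67, 0.5)]))
```

Vectors like these, one per track, would then be fed to the classifier described above.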
2004
The modeling of the behavior of human process operators using fuzzy methods is well established in technical applications. We investigate whether these fuzzy strategies can also be successfully applied to modeling the manner in which a musician interprets a piece of music. For L. v. Beethoven’s composition “Für Elise” we set up a base of 150 fuzzy rules that relate to features laid down explicitly or implicitly in the score, and produce situation-dependent variations of the volume and of the start and end times of the notes. For sensitive processing of these rules, advanced fuzzy strategies are applied. The interpretation obtained is encouraging. We are aware that human artistry in interpreting music is much more sophisticated than what we can presently imitate. However, we consider it a challenge, not an a priori fruitless question, to determine to what extent it is possible to automate the interpretation of music by fuzzy methods. At the same time, we consider the problem...
Musical genre classification by means of fuzzy rule-based systems: A preliminary approach
2011
Musical genre is part of the basic information required for classifying musical audio, and is fundamental for music information retrieval systems. The problem of automatic musical genre detection has attracted considerable attention in the last decade, due to the emergence of digital music databases and the Internet. Although a number of techniques have been applied to the problem, no general solution exists yet, owing to the imprecision of the features that define musical genre.
A fuzzy model for optical recognition of musical scores
2004
Optical music recognition aims at automatically reading scanned scores in order to convert them into an electronic format, such as a MIDI file. We only consider classical monophonic music here: we exclude any music written on several staves, but also any music that contains chords. In order to overcome recognition failures due to the lack of methods dealing with structural information, non-local rules and corrections, we propose a recognition approach integrating structural information in the form of relationships between symbols and of musical rules. Another contribution of this paper is to solve ambiguities by accounting for sources of imprecision and uncertainty within the fuzzy set and possibility theory framework. We add to the single-symbol analysis several rules for checking the consistency of hypotheses: graphical consistency (compatibility between accidental and note, between grace note and note, between note and augmentation dot, etc.) and syntactic consistency (accidentals, tonality, metric). All these rules are combined in order to lead to better decisions. Experimental results on 65 music sheets show that our approach leads to very good results, and is able to correct errors made by other approaches, such as that of SmartScore.
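The rule-combination idea can be illustrated with a minimal sketch, assuming (as the possibility-theory framework suggests) that individual consistency checks yield degrees in [0, 1] combined by a fuzzy conjunction (min), after which the hypothesis with the highest combined degree is kept. The hypothesis names and degrees below are invented for illustration:

```python
# Illustrative sketch (not the paper's actual rules): combine the
# degrees of several consistency checks on a symbol hypothesis with
# min (fuzzy conjunction), and keep the best hypothesis.

def combined_degree(checks):
    """checks: degrees in [0, 1] from individual consistency rules."""
    return min(checks)

def best_hypothesis(hypotheses):
    """hypotheses: {name: [degrees from each consistency rule]}."""
    return max(hypotheses, key=lambda h: combined_degree(hypotheses[h]))

print(best_hypothesis({
    "quarter_note": [0.9, 0.8, 0.7],  # graphical + syntactic checks
    "eighth_note":  [0.9, 0.3, 0.8],
}))  # → quarter_note
```

One weak check (0.3) is enough to sink a hypothesis under min-combination, which is what makes the approach able to veto graphically plausible but syntactically inconsistent readings.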
Melody Identification in Standard MIDI Files
Proceedings of the SMC Conferences, 2019
Melody identification is an important early step in music analysis. This paper presents a tool to identify the melody in each measure of a Standard MIDI File. We also share an open dataset of manually labeled music for researchers. We use a Bayesian maximum-likelihood approach and dynamic programming as the basis of our work. We have trained parameters on data sampled from the Million Song Dataset [1, 2] and tested on a dataset including 1703 measures of music from different genres. Our algorithm achieves an overall accuracy of 89% on the test dataset. We compare our results to previous work.
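A minimal sketch of the kind of dynamic program such a measure-by-measure approach suggests (the paper's actual model and parameters are not reproduced here; the switch penalty and the log-likelihoods in the example are assumptions): pick, per measure, the track most likely to carry the melody, with a penalty for switching tracks between measures so the assignment stays stable.

```python
# Viterbi-style sketch: best track per measure, penalizing switches.

def melody_path(likelihoods, switch_penalty=1.0):
    """likelihoods[m][t]: log-likelihood that track t holds the melody
    in measure m. Returns the best track index for every measure."""
    n_tracks = len(likelihoods[0])
    score = list(likelihoods[0])  # best score ending at each track
    back = []                     # back-pointers, one row per transition
    for row in likelihoods[1:]:
        prev = score
        score, ptrs = [], []
        for t in range(n_tracks):
            best = max(range(n_tracks),
                       key=lambda s: prev[s] - (switch_penalty if s != t else 0.0))
            score.append(row[t] + prev[best]
                         - (switch_penalty if best != t else 0.0))
            ptrs.append(best)
        back.append(ptrs)
    # Trace back from the best final state.
    t = max(range(n_tracks), key=lambda i: score[i])
    path = [t]
    for ptrs in reversed(back):
        t = ptrs[t]
        path.append(t)
    return path[::-1]
```

With a large switch penalty the path ignores a brief dip in one track's likelihood; with no penalty it greedily follows the per-measure maximum.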
Automated algorithmic composition systems are now well understood. However, their capacity to accurately manage high-level musical features (such as coherence, emotion, or personality) is still a matter of discussion, and much research remains to be done in this area. In the present work, a Fuzzy Inductive Reasoning (FIR) methodology is used to improve coherence among beat patterns structured in a musical A-B form. Patterns were generated based on a probability matrix, encoding a particular musical style, designed by experts. Then, all possible patterns were generated and the most probable ones were selected. With the selected patterns, A-B musical forms were created, and the coherence of each sequence was evaluated by experts using linguistic quantities. The output pairs (an A-B pattern and its qualification) were used as inputs to train a FIR system, and the variables that produce “coherent” outputs, together with the relations among them, were identified as rules. Finally, the extracted rules are discussed in the context of the musical form, taking into account the psychological perception of patterns.
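The pattern-generation-and-selection step described above can be sketched under simplifying assumptions (a per-event probability table with positions treated as independent; this independence, and the event names, are assumptions of the sketch, not the paper's actual style matrix): score every candidate beat pattern and keep the k most probable.

```python
# Sketch: enumerate all patterns over an event alphabet, score each by
# the product of per-event probabilities, keep the k most probable.
from itertools import product

def pattern_prob(pattern, prob):
    p = 1.0
    for event in pattern:
        p *= prob[event]
    return p

def top_patterns(prob, length, k):
    """prob: {event: probability}; returns the k most probable
    patterns of the given length (ties kept in enumeration order)."""
    candidates = [tuple(pat) for pat in product(list(prob), repeat=length)]
    candidates.sort(key=lambda pat: pattern_prob(pat, prob), reverse=True)
    return candidates[:k]

style = {"kick": 0.5, "snare": 0.3, "rest": 0.2}  # illustrative matrix
print(top_patterns(style, 2, 1))  # → [('kick', 'kick')]
```

The selected patterns would then be paired into A-B forms and rated by experts, as the abstract describes.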