Approximate Semantic Matching of Music Classes on the Internet
Related papers
Computing the Semantic Relatedness of Music Genre using Semantic Web Data
2016
Computing the semantic relatedness between two entities has many application domains. In this paper, we show a new way to compute the semantic relatedness between two resources using semantic web data. Moreover, we show how this measure can compute the semantic relatedness between music genres, which is useful for music recommendation systems. We first describe how to build vector representations for resources in an ontology. Subsequently, we show how these vector representations can be used to compute the semantic relatedness of two resources. Finally, as an application, we show that our measure can be used to compute the semantic relatedness of music genres.
CCS Concepts: • Information systems → Similarity measures; Language models
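The core idea of comparing resources via vector representations can be sketched with cosine similarity. Everything below is illustrative: the genre names and vector values are invented, and the paper's actual vectors are built from semantic web data rather than hand-assigned.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# Hypothetical vectors for three genres, e.g. weights over
# shared semantic-web resources (features are made up here).
genres = {
    "rock":      [0.9, 0.7, 0.1, 0.0],
    "hard rock": [0.8, 0.8, 0.2, 0.0],
    "baroque":   [0.0, 0.1, 0.9, 0.8],
}

print(cosine(genres["rock"], genres["hard rock"]))  # high
print(cosine(genres["rock"], genres["baroque"]))    # low
```

A recommender could then rank candidate genres for a user by their cosine score against the genres the user already listens to.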
Automatic interlinking of music datasets on the semantic web
2008
In this paper, we describe current efforts towards interlinking music-related datasets on the Web. We first explain some initial interlinking experiences, and the poor results obtained by taking a naïve approach. We then detail a particular interlinking algorithm, taking into account both the similarities of web resources and of their neighbours. We detail the application of this algorithm in two contexts: to link a Creative Commons music dataset to an editorial one, and to link a personal music collection to corresponding web identifiers. The latter provides a user with personally meaningful entry points for exploring the web of data, and we conclude by describing some concrete tools built to generate and use such links.
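The neighbour-aware matching idea, where two resources are compared both by their own labels and by the labels of their graph neighbours, can be sketched as follows. The weighting scheme, the `alpha` parameter, and the data structures are assumptions for illustration, not the paper's actual interlinking algorithm; the example reuses the well-known ambiguity between the two bands named Nirvana.

```python
from difflib import SequenceMatcher

def literal_sim(a, b):
    """String similarity in [0, 1] between two labels."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def resource_sim(r1, r2, alpha=0.6):
    """Blend a resource's own label similarity with the average
    best-match similarity among its neighbours (illustrative weights)."""
    own = literal_sim(r1["label"], r2["label"])
    if not r1["neighbours"] or not r2["neighbours"]:
        return own
    neigh = sum(
        max(literal_sim(n1, n2) for n2 in r2["neighbours"])
        for n1 in r1["neighbours"]
    ) / len(r1["neighbours"])
    return alpha * own + (1 - alpha) * neigh

# Two datasets both contain an artist labelled "Nirvana";
# neighbouring album titles disambiguate which band is meant.
a = {"label": "Nirvana", "neighbours": ["Nevermind", "In Utero"]}
b = {"label": "Nirvana", "neighbours": ["The Story of Simon Simopath"]}
c = {"label": "Nirvana", "neighbours": ["Nevermind", "Bleach"]}

print(resource_sim(a, c) > resource_sim(a, b))  # True: neighbours disambiguate
```

The naïve label-only approach would score `a` against `b` and `c` identically; the neighbour term is what breaks the tie.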
Bridging the Music Semantic Gap
In this paper we present the music information plane and the different levels of information extraction that exist in the musical domain. Based on this approach we propose a way to overcome the existing semantic gap in the music field. Our approximation is twofold: we propose a set of music descriptors that can automatically be extracted from the audio signals, and a top-down approach that adds explicit and formal semantics to these annotations. These music descriptors are generated in two ways: as derivations and combinations of lower-level descriptors and as generalizations induced from manually annotated databases by the intensive application of machine learning. We belive that merging both approaches (bottom-up and top-down) can overcome the existing semantic gap in the musical domain.
Local and global Semantic Networks for the representation of music information
2016
In the field of music informatics, multilayer representation formats are becoming increasingly important, since they enable an integrated and synchronized representation of the various entities that describe a piece of music, from the digital encoding of score symbols to its typographic aspects and audio recordings. Often these formats are based on the eXtensible Markup Language (XML), which allows information embedding, hierarchical structuring and interconnection within a single document. Simultaneously, the advent of the so-called Semantic Web is leading to the transformation of the World Wide Web into an environment where documents are associated with data and metadata. XML is also used extensively in the Semantic Web, since this format supports not only human- but also machine-readable tags. On the one hand, the Semantic Web aims to create a set of automatically-detectable relationships among data, thus providing users with a number of non-trivial paths to navigate information in...
Comparative Analysis of Content-Based and Context-Based Similarity on Musical Data
IFIP Advances in Information and Communication Technology, 2011
Similarity measurement between two musical pieces is a hard problem. Humans perceive such similarity by employing a large amount of contextually semantic information. Commonly used content-based methodologies rely on information that includes little or no semantic information, and thus are reaching a performance "upper bound". Recent research pertaining to contextual information assigned as free-form text (tags) in social networking services has indicated tags to be highly effective in improving the accuracy of music similarity. In this paper, we perform a large scale (20k real music data) similarity measurement using mainstream content and context methodologies. In addition, we test the accuracy of the examined methodologies against not only objective metadata but also real-life user listening data. Experimental results illustrate the conditionally substantial gains of the context-based methodologies, and the not-so-close match of these methods with similarity derived from real user listening data.
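The context-based idea of comparing tracks by their social tags can be sketched with a simple set-overlap measure such as Jaccard similarity. The tags below are invented, and the paper's actual context methodologies may well use richer weighting; this only illustrates the principle.

```python
def jaccard(tags_a, tags_b):
    """Jaccard similarity between two tag sets, in [0, 1]."""
    a, b = set(tags_a), set(tags_b)
    return len(a & b) / len(a | b) if a | b else 0.0

# Hypothetical free-form tags assigned by listeners to three tracks.
track1 = ["rock", "grunge", "90s", "guitar"]
track2 = ["grunge", "rock", "seattle", "90s"]
track3 = ["classical", "piano", "baroque"]

print(jaccard(track1, track2))  # 3 shared / 5 total = 0.6
print(jaccard(track1, track3))  # no overlap = 0.0
```

Unlike content descriptors extracted from audio, the tag sets directly encode listener-level semantics ("grunge", "seattle"), which is what the abstract credits for the accuracy gains.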
Publishing Music Similarity Features on the Semantic Web
We describe the process of collecting, organising and publishing a large set of music similarity features produced by the SoundBite [10] playlist generator tool. These data can be a valuable asset in the development and evaluation of new Music Information Retrieval algorithms. They can also be used in Web-based music search and retrieval applications. For this reason, we make a database of features available on the Semantic Web via a SPARQL end-point, which can be used in Linked Data services. We provide examples of using the data in a research tool, as well as in a simple web application which responds to audio queries and finds a set of similar tracks in our database.
On providing semantic alignment and unified access to music library metadata
International Journal on Digital Libraries, 2017
A variety of digital data sources-including institutional and formal digital libraries, crowd-sourced community resources, and data feeds provided by media organisations such as the BBC-expose information of musicological interest, describing works, composers, performers, and wider historical and cultural contexts. Aggregated access across such datasets is desirable as these sources provide complementary information on shared real-world entities. Where datasets do not share identifiers, an alignment process is required, but this process is fraught with ambiguity and difficult to automate, whereas manual alignment may be time-consuming and error-prone. We address this problem through the application of a Linked Data model and framework to assist domain experts in this process. Candidate alignment suggestions are generated automatically based on textual and contextual similarity. The latter is determined according to user-configurable weighted graph traversals. Match decisions confirming or disputing the candidate suggestions are obtained in conjunction with user insight and expertise. These decisions are integrated into the knowledge base, enabling further iterative alignment, and simplifying the creation of unified viewing interfaces. Provenance of the musicologist's judgement is captured and published, supporting scholarly discourse and counter-proposals. We present our implementation and evaluation of this framework, conducting a user study with eight musicologists. We further demonstrate the value of our approach through a case study.
Comparing content and context based similarity for musical data
Neurocomputing, 2013
Similarity measurement between two musical pieces is a hard problem. Humans perceive such similarity by employing a large amount of contextually semantic information. Commonly used content-based methodologies rely on data descriptors of limited semantic value, and thus are reaching a performance "upper bound". Recent research pertaining to contextual information assigned as free-form text (tags) in social networking services has indicated tags to be highly effective in improving the accuracy of music similarity. In this paper, a large scale (20k real music data) similarity measurement is performed using mainstream off-the-shelf methodologies relying on both content and context. In addition, the accuracy of the examined methodologies is tested against not only objective metadata but also real-life user listening data. Experimental results illustrate the conditionally substantial gains of the context-based methodologies, and the not-so-close match of these methods with similarity based on real user listening data.
Inferring Semantic Facets of a Music Folksonomy with Wikipedia
Journal of New Music Research, 2013
Music folksonomies include both general and detailed descriptions of music, and are usually continuously updated. These are significant advantages over music taxonomies, which tend to be incomplete and inconsistent. However, music folksonomies have an inherent loose and open semantics, which hampers their use in many applications, such as structured music browsing and recommendation. In this paper, we present a system that can (1) automatically obtain a set of semantic facets underlying the folksonomy of the social music website Last.fm, and (2) categorize Last.fm tags with respect to the obtained facets. The semantic facets are anchored upon the structure of Wikipedia, a dynamic repository of universal knowledge.
Finding Music Formal Concepts Consistent with Acoustic Similarity
In this paper, we present a method of finding conceptual clusters of music objects based on Formal Concept Analysis. A formal concept (FC) is defined as a pair of extent and intent, which are sets of objects and terminological attributes commonly associated with the objects, respectively. Thus, an FC can be regarded as a conceptual cluster of similar objects whose similarity can clearly be stated in terms of the intent. We especially discuss FCs in the case of music objects, called music FCs. Since a music FC is based solely on terminological information, extracted FCs are not always satisfiable from an acoustic point of view. In order to improve their quality, we additionally require our FCs to be consistent with acoustic similarity. We design an efficient algorithm for extracting desirable music FCs. Our experimental results for The MagnaTagATune Dataset show the usefulness of the proposed method.
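The extent/intent duality can be illustrated with a toy formal context of music objects and terminological attributes. The tracks and tags below are hypothetical, and the brute-force enumeration shown here is only for clarity; the paper designs an efficient extraction algorithm and additionally filters concepts by acoustic similarity, neither of which is reproduced.

```python
from itertools import combinations

# Hypothetical formal context: which attribute applies to which object.
context = {
    "trackA": {"rock", "guitar", "vocal"},
    "trackB": {"rock", "guitar"},
    "trackC": {"jazz", "piano"},
    "trackD": {"jazz", "piano", "vocal"},
}
attributes = set().union(*context.values())

def extent(intent_set):
    """All objects having every attribute in the intent."""
    return {o for o, attrs in context.items() if intent_set <= attrs}

def intent(objs):
    """All attributes common to every object in objs."""
    return set.intersection(*(context[o] for o in objs)) if objs else set(attributes)

# A pair (E, I) is a formal concept iff E = extent(I) and I = intent(E).
# Brute force: test every attribute subset for closure.
concepts = set()
for r in range(len(attributes) + 1):
    for combo in combinations(sorted(attributes), r):
        i = set(combo)
        if intent(extent(i)) == i:
            concepts.add((frozenset(extent(i)), frozenset(i)))

for e, i in sorted(concepts, key=lambda c: (-len(c[0]), sorted(c[1]))):
    print(sorted(e), sorted(i))
```

For instance, ({trackA, trackB}, {rock, guitar}) is a concept here: exactly those two tracks carry both tags, and those two tags are exactly what the pair shares, so the cluster's similarity is fully explained by its intent.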