The semantic interpretation of music audio analysis relies heavily on the availability of formal structures that encode relevant domain knowledge. The use of ontological models to access and integrate knowledge repositories is an important contribution, improving knowledge-based reasoning and music information retrieval (MIR) systems alike. However, manual annotation of data and the construction of ontologies are laborious tasks, so automated systems are needed to ease knowledge management and ontology engineering. To address this need, we developed a hybrid system that automatically generates the class hierarchy of an ontology from the acoustical analysis of isolated notes and solo performances played on various musical instruments.
The system performs two main tasks: i) musical instrument recognition, and ii) the construction of instrument concept hierarchies. In the first task, the hybrid system uses either a Multi-Layer Perceptron (MLP) neural network or Support Vector Machines (SVMs) to model the relationships between instruments (e.g., violin) and their attributes (e.g., bowed) using content-based timbre features. In the second task, the output of the instrument recognition system is processed using Formal Concept Analysis (FCA) to construct a hierarchy of instrument concepts.
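The second task can be illustrated with a minimal sketch of Formal Concept Analysis. The toy formal context below (instrument names and attributes are hypothetical, not taken from the paper's data) pairs objects with attribute sets; a formal concept is a pair (extent, intent) where the extent is exactly the set of objects sharing the intent's attributes, and ordering concepts by extent inclusion yields the hierarchy:

```python
from itertools import combinations

# Hypothetical toy formal context: instruments (objects) x attributes.
# In the described system, these attribute assignments would come from
# the instrument recognition stage, not be hand-written as here.
context = {
    "violin": {"string", "bowed"},
    "cello":  {"string", "bowed"},
    "guitar": {"string", "plucked"},
    "flute":  {"wind", "blown"},
}
attributes = set().union(*context.values())

def extent(attrs):
    """Objects possessing every attribute in attrs."""
    return {obj for obj, a in context.items() if attrs <= a}

def intent(objs):
    """Attributes shared by every object in objs."""
    if not objs:
        return set(attributes)
    return set.intersection(*(context[o] for o in objs))

def concepts():
    """Enumerate all formal concepts (extent, intent) by brute force:
    close each attribute subset and deduplicate by extent."""
    seen, result = set(), []
    for r in range(len(attributes) + 1):
        for combo in combinations(sorted(attributes), r):
            e = extent(set(combo))
            if frozenset(e) not in seen:
                seen.add(frozenset(e))
                result.append((sorted(e), sorted(intent(e))))
    return result

# Larger extents sit higher in the concept hierarchy.
for e, i in sorted(concepts(), key=lambda c: -len(c[0])):
    print(e, i)
```

Brute-force closure is exponential in the number of attributes and only serves to show the idea; practical FCA systems use dedicated algorithms (e.g., NextClosure) for larger contexts.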
The proposed system is based on a general conceptual analysis approach and can be applied to any research field that deals with knowledge management issues. We therefore believe that this study contributes to the theory of Semantic Web intelligence as well as to audio music analysis, and that it will enable a more methodical design of automatic ontology generation.
For more details regarding the generated OWL files, see this link.