SoundTracer Workshop: Music Information Retrieval

A workshop organized by the SoundTracer project.

Program

Time   Content                                                             Who
13.00  Welcome                                                             Richard Gjems
13.10  Introduction to the SoundTracer project                             Alexander Refsum Jensenius
13.20  Sound analysis for gestural interactions with large music datasets  Olivier Lartillot
14.00  Break
14.15  Extracting features and formal metadata from sound files            Robert Engels
15.00  Break
15.15  Music information retrieval for folk music archives                 Matija Marolt
16.00  End of program


Abstracts and bios

Introduction to the SoundTracer project (Alexander Refsum Jensenius, UiO)

SoundTracer is an innovation project aimed at using the motion of the hand to search in large sound databases. We want to develop tools to extract features from the sound files and make these features available for searching. The concept is that the user will "draw" a sonic quality in the air with her mobile phone. This can be, for example, a rapid, ascending movement followed by shaking. The features from such a "drawing" will be used to retrieve sound files with similar features. The sound file that most closely matches the search will be played immediately, and the user will be able to navigate through other sound files with similar features by moving the mobile phone in space. The project is supported by the University of Oslo and the National Library of Norway. The aim is to present a functional prototype of a mobile phone app that can be used to retrieve songs from the folk music collection of the National Library.
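To make the matching step concrete, here is a minimal sketch in Python: both the "air drawing" and each sound file are assumed to have been reduced to fixed-length feature vectors, and retrieval is a plain nearest-neighbour search. The feature dimensionality and the distance measure are illustrative assumptions, not the project's actual pipeline.

    # Illustrative sketch only: assumes gesture and sounds are already
    # reduced to fixed-length feature vectors (the real feature set and
    # matching method belong to the SoundTracer project, not this code).
    import numpy as np

    def nearest_sounds(gesture, sound_features, k=5):
        """Indices of the k sound files whose feature vectors lie
        closest (Euclidean distance) to the gesture's features."""
        dists = np.linalg.norm(sound_features - gesture, axis=1)
        return np.argsort(dists)[:k]

    sound_features = np.random.rand(1000, 8)  # hypothetical: 1000 files, 8 features
    gesture = np.random.rand(8)               # features of the "air drawing"
    print(nearest_sounds(gesture, sound_features))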

Alexander Refsum Jensenius is a music researcher and research musician working at the University of Oslo. His research focuses on why music makes us move, which he explores through empirical studies using different types of motion sensing technologies. He also uses the knowledge and tools from his analytic work in the creation of new music, with both traditional and very untraditional instruments.

Sound analysis for gestural interactions with large music datasets (Olivier Lartillot, UiO)

The SoundTracer project requires extracting musical gestures from a large catalogue of folk music recordings. While melodic lines can easily be extracted from a cappella songs, the task is more challenging for other types of music, such as Hardanger fiddle music. In such cases, we need to automatically transcribe the recordings and track melodic voices throughout the counterpoint of each composition. By segmenting melodic voices into phrases, we obtain a catalogue of candidate gestures. Other gestures can be obtained by performing a wide range of audio and musical analyses of the recordings. The folk music catalogue is a large set of long recordings, each consisting of a succession of pieces of music that are highly disparate in terms of instrumentation. As no systematic metadata is associated with the catalogue, we are developing a method to automatically segment the recordings into pieces and to classify them based on instrumentation. I will also present the new SoundTracer app, which enables you to draw a gesture in the air with your phone (currently only iPhone SE, 6S, 7, 8 or X) and to find pieces of music from the catalogue that are characterised by a similar musical gesture.
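As a small, hedged illustration of the monophonic case, the sketch below uses librosa's pYIN pitch tracker to extract a melodic contour from an a cappella recording and cuts it into phrase candidates at pauses in voicing. The file name is hypothetical, and polyphonic material such as Hardanger fiddle music requires considerably more than this.

    # Rough illustration of melodic-line extraction and phrase
    # segmentation for monophonic audio; not the project's pipeline.
    import librosa

    y, sr = librosa.load("a_cappella_song.wav")  # hypothetical file
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)

    # Cut the pitch contour into phrases wherever voicing is lost.
    phrases, current = [], []
    for hz, v in zip(f0, voiced):
        if v:
            current.append(hz)
        elif current:
            phrases.append(current)  # a pause closes the phrase
            current = []
    if current:
        phrases.append(current)

    print(len(phrases), "candidate melodic gestures")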

Olivier Lartillot is a researcher at the Department of Musicology at the University of Oslo, working in the fields of computational music and sound analysis and artificial intelligence. He designed MIRtoolbox, a reference tool for music feature extraction from audio. He also works on symbolic music analysis, notably on sequential pattern mining. In the context of his 5-year Academy of Finland research fellowship, he conceived the MiningSuite, an analytical framework that combines audio and symbolic research.


Extracting features and formal metadata from sound files - (Im)possibilities with sound and video analysis (Robert Engels, NRK)


Robert Engels holds a PhD in Machine Learning and Data Mining from the University of Karlsruhe. He studied Artificial Intelligence, Psychology and Computer Science at the University of Amsterdam, Stockholm University and the University of Karlsruhe. He is the author of articles and papers on various topics in AI/ML, information representation, knowledge management and computational linguistics.


Music information retrieval for folk music archives (Matija Marolt, University of Ljubljana)

While music information retrieval (MIR) can nowadays be treated as a mature research field with many applications in popular and classical music, we have yet to realize its full potential for the analysis of folk music. I will discuss our efforts in applying MIR techniques to materials gathered by the Slovenian Institute of Ethnomusicology. The archive contains a large number of folk music field recordings, song transcriptions, song lyrics, images and folk dance videos, with metadata that varies greatly in quantity and quality. Our goal is to make the archive more accessible to researchers as well as to the wider community. I will outline our method for segmenting long field recordings into individual items and the accompanying tool SeFiRe, our work on the transcription of polyphonic singing and bell chiming, the methods we use for searching the archive, and our research on the analysis of melodic patterns and tune family classification. I will also present our EtnoFletno application, which exposes part of the collection to the general public, as well as the results of a user interface design study aimed at an interface suitable for different user groups.
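As a toy illustration of the segmentation problem, the sketch below splits a long recording at stretches of silence using librosa's energy-based splitter. SeFiRe's actual method is more sophisticated; the file name and threshold here are assumptions that only show the shape of the task.

    # Naive take on segmenting a field recording into items at silences;
    # SeFiRe uses a more robust approach than this energy threshold.
    import librosa

    y, sr = librosa.load("field_recording.wav")      # hypothetical file
    intervals = librosa.effects.split(y, top_db=40)  # non-silent spans

    for i, (start, end) in enumerate(intervals):
        print(f"item {i}: {start / sr:.1f}s to {end / sr:.1f}s")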

Matija Marolt is an associate professor at the Faculty of Computer and Information Science, University of Ljubljana, where he has been working since 1996. He is head of the Laboratory for Computer Graphics and Multimedia, where his research interests include music information retrieval, specifically semantic description of audio signals, retrieval and organization in music archives, and human-computer interaction.

