SoundTracer Completion Workshop

Completion workshop of the SoundTracer innovation project, with demos of the prototypes and a lecture by invited guest Prof. Dr. Meinard Müller.

The day before this workshop, Meinard Müller will give a tutorial presenting an introduction to music processing at the University of Oslo.

Program

Time   Content                                                                      Who
13.00  Welcome                                                                      Richard Gjems
13.10  Introduction to the SoundTracer project                                      Alexander Refsum Jensenius
13.20  Sound analysis for gestural interactions with large music datasets           Olivier Lartillot
14.00  Demo of the SoundTracer app                                                  Olivier Lartillot
14.15  Break
14.30  Beethoven, Bach, and Billions of Bytes - When Music meets Computer Science   Meinard Müller
15.30  Demo and mingling

Abstracts and bios

Introduction to the SoundTracer project (Alexander Refsum Jensenius, UiO)

SoundTracer is an innovation project aimed at using the motion of the hand to search in large sound databases. We want to develop tools to extract features from the sound files and make these features available for searching. The concept is that the user will "draw" a sonic quality in the air with her mobile phone. This can be, for example, a rapid, ascending movement followed by shaking. The features from such a "drawing" will be used to retrieve sound files with similar features. The sound file that most closely matches the search will immediately be played, and the user will be able to navigate through other sound files with similar features by moving the mobile phone in space. The project is supported by the University of Oslo and the National Library of Norway. The aim is to present a functional prototype of a mobile phone app that can be used to retrieve songs from the folk music collection of the National Library.
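To make the matching idea concrete, here is a minimal sketch (in Python with NumPy, not the project's actual implementation) of how a motion trace drawn with the phone could be compared against pre-extracted pitch contours using dynamic time warping; all names and data below are illustrative placeholders:

    import numpy as np

    def dtw_distance(a, b):
        # z-normalise so contours are compared by shape, not absolute value
        a = (a - a.mean()) / (a.std() + 1e-9)
        b = (b - b.mean()) / (b.std() + 1e-9)
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = abs(a[i - 1] - b[j - 1])
                D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
        return D[n, m]

    # hypothetical gesture: a rapid ascent followed by shaking, as in the text
    gesture = np.concatenate([np.linspace(0.0, 1.0, 40),
                              1.0 + 0.2 * np.sin(np.linspace(0, 30, 40))])

    # hypothetical pre-extracted pitch contours, one per sound file
    contours = {"rising_song": np.linspace(200.0, 400.0, 80),
                "flat_song": np.full(80, 300.0)}

    # rank sound files by gesture similarity; best match first
    ranking = sorted(contours, key=lambda name: dtw_distance(gesture, contours[name]))
    print(ranking)

Because both sequences are normalised before warping, a rising gesture can match a rising melody regardless of absolute pitch range or how large the hand movement is.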

Alexander Refsum Jensenius is a music researcher and research musician working at the University of Oslo. His research focuses on why music makes us move, which he explores through empirical studies using different types of motion sensing technologies. He also uses the knowledge and tools from his analytic work in the creation of new music, with both traditional and very untraditional instruments.

Sound analysis for gestural interactions with large music datasets (Olivier Lartillot, UiO)

The SoundTracer project requires extracting musical gestures from a large catalogue of folk music recordings. While melodic lines can be easily extracted from a cappella songs, the task is more challenging for other types of music such as Hardanger fiddle music. In such cases, we need to automatically transcribe the recordings and track melodic voices throughout the counterpoint of each composition. By segmenting melodic voices into phrases, we obtain a catalogue of candidate gestures. Other gestures can be obtained by performing a wide range of audio and musical analyses of the recordings. The folk music catalogue is a large set of long recordings, each made of a succession of pieces of music that are highly disparate in terms of instrumentation. As no systematic metadata is associated with the catalogue, we are developing a method to automatically segment the recordings into pieces and to classify them based on instrumentation. I will also present the new SoundTracer app, which enables you to draw a gesture in the air with your phone (currently only iPhone SE, 6S, 7, 8 or X) and to find pieces of music from the catalogue that are characterised by a similar musical gesture.
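As a rough illustration of the phrase-segmentation step for a cappella songs, the sketch below uses the pYIN pitch tracker from the librosa library; the file name and the 0.3 s pause threshold are assumptions for the example, not the project's actual settings:

    import numpy as np
    import librosa

    # load a recording (placeholder file name)
    y, sr = librosa.load("folk_song.wav", sr=None, mono=True)

    # frame-wise fundamental frequency and voicing from the pYIN pitch tracker
    f0, voiced, _ = librosa.pyin(y, fmin=librosa.note_to_hz("C2"),
                                 fmax=librosa.note_to_hz("C6"), sr=sr)
    times = librosa.times_like(f0, sr=sr)

    # collect contiguous voiced runs as raw segments
    segments, start = [], None
    for t, v in zip(times, voiced):
        if v and start is None:
            start = t
        elif not v and start is not None:
            segments.append((start, t))
            start = None
    if start is not None:
        segments.append((start, times[-1]))

    # merge segments separated by pauses shorter than min_gap seconds,
    # so short breaths do not split a phrase
    min_gap = 0.3
    phrases = segments[:1]
    for s, e in segments[1:]:
        if s - phrases[-1][1] < min_gap:
            phrases[-1] = (phrases[-1][0], e)
        else:
            phrases.append((s, e))

    # the f0 values inside each phrase form one candidate melodic gesture
    candidate_gestures = [f0[(times >= s) & (times <= e)] for s, e in phrases]
    print(phrases)  # candidate phrase boundaries in seconds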

Olivier Lartillot is a researcher at the Department of Musicology at the University of Oslo, working in the fields of computational music and sound analysis and artificial intelligence. He designed MIRtoolbox, a reference tool for musical feature extraction from audio. He also works on symbolic music analysis, notably on sequential pattern mining. In the context of his five-year Academy of Finland research fellowship, he conceived the MiningSuite, an analytical framework that combines audio and symbolic research.

 
Beethoven, Bach, and Billions of Bytes - When Music meets Computer Science (Meinard Müller)
 
Significant digitization efforts have resulted in large music collections, which comprise music-related documents of various types and formats including text, symbolic data, audio, image, and video. For example, in the case of an opera there typically exist digitized versions of the libretto, different editions of the musical score, as well as a large number of performances given as audio and video recordings. In the field of music information retrieval (MIR) great efforts are directed towards the development of technologies that allow users to access and explore music in all its different facets. For example, during playback of some CD recording, a digital music player may present the corresponding musical score while highlighting the current playback position within the score. On demand, additional information about melodic and harmonic progression or rhythm and tempo is automatically presented to the listener. A suitable user interface displays the musical structure of the current piece of music and allows the user to directly jump to any key part within the recording without tedious fast-forwarding and rewinding. Furthermore, the listener is equipped with a Google-like search engine that enables him to explore the entire music collection in various ways: the user creates a query by specifying a certain note constellation, some harmonic progression, or rhythmic patterns, by whistling a melody, or simply by selecting a short passage from a CD recording; the system then provides the user with a ranked list of available music excerpts from the collection that are musically related to the query.

In this talk, I provide an overview of a number of current research problems in the field of music information retrieval and indicate possible solutions. Furthermore, I want to discuss to which extent computer-based methods may help users to better access and explore music in all its different facets.
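As a toy version of the query-by-excerpt scenario, the sketch below matches a short audio query against a longer recording using chroma features and subsequence dynamic time warping (via the librosa library; the file names are placeholders). Chroma-based audio matching is closely associated with Müller's work, but this snippet is only an illustration, not his system:

    import librosa

    # placeholder file names: a short query excerpt and a full recording
    y_q, sr = librosa.load("query_excerpt.wav")
    y_db, _ = librosa.load("database_recording.wav", sr=sr)

    # chroma features summarise harmonic content, largely independent of timbre
    C_q = librosa.feature.chroma_stft(y=y_q, sr=sr)
    C_db = librosa.feature.chroma_stft(y=y_db, sr=sr)

    # subsequence DTW aligns the query against every region of the recording
    D, wp = librosa.sequence.dtw(X=C_q, Y=C_db, subseq=True, metric="cosine")

    # the cheapest end position in the last row marks the best-matching region;
    # the warping path runs backwards from the end to the start of the match
    end = D[-1, :].argmin()
    start = wp[-1, 1]
    print(f"best match spans frames {start}-{end} (cost {D[-1, end]:.2f})")

Ranking several database recordings by their minimal alignment cost would yield exactly the kind of ranked result list described above.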

Meinard Müller studied mathematics (Diplom) and computer science (Ph.D.) at the University of Bonn, Germany. In 2002/2003, he conducted postdoctoral research in combinatorics at the Mathematical Department of Keio University, Japan. In 2007, he finished his Habilitation at Bonn University in the field of multimedia retrieval. From 2007 to 2012, he was a member of Saarland University and the Max-Planck-Institut für Informatik, leading the research group "Multimedia Information Retrieval and Music Processing" within the Cluster of Excellence on "Multimodal Computing and Interaction". Since September 2012, Meinard Müller has held a professorship for Semantic Audio Processing at the International Audio Laboratories Erlangen, which is a joint institution of the Friedrich-Alexander-Universität Erlangen-Nürnberg (FAU) and the Fraunhofer-Institut für Integrierte Schaltungen IIS. His recent research interests include music processing, music information retrieval, audio signal processing, and motion processing. Meinard Müller was a member of the IEEE Audio and Acoustic Signal Processing Technical Committee from 2010 to 2015 and has been a member of the Board of Directors of the International Society for Music Information Retrieval (ISMIR) since 2009. He has co-authored more than 100 peer-reviewed scientific papers, wrote a monograph titled Information Retrieval for Music and Motion (Springer, 2007), as well as a textbook titled Fundamentals of Music Processing (Springer, 2015).


Handouts from Prof. Müller's presentation.

Organizer

RITMO, fourMs, and the National Library of Norway