-
Talseth, Thomas & Brøvig, Ragnhild
(2024).
Kommentar til innlegget "Refser NRK for å la Gåte-låt konkurrere i Melodi Grand Prix".
[Newspaper].
VG.
-
Jensenius, Alexander Refsum & Laczko, Balint
(2024).
Video Visualization.
This workshop is targeted at students and researchers working with video recordings. You will learn to use MG Toolbox, a Python package with numerous tools for visualizing and analyzing video recordings. This includes visualization techniques such as motion videos, motion history images, and motiongrams; techniques that, in different ways, allow for looking at video recordings from different temporal and spatial perspectives. It also includes some basic computer vision analysis, such as extracting the quantity and centroid of motion, and using such features in analysis.
MG Toolbox for Python is a collection of high-level modules for generating all of the above-mentioned visualizations and analyses. The toolbox was initially developed to analyze music-related body motion but is equally helpful for other disciplines working with video recordings of humans, such as linguistics, psychology, medicine, and educational sciences.
-
Jensenius, Alexander Refsum & Poutaraud, Joachim
(2023).
Video Visualization.
This workshop is targeted at students and researchers working with video recordings. Even though the workshop will be based on quantitative tools, the aim is to provide solutions for qualitative research. This includes visualization techniques such as motion videos, motion history images, and motiongrams, which, in different ways, allow for looking at video recordings from different temporal and spatial perspectives. It also includes basic computer vision analysis modules, such as extracting quantity and centroid of motion, and using such features in analysis.
The participants will learn to use the Musical Gestures Toolbox for Python, a collection of high-level modules for easily generating all of the above-mentioned visualizations and analyses. This toolbox was initially developed for analyzing music-related body motion but is equally helpful for other disciplines working with video recordings of humans, such as linguistics, psychology, medicine, and educational sciences.
-
Danielsen, Anne
(2023).
Ain’t that a groove! Musicological, philosophical and psychological perspectives on groove (keynote).
The notion of groove is key to both musicians’ and academics’ discourses on musical rhythm. In this keynote, I will present groove’s historical grounding in African American musical practices and explore its current implications by addressing three distinct understandings of groove: as pattern and performance; as pleasure and “wanting to move”; and as a state of being. I will point out some musical features that seem to be shared among a wide range of groove-based styles, including syncopation and counterrhythm, swing and subdivision, and microrhythmic qualities. Ultimately, I will look at the ways in which the groove experience has been approached in different disciplines, drawing on examples from musicology / ethnomusicology, philosophy, psychology and neuroscience.
-
Danielsen, Anne
(2023).
Decolonizing groove (panel discussion).
-
Jensenius, Alexander Refsum & Burnim, Kayla
(2023).
Forskere inntok Konserthuset.
[Newspaper].
Stavanger Aftenblad.
Hundreds of school pupils came to hear the Stavanger Symphony Orchestra. While the orchestra played, the musicians, the conductor and the audience were part of a unique research project.
-
Jensenius, Alexander Refsum & Rosenberg, Ingvild
(2023).
Unik forskningskonsert.
[Radio].
NRK P1.
-
Jensenius, Alexander Refsum
(2023).
CV-modul som grunnlag for NOR-CAM.
Research assessment is on the agenda as never before. NIFU is therefore hosting an open seminar on the holistic assessment of researchers and research. The backdrop is the new European agreement on research assessment and the new Norwegian guide to career assessment of researchers. The seminar is organized jointly by NIFU (R-Quest), UHR and the National Publishing Committee.
-
Jensenius, Alexander Refsum
(2023).
Rhythmic Data Science.
Rhythm is everywhere, from how we walk, talk, dance and play to telling stories about our past and even predicting the future. Rhythm is key to how we interact with our world. Our heartbeat, nervous system, and other bodily cycles work through rhythm. As such, rhythm is a crucial aspect of human action and perception, and it is in complex interaction with the world's cultural, biological and mechanical rhythms. At RITMO, they research rhythmic phenomena and their complex relationships with the rhythms of human bodies and brains. In the talk, Alexander will present examples of how they record, synchronize, and analyze data of complex, rhythmic human behavior, such as real-world concerts.
-
Jensenius, Alexander Refsum
(2023).
Explorations of human micromotion through standing still.
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
-
Jensenius, Alexander Refsum
(2023).
Conceptualizing Musical Instruments.
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my forthcoming book on "musicking in an electronic world". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments. This will be used at the heart of a new organology that embraces the qualities of both acoustic and electroacoustic instruments.
-
Jensenius, Alexander Refsum
(2023).
Sound Actions: An Embodied approach to a Digital Organology.
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my forthcoming book on "musicking in an electronic world". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments. This will be used at the heart of a new organology that embraces the qualities of both acoustic and electroacoustic instruments.
-
Jensenius, Alexander Refsum
(2023).
Wishful thinking about CVs: Perspectives from a researcher.
-
Jensenius, Alexander Refsum
(2023).
Oppsummering av arbeidet med opphavsrett og lisenser i QualiFAIR.
Researchers often ask how they should handle copyright when collecting data. Who owns the data? Who holds the rights, and which rights do you have as a project leader or project participant? Which licenses should you choose when sharing different kinds of material, such as articles, datasets, source code, images, and audio and video recordings? How can you use material by others that has no specific license? How can UiO better ensure that students and staff develop an informed relationship to copyright?
-
(2023).
Bio-inspiration for robot design and adaptation.
-
(2023).
Evolutionary and adaptive robotics: from simulation to reality.
-
(2023).
Adaptive robots through evolutionary algorithms and machine learning.
-
Jensenius, Alexander Refsum
(2023).
Conceptualizing Musical Instruments.
What is an instrument in our increasingly electrified world? In this talk I will present a set of theoretical building blocks from my recent book "Sound Actions". At the core of the argument is the observation that the introduction of new music technologies has led to an increased separation between action and sound in musical performance. This has happened gradually, with pianos and organs being important early examples of instruments that introduced mechanical components between the performer and resonating objects. Today's network-based instruments represent an extreme case of a spatiotemporal dislocation between action and sound. They challenge our ideas of what an instrument can be, who can perform on them, and how they should be analyzed. In the lecture I will explain how we can use the concepts of action-sound couplings and mappings to structure our thinking about such instruments.
-
Saplacan, Diana
(2023).
Introduction on my background and on the University of Oslo, Robotics and Intelligent Systems Research Group, Norway for Human-Robot Interaction Lab, Department of Social Informatics, Kyoto University, Japan.
-
Vuoskoski, Jonna Katariina
(2023).
Music and the experience of social connection.
Musical engagement – ranging from group music-making to solitary music listening – is an inherently social activity that can facilitate communication, understanding, and connection between individuals and groups. The three studies presented in this talk shed light on the social dimension of musical experiences, focusing on the listener’s perspective. The first study explores the characteristics of virtual concerts and their impact on social connection and felt emotions during the COVID-19 pandemic. The second study compares live and livestreamed concerts and their effects on motion, emotion, and experiences of social connectedness. Finally, the third study investigates experiences of feeling moved in response to music listening, and shows that musically evoked experiences of feeling moved are associated with similar patterns of appraisals, physiological sensations, and empathic processes as feeling moved by videos depicting social scenarios. Together, these studies highlight the importance of social connection and empathy in musical experiences, demonstrating that music can serve as a powerful tool for promoting social bonding and experiences of connectedness.
-
Herrebrøden, Henrik; Espeseth, Thomas & Bishop, Laura
(2023).
Cognitive load affects effort, performance, and kinematics in elite and non-elite rowers.
Journal of Sport & Exercise Psychology (JSEP).
ISSN 0895-2779.
45(S1),
p. S83–S83.
doi:
10.1123/jsep.2023-0077.
The extent to which elite athletes depend on mental effort and attention to task execution has been a debated topic. Some studies have suggested that motor experts might be relatively unaffected in the face of distraction and that they might even perform better when they attend to extraneous cognitive stimuli (for example in a dual-task paradigm), as compared to single-task conditions where they concentrate fully on a sports task. However, task complexity and participants’ skill levels have so far been relatively modest in most dual-task studies. To address gaps in past research, a multi-method study was conducted using a rowing ergometer task. Participants were nine male elite rowers from the Norwegian national rowing team, preparing for the 2020 Olympic Games in Tokyo, as well as nine male recreational rowers. Participants engaged in three-minute rowing trials of varying task demands, including single-task conditions (focusing on rowing only) and dual-task conditions (focusing on rowing and solving arithmetic problems). Performance and mental effort were measured via ergometer data (i.e., rowing speed values) and eye-tracking measures (i.e., blink rates and pupil size measurements), respectively. Movement kinematics was measured by motion capture technology. The results suggested that adding extraneous cognitive load led to performance decline and increased mental effort across all participants. Both elites and non-elites demonstrated kinematic changes when going from single-task to dual-task performance. That is, kinematic events in participants’ lower-body and upper-body segments became more temporally coupled, and more in line with movement patterns associated with novice athletes when the extraneous cognitive load was added. This study contradicts several past findings and suggests that elite athletes rely on attentional resources to execute fundamental aspects of their performance. Funding source: Research Council of Norway.
-
Jónsson, Björn Thór
(2023).
Live Streams From Evolutionary Search for Sounds.
Here we present a web interface for navigating sounds discovered during runs of evolutionary processes. Those runs are performed as a part of investigations into the applicability of quality diversity search for sounds. This audible peek into the collected data supplements statistical analysis. Such a way of communicating the current results is intended to provide an engaging experience of the data. By either listening to automatic playback of the discovered sounds, or interacting with them, for example by changing their parameters, interesting, annoying, pleasing, and perhaps useful artefacts may be discovered, modified and downloaded for use in any creative work. The application can be accessed from desktops or mobile devices at: https://synth.is/exploring-evoruns
-
Jónsson, Björn Thór
(2023).
Jukebox with research data.
Evolution runs explorer: opening up access to current results from the application of quality diversity search algorithms to the discovery of synthesised sounds.
https://synth.is/exploring-evoruns
-
Vogt, Yngve; Krauss, Stefan Johannes Karl; Mossige, Joachim; Dysthe, Dag Kristian; Angheluta, Luiza & Jensenius, Alexander Refsum
(2023).
Bereder grunnen for kunstige organer.
[Business/trade/industry journal].
Apollon.
-
Saplacan, Diana
(2023).
Presentation of the paper "Health Professionals’ Views on the Use of Social Robots with Vulnerable Users: A Scenario-Based Qualitative Study Using Story Dialogue Method".
-
Jensenius, Alexander Refsum & Zürn, Christof
(2023).
Standing still with Alexander Refsum Jensenius.
[Internet].
The Power of Music Thinking.
What is the use of standing still for 10 minutes? That is what I asked myself when I saw a post on social media. It was a double picture: one showed a man with a mobile phone around his neck displaying some data, and the other showed the view he saw at that moment. I learned that he stood there for 10 minutes without any movement, listening to the sound that was already there. There were many pictures like this, and I decided to get in contact.
So, today, we are in Oslo. We speak with Alexander Refsum Jensenius, a professor of music technology at the University of Oslo, a book author, a music researcher and researching musician working in the fields of embodied music cognition and new interfaces for musical expression.
Alexander shares with us his experiences while performing and testing with artistic methods of embodied listening and how people experience music and sound. This goes from experiments with and without the conductor of a Symphony Orchestra to the sounds of our kitchen appliances.
We talk about his motion capture lab, where a person’s exact location and micro-movements can be detected while they hear different kinds of music, and how the researchers can understand what moves them.
Alexander shares insights about the Norwegian Championship of Standstill, in which thousands of people have participated so far; the winner is the person who stands the stillest, that is, with the lowest average velocity over a set period of time.
Alexander explains the interplay of body and mind and reveals some secrets of how to move people, for example on the dance floor, or how to calm them down. It all has to do with our bpm: the average resting heartbeat of about 60 beats per minute.
-
Jensenius, Alexander Refsum; Danielsen, Anne & Søndergaard, Pia
(2023).
Hvor blir det av UiOs alumni-satsing?
Uniforum.
ISSN 1891-5825.
In festive settings there is talk of our alumni being a resource. In practice, unfortunately, former employees are not merely ignored; there are active attempts to remove every trace of their having done research at the institution.
-
Jensenius, Alexander Refsum
(2023).
Still Standing: The effects of sound and music on people standing still.
Throughout 2023, I have been standing still for ten minutes around noon every day, in a different room each day. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for the conscious and unconscious control of musical sounds.
-
Jensenius, Alexander Refsum
(2023).
Still Standing: The effects of sound and music on people standing still.
Throughout 2023, I have been standing still for ten minutes around noon every day, in a different room each day. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for the conscious and unconscious control of musical sounds.
-
Monstad, Lars Alfred Løberg
(2023).
KI kan demokratisere musikkbransjen.
VG : Verdens gang.
ISSN 0805-5203.
-
Danielsen, Anne
(2023).
Beat bins, asynchronies and muddy sounds: Shaping micro-time in grooves.
In musical genres such as neo-soul and hip-hop, beats often have a temporal shape that makes their placement in time difficult to locate relative to a single point in time. This is often due to «muddy», processed sounds or asynchronies between events at beat-related metric positions. The beat bin theory suggests that the perceptual counterpart to such beat asynchronies or muddy beat shapes in a sounding groove is an internal (perceptual) reference structure of beat bins of considerable ‘width’ and a distinctive ‘shape’. I will start by presenting the theory and then focus on how various acoustic factors influence the beat bin, using examples from computer-based musical grooves. Ultimately, I argue that micro-level perception of, and synchronization to, sound is optimized for the task at hand, in line with the flexibility and dynamic nature of the human apparatus in perceiving, predicting, and processing rhythm.
-
Jensenius, Alexander Refsum
(2023).
Forskarperspektivet.
This autumn, the draft strategy for Norwegian scholarly publishing after 2024 has been out for consultation. The strategy outlines recommendations for researchers, research-performing institutions, research funders and authorities. In this seminar we invite one of those who drafted the strategy, Vidar Røeggen from Universitets- og høgskolerådet, to talk about the work on the report, the input that has come in, and how he envisions the future publishing landscape. The floor then passes to Alexander Jensenius (UiO, NOR-CAM), Johanne Raade (UiT) and Marte Qvenild (NFR), who will discuss how they see the future of open publishing after 2024 from the perspectives of a researcher, an institution and a funder, respectively. Do they see other challenges than those the new strategy attempts to address?
-
Saplacan, Diana; Pajalic, Zada & Tørresen, Jim
(2023).
Should Social and Assistive Robots Integrated within Home- and Healthcare Services Be Universally Designed?
Cambridge Handbook on Law, Policy, and Regulations for Human-Robot Interaction.
Cambridge University Press.
ISBN 9781009386661.
-
Godøy, Rolf Inge
(2023).
Exploring sound-motion links in motormimetic cognition.
The focus of my talk is on the intimate links between sensations of sound and of motion in music, summarized in the expression motormimetic cognition. The purpose of coining this neologism was to give a name to the mental re-enactment (in some cases, also as overt, visible body motion) of sound-related motion in listening to, or merely imagining, musical sound, and typically, as re-enactments of assumed sound-producing body motion, but also of more overall sensations of energy and/or affect.
My motivation for exploring this topic was a number of personal, introspection-based experiences of sound-producing body motion sensations when listening to music, or when merely imagining music. After quite extensive reading in various domains of the cognitive sciences, it dawned on me that other people might have similar motion sensations when listening to, or merely imagining, music. When I published papers on motormimetic cognition in musical experience, the response from the music cognition community was quite varied. In the last couple of decades, however, with the growing popularity of so-called embodied cognition in the cognitive sciences, it has become more accepted that there are indeed extensive links between perception and body motion in most, perhaps all, domains of human behavior. Yet there are, needless to say, still many outstanding questions as to what we mean by embodied cognition in music, and in my opinion we seem in particular to lack detailed and systematic knowledge of how such embodied elements play out in very concrete musical features. And this is the aim of my presentation: to give an account of how the fusion of sound and motion can be explored in more detail.
One leading idea here is that there are constraints in sound production, both of instruments and sound-producing body motion, concerning biomechanics as well as motor control, and that we may enhance our understanding of motormimetic cognition in music by studying such constraints, first of all in performance, but also in improvisation and composition. This will include constraints and affordances of motion and body postures associated with patterns of textures, rhythm, various figures, ornaments, contours, spectral and formantic shapes, as well as the associated sense of effort and affect.
The basic idea here is to regard musical sound as intimately linked with sensations of motion, to the extent that we may actually perceive salient musical features as multimodal phenomena, e.g. in the case of a drum fill, where sensations of drum sound and hand/arm motion are totally fused. Recognizing the extent of this multimodal fusion of sound and motion in music perception should then have consequences for how we think about various theoretical and practical music-related activities, i.e. encourage us to think about a work of music as just as much a choreography of sound-producing motion as a sequence of sounds.
-
Jensenius, Alexander Refsum
(2023).
Exploring large datasets of human, music-related standstill.
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
-
Martin, Remy Richard & Bernhardt, Emil
(2023).
Entrainment, free will, and musicking: an enactivist perspective.
-
Karbasi, Seyed Mojtaba
(2023).
Reinforcement Learning for Curious Systems.
-
Oddekalv, Kjell Andreas; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Nova nedstrippa - Sinsenfist.
[Radio].
Radio Nova.
-
Nielsen, Nanette; Martin, Remy Richard & Bernhardt, Emil
(2023).
Entrainment, free will, and musicking: an enactivist perspective.
-
Jensenius, Alexander Refsum & Sørnes, Astrid Johanne
(2023).
Ny Beatles-musikk.
[Radio].
NRK Nyhetsmorgen.
The Beatles release new music.
-
Jensenius, Alexander Refsum & Sørnes, Astrid Johanne
(2023).
Beatles med ny låt.
[Internet].
NRK.
«Now And Then» has been completed by Paul McCartney and Ringo Starr, with a little help from artificial intelligence.
-
Lartillot, Olivier
(2023).
Towards a Comprehensive Modelling Framework for Computational Music Transcription/Analysis.
Computational music analysis, still in its infancy and lacking overarching reliable tools, can at the same time be seen as a promising approach to fulfilling core epistemological needs. Analysis in the audio domain, although approaching music in its entirety, is doomed to superficiality if it does not fully embrace the underlying symbolic system, requiring a complete automated transcription and a scaffolding of metrical, modal/harmonic, voicing and formal structures on top of the layers of elementary events (such as notes). Automated transcription makes it possible to get beyond the polarity between sound and music notation, providing an interfacing semiotic system that combines the advantages of both domains and surpasses the limitations of traditional approaches based on graphic representations. Deep learning and signal processing approaches for the discretisation of the continuous signal are compared and discussed. The multi-dimensional music transcription and analysis framework (where the two tasks are in fact deeply intertwined) requires taking into account the far-reaching interdependencies between dimensions, for instance between motivic and metrical analysis. We propose an attempt to build such a comprehensive framework, founded on general musical and cognitive principles, and an attempt to build music analysis capabilities through a combination of simple and general operators. The validity of the analyses is addressed in close discussion with music experts. The potential capability to produce valid analyses for a very large corpus of music would make such a complex system a potentially relevant blueprint for a cognitive modelling of music understanding.
We try to address a large diversity of music cultures and their specific challenges: among others, maqam modes (with Mondher Ayari), Norwegian Hardanger fiddle rhythm (with Mats Johansson and Hans-Hinrich Thedens), djembe drumming from Mali (with Rainer Polak) or electroacoustic music (Towards a Toolbox des objets musicaux, with Rolf Inge Godøy). We aim at making the framework fully transparent, collaborative and open.
-
Vuoskoski, Jonna Katariina & Peltola, Henna-Riikka
(2023).
Who hates (some) music, and why? Explaining individual differences in the intensity of music-induced aversion.
-
Vuoskoski, Jonna Katariina & Swarbrick, Dana
(2023).
Moving together: Exploring the relationship between emotions, connectedness, and motion in concert audiences.
Music is able to evoke experiences of being moved and a sense of social connectedness in audiences – even in the context of streamed concerts and recorded music. The present study set out to investigate audiences’ emotional experiences and amount of movement in a classical string quartet concert, which was attended by both a live (N=91) and a livestreaming (N=45) audience. The results revealed that both audiences felt similarly connected to the performers, while the live audience felt more connected to other audience members than the livestreaming audience. Reports of ‘being moved’ and awe were influenced more by the piece of music than by the listening context, and the live audience demonstrated distinct motion patterns in response to different musical pieces. The amount of audience movement was also associated with the degree of connectedness experienced towards other audience members. In a follow-up online experiment, 189 participants continuously rated their experience of being moved while watching a recording of the Beethoven string quartet performance from the main concert experiment. Cross-correlations between the continuous ratings and musical features and audience movement patterns were analysed. Overall, the findings demonstrate that the degree of connectedness experienced towards other audience members is modulated by shared presence as well as the amount of audience movement, while experiences of ‘feeling moved’ and awe are influenced by the music itself.
-
Grüning, David J.; Kaemmerer, Mareike & Vuoskoski, Jonna Katariina
(2023).
Being Moved by Sad Music Across Countries: Characterising the Experience in Finland, Germany, and France.
-
Lucas Bravo, Pedro Pablo
(2023).
Sonic Explorations for 3D Swarmalators.
A swarmalator is a type of self-organizing system where agents or particles interact with each other through local rules. The term "swarmalator" is a combination of "swarm," which refers to a group of agents, and "oscillator," which refers to a system that exhibits periodic behavior. In a swarmalator system, each agent has an internal oscillator that determines its behavior, and the agents interact with their neighbors, affecting each other's oscillations and leading to synchronization. This synchronization can result in collective behaviors such as coordinated motion or pattern formation. Both the phase dynamics and spatial dynamics of the oscillators are coupled in swarmalators. Swarmalators are related to the concept of entrainment, which refers to the synchronization of rhythmic patterns in biological or physical systems.
Swarmalator systems can be used to model entrainment and the emergence of collective behavior in natural systems. Sonic and musical properties can be explored using the parameters involved in swarmalators, leading to interesting self-organized compositions and emergent behaviors capable of interacting with humans in a synchronized environment. Some of these sonic mappings will be presented for a 3D version of swarmalators, and future directions for interactive music systems based on synchronized swarms will be discussed.
-
Vuoskoski, Jonna Katariina & Peltola, Henna-Riikka
(2023).
Who hates (some) music, and why? Explaining individual differences in the intensity of music-induced aversion.
Aversive or disliked music has the capacity to evoke strong negative emotions and physical sensations – at least in some listeners. Although previous (qualitative) studies on aversive and disliked music have provided valuable insights into listeners’ experiences, more generalizable approaches are needed for understanding individual differences in strong aversion to disliked music. This study set out to explore these individual differences by developing a standardised questionnaire to measure the intensity of aversive musical experiences: The Aversive Musical Experience Scale (AMES). Furthermore, we explored potential predictors and hypothesized underlying mechanisms (such as emotional contagion and a general sensitivity to sounds) by measuring trait emotional contagion, misophonia, proneness to experiences of ASMR and frisson, and personality. Based on the results of exploratory and confirmatory factor analyses, a final 18-item version of AMES was constructed. The global AMES comprises three subscales: Sensations, Social and Features. The Sensations subscale taps into the physical sensations, bodily reactions and feelings associated with aversive musical experiences. The Social subscale comprises items related to social relationships and attitudes in the context of aversive music. Finally, the Features subscale taps into specific musical and acoustic features that participants find aversive. Misophonia emerged as the strongest predictor of global AMES and its three subscales, explaining 9-19 % (adj. R2 change) of the inter-individual variance. Emotional contagion also emerged as a significant predictor, accounting for 2-4 % of the variance in AMES and two of its subscales. Furthermore, the personality traits Neuroticism, Agreeableness, and Openness to experience, as well as age and musical training, emerged as significant predictors of at least one of the scales.
The implications and limitations of the findings are discussed with respect to sound-sensitivity, music-induced emotions, and personality theory.
-
Gibbs, Hannah & Vuoskoski, Jonna Katariina
(2023).
The effects of synchronised drumming and trait empathy on perspective taking and social bonding.
-
Mossige, Joachim
(2023).
Paneldeltaker på Abels tårn.
-
Lucas Bravo, Pedro Pablo
(2023).
Human-Swarm Interactive Music Systems: Design, Algorithms, Technologies, and Evaluation.
Show summary
This paper presents considerations for developing Human-Swarm Interactive Music Systems (IMS), based on previous work in the field. We discuss design principles, algorithms, technologies, and evaluation methods for creating user-centred Human-Swarm IMSs using architectural approaches, swarm strategies, and levels of embodiment in implementation. Our contribution aims to establish a framework for future applications and research studies on swarm-based music platforms.
-
Polak, Rainer; Pearson, Lara & Horlor, Sam
(2023).
Theorizing audiency.
-
Polak, Rainer
(2023).
From Mali to Ghana: An Empirical Critique of the Theory of African Rhythm.
-
Polak, Rainer
(2023).
Metric beat subdivision non-isochrony in African music: A comparative perspective.
-
Polak, Rainer
(2023).
Embedded audiency: Performing as audiencing at music-dance events in Mali.
-
Polak, Rainer
(2023).
Djembe Dance-Drumming from Mali and Beyond.
-
Polak, Rainer
(2023).
DjembeDance – Multimodal rhythm in music and dance from West Africa.
-
Harmeling, Tobias & Polak, Rainer
(2023).
Kann man Rhythmusgefühl lernen? (Episode in the podcast "Obligato" hosted by the German music magazine "Stereo").
[Internet].
https://obligato.blogs.julephosting.de/5-kann-man-rhythmusge.
-
Rezende Carvalho, Vinicius; Mendes, Eduardo Mazoni; Cash, Sydney & Moraes, Márcio
(2023).
Auditory steady-state responses with stereoelectroencephalography: distribution and relation to seizure onset zone.
-
-
Jensenius, Alexander Refsum
(2023).
Musikk og kunstig intelligens.
Show summary
Artificial intelligence can already write scores and mix music. In the years ahead, we will see many examples of machine learning being used in music performance and production and to create new listening experiences. But what exactly is musical artificial intelligence? What does it mean to train a machine learning model? Will machines make musicians and composers redundant? This lecture will give you some answers, but also more questions.
-
-
Bishop, Laura; Høffding, Simon; Lartillot, Olivier Serge Gabriel & Laeng, Bruno
(2023).
Mental effort and expressive interaction in expert and student string quartet performance.
-
Bishop, Laura; Bonnin, Geoffray & Frey, Jeremy
(2023).
Analyzing physiological data collected during music listening: An introduction.
-
Solli, Sandra; Doelling, Keith; Leske, Sabine Liliana; Danielsen, Anne & Endestad, Tor
(2023).
The role of the motor system in predicting accelerating and decelerating auditory rhythms.
-
Bernhardt, Emil
(2023).
More than a Metaphor? Schubert and an Aesthetics of the Body.
-
Bernhardt, Emil
(2023).
Entrainment and Aesthetic Experience.
-
-
Danielsen, Anne; Brøvig, Ragnhild; Câmara, Guilherme Schmidt; Haugen, Mari Romarheim; Johansson, Mats Sigvard & London, Justin
(2023).
There’s more to timing than time: Investigating sound–timing interaction across disciplines and cultures.
-
Oddekalv, Kjell Andreas
(2023).
On Analysing Hip-Hop/Rap: Doing Hip-Hop Scholarship in a hip-hop way.
-
Oddekalv, Kjell Andreas
(2023).
Weak Alternatives …and their presence making shit dope.
-
Oddekalv, Kjell Andreas
(2023).
Project: Chimera. Postdoctoral project – overview, examples, loose thoughts. HHRIG meeting presentation.
-
Oddekalv, Kjell Andreas
(2023).
Flow, layering and rupture in composite auditory streams.
-
Oddekalv, Kjell Andreas
(2023).
A Norwegian emcee/scholar – Theorizing rap flow from the outside and inside.
-
Oddekalv, Kjell Andreas
(2023).
Sounding Same/Sounding Other: Creative, practical and aesthetic aspects of ad libs and ‘backtracks’ in rap.
-
Oddekalv, Kjell Andreas
(2023).
'Them bars really ain't hittin' like a play fight': Analysing weak alternative lineations and ambiguous lineation in relation to metrical structure in rap flows.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
[Show all 8 contributors for this article]
(2023).
Sommeren for ti år sia: "Strekker meg", "Hvite sneakers", "Storeslem".
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Sinsenfist på Parkteateret - med Horny Horns og Fister Sisters - Support louilexus & Åse.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Sinsenfist på Stødt.
-
Câmara, Guilherme Schmidt; Danielsen, Anne & Oddekalv, Kjell Andreas
(2023).
Funky rhythms – broken beats! Kulturelle og estetiske perspektiver på groove-basert musikk.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Sørli, Anders Ruud; Ugstad, Magnus; Hole, Erik & Walderhaug, Bendik
(2023).
Sinsenfist på Strynefestivalen.
-
Blenkmann, Alejandro Omar
(2023).
Expectation and attention in auditory prediction.
-
Swarbrick, Dana
(2023).
Being in Concert: Fostering Togetherness in Audiences.
-
Jensenius, Alexander Refsum
(2023).
The assessment of researchers is changing – how will it impact your career?
Show summary
Changes are happening in the world of research assessment, for example by recognizing a broader range of competencies as merits and striking a better balance between quantitative and qualitative goals. In Norway, Universities Norway presented the NOR-CAM report in 2021, which sparked a movement for reform. As an early-career researcher, it's crucial to understand how these changes may impact your research career. In this talk, Jensenius will discuss the evolving landscape of research assessment and what it means for you.
-
Jensenius, Alexander Refsum
(2023).
Innovasjon og åpen forskning.
-
Jensenius, Alexander Refsum
(2023).
Observing spaces while standing still.
Show summary
Throughout 2023, I stand still for ten minutes around noon every day, in a different room each day. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. Previously, I have been interested in the impact of music. Now, I am listening to ventilation systems, elevators, and people walking and talking, and reflecting on how they influence my body and mind. The aim is to understand more about the rhythms of the environment.
-
Swarbrick, Dana
(2023).
Song Talk Radio: Interview with Dana Swarbrick and Alex Whorms.
-
Swarbrick, Dana
(2023).
Les Effets du Musique sur Grimper.
-
Swarbrick, Dana; Bosnyak, Dan; Marsh Rollo, Susan; Fu, Nicole; Trainor, Laurel & Vuoskoski, Jonna Katariina
(2023).
Being in Concert: The Effects of Audience Participation on Motion, Emotion, and Connectedness.
-
Swarbrick, Dana & Vuoskoski, Jonna Katariina
(2023).
Exploring the Relationship Between Experiences of Awe, Being Moved, and Social Connectedness in Concert Audiences.
-
Swarbrick, Dana; Palmer, Caroline; Keller, Peter; Clayton, Martin; Henry, Molly & Toiviainen, Petri
(2023).
Entrainment Workshop Panel Discussion.
Show summary
Definitions of entrainment vary across disciplines including mechanics, behavioural psychology, neuroscience, and biology. Generally, entrainment involves the adjustment of rhythmic signals to each other. Neural entrainment and rhythmic entrainment are common terms to distinguish the types of entrainment that occur in the brain or behaviour, respectively. Some use the term emotional entrainment to describe how individuals align their emotions with one another. Can a single definition truly encompass all crucial elements and be used across disciplines, or are these disciplines using the term in ways that are too different from each other to be unified? One general definition from empirical musicology is “the process by which independent rhythmical systems interact with each other” (Clayton, 2012). The importance of this definition is in specifying that the independent systems must generate their own, self-sustaining rhythmic fluctuations, and that entrainment is the process of their interaction and their adjustments, whether both adjust to each other (symmetrical) or one to another (asymmetrical) (ibid.). Coincidental alignment is not necessarily a marker of entrainment processes because measuring alignment does not imply that a system has adjusted to another (ibid.). Instead, measuring adjustments after perturbations may provide stronger evidence for entrainment (ibid.). Many of the measures used to capture entrainment capture some element of alignment; however, they do not necessarily measure outcomes of perturbations. A panel discussion with experts on entrainment from various disciplines will aim to highlight the successes and shortcomings of the current body of literature on entrainment and how we can improve research and methods on this phenomenon. Questions will probe researchers’ definitions of entrainment and its correspondences and distinctions with other related phenomena including general coordination and synchrony.
Finally, we will aim to highlight the gaps that still exist in the literature and how these can be addressed with the currently available methods.
-
Swarbrick, Dana; Danielsen, Anne; Jensenius, Alexander Refsum & Vuoskoski, Jonna Katariina
(2023).
The Effects of “Feeling Moved” and “Groove” On Standstill.
-
Swarbrick, Dana
(2023).
The Effects of Music on Climbing.
-
Masu, Raul; Morreale, Fabio & Jensenius, Alexander Refsum
(2023).
The O in NIME: Reflecting on the Importance of Reusing and Repurposing Old Musical Instruments.
Show summary
In this paper, we reflect on the focus of “newness” in NIME research and practice and argue that there is a missing O (for “Old”) in framing our academic discourse. A systematic review of last year’s conference proceedings reveals that most papers do, indeed, present new instruments, interfaces, or pieces of technology. Comparably few papers focus on the prolongation of existing NIMEs. Our meta-analysis identifies four main categories from these papers: (1) reuse, (2) update, (3) complement, and (4) long-term engagement. We discuss how focusing more on these four types of NIME development and engagement can be seen as an approach to increase sustainability.
-
Karbasi, Seyed Mojtaba; Jensenius, Alexander Refsum; Godøy, Rolf Inge & Tørresen, Jim
(2023).
Exploring Emerging Drumming Patterns in a Chaotic Dynamical System using ZRob.
Show summary
ZRob is a robotic system designed for playing a snare drum. The robot is constructed with a passive flexible spring-based joint inspired by the human hand. This paper describes a study exploring rhythmic patterns by exploiting the chaotic dynamics of two ZRobs. In the experiment, we explored the control configurations of each arm by trying to create unpredictable patterns. Over 200 samples have been recorded and analyzed. We show how the chaotic dynamics of ZRob can be used for creating new drumming patterns.
-
Bukvic, Ivica Ico; Jensenius, Alexander Refsum; Wittman, Hollis & Masu, Raul
(2023).
Implementing the new template for NIME music proceedings with the community.
Show summary
We will analyze a new possible template for NIME submissions, which would simplify the integration of NIME music performances in COMPEL, a database which facilitates navigation across different categories (pieces, persons, instruments). The template emerges from a workshop run last year at NIME about the structure of COMPEL and the process of entering all performances presented last year. From this workshop we expect to improve the template and validate it with the community.
-
Laczko, Balint
(2023).
Fluid.jit.plotter: a Max abstraction for plotting and querying millions of points fast using the Fluid Corpus Manipulation library.
-
Laczko, Balint
(2023).
Two-part guest lecture about spatial audio and Ambisonics for MCT students.
-
Riaz, Maham
(2023).
An Investigation of Supervised Learning in Music Mood Classification for Audio and MIDI.
Show summary
This study aims to use supervised learning – specifically, support vector machines – as a tool for a music mood classification task. Four audio and MIDI datasets, each containing over four hundred files, were composed for use in the training and testing processes. Mood classes were formed according to the valence-arousal plane, resulting in the following: happy, sad, relaxed, and tense. Additional runs were also conducted with linear discriminant analysis, a dimensionality-reduction technique commonly used to improve the performance of a classifier. The relevant audio and MIDI features were carefully selected for extraction. MIDI datasets for the same music generated better classification results than corresponding audio datasets. Furthermore, when music is composed with each mood associated with a particular key instead of mixed keys, the classification accuracy is higher.
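The four mood classes in this abstract correspond to the quadrants of the valence-arousal plane. As a rough illustration (not code from the study; the class names follow the abstract, while the zero-centred thresholds are an assumption), the quadrant-to-label mapping can be sketched in Python:

```python
# Hypothetical sketch of the valence-arousal quadrant mapping described
# in the abstract. Inputs are assumed to be normalized to [-1, 1];
# the boundary handling at 0 is illustrative, not from the study.

def mood_class(valence: float, arousal: float) -> str:
    """Map a point on the valence-arousal plane to a mood label."""
    if valence >= 0 and arousal >= 0:
        return "happy"      # positive valence, high arousal
    if valence >= 0:
        return "relaxed"    # positive valence, low arousal
    if arousal >= 0:
        return "tense"      # negative valence, high arousal
    return "sad"            # negative valence, low arousal
```

In the study itself these labels are predicted from extracted audio/MIDI features by a support vector machine rather than read directly off annotated valence-arousal coordinates.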
-
Guo, Jinyue
(2023).
Automatic Recognition of Cascaded Guitar Effects.
-
Riaz, Maham
(2023).
Using SuperCollider with OSC Commands for Spatial Audio Control in a Multi-Speaker Setup.
Show summary
With the ever-increasing prevalence of technology, its application in various music-related processes, such as music composition and performance, has become increasingly prominent. One fascinating area where technology finds utility is in music performance, offering opportunities for extensive sound exploration and manipulation. In this paper, we introduce an approach utilizing SuperCollider and Open Sound Control (OSC) commands in a multi-speaker setup, enabling spatial audio control for a truly interactive audio spatialization experience. We delve into the musicological dimensions of these distinct methods, examining their integration within a live performance setting to uncover their artistic and expressive potential. By merging technology and musicology, our research aims to unlock new avenues for immersive and captivating musical experiences.
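On the wire, an OSC command such as the ones described above is a simple binary message: a null-padded address pattern, a type-tag string, and big-endian arguments (per the OSC 1.0 specification). A minimal Python sketch of that encoding (the `/azimuth` address and its role in spatial control are hypothetical examples, not taken from the paper):

```python
import struct

def osc_message(address: str, *floats: float) -> bytes:
    """Encode an OSC 1.0 message whose arguments are all float32."""
    def pad(b: bytes) -> bytes:
        # OSC strings are null-terminated and padded to a 4-byte boundary
        return b + b"\x00" * (4 - len(b) % 4)
    msg = pad(address.encode("ascii"))
    msg += pad(("," + "f" * len(floats)).encode("ascii"))  # type-tag string
    for x in floats:
        msg += struct.pack(">f", x)  # arguments are big-endian float32
    return msg

# e.g. osc_message("/azimuth", 0.5) yields 20 bytes ready to send over UDP
```

In practice, libraries such as python-osc handle this encoding, and the resulting datagrams are sent over UDP to the SuperCollider server; the sketch only shows the message format itself.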
-
Ellefsen, Kai Olav
(2023).
Kunstig intelligens: Verden sett gjennom en maskins øyne.
-
Ellefsen, Kai Olav
(2023).
Hva er Kunstig Intelligens?
-
Bernhardt, Emil
(2023).
Hva er musikk?
-
Riaz, Maham
(2023).
Sound Design in Unity: Immersive Audio for Virtual Reality Storytelling.
Show summary
Research talk on sound design for games and immersive environments. The Unity game engine is used for environmental modeling. The Oculus Spatializer plugin provides control over binaural spatialization with native head-related transfer functions (HRTFs). Game scenes included C# scripts, which accounted for intermittent emitters (randomly triggered sounds of nature, critters and birds), crossfades, occlusion and raycasting. In the mixing stage, mixer groups, mixer snapshots, snapshot triggers, SFX reverb sends, and low/high-pass filters were some of the tools demonstrated.
-
Upham, Finn
(2023).
Breathing Together in Music, a RESPY Workshop.
Show summary
Respiration is a subtle but inescapable element of real-time musical experiences, sometimes casually accompanying whatever we are hearing, other times directly involved in the actions of sound generation. This workshop explores respiratory coordination in music listeners and ensemble musicians with respy, a new Python library for evaluating respiration information from single-belt chest stretch recordings. Following an introduction to the human respiratory system and breathing in music, the workshop demonstrates how the respy algorithms evaluate phase and breath type, and presents statistical tools for assessing shared information in these features of people listening to or making music together. Rather than only using aggregate statistics such as respiration rate, respy aims to elevate the details of the respiratory sequence to facilitate our exploration of how breathing is involved in musical experiences, second-by-second. Measurable coordination of the respiratory system to musical activities challenges our expectations for interacting oscillatory systems. This session will conclude with a discussion on the different categories of relationships possible between people breathing together in music.
-
Ellefsen, Kai Olav
(2023).
More human robot brains with inspiration from biology, psychology and neuroscience.
-
Martin, Remy Richard
(2023).
Our Aesthetic Categories.
-
Martin, Remy Richard
(2023).
Sensing Contexts: An Audio Walk Through the Nasjonalmuseet.
-
Martin, Remy Richard & Bernhardt, Emil
(2023).
Entrainment, free will, and musicking: an enactivist perspective.
-
Martin, Remy Richard; Cross, Ian; Upham, Finn; Bishop, Laura; Sørbø, Solveig & Øland, Frederik
(2023).
What can one learn from more naturalistic concert research?
-
Câmara, Guilherme Schmidt; Sioros, Georgios; Danielsen, Anne; Nymoen, Kristian & Haugen, Mari Romarheim
(2023).
Sound-producing actions in guitar performance of groove-based microrhythm.
Show summary
This study reports on an experiment that investigated how guitarists signal the intended timing of a rhythmic event in a groove-based context via three different features related to sound-producing motions of impulsive chord strokes (striking velocity, movement duration and fretboard position). Twenty-one expert electric guitarists were instructed to perform a simple rhythmic pattern in three different timing styles—“laidback,” “on-the-beat,” and “pushed”—in tandem with a metronome. Results revealed systematic differences across participants in the striking velocity and movement duration of chords in the different timing styles. In general, laid-back strokes were played with lower striking velocity and longer movement duration relative to on-the-beat and pushed strokes. No differences in the fretboard striking position were found (neither closer to the “bridge” [bottom] nor to the “neck” [head]). Correlations with previously reported audio features of the guitar strokes were also investigated, where lower velocity and longer movement duration generally corresponded with longer acoustic attack duration (signal onset to offset).
-
Câmara, Guilherme Schmidt; Spiech, Connor & Danielsen, Anne
(2023).
To asynchrony and beyond: In search of more ecological perceptual heuristics for microrhythmic structures in groove-based music.
Show summary
There is currently a gap in rhythm and timing research regarding how we perceive complex acoustic stimuli in musical contexts. Many studies have investigated timing acuity in non-musical contexts involving simple rhythmic sequences comprised of clicks or sine waves. However, the extent to which these results transfer to our perception of microrhythmic nuances in multilayered musical contexts rife with complex instrumental sounds remains poorly understood. In this talk we will present an overview of a planned series of just-noticeable difference (JND) experiments that will generate ecologically valid perceptual heuristics regarding timing discrimination thresholds. The aim is to investigate the extent to which microrhythmic timing and sonic nuances are perceived in groove-based music and connect these heuristics to the pleasurable urge to move in groove-based contexts, as well as acoustic (e.g., intensity, duration, frequency) and musical features (e.g., tempo, genre), and listener factors (e.g. musical training, stylistic familiarity). Overall, we expect timing thresholds to be higher for polyphonic/musical than for monotonic/non-musical stimuli/contexts and higher for pulse attribution (whether one can perceive a “beat”; Madison & Merker 2002, Psychol Res) than for simple detection of asynchrony and anisochrony (whether one can perceive “rhythmic irregularities”). Thresholds will likely be modulated by intensity (Goebl & Parncutt 2002, ICMPC7), tempo (Friberg & Sundberg 1995, J Acous Soc Am), instrumentation (Danielsen et al. 2019, J Exp Psychol), and genre/stylistic conventions (Câmara & Danielsen 2019, Oxford). Musically trained/stylistically familiar listeners may also display style-typical sensitivity to microrhythmic manipulations (Danielsen et al. 2021 Atten Percept Psychophys; Jakubowski et al. 2022; Cogn). 
In terms of subjective experience, we expect that onset asynchrony exaggerations will likely elicit lower pleasure and movement ratings compared to performances with idiomatic timing profiles (Senn et al. 2018, PLoS One). Higher ratings should also be biased in favor of familiar styles (Senn et al. 2021) and rhythmic patterns that do not engender excessive metrical ambiguity are likely to elicit higher ratings (Spiech et al. 2022, preprint; Witek et al. 2014, PLoS One).
-
Serdar Göksülük, Bilge
(2023).
The Implications of Laban/Bartenieff Movement Studies in the Field of Dance Anthropology.
-
Serdar Göksülük, Bilge
(2023).
Hybrid Format Movement Training Under the Pandemic Measures: A Clash Between Physical and Digital Realm.
-
Serdar Göksülük, Bilge
(2023).
Phenomenological Inquiry of Movement as a Methodology in Performing Arts Education.
-
Serdar Göksülük, Bilge
(2023).
From Ritualistic Dance to Political Act: Embodying Oppositions Through Dancing Halay in Mass Demonstrations of Turkey.
-
Serdar Göksülük, Bilge
(2023).
Embodied Knowledge Production Through Telematics in the Hybrid Realm.
-
Serdar Göksülük, Bilge
(2023).
Performative Quality of Aesthetics in Bio-Cultural Paradigm.
-
Ellefsen, Kai Olav
(2023).
Evolutionary Robotics.
-
Upham, Finn
(2023).
Using Metrically-Entrained Tapping to Align Mobile Phone Sensor Measurements from In-Person and Livestream Concert Attendees.
Show summary
Music is often made and enjoyed in large groups, but simultaneously capturing measurements from dozens or hundreds of people is technically difficult. When measurements are not constrained to wired or continuously connected wireless systems, we can record much bigger groups, potentially taking advantage of the wearable sensors in our phones, watches, and more dedicated devices. However, aligning measurements captured by independent devices is not always possible, particularly to a precision relevant for music research. Phone clocks differ and update sporadically, wearable device clocks drift, and for online broadcast performances, exposure times can vary by tens of seconds across the remote audience. Many measurement devices that are not open to digital synchronisation triggers still include accelerometers; with a suitable protocol, participant movement can be used to embed synchronisation cues in accelerometry measurements for alignment regardless of clock times. In this paper, we present a tapping synchronisation protocol that has been used to align measurements from phones worn by audience members and a variety of sensors worn by a symphony orchestra. Alignment with the embedded cues demonstrates the necessity of such a protocol, correcting offsets of more than 700 ms for devices supposedly initialised with the same computer clock, and over 10 s for online audience participants. Audience tapping performance improved cell phone measurement alignment to a median of 100 ms offset, and professional musicians’ tapping improved alignment precision to around 40 ms. While the temporal precision achieved with entrained tapping is not quite good enough for some types of analyses, this improvement over uncorrected measurements opens a new range of group coordination measurement and analysis options.
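The cue-based alignment described in this abstract can be illustrated with a toy cross-correlation of tap-event trains (a hypothetical sketch, not the authors' protocol): tap times from two devices are binned onto a common grid, and the lag that maximizes their overlap is read off as the relative clock offset.

```python
# Illustrative sketch of clock-offset estimation from embedded tapping
# cues. Tap onset times (in ms, per device clock) are binned onto a
# 10 ms grid; the lag maximizing the cross-correlation of the two
# binary event trains is taken as the offset. Bin size and search
# range are assumptions for the example.

def best_lag(a, b, max_lag):
    """Return the shift (in bins) of b relative to a that best matches."""
    def corr(lag):
        return sum(a[i] * b[i + lag]
                   for i in range(len(a))
                   if 0 <= i + lag < len(b))
    return max(range(-max_lag, max_lag + 1), key=corr)

def estimate_offset_ms(taps_a_ms, taps_b_ms, bin_ms=10, max_lag_ms=2000):
    """Estimate device B's clock offset relative to device A (in ms)."""
    n = max(max(taps_a_ms), max(taps_b_ms)) // bin_ms + 1
    a = [0] * n
    b = [0] * n
    for t in taps_a_ms:
        a[t // bin_ms] = 1     # mark tap onsets on device A's train
    for t in taps_b_ms:
        b[t // bin_ms] = 1     # mark tap onsets on device B's train
    return best_lag(a, b, max_lag_ms // bin_ms) * bin_ms
```

With a metrically entrained (hence non-uniform) tapping pattern, the correlation peak is sharp, which is what makes the cue usable for alignment despite drifting, independently initialised clocks.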
-
Upham, Finn & Oddekalv, Kjell Andreas
(2023).
Fingers and Tongues: Appreciating Rap Flows through Proprioceptive Interaction in Rhythm Hive.
Show summary
Rhythm games have been studied for their potential to develop interest in music making (Cassidy and Paisley, 2013) and transferable musicianship skills (Richardson and Kim, 2011), but how might they influence players’ appreciation for specific musical works? Proprioceptive interaction, a concept by game designer Matt Bloch (Miller, 2017), refers to changes in a game player's perception of music as they practice specific movements to it. By drawing attention to coincidental sounds, players can develop their hearing and appreciation for nuances of production and performance. Many fans of rap enjoy performances in languages they do not speak themselves. Without specific language skills, expertise in rap performance, and/or time to learn lyrics phonetically, their experience of a rap flow is hampered by an inability to imitate and imagine the generative action of performance. Rhythm Hive is a mobile rhythm game based on the music of BTS, Enhypen, and TXT, K-pop groups with substantial followings outside of Korea. Game play presents players with finger choreographies to these groups’ hit songs, tapping sequences to the vocal performances across four to seven positions in a line. For these groups’ many non-rapping and non-Korean-speaking fans, playing Rhythm Hive may offer deeper understanding of performances by rappers like RM, Suga, and J-Hope. Through expert analysis of rap performance, transcriptions of game play, and reflections on the experience of playing Rhythm Hive, we consider shared structure between the prescribed finger choreographies and the rap flows they accompany. We studied rap verses from four BTS songs alongside their Easy and Hard level tapping sequences (vocal versions only) to identify parallels in rhythm, segmentation, repetition, and accents. Easy mode choreographies tend to mark their relationship to rap vocals by hitting the start of lines and then articulating structure with repeated contours tapped on quarter and eighth notes.
Hard mode choreographies tend to hit every rapped syllable and incorporate more gestural flourishes to mark pitch changes, ending and internal rhymes, and interesting breaks from a steady 16th note flow. Both Easy and Hard tapping sequences consistently follow the rap track when it deviates from a quantized beat. The finger choreographies of Rhythm Hive illuminate rap performances by directing and rewarding players’ attention to details of flows that may otherwise be missed. Game feedback pushes players to replicate delivery microtiming, while spatial patterns underline linguistic and rhythmic structure. Hard mode tapping sequences articulate distinguishing characteristics of specific rap styles, giving players tangible sensitivity to degrees of technicality and nuances of genre. While fans may be motivated to play rhythm games like Rhythm Hive out of a preexisting love of the music and bands, tapping along offers them a chance to attend to, appreciate, and even rehearse key aspects of these rappers’ expert performance choices, regardless of how well they might follow by ear.
-
Upham, Finn
(2023).
Insight into human respiration through the study of orchestras and audiences.
-
Bishop, Laura & Upham, Finn
(2023).
Bodies in Concert.
Show summary
Increasingly, research on music performance is moving out of controlled laboratory settings and into concert halls, where there are opportunities to explore how performance unfolds in high-arousal conditions and how performers and audiences interact. In this session, we will present findings from a series of live research concerts that we carried out with the Stavanger Symphony Orchestra. The orchestra performed the same program of classical repertoire for four audiences of schoolchildren and an audience of families. Orchestra members wore sensors that collected cardiac activity, respiration, and body motion data, and the conductor additionally wore a full-body motion capture suit and eye-tracking glasses. Audience members in some of the concerts were invited to wear reflective wristbands, and wristband motion was captured using infrared video recording. We will begin the session with a discussion of the scientific and methodological challenges that arose during the project, in particular relating to the large scale of data capture (>50 musicians and hundreds of audience members), the visible nature of research that is carried out on a concert stage, and the development of procedures for aligning data from different recording modalities. Next, we will present findings from two lines of analysis that investigate different aspects of behavioural and physiological coordination within the orchestra. One analysis investigates the effects of audience noise and musical roles on coherence in (i) cardiac rate and variability and (ii) respiratory phase and rate. The second analysis investigates the effects of musical demands on synchronization of body sway, bowing, and respiration in string sections. We will conclude the session with an open discussion of how live concert research might be optimized.
-
Upham, Finn & Christophersen, Bjørn Morten
(2023).
Bodies in Concert: RITMO project with the Stavanger symfoniorkester.
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control to Human–Robot Interaction.
-
-
Blenkmann, Alejandro Omar; Ianni, Pablo & Verdugo, Rodrigo
(2023).
Interview for TV show "Salud en Movimiento", Radio Television Neuquén.
[TV].
Neuquen, Argentina.
-
Blenkmann, Alejandro Omar
(2023).
Neural correlates of auditory predictions using intracranial EEG.
Show summary
Website https://gumed.edu.pl/76777.html
-
Blenkmann, Alejandro Omar
(2023).
Altered hierarchical auditory predictive processing after lesions to the orbitofrontal cortex - Quantifying Evoked Responses through Encoded Information.
-
Laczko, Balint
(2023).
Online guest lecture about The Hum - a real-time 3D audiovisual performance in Max.
-
Blenkmann, Alejandro Omar & Puppio, Daniel
(2023).
Interview at "La Mañana de la Radio" FM 97.3.
[Radio].
Neuquen, Argentina.
-
Asko, Olgerta; Solbakk, Anne-Kristin; Leske, Sabine Liliana; Meling, Torstein Ragnar; Knight, Robert T. & Endestad, Tor
[Show all 7 contributors for this article]
(2023).
The orbitofrontal cortex (OFC) has a critical role in the generation of high-level expectations.
-
Asko, Olgerta; Solbakk, Anne-Kristin; Leske, Sabine Liliana; Meling, Torstein Ragnar; Knight, Robert T. & Endestad, Tor
[Show all 7 contributors for this article]
(2023).
Orbitofrontal lesion impacts formation of auditory expectations.
Show summary
Current findings of orbitofrontal cortex (OFC) function suggest that this region might have a role in the generation of prediction error signals associated with top-down expectation of upcoming stimuli. We investigated the impact of lesions to the OFC on the Contingent Negative Variation (CNV), an electrophysiological marker of cognitive expectation and time perception. Twelve OFC patients and fifteen healthy controls performed an auditory local-global paradigm while brain electrical activity was recorded. The structural regularities of the tones were controlled at two hierarchical levels by rules defined at a local (i.e., between tones within sequences) level with a short timescale and at a global (i.e., between sequences) level with a longer timescale. At the global level, deviant tone sequences were interspersed among standard tone sequences in a pseudorandom order, rendering some deviant sequences more anticipated than others. We found that healthy controls exhibited CNV build-up before the occurrence of deviant sequences. The CNV drift rate was modulated by the expectancy of deviant sequences (i.e., the higher the expectancy, the higher the CNV drift rate), reflecting their ability to anticipate when a deviant tone sequence would occur. However, patients with OFC lesions did not show CNV drift modulations by the expectancy of the deviant tone sequences, indicating impaired anticipation of these upcoming events. These findings suggest involvement of the OFC in generating auditory expectations based on the contextual and temporal structure of the task.
-
Blenkmann, Alejandro Omar; Solbakk, Anne-Kristin; Leske, Sabine Liliana; Llorens, Anaïs; Funderud, Ingrid & Larsson, Pål Gunnar
[Show all 10 contributors for this article]
(2023).
Temporal and precentral brain activity in automatic auditory deviance detection. Evidence from human intracranial EEG recordings.
-
Solbakk, Anne-Kristin; Blenkmann, Alejandro Omar; Leske, Sabine Liliana & Endestad, Tor
(2023).
Orbitofrontal lesion impacts formation of auditory expectations.
-
Solli, Sandra; Danielsen, Anne; Leske, Sabine Liliana; Solbakk, Anne-Kristin & Endestad, Tor
(2023).
Periodic vs Aperiodic Temporal Predictions: Shared or Separate Mechanisms?
-
Solli, Sandra; Danielsen, Anne; Leske, Sabine Liliana; Blenkmann, Alejandro Omar; Solbakk, Anne-Kristin & Endestad, Tor
(2023).
Periodic vs Aperiodic Temporal Predictions: Shared or Separate Mechanisms?
-
Rezende Carvalho, Vinicius; Collavini, Santiago; Kochen, Silvia; Solbakk, Anne-Kristin & Blenkmann, Alejandro Omar
(2023).
Single-neuron responses to a multifeature oddball paradigm.
-
Leske, Sabine Liliana; Endestad, Tor; Volehaugen, Vegard; Foldal, Maja Dyhre; Blenkmann, Alejandro Omar; Solbakk, Anne-Kristin et al.
(2023).
Predicting the Beat Bin – Beta Oscillations Support Top-Down Prediction of The Temporal Precision of a Beat.
-
Veenstra, Frank; Norstein, Emma Stensby & Glette, Kyrre
(2023).
Tutorial: Evolving Robot Bodies and Brains with Unity.
Show summary
The evolution of robot bodies and brains allows researchers to investigate which building blocks are interesting for evolving artificial life. Agnostic to the evolutionary approach used, the supplied building blocks influence how artificial organisms will behave. What should these building blocks look like? How should we associate control units with these building blocks? How should we represent the genomes of these robots? In this tutorial we discuss (1) previous approaches to evolving robots and virtual creatures, (2) outline how Unity simulations and Unity's ML-agents package can be used as an interface, and (3) our approach to evolving bodies and brains using Unity.
There are many existing solutions tailored to experimenting with body-brain co-optimization, and we have been using several simulation approaches to evolve modular robots represented by directed trees (directed acyclic graphs). Since evolving bodies can be relatively complex, we give participants an overview of existing methods and invite them to get some guided hands-on experience using Unity ML-Agents for evolving robots. The Unity ML-Agents toolkit is an open-source toolkit for game developers, AI researchers, and hobbyists that can be used to train agents using various AI methods. Similar to OpenAI Gym, it supplies a Python API through which one can optimize agents in a variety of environments. The Unity ML-Agents toolkit provides an easy-to-use interface that is flexible enough to allow for quick design iterations for evolving robot bodies and brains.
This tutorial is aimed at researchers who are interested in simulating the evolution of robot bodies and brains. The tutorial will provide an overview of existing approaches and demonstrate how to design and incorporate control units, morphological components, environments, and objectives. Participants will learn how to use Unity ML-Agents as a tool with evolutionary algorithms and how to create and incorporate their own robotic modules for evolving robots.
-
Lartillot, Olivier
(2023).
Towards a comprehensive model for computational music transcription and analysis: a necessary dialog between machine learning and rule-based design?
-
Lartillot, Olivier
(2023).
Computational audio and musical features extraction: from MIRtoolbox to the MiningSuite.
-
Lartillot, Olivier
(2023).
Dynamic Visualisation of Fugue Analysis, Demonstrated in a Live Concert by the Danish String Quartet.
-
Christodoulou, Anna-Maria; Lartillot, Olivier & Anagnostopoulou, Christina
(2023).
Computational Analysis of Greek Folk Music of the Aegean.
-
Lartillot, Olivier & Monstad, Lars Løberg
(2023).
MIRAGE - A Comprehensive AI-Based System for Advanced Music Analysis.
-
Maidhof, Clemens; Agres, Kat; Fachner, Jörg & Lartillot, Olivier
(2023).
Intra- and inter-brain coupling during music therapy.
-
Wosch, Thomas; Vobig, Bastian; Lartillot, Olivier & Christodoulou, Anna-Maria
(2023).
HIGH-M (Human Interaction assessment and Generative segmentation in Health and Music).
-
Lartillot, Olivier
(2023).
Music Therapy Toolbox, and prospects.
-
Lartillot, Olivier; Swarbrick, Dana; Upham, Finn & Cancino-Chacón, Carlos Eduardo
(2023).
Video visualization of a string quartet performance of a Bach Fugue: Design and subjective evaluation.
-
Bishop, Laura; Høffding, Simon; Laeng, Bruno & Lartillot, Olivier
(2023).
Mental effort and expressive interaction in expert and student string quartet performance.
-
Laczko, Balint
(2023).
Video poster of the project entitled “The autophagic symphony – Unveiling the final rhythm”.
-
Riaz, Maham; Upham, Finn; Burnim, Kayla; Bishop, Laura & Jensenius, Alexander Refsum
(2023).
Comparing inertial motion sensors for capturing human micromotion.
Show summary
The paper presents a study of the noise level of accelerometer data from a mobile phone compared to three commercially available IMU-based devices (AX3, Equivital, and Movesense) and a marker-based infrared motion capture system (Qualisys). The sensors are compared in static positions and for measuring human micromotion, with larger motion sequences as reference. The measurements show that all but one of the IMU-based devices capture motion with an accuracy and precision that is far below human micromotion. However, their data and representations differ, so care should be taken when comparing data between devices.
-
Laczko, Balint
(2023).
Guest lecture about granular synthesis with onset detection in Max.
-
Tørresen, Jim
(2023).
What is AI?
-
Tørresen, Jim
(2023).
What is Robotics?
-
Crockett, Keeley & Tørresen, Jim
(2023).
Workshop on Ethical Challenges within Artificial Intelligence - From Principles to Practice.
-
Tørresen, Jim & Yao, Xin
(2023).
Tutorial: Ethical Risks and Challenges of Computational Intelligence.
-
Tørresen, Jim
(2023).
From Adapting Robot Body and Control Using Rapid-Prototyping to Human–Robot Interaction with TIAGo.
-
Monstad, Lars Alfred Løberg
(2023).
Demonstrasjon av Kunstig Intelligens som verktøy for komponister.
-
Monstad, Lars Løberg
(2023).
Kunstig Intelligens i kunst og kultur.
[TV].
NRK Dagsrevyen.
-
Monstad, Lars Løberg; Borgan, Silje Larsen & Waske, Vegard
(2023).
AI i musikken: konsekvenser og muligheter.
-
Tørresen, Jim
(2023).
Future Intelligent and Adaptive Robots in Real-World Environments.
-
Tørresen, Jim
(2023).
Human Intuition and its Impact on Human–Robot Interaction Regarding Safety and Accountability.
-
Tørresen, Jim
(2023).
Robots-assistants that Care about Privacy, Security and Safety.
-
Tørresen, Jim; Saplacan, Diana & Mahler, Tobias
(2023).
Ethical, Legal and User Perspectives on Social and Assistive Robots (ELUPSAR) workshop.
-
D'Amario, Sara; Ternström, Sten; Goebl, Werner & Bishop, Laura
(2023).
Impact of singing togetherness and task complexity on choristers' body motion.
-
Bishop, Laura; Niemand, Anna Maria; D'Amario, Sara & Goebl, Werner
(2023).
Coordinated head motion predicts cognitive effort and experiences of musical togetherness in singing-piano duos.
-
Brøvig, Ragnhild
(2023).
Users’ Freedom of Expression in the Digital Era.
-
Brøvig, Ragnhild
(2023).
You’re not supposed to sample and rely on copyright exceptions.
-
Brøvig, Ragnhild
(2023).
Publishing Panel (on the publishing of Parody in the Age of Remix).
-
Brøvig, Ragnhild & Stevenson, Alex
(2023).
Machine Aesthetics: An Analytical Framework.
-
Brøvig, Ragnhild
(2023).
Crisis in the Flow of Remixes and in the Maintenance of Copyright Exceptions.
-
Brøvig, Ragnhild
(2023).
Presentation of the book Parody in the Age of Remix.
-
Brøvig, Ragnhild
(2023).
Crises affecting the economy, production, and consumption of music: Perspectives from remixers.
-
Brøvig, Ragnhild & Furunes, Marit Johanne
(2023).
Presentasjon av RITMOs karriereprogram for UiO sentralt (Seksjon for kompetanse- og organisasjonsutvikling, Avdeling for organisasjon og personal).
-
Brøvig, Ragnhild
(2023).
RITMOs Mentorprogram.
-
Monstad, Lars Løberg & Lartillot, Olivier
(2023).
Automatic Transcription Of Multi-Instrumental Songs: Integrating Demixing, Harmonic Dilated Convolution, And Joint Beat Tracking.
Show summary
In the rapidly expanding field of music information retrieval (MIR), automatic transcription remains one of the most sought-after capabilities, especially for songs that employ multiple instruments. Musscribe emerges as a state-of-the-art transcription tool that addresses this challenge by integrating three distinct methodologies: demixing, harmonic dilated convolution, and joint beat tracking. Demixing is employed to isolate individual instruments within a song by separating overlapping audio sources, thus ensuring each instrument is transcribed distinctly. Beat tracking is then run as a parallel process to extract the joint beat and downbeat estimations. These processes result in an output MIDI file, which is then quantized using information derived from the beat tracking. As such, this method paves the way for more accurate and sophisticated analyses, bridging the gap between human and machine understanding of music. Together, these methodologies allow us to produce transcriptions that are not only accurate but also highly representative of the original compositions. Preliminary tests and evaluations showcase its potential for transcribing complex musical pieces with high fidelity, outperforming many contemporary tools on the market. This innovative approach has implications not only for music transcription but also for broader applications in audio analysis, remixing, and digital music production. The model has been instrumental in accelerating the composition process for several Norwegian television shows. Moreover, its efficacy can be observed in the Netflix series "A Storm for Christmas," where renowned composer Peter Baden harnessed the tool to enhance his workflow, proving the demand for innovative tools like this in the professional music industry.
-
Brøvig, Ragnhild
(2023).
Wakeful Sleep and Sleepy Wakefulness in EDM.
-
Blenkmann, Alejandro Omar; Asko, Olgerta; Volehaugen, Vegard; Foldal, Maja Dyhre; Solli, Sandra; Leske, Sabine Liliana et al.
(2023).
Auditory perception, memory, and predictions.
-
Hope, Mikael; Spiech, Connor & Bégel, Valentin
(2023).
Synchronizing at Slower Tempi Increases Pupil Activity Compared With One’s Own Spontaneous Motor Tempo.
-
Blenkmann, Alejandro Omar & Agrawal, Rahul Omprakash
(2023).
Intracranial Electrode Localization workshop.
-
Monstad, Lars Alfred Løberg; Baden, Peter & Wærstad, Bernt Isak Grave
(2023).
Kan kunstig intelligens brukes i låtskriverprosessen?
-
Volehaugen, Vegard; Leske, Sabine Liliana; Funderud, Ingrid; Llorens, Anaïs; Carvalho, Vinicius Rezende; Endestad, Tor et al.
(2023).
Echoes of the unheard: An intracranial electrophysiology study of expectation and attention in auditory omission processing.
-
Ranjan, Snehal; Hari, Kancharla Aditya; Vuoskoski, Jonna Katariina & Alluri, Vinoo
(2023).
Sad songs say so much: Analyzing moving music shared online.
-
Jensenius, Alexander Refsum
(2023).
Tverrfaglig forskning på rytme, tid og bevegelse.
Show summary
RITMO is a unique Centre of Excellence (SFF) because of its radically interdisciplinary structure. How does this work in practice?
-
Jensenius, Alexander Refsum
(2023).
Sound Actions: Conceptualizing Musical Instruments.
Show summary
How do new technologies change how we perform and perceive music? What happens when composers build instruments, performers write code, perceivers become producers, and instruments play themselves? These are questions addressed in the new book by Professor Alexander Refsum Jensenius: Sound Actions: Conceptualizing Musical Instruments published by the MIT Press.
-
Jensenius, Alexander Refsum
(2023).
Sound Actions - Conceptualizing Musical Instruments.
-
Jensenius, Alexander Refsum
(2023).
Explorations of human micromotion through standing still.
Show summary
Throughout 2023, I will stand still for ten minutes around noon every day, in a different room each day. The aim is to collect data about my micromotion and compare it to the qualities of the environment. This project follows a decade-long exploration of human micromotion from both artistic and scientific perspectives. In the talk, I will present results from the annual Norwegian Championships of Standstill, where we have studied the influence of music on people's micromotion. I will also talk about how micromotion can be used in interactive music systems, allowing for conscious and unconscious control of musical sounds.
-
Jensenius, Alexander Refsum
(2023).
Exploring Human Micromotion Through Standing Still.
Show summary
Moving slowly likely puts us into a special state of mind. Subjective reports from various practices including dance, Tai Chi and walking meditation suggest that slow movements can bring participants into a special state involving increased relaxation and awareness. Interestingly, relatively little research has been performed specifically to understand the underlying mechanisms and the possible applications of human slow movement. One reason might be that slow movements are not common in day-to-day life: when we want to move – for example to pick up our cup of coffee - we usually want to do it now. Some evidence suggests that humans tend to avoid moving slowly in different tasks, for example, when improvising movements together. The goal of this meeting is to bring together scholars and practitioners interested in slow movement, and to foster interdisciplinary research on this somewhat neglected topic.
-
Jensenius, Alexander Refsum
(2023).
Introducing MusicLab.
Show summary
In 2021, one of the world’s finest string quartets, The Danish String Quartet (DSQ), and a large team of international researchers based at RITMO, co-hosted MusicLab Copenhagen – a groundbreaking event where DSQ performed their best repertoire while researchers experimented with, measured, and analyzed the experiences and behavior of musicians and audience. Some of the questions we tried to answer were: Do we become one grand “we” when absorbed in music together? How do we synchronize our bodily rhythms with the music during a concert? As an innovative musical and scientific format, the concert has been widely reported and won “Event of the Year” by the Danish National Broadcasting Corporation (DR P2). Now, the researchers have completed their analyses, and we are excited to share findings in a hybrid launch event.
-
Pleiss, Martin Peter
(2023).
The affective cycle of orientation in unfamiliar contexts within an aesthetic Virtual Reality environment.
Show summary
Art has been proposed as an opportunity (Noë) to observe the ‘strange’ (Gallagher) in the quest for phenomenological descriptions. These fringe forms of factual variations help illuminate experiential structures (Merleau-Ponty).
This paper presents:
1) an observed, shared cycle of relationships between perceptive actions, affect and degrees of familiarity with a novel and hard-to-grasp Virtual Reality artwork.
This description results from 2) a concrete experimental methodology to utilise the combination of factual variations and experiential artworks in a phenomenological project.
As part of the analysis of my PhD project, this paper describes an experience of orientation that we subjected experiment participants to. The experiment used a multimodal VR artwork, which features a very abstract and unexpectedly interactable world, devoid of apparent contexts, symbolisms or real-world references. While intended as an aesthetic experience by its creators, the artwork is rich in its underlying governing laws of physics, visual and auditory design, and interaction dynamics.
The resulting ‘strange’ experience made it possible to observe the changing relationship of how-it-mattered: the VR world appearing as the unknown initially and then gradually becoming familiar as something through an action-centred being-with its objects. Different stages of affect become apparent, outlined as an ‘affective cycle of orientation’. Further, the paper describes an observable spectrum in the quality of orienting actions and their respective intentions and stances. At one pole of this spectrum, actions serve in a mediating and enabling function; the other end falls within a more classical definition of ‘affordances’ (Gibson). I will discuss this with respect to gestures and habits (Merleau-Ponty). Finally, the paper showcases the self as more than just re-acting: as having self-enabling capacities for intentional actions by being enacted, embedded and embodied in an aesthetic experience.
-
Pleiss, Martin Peter
(2023).
Action cycles before ‘affordances’ in unfamiliar contexts within a playful Virtual Reality environment.
Show summary
Virtual realities as playful, encompassing and somewhat ecological experiences offer unique opportunities for phenomenological inquiries into subjectivity. This paper presents:
1) an observed, shared cycle of relationships between perceptive (inter)actions, affect and degrees of familiarity in the emergence of affordances while experiencing novel and hard-to-grasp objects and dynamics within a Virtual Reality.
This description results from
2) a concrete experimental methodology to utilise the potential of interactive virtual realities as factual variations to investigate subjectivity and, in particular, the phenomenology of aesthetic experiences. Aesthetic experiences have been proposed (Noë) as opportunities to observe the ‘strange’ (Gallagher) in the quest for phenomenological descriptions. This paper highlights virtual realities as fringe forms of these factual variations, which help illuminate experiential structures (Merleau-Ponty). As part of the analysis of my PhD project, the paper describes the emergence of affordances in an initially unknown VR environment. My experiment used a multimodal VR artwork, which features a very abstract and unexpectedly interactable world, devoid of apparent contexts, symbolisms or real-world references. While intended as an aesthetic experience by its creators, the artwork is rich and playful in its underlying governing laws, design and interaction dynamics.
The resulting ‘strange’ experience made it possible to observe the changing relationship of how-it-mattered: the unknown world gradually becoming familiar as something through an action-centred being-with its objects. Different stages of affect become apparent, outlined as an ‘affective cycle of orientation’. Further, the paper describes an observable spectrum in the quality of orienting actions and their respective intentions and stances. At one pole of this spectrum, actions serve in a mediating and enabling function; the other end falls within a more classical definition of ‘affordances’ (Gibson). I will discuss this in relation to gestures and habits (Merleau-Ponty) and our self-enabling capacities for intentional actions by being enacted, embedded and embodied (4E-d) in an aesthetic experience.
-
Brøvig, Ragnhild
(2023).
Digitalisering i musikkutdanningen.
-
Solli, Sandra; Danielsen, Anne; Leske, Sabine Liliana; Blenkmann, Alejandro Omar; Solbakk, Anne-Kristin & Endestad, Tor
(2023).
Both periodic and aperiodic rhythms facilitate perceptual processing.
-
Brøvig, Ragnhild
(2023).
Machine Rhythms.
-
Brøvig, Ragnhild
(2023).
Arousal, Expectancy Violation, and Pleasure.
-
Brøvig, Ragnhild
(2023).
Different approaches to qualitative methods.
-
Blenkmann, Alejandro Omar
(2023).
Neurophysiological Mechanisms of Human Auditory Predictions: From population- to single neuron recordings.
-
Jónsson, Björn Thór
(2023).
kromosynth.
Show summary
Sonic design with evolutionary algorithms: The engine behind synth.is and kromosynth-cli for audio waveform breeding with neuro-evolution of pattern producing networks and quality diversity search.
-
Martin, Remy Richard
(2023).
Ultima Listeners.
-
Tørresen, Jim
(2023).
Artificial Intelligence – diverse in methods and applications.
-
Brøvig, Ragnhild & Furunes, Marit Johanne
(2023).
Karriereløpsprogrammet på RITMO med mentorordning.
-
Brøvig, Ragnhild
(2023).
My way to becoming a full professor.
-
Martin, Remy Richard
(2023).
Aesthetic Resonances: Senses of Self in Rhythm, Musical Time, and Space.
Show summary
Resonance is a rich concept that is receiving significant attention in current psychology, philosophy, and neuroscience. In ecologically-oriented literature its usage centres on perceivers’ adaptive detection of environmental information (Clarke, 2005; Raja, 2019). This is instructive of the modulating—and enhancing—nature of attention, awareness, and action. Distinct from perceptual notions of resonance in ecological psychology, physical understandings, and accounts of neural activity, appear in several related fields. These typically concern the oscillatory interactions of two systems including forms of phase locking, synchronisation, and entrainment. Elsewhere the metaphor of acoustic resonance, as manifest in political contexts, is receiving philosophical attention (James, 2019).
Resonance is also a central metaphor in the context of aesthetic subjectivities. Vernacular uses of the term in response to aesthetic entanglements (‘I resonate with this song’; ‘that artwork resonates with me’) are called to mind. Adopting the ecological approach, this paper foregrounds resonance as a means of understanding the relationship between the phenomenology of music reception and underlying perceptual and affective interactions. Particularly important, self-luminous aesthetic resonances – experienced as senses of agency, ownership, affirmation, and affiliation – form the focus of a discussion which draws empirical support from quantitative studies of live music spectatorship and rich reports of ‘private’ music listening gathered through media-stimulated, phenomenological interviews.
-
Lartillot, Olivier & Monstad, Lars Løberg
(2023).
Computational music analysis: Significance, challenges, and our proposed approach.
Show summary
Music is something that most of us appreciate, yet it remains a hidden and enigmatic concept for many. Music notation, in the form of music scores, facilitates practicing and enhances the understanding of the richness of musical works. However, producing a musical score for a given music performance is a tedious and demanding task (called music transcription) that requires considerable proficiency. Hence the interest in computational automation. But music is not just notes; it is also melody, rhythm, themes, timbre, and very subtle aspects such as form. While many of us may not be consciously familiar with these concepts, they still have a subconscious influence on our aesthetic experience. Interestingly, it often happens that the more we consciously understand the underlying language of music, the more we tend to appreciate and enjoy it. Therefore, there is value in creating computational tools that can automate and enhance these types of analyses.
The presenters' past work resulted in the creation of Matlab's MIRtoolbox, which measures a broad range of musical characteristics directly from audio through signal processing techniques. Currently, the MIRAGE project prioritises music transcription (with a particular focus on Norwegian folk music), blending neural-network-based deep learning with conventional rule-based models. Through this project, they highlight the importance of acknowledging the interconnectedness between all musical elements. Additionally, they have crafted animated visualisations to make analyses more accessible to the general public and are aiming to make music transcription technology available to the public, with support from UiO Growth House.
-
Lartillot, Olivier; Thedens, Hans-Hinrich; Mjelva, Olav Luksengård; Elovsson, Anders; Monstad, Lars Løberg; Johansson, Mats Sigvard et al.
(2023).
Norwegian Folk Music & Computational Analysis.
Show summary
As a prelude to Norway's Constitution Day, this special event celebrated the Norwegian folk music tradition, showcasing our new online archive and demonstrating the richness of Hardanger fiddle music with live performance. One aim of the project is to conceive new technologies that allow users to better access, understand, and appreciate Norwegian folk music.
In this event, we introduced a new online version of the Norwegian Folk Music Archive and discussed underlying theoretical and technical challenges. A live concert/workshop, with the participation of Olav Luksengård Mjelva, offered a lively introduction to Hardanger fiddle music and its elaborate rhythm. The interests and challenges of automated transcription and analysis were discussed, along with the public release of our new software Annotemus.
The symposium was organised in the context of the MIRAGE project (RITMO, in collaboration with the National Library of Norway's Digital Humanities Laboratory).
-
Oddekalv, Kjell Andreas
(2022).
Public defense: Kjell Andreas Oddekalv.
-
Oddekalv, Kjell Andreas
(2022).
Rap music’s black cultural heritage: How does “pushing the limits” of dopeness relate to hip hop values of excellence and/as badness?
-
Oddekalv, Kjell Andreas
(2022).
Intervju om rap flows - Studio 2, NRK P2.
[Radio].
NRK P2.
-
Oddekalv, Kjell Andreas
(2022).
KARPE KARPE KARPE - Aftenposten Forklart.
[Internet].
Aftenposten Forklart Podcast.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Ugstad, Magnus; Hole, Erik; Sørli, Anders Ruud & Walderhaug, Bendik
(2022).
Sinsenfist på Samfunnet Bislet 2022.
-
Oddekalv, Kjell Andreas
(2022).
Hva gir hiphop flow? En norsk forsker mener han har funnet svaret.
[Newspaper].
Morgenbladet.
-
Lesteberg, Mari & Jensenius, Alexander Refsum
(2022).
MICRO and MACRO - Developing New Accessible Musicking Technologies.
Show summary
This paper describes the development of two musical instrument prototypes developed to explore how non-haptic music technologies can be accessed from a web browser and how they can offer accessibility for people with low fine motor skills. Two approaches to browser-based motion capture were developed and tested during an iterative design process. This was followed by observational studies of two user groups: one with low fine motor skills and one with normal motor skills. Contrary to our expectations, we found that avoiding the use of buttons and mice did not make the apps more accessible for the participants with low fine motor skills. Furthermore, motion speed was considered more important for people with low motor skills than the size of the control action. The most important finding is that browser-based musical instruments using sensor-based and video-based motion tracking are not only feasible but allow for reaching much larger groups of people than previously possible. This may ultimately lead to both more personalized and accessible musical experiences.
-
Malmierca, Manuel S.; Auksztulewicz, Ryszard; Teichert, Tobias; Blenkmann, Alejandro Omar & Melloni, Lucia
(2022).
The Neuronal basis of predictive coding: Evidence from brain studies across species.
-
Spiech, Connor
(2022).
Oscillatory Brain and Pupil Activity Varies with Rhythmic Complexity and Groove Ratings.
-
Polak, Rainer
(2022).
Empirical research in rhythm performance and perception.
-
Polak, Rainer & London, Justin
(2022).
Roundtable discussion: Analysis, Cognition and World Music.
-
Polak, Rainer
(2022).
Data graphs as context for music analysis: Examples from research on drum ensemble music from Mali.
-
Polak, Rainer
(2022).
Swing-based Meter in Music from Mali.
-
Upham, Finn
(2022).
Uncovering the active listener.
Show summary
Our experiences of music are both highly idiosyncratic, special to each of us in each moment, and collective, with common influences across a listening crowd. Through a series of empirical studies on how people feel and behave during music listening, Finn Upham traces their trajectory from a basic model of performed stimulus and audience response to an empowered-listener view of musical engagement. If participation is part of all musical experiences, this poses a question to all producers of music: “What are you asking your audience to do?”
-
Szorkovszky, Alexander; Veenstra, Frank & Glette, Kyrre
(2022).
From real-time adaptation to social learning in robots.
-
Krzyzaniak, Michael Joseph & Bishop, Laura
(2022).
Professor Plucky—Expressive body motion in human- robot musical ensembles.
-
Bishop, Laura
(2022).
Shared attention and shared expressive goals affect classical piano duos' playing quality and experiences of togetherness.
-
Bishop, Laura
(2022).
Intersubjectivity and musical togetherness: What is the overlap?
-
Bishop, Laura & Laeng, Bruno
(2022).
Expertise modulates the relationship between musical demands and mental effort.
-
Bishop, Laura
(2022).
Emergent coordination of ancillary gestures motivates musical and interperformer engagement during group music-making.
-
Swarbrick, Dana & Whorms, Alex
(2022).
LIVELab Concert Experiment - Audience Motion and Emotion - Effects of Participation and Shared Presence on Motion, Emotion, and Bonding.
Show summary
On September 23rd, 2022, Alex Whorms and her band, Konrad Swierczek (bass), Nigel Stewart (drums), and Stephen Orr (guitar), performed for a concert experiment on audience motion and emotion. Lead researcher Dana Swarbrick, a doctoral researcher at the University of Oslo's RITMO Centre for Interdisciplinary Studies in Rhythm, Time and Motion, delivered a Science Snapshot presentation on the research experiment. The live audience's head motion was measured with the LIVELab's motion capture system, and both the live and livestreaming audiences had their body motion measured with MoCap hats and the MusicLab App. Both audiences filled out surveys that measured their emotions.
-
Jensenius, Alexander Refsum
(2022).
NOR-CAM - en introduksjon.
Show summary
A working group appointed by Universities Norway (UHR) was mandated to recommend guiding principles for the assessment and evaluation of research(ers) in light of the transition to Open Science. This working group proposed a more flexible and holistic framework for recognition and rewards in academic research assessment. The ambition has been to develop a guide that adopts three core principles for assessment: more transparency, greater breadth, and comprehensive assessments as opposed to the one-sided use of indicators.
-
Jensenius, Alexander Refsum
(2022).
From ideas to reality: interdisciplinary collaborations.
-
Jensenius, Alexander Refsum
(2022).
Alternatives to journal-based metrics in research assessment.
Show summary
Science Europe invites institutional leaders, researchers at all stages of their careers, and experts from the field to join its 18 and 19 October 2022 conference on Open Science to discuss two key questions: (1) Is Open Science ready to become the norm in research? (2) How do we ensure this becomes an equitable transition? To find answers to these questions, the conference will provide a comprehensive overview of practical and policy initiatives, research assessment reforms, and financial measures that support the transition to Open Science. We will also look forward to new and emerging trends.
-
Jensenius, Alexander Refsum
(2022).
Experiencing the world through sound actions.
Show summary
This talk will reflect on my year-long project recording a daily "sound action". These are multimodal entities consisting of body motion and its resultant sound. When we only see a sound action, we can imagine its sound. If we only hear a sound action, we can imagine the body motion and objects involved in the interaction. Sound actions are ubiquitous in everyday life yet rarely discussed and reflected upon. My attempts at analyzing sound actions show some of the complexity involved in making sense of actions, reactions, and interactions with the world. This complexity can also inspire creative usage. I will present examples of meaningless and cognitively conflicting sound actions in the talk.
-
Jensenius, Alexander Refsum
(2022).
Publish or Perish? Researcher assessment is about to change.
Show summary
In July 2022, the European Commission launched an Agreement On Reforming Research Assessment. After years of talking, there is significant momentum for changing how researchers are assessed. In this talk, I will present some work leading up to the new agreement and how Universities Norway took a lead when developing the Norwegian Career Assessment Matrix (NOR-CAM). The core idea is that academics need to get recognition for a broader range of activities. This is important for transitioning to more open research practices and diverse career paths within and outside academia.
-
Herrebrøden, Henrik & Bjørndal, Christian Thue
(2022).
Youth international experience is a limited predictor of senior success in football: The relationship between U17, U19, and U21 experience and senior elite participation.
-
Herrebrøden, Henrik
(2022).
How we learn to move: a revolution in the way we coach & practice sports skills,
by Rob Gray, Perception Action Consulting & Education LLC, 2021, 265 pp., $16.99 USD (paperback), ISBN 979-8751331184.
Sports Coaching Review.
ISSN 2164-0629.
doi:
10.1080/21640629.2022.2099652.
-
Herrebrøden, Henrik
(2022).
Mental Effort in Olympic Athletes.
-
Herrebrøden, Henrik & Bjørndal, Christian T.
(2022).
Corrigendum: Youth International Experience Is a Limited Predictor of Senior Success in Football: The Relationship Between U17, U19, and U21 Experience and Senior Elite Participation Across Nations and Playing Positions (Front. Sports Act. Living, (2022), 4, (875530), 10.3389/fspor.2022.875530).
Frontiers in Sports and Active Living.
ISSN 2624-9367.
4.
doi:
10.3389/fspor.2022.954943.
-
van Otterdijk, Marieke; Song, Heqiu; Tsiakas, Konstantinos; van Zeijl, Ilka & Barakova, Emilia
(2022).
Nonverbal Cues Expressing Robot Personality - a Movement Analyst's Perspective.
-
van Otterdijk, Marieke; Saplacan, Diana; Laeng, Bruno & Torresen, Jim
(2022).
Explorative Study on Human Intuitive Responses to Observing Expressive Robot Behavior.
-
Jensenius, Alexander Refsum
(2022).
Kunstfag og åpen forskning.
Show summary
What dilemmas arise when research data and results are to be shared and reused? And what opportunities do greater openness and increased data sharing bring to fields such as music, visual art, film, performing arts, and design?
-
Kjus, Yngvar; Brøvig, Ragnhild & Wang, Solveig Margrethe
(2022).
Encountering new technology: A study of how female creators explore DAWs.
-
Brøvig, Ragnhild
(2022).
PhD course on academic formats: monograph, article, kappe.
Show summary
Presentation and panel discussion
-
Brøvig, Ragnhild
(2022).
Genre in the Context of Digital Media Culture Seminar and Doctoral Workshop.
Show summary
Presentation ("Do Genre Boundaries Belong to a Bygone Era?") and panel discussion
-
Oddekalv, Kjell Andreas
(2022).
Skreiv bok om Høge Brelle.
[Newspaper].
Møre.
-
Brøvig, Ragnhild & Aareskjold-Drecker, Jon Marius
(2022).
Vocal Chops: Another Human/Machine Hybrid.
Show summary
Mainstream popular music has, over the past decade, been characterized by the rhythmic and weird-sounding vocal effect known as vocal chops—short vocal samples that are juxtaposed, rearranged, and repitched to create hooks and effects. The samples are usually manipulated (transposed, formant shifted, or cut up, for example) to create a hybrid human-synthesizer sound and rearranged into a melody with abrupt and often unexpected transitions between the sounds. In this paper, we will first explore the historical origins of vocal chops, framing this effect as the most recent incarnation of producers’ and listeners’ enduring fascination with vocal manipulation. Drawing on ecological affordance theory, we will further speculate on what it is that people find so fascinating about this effect. We will also demonstrate various approaches to creating vocal chops and the various effects that these approaches produce, arguing that the intense exploration of this technique in recent years has led to its normalization, and to its progressively subtler integration into music productions.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Ugstad, Magnus; Hole, Erik; Sørli, Anders Ruud & Walderhaug, Bendik
(2022).
Self-made man.
-
Oddekalv, Kjell Andreas; Bjørkheim, Terje; Ugstad, Magnus; Hole, Erik; Sørli, Anders Ruud & Walderhaug, Bendik
(2022).
Bilkollektivbil.
-
Fuhrer, Julian; Blenkmann, Alejandro Omar; Endestad, Tor; Solbakk, Anne-Kristin & Glette, Kyrre
(2022).
Complexity-Based Encoded Information Quantification in Neurophysiological Recordings.
-
Volehaugen, Vegard; Leske, Sabine Liliana; Endestad, Tor; Solbakk, Anne-Kristin & Blenkmann, Alejandro Omar
(2022).
Violation of rule-based auditory patterns is detected independently of attention.
-
Fuhrer, Julian; Glette, Kyrre; Ivanovic, Jugoslav; Larsson, Pål Gunnar; Bekinschtein, Tristan & Kochen, Silvia
et al. (10 contributors)
(2022).
Direct brain recordings reveal continuous encoding of structure in random stimuli.
-
Blenkmann, Alejandro Omar; Solbakk, Anne-Kristin; Leske, Sabine Liliana; Llorens, Anaïs; Funderud, Ingrid & Collavini, Santiago
et al. (13 contributors)
(2022).
Human brain network involved in auditory deviance detection. An intracranial EEG study.
-
Asko, Olgerta; Blenkmann, Alejandro Omar; Leske, Sabine Liliana; Foldal, Maja Dyhre; Llorens, Anaïs & Funderud, Ingrid
et al. (10 contributors)
(2022).
Altered hierarchical auditory predictive processing after lesions to the orbitofrontal cortex.
Show summary
In this study, we tested the causal involvement of the OFC in noticing breaches of predictions (i.e., PEs) at different hierarchical levels of task structural complexity. With this aim, we examined the event-related potentials (ERPs) of patients with focal OFC lesions and healthy adults while performing an auditory local-global oddball paradigm. Altogether, we found that after OFC damage, low-level PEs (i.e., processing of stimuli that are unpredicted at the local level) and combined low- and high-level PEs (i.e., processing of stimuli that are unpredicted at both the local and global level) were impacted. However, the processing of standard tones was not affected. We conclude that the OFC may contribute to a top-down process that modulates the deviance detection system in the primary auditory cortices, and may be involved in connecting PEs at lower hierarchical areas with predictions at higher areas. The study sheds new light on the poorly explored deficits of hierarchical auditory prediction in patients with damaged OFC.
-
Akca, Merve
(2022).
Trial Lecture for Ph.D. disputation: Attention to sounds – theoretical frameworks and applied perspectives in music psychology.
-
Akca, Merve
(2022).
Presentation and Summary of Ph.D. Project: Attending to Sounds in the Blink of an Eye.
-
Akca, Merve; Bishop, Laura; Vuoskoski, Jonna Katariina & Laeng, Bruno
(2022).
Tracing the Temporal Limits of Auditory Information Processing with Pupillometry.
-
Akca, Merve
(2022).
The Limits of Auditory Perception and Cognition in Humans: Detecting and Attending to Sounds.
-
Martin, Remy Richard
(2022).
Senses of Self in Rhythm and Time.
-
Martin, Remy Richard
(2022).
Music and Identity.
-
Martin, Remy Richard
(2022).
Affordance and Autonomy.
-
Leske, Sabine Liliana
(2022).
Inter-Trial Coherence (ITC).
Show summary
An introduction to the inter-trial coherence measure (ITC) and how it is applied to EEG data (with example code/scripts in MATLAB). Furthermore, caveats of the measure are discussed, along with its relation to phase opposition measures.
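The lecture's example scripts are in MATLAB; purely as an illustration of the measure itself, here is a minimal Python/NumPy sketch. The synthetic data and all parameter choices are assumptions for demonstration, not the lecture's code:

```python
import numpy as np
from scipy.signal import stft

rng = np.random.default_rng(0)
fs = 250                       # sampling rate in Hz
n_trials, n_samples = 40, 500
t = np.arange(n_samples) / fs

# Synthetic EEG-like data: a 10 Hz component phase-locked to "stimulus onset"
# in every trial, plus independent noise per trial.
trials = np.sin(2 * np.pi * 10 * t) + rng.standard_normal((n_trials, n_samples))

# Complex spectral estimates per trial; Z has shape (trials, freqs, times).
f, tt, Z = stft(trials, fs=fs, nperseg=125, axis=-1)

# ITC = magnitude of the across-trial mean of unit-length phase vectors.
# Values near 1: consistent phase over trials; near 0: random phase.
itc = np.abs(np.mean(Z / np.abs(Z), axis=0))

f10 = np.argmin(np.abs(f - 10))   # index of the 10 Hz bin
print(itc[f10].mean())            # high, since the 10 Hz phase is locked
```

Note that ITC only reflects phase consistency, not amplitude; a related caveat discussed in the lecture is that phase-locked evoked activity and non-phase-locked induced activity are mixed in raw spectral power but separated by this measure.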
-
Leske, Sabine Liliana
(2022).
Fourier Transform.
Show summary
An introduction to the Fourier transform and how it is applied to EEG data. The short-time Fourier transform (STFT) and the different measures (phase and amplitude) derived from it are explained.
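As an illustration of the STFT and the phase and amplitude measures derived from it, here is a minimal Python sketch; the test signal and parameters are invented for demonstration and are not taken from the lecture:

```python
import numpy as np
from scipy.signal import stft

fs = 250
t = np.arange(0, 2, 1 / fs)
# Test signal whose dominant frequency changes halfway: 6 Hz, then 20 Hz.
x = np.where(t < 1, np.sin(2 * np.pi * 6 * t), np.sin(2 * np.pi * 20 * t))

f, tt, Z = stft(x, fs=fs, nperseg=125, noverlap=100)
amplitude = np.abs(Z)    # spectral amplitude per time-frequency bin
phase = np.angle(Z)      # phase per time-frequency bin

# The dominant frequency shifts from ~6 Hz to ~20 Hz over the recording,
# which a single whole-signal Fourier transform could not localize in time.
early = f[np.argmax(amplitude[:, tt < 1.0].mean(axis=1))]
late = f[np.argmax(amplitude[:, tt >= 1.0].mean(axis=1))]
print(early, late)
```

The window length (`nperseg`) sets the trade-off the lecture addresses: longer windows give finer frequency resolution but coarser time resolution.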
-
Danielsen, Anne
(2022).
RITMO: Forskning og infrastruktur.
-
Leske, Sabine Liliana
(2022).
Phase Amplitude Coupling (PAC).
Show summary
An introduction to the Phase Amplitude Coupling (PAC) measure and how it is applied to EEG data (with example code in MATLAB). The caveats of the measure are covered, along with the sanity checks that might be necessary.
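The talk's example code is in MATLAB; purely as an illustration, one common PAC estimator (a mean-vector-length modulation index in the style of Canolty et al., 2006) can be sketched in Python as follows. The synthetic signal, frequency bands, and filter settings are assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

rng = np.random.default_rng(1)
fs = 500
t = np.arange(0, 20, 1 / fs)

# Synthetic coupled signal: the amplitude of a 40 Hz "gamma" rhythm follows
# the phase of a 6 Hz "theta" rhythm, plus broadband noise.
theta = np.sin(2 * np.pi * 6 * t)
gamma = (1 + theta) * np.sin(2 * np.pi * 40 * t)
x = theta + 0.5 * gamma + 0.5 * rng.standard_normal(t.size)

def bandpass(sig, lo, hi):
    b, a = butter(4, [lo, hi], btype="band", fs=fs)
    return filtfilt(b, a, sig)   # zero-phase filtering

low_phase = np.angle(hilbert(bandpass(x, 4, 8)))   # theta phase
high_amp = np.abs(hilbert(bandpass(x, 25, 55)))    # gamma amplitude envelope

# Mean-vector-length modulation index: large when the high-frequency
# amplitude is systematically tied to the low-frequency phase.
mi = np.abs(np.mean(high_amp * np.exp(1j * low_phase))) / np.mean(high_amp)
print(mi)
```

A typical sanity check of the kind the talk mentions is to recompute `mi` on surrogate data (e.g., with the amplitude series shuffled), which should bring the index close to zero.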
-
Danielsen, Anne
(2022).
Rhythm, Time, and Presence.
-
Spiech, Connor; Hope, Mikael; Câmara, Guilherme Schmidt; Sioros, Georgios; Endestad, Tor & Laeng, Bruno
[Show all 7 contributors for this article]
(2022).
PredicTAPbility: Sensorimotor Synchronization Increases Groove.
-
Solbakk, Anne-Kristin; Endestad, Tor & Knight, Robert Thomas
(2022).
Oslo - Berkeley collaboration in cognitive neuroscience and neuropsychology.
-
Oddekalv, Kjell Andreas; Gudnason, Runar & Opsvik, Olav
(2022).
Høge Brelle – Runar Gudnason, Kjell Andreas Oddekalv og louilexus.
-
Reilstad, Didrik Spanne; Strand, Ørjan; Wu, Zhenying; Castro da Silva, Bruno; Torresen, Jim & Ellefsen, Kai Olav
(2022).
RADAR: Reactive and Deliberative Adaptive Reasoning – Learning When to Think Fast and When to Think Slow.
-
Stenseth, Nils Christian; Andreassen, Karin Marie; Danielsen, Anne; Helgaker, Trygve; Jansen, Eystein & Moser, Edvard Ingjald
et al. (11 contributors)
(2022).
Grunnforskningen er truet.
Klassekampen.
ISSN 0805-3839.
-
Sandvik, Bjørnar Ersland
(2022).
Sample, Slice, Stretch! Four Innovative Moments in the History of Waveform Representation.
-
Sandvik, Bjørnar Ersland
(2022).
Lydens utseende: Fra usynlig til gjenkjennelig på skjermer vi alle går rundt med.
Musikkmagasinet Ballade.
ISSN 0805-5041.
-
Oddekalv, Kjell Andreas
(2022).
On Being a White Norwegian Analysing Rap.
Dansk Musikforskning Online.
ISSN 1904-237X.
DMO Special Issue 2022,
p. 115–122.
-
Tørresen, Jim & Nakasawa, Atsushi
(2022).
Tutorial: Ethical Considerations in User Modeling and Personalization (ECUMAP).
-
Tørresen, Jim
(2022).
Research on health-related treatment and care technology – A technical and ethical view.
-
Tørresen, Jim
(2022).
Sensing, acting and adapting in the real world.
-
Tørresen, Jim
(2022).
Ethical Perspectives of Robotics and AI – How to develop a preferable system?
-
Tørresen, Jim
(2022).
Tutorial: Ethical challenges for Autonomous and Multiagent Systems.
-
Tørresen, Jim
(2022).
Keynote: Multi-Modal Sensing for Care Robots for Older People.
-
Tørresen, Jim
(2022).
Tutorial: Ethical Challenges in Computational Intelligence Research.
-
Tørresen, Jim
(2022).
Introduction to Research Visit to University of Oslo in Norway.
-
Tørresen, Jim
(2022).
Introduction to Research Visit to University of Oslo in Norway.
-
Saplacan, Diana; Tørresen, Jim; Weng, Yueh-Hsuan & Li, Phoebe
(2022).
Tutorial: Robots and Society (RO-SO 2022) / Ethical perspectives and technical challenges and opportunities with care robots.
-
Tørresen, Jim & Kwak, Dongho
(2022).
Tutorial: Rhythm in Development and Learning – Similarities and Differences Between Humans and Technology.
-
Jensenius, Alexander Refsum
(2022).
Erfaringer med å lage 3xMOOC.
Show summary
In this presentation, I will show how we have developed three complete online courses at the University of Oslo over the years: Music Moves (2016), Motion Capture (2022), and Pupillometry (2023). The focus will be on the opportunities and challenges of video in an educational context.
-
Jensenius, Alexander Refsum
(2022).
Open music research between art and science.
Show summary
Many music researchers are turning towards studying music performance and perception in real-world settings. Collecting data in a concert situation is non-trivial, and FAIRifying the data is even more challenging. In this talk, I will discuss some challenges with handling privacy and copyright matters in music research. I will also discuss some benefits of working towards more open music research.
-
Remache-Vinueza, Byron; Trujillo-León, Andrés; Clim, Maria-Alena; Sarmiento-Ortiz, Fabián; Topon-Visarrea, Liliana & Jensenius, Alexander Refsum
et al. (7 contributors)
(2022).
Mapping Monophonic MIDI Tracks to Vibrotactile Stimuli Using Tactile Illusions.
Show summary
In this project, we propose an algorithm to convert musical features and structures extracted from monophonic MIDI files to tactile illusions. Mapping music to vibrotactile stimuli is a challenging process since the perceptible frequency range of the skin is lower than that of the auditory system, which may cause the loss of some musical features. Moreover, currently proposed models do not guarantee correspondence between the emotional response to music and to its vibrotactile version. We propose to use tactile illusions as an additional resource to convey more meaningful vibrotactile stimuli. Tactile illusions enable us to add dynamics to vibrotactile stimuli in the form of movement, changes of direction, and localization. The suggested algorithm converts monophonic MIDI files into arrangements of two tactile illusions: “phantom motion” and “funneling”. The validation of the rendered material consisted of presenting the audio rendered from MIDI files to participants and then adding the vibrotactile component to it. The arrangement of tactile illusions was also evaluated alone. Results suggest that the arrangement of tactile illusions evokes more positive emotions than negative ones. This arrangement was also perceived as more agreeable and stimulating than the original audio. Although musical features such as rhythm, tempo, and melody were mostly recognized in the arrangement of tactile illusions, it provoked a different emotional response from that of the original audio.
-
Jensenius, Alexander Refsum
(2022).
Exploring music performance and perception through motion capture.
Show summary
This talk will present different approaches to capturing human bodily activity. Motion capture can be performed with sensor-based and camera-based systems, each of which has benefits and limitations. Sensor-based systems are flexible and scalable and can easily be used outside laboratory environments. They are good at tracking relative motion and rotation information but less suitable for tracking position. Camera-based systems come in many flavors and can be used with and without markers. They excel at tracking positions but are prone to reflections and environmental noise. As a consequence, camera-based motion capture systems are better suited for laboratory settings. I will discuss my twenty-year-long experience using different motion capture systems to study music-related body motion. This includes research on musicians, including rehearsal techniques and performance strategies. Such studies push the limits of the technology when it comes to precision and accuracy. It is particularly challenging when using motion capture equipment in real-world concert settings. At the University of Oslo, we have successfully captured the motion of both solo and ensemble performances and are currently trying to scale up to a full orchestra. We are also carrying out motion capture of perceivers, audience members in concerts, dancers, and other people moving to music. Through the Norwegian Championship of Standstill, we have delved into human micromotion, the tiniest actions we can perform and perceive. At this level, motion capture can detect physiological signals, such as breathing and heart rate. Data from such studies are interesting scientifically and have also been used in artistic practice. Finally, I will give examples of how real-time motion capture can be used in various creative applications, including "inverse" sonic interaction.
-
Jensenius, Alexander Refsum & Lome, Ragnhild
(2022).
Mer mangfold innenfor humaniora?
[Business/trade/industry journal].
Forskningspolitikk.
Show summary
A new knowledge-policy process is underway concerning how academic careers should be assessed. How does this affect the humanities in Norway?
-
Vuoskoski, Jonna Katariina
(2022).
Empathy, Entrainment and Social Bonding.
Show summary
Music is an inherently social phenomenon. Even when we listen to music in solitude, social cognitive and affective processes play an important role in shaping our perception and experience. In my own work, I have explored how empathy in particular facilitates and modulates our engagement with music. Through recent empirical studies, I will demonstrate how empathy contributes to both affective attunement and bodily entrainment to music. Furthermore, I will argue that trait empathy may also facilitate the social bonding effects of musical engagement, whether in the context of music listening or interpersonal synchrony. Finally, I will also discuss how feelings of being moved by music could be understood through a ‘social lens’ as experiences and appraisals of connectedness.
-
Bernhardt, Emil
(2022).
Rhythm and Form: Tradition and Radicalism in Schubert.
-
Laeng, Bruno & Vemøy, Silje Kilmork
(2022).
Pupillens hemmeligheter.
[Radio].
radio.nrk.no.
Show summary
That the eyes are the mirror of the soul may be a more precise observation than many realize. The black pupil can reveal a great deal about us humans, for example about how we are doing.
-
Bishop, Laura
(2022).
Attention focus affects togetherness and body interactivity in piano duos.
-
Pileberg, Silje & Oddekalv, Kjell Andreas
(2022).
The magic of rap.
[Internet].
Forskning.no.
-
Pileberg, Silje & Erdem, Cagri
(2022).
The new, artificial composers.
[Internet].
Forskning.no.
Show summary
In recent years, artificial intelligence (AI) has affected music to an increasing extent, for instance in music production: while a human writes the main melody, the machine may, through AI, produce the background arrangement.
-
Pileberg, Silje & Lan, Qichao
(2022).
He Made It Easier to Code Music With Others.
[Internet].
Forskning.no.
Show summary
Qichao Lan wanted to make music live coding accessible to anyone. As part of his PhD, he made a music language and an app easily accessible in a web browser.
-
Wallace, Benedikte & Melteig, Elina
(2022).
Benedikte ba maskinen lage egne danser, basert på data fra ekte dansere.
[Internet].
titan.uio.no.
-
Danielsen, Anne
(2022).
Bidrag til enquete om rock.
In Karlsen, Ole & Markussen, Bjarne (Ed.),
Sanglyrikk. Teori - Metode - Sjangrer.
Scandinavian Academic Press.
ISBN 978-82-304-0342-6.
Show summary
Lyric poetry is the most popular and widespread of all poetic forms; song lyrics, that is, the kind performed to music and disseminated through radio, gramophone records, CDs, and streaming services. It surrounds us daily and is at the same time the oldest form of lyric poetry we know of. In ancient Greece, poems were recited to the accompaniment of the lyre.
Despite this, song lyrics have been less explored than written poetry, and theoretical and methodological perspectives have been in short supply. This book seeks to remedy that. It first discusses the fundamental similarities and differences between written and sung lyric, between the art of the eye and the art of the ear. It then considers methodological approaches to the study of song lyrics, with a view to the interplay between words and music. Finally, the book presents a range of well-known song-lyric genres: ballads, broadside ballads, hymns, joik, folk songs, blues, rock, indie folk, and rap.
The book is the first of its kind in Norway. It is aimed particularly at researchers and students in higher education, and at teachers who want to work with song lyrics in schools. But anyone interested in the genres of song lyrics will find something to enjoy here.
-
Upham, Finn
(2022).
Audience reconstructed: Fan interactions on twitter during livestreamed BTS concerts.
-
Swarbrick, Dana; Upham, Finn; Erdem, Cagri; Jensenius, Alexander Refsum & Vuoskoski, Jonna Katariina
(2022).
Measuring Virtual Audiences with The MusicLab App: Proof of Concept.
Show summary
We present a proof of concept by using the mobile application MusicLab to measure motion during a livestreamed concert and examining its relation to musical features. With the MusicLab App, participants’ own smartphones’ inertial measurement unit (IMU) sensors can be leveraged to record their motion, while their subjective experiences are collected through survey responses. The MusicLab Lock-down Rave was an Algorave (live-coded dance music) livestreamed concert featuring prolific performers Renick Bell and Khoparzi. They livestreamed for an international audience who wore their smartphones with the MusicLab App while they listened/danced to the performances. From their acceleration, we computed quantity of motion and compared it to musical features that have previously been associated with music-related motion, namely pulse clarity and low and high spectral flux. Through encountering challenges and implementing improvements, the MusicLab App has become a useful tool for researching music-related motion.
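As an illustration of the kind of feature extraction described, here is a minimal Python sketch computing a quantity-of-motion estimate from 3-axis accelerometer data. The definition used (deviation of the acceleration magnitude from gravity, averaged per second), the sampling rate, and the mock data are all assumptions for demonstration, not necessarily the MusicLab App's actual pipeline:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 50   # an assumed smartphone IMU sampling rate in Hz
t = np.arange(0, 60, 1 / fs)

# Mock 3-axis accelerometer recording (m/s^2): gravity on z, sensor noise,
# and a dance-like oscillation during the second half of the minute.
acc = np.stack([
    0.3 * rng.standard_normal(t.size),
    0.3 * rng.standard_normal(t.size),
    9.81 + 0.3 * rng.standard_normal(t.size)
         + 2.0 * np.sin(2 * np.pi * 2 * t) * (t > 30),
])

# A simple quantity-of-motion definition: how far the acceleration
# magnitude deviates from gravity, averaged in one-second windows.
magnitude = np.linalg.norm(acc, axis=0)
deviation = np.abs(magnitude - 9.81)
qom = deviation.reshape(-1, fs).mean(axis=1)   # one value per second

print(qom[:30].mean(), qom[30:].mean())   # second half shows more motion
```

Working from the magnitude rather than the raw axes makes the estimate insensitive to how the phone is oriented in a pocket, which matters when audience members wear their own devices.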
-
Swarbrick, Dana & Vuoskoski, Jonna Katariina
(2022).
Collectively Classical: Social connection at a classical concert.
Show summary
We aimed to examine the difference between live and livestreamed concerts, the influence of musical piece, and participant characteristics such as empathy and fan-status on audience social connectedness and feeling moved.
Concerts are fundamentally social experiences in which an audience and musicians gather to witness and create an aesthetic experience. Concerts and the music featured there may facilitate connectedness and the sociorelational emotion kama muta (frequently labelled “feeling moved”) through a variety of mechanisms. Recent research suggests that in virtual concerts, both concert characteristics (e.g. liveness, technological platform) and individual characteristics (e.g. empathy, loneliness, concentration) influence feelings and behaviours associated with social connectedness (Swarbrick et al., 2021; Onderdijk, Swarbrick et al., 2021). Social bonding during collective music listening has previously been demonstrated in the context of dance (Tarr et al., 2016). Questions remain on how concert and personal characteristics influence social connectedness at a live concert and how the effects of live and virtual concerts differ.
MusicLab Copenhagen was a concert experiment in which the Danish String Quartet performed to a live (n = 91) and a livestreaming audience (n = 45). Participants responded to questions on their personal characteristics and their social and emotional concert experiences using a questionnaire in response to three distinct pieces of music. Specifically, participants reported feelings of social connectedness that they felt towards the performers and the other audience members, and they responded to the kama muta scale.
Although the live audience members felt more socially connected to other audience members than the virtual audience members did, both live and virtual audience members felt similarly connected to the performers. There was also a main effect of the piece of music on both social connectedness and feeling moved: these outcome measures were highest for the folk music, followed by Beethoven and then Schnittke. For awe, the main effect of piece was also present but showed the opposite trend, with Schnittke producing the highest levels of awe, followed by Beethoven and then the folk music. This research has helped us understand the experience of live and virtual classical concert audiences. Furthermore, it contributes to a burgeoning field comparing the effects of live and virtual experiences and the implications of their differences for our social well-being.
Interdisciplinary implications. The MusicLab Copenhagen project was an interdisciplinary collaboration between psychologists, technologists, musicians, and philosophers. This project offered meaningful perspectives on the challenges and advantages of conducting research on such an interdisciplinary team. The MusicLab Copenhagen model could be employed by future research teams to get the most out of a concert experiment. In this particular study, we combine disciplinary expertise in social psychology and music cognition to better understand participants’ social experience of concerts.
References
Swarbrick, D., Seibt, B., Grinspun, N., and Vuoskoski, J. K. (2021). Corona Concerts: The Effect of Virtual Concert Characteristics on Social Connection and Kama Muta. Front. Psychol. 12, 1–21. doi:10.3389/fpsyg.2021.648448.
Onderdijk, K. E., Swarbrick, D., Van Kerrebroeck, B., Mantei, M., Vuoskoski, J. K., Maes, P. J., et al. (2021). Livestream Experiments: The Role of COVID-19, Agency, Presence, and Social Context in Facilitating Social Connectedness. Front. Psychol. 12, 1–25. doi:10.3389/fpsyg.2021.647929.
Tarr, B., Launay, J., and Dunbar, R. I. M. (2016). Silent disco: dancing in synchrony leads to elevated pain thresholds and social closeness. Evol. Hum. Behav. 37, 343–349. doi:10.1016/j.evolhumbehav.2016.02.004.
-
Swarbrick, Dana & Vuoskoski, Jonna Katariina
(2022).
Collectively Classical: Social Connectedness at a Classical Concert.
Show summary
Motivation:
Concerts are fundamentally social experiences in which an audience and musicians gather to witness and create an aesthetic experience. Live concerts are important sociocultural events that normally involve gathering at the same time and in the same space. In livestreamed virtual concerts, participants may gather in time, but not in space, providing a natural manipulation for studying concert experiences. Our previous research indicated that livestreamed virtual concerts can promote more social connectedness than pre-recorded virtual concerts. Additionally, live concerts promote more movement than listening to recorded music in a group. However, to the best of our knowledge, a comparison between live and virtual concerts and their effects on motion and emotion has not yet been conducted.
Methodology:
The Danish String Quartet is an acclaimed classical music group who performed a concert to both live and livestreaming audiences. Audience members were invited to participate by downloading a smartphone application that records motion using their own smartphones’ inertial measurement unit sensors. Surveys collected information on their experience of the music, social connectedness, and the sociorelational emotion of feeling moved, administered before the concert and after each piece.
Results:
Survey responses were collected from 91 participants in the live audience and 67 participants in the livestreaming audience. Motion data was collected from 79 participants in the live audience and 34 from the livestreaming audience. While analyses are ongoing, preliminary results of the questionnaire data revealed that although the live audience felt more connected to other audience members than the virtual audience, both live and virtual audience members felt equally connected to the performers.
Implications:
This research contributes to the field of embodied music cognition by reinforcing the importance of the social nature of musical experiences.
-
Noori, Farzan Majeed; Tørresen, Jim; Uddin, Md Zia & Riegler, Michael A.
(2023).
Multimodal Deep Learning Approaches for Human Activity Recognition.
Universitetet i Oslo.
ISSN 1501-7710.
Full text in Research Archive
Show summary
Smart homes may be beneficial for people of all ages, but this is especially true for those with care needs, such as the elderly. To assist, monitor for emergencies, and provide companionship for the elderly, a substantial amount of research on human activity recognition systems has been conducted. Several algorithms for activity recognition and prediction of future events have been reported in the scientific literature. However, the majority of published research does not address privacy concerns or employ a variety of ambient sensors.
The objective of this thesis is to contribute to the progress in research relevant to activity recognition systems that use sensors that collect less privacy-related information. The following tasks are included in the work: assessment of sensors while keeping privacy concerns in mind, selection of cutting-edge classification methods, and how to fuse the data from multiple sensors. This thesis contributes to making progress on systems for analyzing human activity and state—or vital signs—for application in a mobile robot.
This dissertation examines two topics. First, it examines the privacy concerns associated with having a robot in the home. On a robot, an ultra-wideband (UWB) radar-based sensor and an RGB camera (for ground truth) were installed. An actigraphy device was also worn by the users for heart rate monitoring. The UWB sensor was selected to maintain privacy while monitoring human activities. Considering different ways to represent data from a single sensor is the second topic under investigation. That is, how data from multiple representations can be combined. For this purpose, we investigate various data representations from a single sensor’s data and analysis using cutting-edge deep learning algorithms.
The contributions provide considerations for equipping a mobile home robot with activity recognition abilities while reducing the amount of privacy-sensitive sensor data. The work also concerns examining the potential privacy restrictions that must be established for the analyzing systems. The thesis contains new methods for combining data from multiple information sources. To achieve our objective, convolutional neural networks and recurrent neural networks were applied and validated using conventional methods.
The thesis concludes that good accuracy can be achieved with a limited set of sensors while maintaining privacy. This is likely adequate for assisting healthcare personnel and caregivers in their work by indicating current activity status, measuring activity levels, and providing alerts about abnormal activities. The results can hopefully contribute to older people being able to live alone in their homes with a greater chance that any unwanted events are quickly detected and reported to the caregivers and providers.
-
Wallace, Benedikte
(2023).
AI-generated Dance and The Subjectivity Challenge.
Universitetet i Oslo.
ISSN 1501-7710.
-
Kocan, Danielius & Ellefsen, Kai Olav
(2023).
Attention-Guided Explainable Reinforcement Learning: Key State Memorization and Experience-Based Prediction.
Universitetet i Oslo.
-
Taye, Eyosiyas Bisrat & Ellefsen, Kai Olav
(2023).
Accountability Module: Increasing Trust in Reinforcement Learning Agents.
Universitetet i Oslo.
Show summary
Artificial intelligence requires users' trust to be fully utilised, and users must feel safe while using it. Trust, and indirectly a sense of safety, have been overlooked in the pursuit of more accurate, better-performing black-box models. The field of Explainable Artificial Intelligence, together with current recommendations and regulations around artificial intelligence, demands more transparency and accountability from governmental and private institutions. A self-explainable AI that solves a problem while explaining its own reasoning is challenging to develop; even then, it could not explain other AIs that lack such self-explanatory abilities, and it would likely not transfer to different problem domains and tasks without extensive knowledge of the model. The solution proposed in this thesis is the Accountability Module: an external explanatory module designed to work with different AI models in different problem domains. The prototype was inspired by accident investigations of autonomous vehicles and was implemented for a simplified simulation of vehicles driving on a highway. Its goal was to help an investigator understand why a vehicle crashed. The Accountability Module identified the main factors in the decision that led to an accident. By examining different cases against each other, it could also help answer whether the outcome was avoidable and whether there were inconsistencies in the agent's logic. The prototype provided useful explanations and assisted investigators in understanding and troubleshooting agents. The thesis and the Accountability Module indicate that a similar explanatory module is a robust direction to explore further. The chosen explainability methods and techniques were closely tied to the problem domain and limited by the scope of the thesis.
Therefore, a more extensive test of the prototype on different problems is needed to assess the system's rigidity and versatility, as well as the significance of the results. Nevertheless, in a collaboration between an Accountability Module expert and a domain expert, I expect a modular explainability solution to create more insight into an AI model and its significant incidents.
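One idea from the summary above, identifying the main input factors behind an agent's decision, could be illustrated by perturbing each state feature and ranking features by how much the chosen action's score changes. This is a hypothetical sketch only; the stand-in agent, the feature values, and the perturbation size are invented for illustration and are not the thesis's actual method:

```python
import numpy as np

def q_values(state):
    """Stand-in for the agent's policy: a score per action.
    A real investigation would query the trained agent instead."""
    W = np.array([[1.0, -2.0, 0.1],
                  [0.5,  1.5, 0.0]])
    return W @ state

def main_factors(state, eps=0.1):
    """Rank state features by their influence on the selected action,
    measured as the change in that action's score under a small
    perturbation of each feature."""
    base = q_values(state)
    chosen = int(np.argmax(base))
    influence = []
    for i in range(len(state)):
        perturbed = state.copy()
        perturbed[i] += eps
        influence.append(abs(q_values(perturbed)[chosen] - base[chosen]))
    return np.argsort(influence)[::-1]  # most influential feature first

state = np.array([0.2, 0.9, 0.4])
ranking = main_factors(state)
```

Comparing such rankings across different cases is one way to surface inconsistencies in an agent's logic, in the spirit of the case-against-case examination described above.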
-
-
Spiech, Connor
(2022).
Predictive and Dynamic Mechanisms of Rhythm and Groove.
Universitetet i Oslo.
-
Christodoulou, Anna-Maria; Anagnostopoulou, Christina & Lartillot, Olivier
(2022).
Computational Analysis of Greek folk music of the Aegean islands.
National and Kapodistrian University of Athens.
-
Roa Gran, Kristian & Ellefsen, Kai Olav
(2022).
Learning to drive by predicting the future: Direct Future Prediction.
Universitetet i Oslo.
The use of artificial intelligence in systems for autonomous vehicles is growing in popularity [1, 2]. Following the rapid development of deep learning techniques in recent years, reinforcement learning has made it possible to automate the learning of prediction abilities. Controlling an autonomous vehicle with reinforcement learning is typically done either by learning a direct mapping from observations to actions, or by learning a model of the environment and using the model to make decisions. Model-free approaches have previously seen the most success, as errors can easily propagate in an inaccurate model.
The predictive reinforcement learning algorithm «Direct Future Prediction» (DFP) won the Visual Doom AI competition in 2016 with results 50% better than the second-best submission [3]. By learning a simpler model of the environment that focuses only on a few measurable quantities, this approach can efficiently solve challenging control tasks. Prior to this thesis, the utility of the method had not yet been tested on relevant real-world tasks, such as sensorimotor control of an autonomous vehicle.
DFP is tested on a variety of traffic scenarios with the aim of investigating the potential of predictive deep learning algorithms to learn to control an autonomous vehicle. The more classical reinforcement learning algorithm «Deep Q-Networks» (DQN) is also trained and tested on the same scenarios and serves as a reference for assessing the performance of DFP. Experiments are conducted in a more difficult version of the CarRacing simulator from OpenAI Gym, where DQN has previously performed well [4, 5].
DFP solves all of the traffic scenarios in the conducted experiments and is able to drive around a challenging track while avoiding cleverly placed obstacles. In every experiment, DFP performs as well as or better than DQN. The driving style of the DFP agents is calm and controlled, which is further highlighted by the erratic driving of the DQN agents. Performance is also strong in previously unseen environments, indicating that the method generalizes well, which is further illuminated by visualizing DFP's future predictions.
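The core DFP idea sketched above, predicting a few measurable quantities at several future offsets and selecting the action that best serves a goal vector, can be illustrated with a minimal sketch. The learned network is replaced here by a random linear map, and all names, shapes, and offsets are assumptions for illustration, not the thesis's actual configuration:

```python
import numpy as np

N_ACTIONS = 3        # e.g. steer left, straight, steer right (illustrative)
N_MEASUREMENTS = 2   # e.g. speed, proximity to obstacle (illustrative)
OFFSETS = [1, 2, 4]  # predict measurement changes this many steps ahead

rng = np.random.default_rng(0)

def predict_future(observation):
    """Stand-in for the learned DFP network: predicted measurement
    changes with shape (actions, offsets, measurements)."""
    # A real DFP model conditions on image, measurements, and goal;
    # a fixed-seed random linear map fakes that here.
    W = rng.standard_normal(
        (N_ACTIONS, len(OFFSETS), N_MEASUREMENTS, observation.size))
    return W @ observation

def select_action(observation, goal):
    """Choose the action whose predicted future best matches the goal,
    scored as a goal-weighted sum over offsets and measurements."""
    preds = predict_future(observation)        # (A, T, M)
    scores = (preds * goal).sum(axis=(1, 2))   # one score per action
    return int(np.argmax(scores))

obs = rng.standard_normal(8)
goal = np.array([1.0, -1.0])  # e.g. maximize speed, minimize proximity
action = select_action(obs, goal)
```

Because the goal vector enters only at action-selection time, the same trained predictor can be steered toward different objectives, which is part of what makes the approach attractive for varied traffic scenarios.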
-
Bordvik, David Andreas; Ellefsen, Kai Olav & Riemer-Sørensen, Signe
(2022).
Forecasting regulation market balancing volumes from market data and weather data using Deep Learning and Transfer Learning.
Universitetet i Oslo.
The energy and power sector is a major value contributor to our society and our high living standards. In recent times the power sector has grown more complex while undergoing significant changes, with the growing share of renewable production being one of the contributors. A larger portion of renewables in the power mix, e.g. from wind power, results in more volatile power production, increasing the need for grid balancing and making the regulating power market more challenging for power producers to participate in. The purpose of the regulating power market is to compensate for the gap between the planned production settled in the day-ahead market and the actual production and demand. The ability to forecast regulating power volumes and prices some hours in advance of the hour when they are actually traded would enable power producers to balance their market positions more optimally. This project exploits historical regulation data together with market data and weather data to train deep learning models that forecast future regulation volumes. A thorough time-series analysis of regulating power volumes revealed some predictive potential. Furthermore, a Bidirectional LSTM showed satisfying results when forecasting up to four hours into the future using data from 2016 to 2021. No previous research was found that uses more than two years of data or recent data, and no previous work has applied deep learning to forecast Norwegian regulation market volumes. Additionally, this project conducted an in-depth analysis of topographical weather images and transfer learning to evaluate the potential of predicting regulating power volumes from weather images. Different weather forecasts, actual weather, and weather uncertainties were all utilized. The weather data was generally not found to have a considerable direct influence on regulation volumes. However, the weather is expected to have an increasing influence in the future, as more volatile renewable power production enters the power markets. No previous research has been found that investigates weather images in the context of the regulation market.
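The forecasting task described above can be framed as supervised learning over sliding windows of the volume series. A minimal sketch, assuming an hourly series and a four-hour horizon as in the summary; the 24-hour input window and the synthetic series are assumptions for illustration:

```python
import numpy as np

def make_windows(series, window=24, horizon=4):
    """Slice an hourly series into (X, y) pairs: X holds `window` past
    hours, y the next `horizon` hours the model must forecast."""
    X, y = [], []
    for t in range(len(series) - window - horizon + 1):
        X.append(series[t:t + window])
        y.append(series[t + window:t + window + horizon])
    return np.array(X), np.array(y)

# Stand-in for hourly regulation volumes; the real data would come
# from market records, not a sine wave.
volumes = np.sin(np.arange(200) / 10.0)
X, y = make_windows(volumes)
# Each (X[i], y[i]) pair is one training example for a sequence model
# such as the Bidirectional LSTM mentioned above.
```

With this framing, the forecaster never sees the target hours as input, which keeps the four-hour-ahead evaluation honest.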