The aim of this research project is to model the multivariate information structures inherent in multiple sound signals using different machine-learning methods. Here, we consider a structure to be any underlying sequence that constitutes a higher-level abstraction of an original input sequence. In musical audio signals, this covers both the high-level properties of sound mixtures (e.g., chord progressions, key changes, thematic organization) and the resulting audio signal itself (e.g., the emergent timbral properties well known in orchestration).
Our application case is the development of software that interacts in real time with a musician by inferring expected structures (e.g., a chord progression).
To achieve this goal, we divided the project into two main tasks: a listening module and a symbolic generation module. The listening module extracts the musical structure played by the musician, whereas the generative module predicts musical sequences based on the extracted features.
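To make the two-module split concrete, here is a minimal sketch of such a pipeline. All names (ListeningModule, GenerativeModule, the chord labels) are illustrative assumptions, not the project's actual API; the listening stage is stubbed out, and the generative stage is a simple first-order Markov predictor standing in for whatever sequence model the project uses.

```python
# Hypothetical sketch of the listening/generation architecture described
# above. Not the project's real implementation.
from collections import Counter, defaultdict

class ListeningModule:
    """Maps incoming audio frames to symbolic labels (e.g., chord names)."""
    def extract(self, frame):
        # Placeholder: a real system would run chroma extraction and
        # chord recognition here. We assume frames arrive pre-labelled.
        return frame["chord"]

class GenerativeModule:
    """First-order Markov predictor over the extracted symbol sequence."""
    def __init__(self):
        self.transitions = defaultdict(Counter)  # symbol -> next-symbol counts
        self.previous = None

    def observe(self, symbol):
        # Update transition counts from the previously seen symbol.
        if self.previous is not None:
            self.transitions[self.previous][symbol] += 1
        self.previous = symbol

    def predict(self):
        # Most frequent continuation of the current symbol, if any.
        if self.previous is None or not self.transitions[self.previous]:
            return None
        return self.transitions[self.previous].most_common(1)[0][0]

# Usage: feed frames as they arrive, then query the expected next chord.
listener, generator = ListeningModule(), GenerativeModule()
stream = [{"chord": "C"}, {"chord": "F"}, {"chord": "G"},
          {"chord": "C"}, {"chord": "F"}]
for frame in stream:
    generator.observe(listener.extract(frame))
print(generator.predict())  # -> "G", the observed continuation of "F"
```

In a real-time setting, observe() and predict() would run per audio frame, so the prediction is always conditioned on the most recently extracted structure.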