Media related to this event

Putting a musical structure into time: the compositional activity of Voi(rex) by Philippe Leroux - Nicolas Donin, Jacques Theureau

April 14, 2005, 01 h 01 min

Putting a musical structure into time: the compositional activity of Voi(rex) by Philippe Leroux - Nicolas Donin, Jacques Theureau

April 14, 2005, 24 min

Multiple fundamental frequency estimation

May 12, 2005, 52 min

The electroacoustic harp

February 4, 2005, 01 h 18 min

Using Modalys for the VoxStruments project, intuitive and expressive digital lutherie - Nicholas Ellis, Joël Bensoam

October 17, 2007, 49 min

Presentation of the PdS team's work within the European project CLOSED: "Closing the Loop of Sound Evaluation and Design" - Olivier Houix

June 27, 2007, 01 h 12 min

Sparse overcomplete methods, matching pursuit and basis pursuit - Bob L. Sturm

July 11, 2007, 48 min

Transformations of voice type and character - Snorre Farner, Axel Roebel, Xavier Rodet

September 12, 2007, 01 h 07 min

Automatic segmentation and recognition of voice phonemes, offline and in real time - Pierre Lanchantin, Julien Bloit, Xavier Rodet

September 19, 2007, 01 h 13 min

Text-to-speech synthesis and construction of a database of voice units - Christophe Veaux, Grégory Beller, Xavier Rodet

September 26, 2007, 01 h 00 min

The ECOUTE project - Jerome Barthelemy, Nicolas Donin, Geoffroy Peeters, Samuel Goldszmidt

October 3, 2007, 01 h 12 min

The MusicDiscover project - David Fenech Saint Genieys

October 10, 2007, 01 h 10 min

The CASPAR project - Jerome Barthelemy, Alain Bonardi

October 24, 2007, 50 min

The CONSONNES project, part 1 - René Caussé, Vincent Freour, David Roze

November 21, 2007, 57 min

Manifold-based representations of musical signals and generative spaces

Among the diverse research fields within computer music, the synthesis and generation of audio signals epitomize the cross-disciplinarity of the domain, as they have jointly nourished scientific and artistic practices since its inception. Some processes naturally handle both pathways, thereby providing invertible representations of given sounds. On top of that, recent trends in machine learning have given rise to powerful data-centered methods, raising several epistemological questions among researchers about their possible uses and concrete significance. In particular, generative models focus on the generation of original content from automatically extracted features, questioning not only previous approaches to generation but also how these processes could be exploited for artistic purposes. A specific family of generative models, called variational methods, is based on both unsupervised inference of features and direct generation. The interest of such methods is twofold: first, they resort to Bayesian inference to extract continuous low-dimensional representations that aim to reflect the underlying structure of a data corpus. Second, these continuous representations, called latent spaces, can be inverted to generate the data back, providing powerful synthesis and in-domain interpolation capabilities.
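The inference-then-inversion loop described above can be sketched in a few lines. This is a minimal, illustrative variational encode/decode pipeline in NumPy, not the models presented in the talk: the dimensions are arbitrary and the weights are random stand-ins for a trained encoder/decoder pair.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 1024-bin spectrum frames mapped to a 2-D latent space.
input_dim, latent_dim = 1024, 2

# Untrained random weights stand in for learned encoder/decoder parameters.
W_enc = rng.normal(0, 0.01, (input_dim, 2 * latent_dim))  # outputs mean and log-variance
W_dec = rng.normal(0, 0.01, (latent_dim, input_dim))

def encode(x):
    """Variational encoder: map a frame to a Gaussian over the latent space."""
    h = x @ W_enc
    return h[:latent_dim], h[latent_dim:]          # mu, log_var

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps, the standard reparameterization trick."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Generative decoder: invert a latent point back into a frame."""
    return z @ W_dec

x = rng.random(input_dim)           # a fake spectrum frame
mu, log_var = encode(x)
z = reparameterize(mu, log_var)
x_hat = decode(z)
print(z.shape, x_hat.shape)         # (2,) (1024,)
```

In a real variational model the weights would be trained to maximize the evidence lower bound, so that nearby latent points decode to perceptually related sounds.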

Hence, such bijective systems can be used for sound synthesis, providing data-centered generation methods whose controls are automatically extracted from the data. Furthermore, the flexibility of these systems allows numerous ways of influencing the construction of the representations, for example with external information or perceptual constraints, such that the training process itself can be embedded in their creative use. We will review the generative abilities of these methods when applied to the audio domain, and show how the extracted spaces can be used as high-level features for audio analysis. We will also introduce diverse ways of using them for creative purposes, and show how these methods can be extended to integrate the temporal dimension of audio information. Finally, we will present how these generative processes can be embedded in musical and compositional tools, and how they can pave the way for a novel use of synthesis algorithms.
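The in-domain interpolation capability mentioned in the abstract amounts to walking a path between two latent points and decoding each step. A minimal sketch, assuming only a 2-D latent space; the points and step count are illustrative:

```python
import numpy as np

def interpolate(z_a, z_b, steps=5):
    """Linear path between two latent points; each row is one interpolation step."""
    ts = np.linspace(0.0, 1.0, steps)
    return np.stack([(1 - t) * z_a + t * z_b for t in ts])

# Two hypothetical latent codes, e.g. inferred from two different sounds.
z_a = np.array([-1.0, 0.5])
z_b = np.array([2.0, -0.5])

path = interpolate(z_a, z_b, steps=5)
print(path.shape)  # (5, 2)
```

Decoding each row of `path` with the model's generative half yields a smooth morph between the two source sounds, which is what makes latent spaces attractive as synthesis controls.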

speakers

information

Type
Seminar / Conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
duration
01 h 01 min
date
December 11, 2019

Axel Chemla--Romeu-Santos : Manifold-based representations of musical signals and generative spaces

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43

opening hours

Monday to Friday, 9:30am to 7pm
Closed on Saturdays and Sundays

public transport access

Hôtel de Ville, Rambuteau, Châtelet, Les Halles

Institut de Recherche et de Coordination Acoustique/Musique

Copyright © 2022 Ircam. All rights reserved.