From the same archive

From psychoacoustics to deep learning: learning low-level processing of sound with neural networks - Neil Zeghidour

March 19, 2021 18 min

Deep Learning for Voice Processing - Nicolas Obin, Axel Roebel, Yann Teytaut

March 19, 2021 32 min

Towards helpful, customer-specific Text-To-Speech synthesis - David Guennec

March 19, 2021 29 min

Tools for creative AI and noise - Philippe Esling

March 19, 2021 20 min

Xtextures - Convolutional neural networks for texture synthesis and cross synthesis - Axel Roebel

March 19, 2021 20 min

AI round table: questions and discussion

March 19, 2021 30 min

Melodic Scale and Virtual Choir, Max ISiS - Grégory Beller

March 19, 2021 26 min

Greg Beller, David Guennec, Nicolas Obin, Axel Roebel, Hugues Vinet. Round table

March 19, 2021 20 min

Interaction with musical generative agents - Jérôme Nika

March 19, 2021 21 min

Update on MacIntel and the Forum software - Carlos Amado Agon, Riccardo Borghesi, Karim Haddad, Nicholas Ellis

November 29, 2006 20 min

What's new in AudioSculpt 2.7 and SuperVP 2.91 - Xavier Rodet, Alain Lithaud, Niels Bogaards, Axel Roebel

November 29, 2006 01 h 07 min

What's new in OpenMusic - Gérard Assayag, Jean Bresson, Carlos Amado Agon, Karim Haddad

November 29, 2006 59 min

Update on the Spatialisateur - Olivier Warusfel, Rémy Muller, Terence Caulkins

November 29, 2006 12 min

What's new in Modalys - Joël Bensoam, Nicholas Ellis, Jean Lochard

November 29, 2006 50 min

Mlys - a control interface for Modalys in Max/MSP - Manuel Poletti

November 29, 2006 47 min

Welcome - Andrew Gerzso

November 29, 2006 18 min

Recent developments from the real-time applications team - Diemo Schwarz, Riccardo Borghesi, Norbert Schnell

November 29, 2006 51 min

AI Session - An overview of AI for Music and Audio Generation


I'll discuss recent advances in AI for music creation, focusing on Machine Learning (ML) and Human-Computer Interaction (HCI) coming from our Magenta project (g.co/magenta). I'll argue that generative ML models by themselves are of limited creative value because they are hard to use in our current music creation workflows. This motivates research in HCI, and especially in good user interface design. I'll talk about a promising audio-generation project called Differentiable Digital Signal Processing (DDSP; Jesse Engel et al.) and about recent progress in modeling musical scores using Music Transformer (Anna Huang et al.). I'll also talk about work on designing experimental interfaces for composers and musicians. Time permitting, I'll relate this to similar work in the domain of creative writing. Overall my message will be one of restrained enthusiasm: recent research in ML has offered some amazing advances in tools for music creation, but aside from a few outlier examples, we've yet to bring these models successfully into creative practice.
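To make the DDSP idea concrete: rather than generating raw waveform samples, the model predicts interpretable synthesizer parameters (a fundamental frequency and per-harmonic amplitudes) that drive a classical additive synthesizer; because the synthesizer is differentiable, an audio loss can be backpropagated into the network. The sketch below shows only the synthesis step, in plain numpy, with made-up parameter values for illustration; it is not the actual DDSP library API.

```python
import numpy as np

def harmonic_synth(f0, harmonic_amps, sample_rate=16000, n_samples=16000):
    """Additive synthesis: a sum of sinusoids at integer multiples of f0.

    In DDSP-style models, f0 and harmonic_amps would be predicted by a
    neural network per frame; every operation here is differentiable, so
    gradients of an audio loss can flow back into those predictions.
    """
    t = np.arange(n_samples) / sample_rate
    signal = np.zeros(n_samples)
    for k, amp in enumerate(harmonic_amps, start=1):
        signal += amp * np.sin(2 * np.pi * k * f0 * t)
    return signal

# One second of a 440 Hz tone with three harmonics of decreasing amplitude
# (example values, not taken from the talk).
audio = harmonic_synth(440.0, [0.5, 0.25, 0.125])
```

The appeal of this design is that the model's outputs stay interpretable and controllable, which connects directly to the talk's HCI argument: a musician can edit pitch or brightness directly instead of steering an opaque waveform generator.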

information

Type
Conference series, symposium, congress
performance location
Ircam, Salle Igor-Stravinsky (Paris)
duration
47 min
date
March 19, 2021

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43

opening times

Monday through Friday 9:30am-7pm
Closed Saturday and Sunday

subway access

Hôtel de Ville, Rambuteau, Châtelet, Les Halles

Institut de Recherche et de Coordination Acoustique/Musique

Copyright © 2022 Ircam. All rights reserved.