Media related to this event

Deep Learning for Voice processing - Nicolas Obin, Axel Roebel, Yann Teytaut

19 March 2021 32 min

Towards helpful, customer-specific Text-To-Speech synthesis - David Guennec

19 March 2021 29 min

Tools for creative AI and noise - Philippe Esling

19 March 2021 20 min

Xtextures - Convolutional neural networks for texture synthesis and cross synthesis - Axel Roebel

19 March 2021 20 min

AI Round Table: questions and discussion

19 March 2021 30 min

Melodic Scale and Virtual Choir, Max ISiS - Grégory Beller

19 March 2021 26 min

Round table - Greg Beller, David Guennec, Nicolas Obin, Axel Roebel, Hugues Vinet

19 March 2021 20 min

AI Session - An overview of AI for Music and Audio Generation - Doug Eck

19 March 2021 47 min

Interaction with musical generative agents - Jérôme Nika

19 March 2021 21 min

Update on MacIntel and the Forum software - Carlos Amado Agon, Riccardo Borghesi, Karim Haddad, Nicholas Ellis

29 November 2006 20 min

New features in AudioSculpt 2.7 and SuperVP 2.91 - Xavier Rodet, Alain Lithaud, Niels Bogaards, Axel Roebel

29 November 2006 1 h 07 min

New features in OpenMusic - Gérard Assayag, Jean Bresson, Carlos Amado Agon, Karim Haddad

29 November 2006 59 min

Update on the Spatialisateur - Olivier Warusfel, Rémy Muller, Terence Caulkins

29 November 2006 12 min

New features in Modalys - Joël Bensoam, Nicholas Ellis, Jean Lochard

29 November 2006 50 min

Mlys - a control interface for Modalys in Max/MSP - Manuel Poletti

29 November 2006 47 min

Welcome - Andrew Gerzso

29 November 2006 18 min

Recent developments from the Real-Time Applications team - Diemo Schwarz, Riccardo Borghesi, Norbert Schnell

29 November 2006 51 min

From psychoacoustics to deep learning: learning low-level processing of sound with neural networks

Mel-filterbanks are fixed, engineered audio features that emulate human perception and have been used throughout the history of audio understanding up to the present day. However, their undeniable qualities are counterbalanced by the fundamental limitations of handcrafted representations. In this talk, I will present LEAF, a new, lightweight, fully learnable neural network that can be used as a drop-in replacement for mel-filterbanks. LEAF learns all operations of audio feature extraction, from filtering to pooling, compression, and normalization, and can be integrated into any neural network at a negligible parameter cost to adapt to the task at hand. I will show how LEAF outperforms mel-filterbanks on a wide range of audio signals, including speech, music, audio events, and animal sounds, providing a general-purpose learned frontend for audio classification.
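As a point of reference for what LEAF makes learnable, the fixed pipeline it replaces can be sketched in a few lines: a bank of triangular filters spaced on the mel scale, applied to a power spectrum and log-compressed. This is an illustrative sketch, not code from the talk or from LEAF; the sampling rate, FFT size, number of filters, and the use of the O'Shaughnessy mel formula are all assumptions for the example.

```python
import numpy as np

def hz_to_mel(f):
    # O'Shaughnessy mel formula (an assumption; other variants exist).
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels=40, n_fft=512, sr=16000):
    """Build a (n_mels, n_fft//2 + 1) matrix of triangular filters
    spaced linearly on the mel scale -- the fixed, engineered frontend
    that a learnable frontend like LEAF replaces."""
    fft_freqs = np.linspace(0.0, sr / 2, n_fft // 2 + 1)
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    hz_pts = mel_to_hz(mel_pts)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = hz_pts[i], hz_pts[i + 1], hz_pts[i + 2]
        rising = (fft_freqs - left) / (center - left)
        falling = (right - fft_freqs) / (right - center)
        fb[i] = np.maximum(0.0, np.minimum(rising, falling))
    return fb

# Usage: log-mel features of one windowed frame of a 440 Hz test tone.
sr, n_fft = 16000, 512
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440.0 * t)
frame = tone[:n_fft] * np.hanning(n_fft)
power = np.abs(np.fft.rfft(frame)) ** 2
logmel = np.log(mel_filterbank(40, n_fft, sr) @ power + 1e-6)
```

Every stage shown here (the filter shapes and spacing, the pooling implied by framing, the log compression) is hard-coded; in LEAF each of these is replaced by a parameterized, trainable counterpart.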

Information

Type
Conference series, symposium, congress
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
18 min
Date
19 March 2021

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43


Institut de Recherche et de Coordination Acoustique/Musique
