16 March 2021, 39 min
Mel-filterbanks are fixed, engineered audio features that emulate human perception and have been used throughout the history of audio understanding up to the present day. However, their undeniable qualities are counterbalanced by their fundamental limitations…
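For readers unfamiliar with the term, a minimal sketch of the mel scale underlying such filterbanks (the standard O'Shaughnessy formula; the function names and the band-edge helper are illustrative, not from the talk):

```python
import math

def hz_to_mel(f):
    """Convert a frequency in Hz to mels (O'Shaughnessy's formula)."""
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    """Inverse mapping: mels back to Hz."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(n_mels=40, fmin=0.0, fmax=8000.0):
    """Band edges for n_mels triangular filters, evenly spaced on the mel scale.

    Returns n_mels + 2 frequencies in Hz: each filter i spans
    edges[i]..edges[i+2] with its peak at edges[i+1].
    """
    lo, hi = hz_to_mel(fmin), hz_to_mel(fmax)
    step = (hi - lo) / (n_mels + 1)
    return [mel_to_hz(lo + i * step) for i in range(n_mels + 2)]
```

The point of the mel spacing is that bands are roughly linear below 1 kHz and logarithmic above, mimicking human pitch perception.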
19 March 2021, 18 min
Deep neural networks are increasingly dominating the research activities of the Analysis/Synthesis team and elsewhere. The session will present some of the recent results of research activities related to voice processing with deep neural networks.
19 March 2021, 32 min
The subject of automatic speech synthesis began to be popularised as early as the 1990s. Each of us has had to deal with the automatic answering-machine voices that made us all suffer in the early days. Today, however, the progress made…
19 March 2021, 29 min
We will present the latest creative tools developed by the RepMus team (ACIDS project), enabling real-time audio synthesis, music generation and production, and synthesizer control, all in open-source code, as well as Max4Live and…
19 March 2021, 20 min
Neural style transfer applied to images has received considerable interest and has triggered many research activities aiming to use the underlying strategies for the manipulation of music or sound. While the many fundamental differences between…
19 March 2021, 30 min
In this presentation, Greg Beller will present recent developments in the field of voice processing. Melodic Scale is a Max For Live device that automatically modifies a melodic line in real time, changing its scale…
19 March 2021, 20 min
An overview of AI for Music and Audio Generation. I'll discuss recent advances in AI for music creation, focusing on Machine Learning (ML) and Human-Computer Interaction (HCI) coming from our Magenta project (g.co/magenta). I'll argue that…
19 March 2021, 47 min
The Musical Representations team explores the paradigm of computational creativity using devices inspired by artificial intelligence, particularly in the sense of new symbolic musician-machine interactions. The presentation will focus in particular…
19 March 2021, 21 min
This demonstration gives an overview of recent developments in the standalone corpus-based concatenative sound synthesis program AudioGuide. New features and methods include revamped hierarchical sound-segment matching routines and polyphonic…
18 March 2021, 44 min
Patching With Complex Data. I will show some recent experiments using the Max dictionary in a variety of scenarios related to generative sequencing. Since a patcher itself is in the form of a dictionary, we can also capture and manipulate the…
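As background to this idea (not part of the abstract): a Max patcher saved as a .maxpat file is stored as JSON, so its structure can be inspected like any dictionary. A minimal Python sketch, assuming the standard top-level layout of .maxpat files (a "patcher" dict with a "boxes" list); the demo patch below is fabricated for illustration:

```python
import json

def list_boxes(maxpat_json_text):
    """Return the maxclass of every box in a Max patcher's JSON text.

    .maxpat files are JSON: a top-level "patcher" dict whose "boxes"
    list holds one {"box": {...}} entry per object in the patch.
    """
    patcher = json.loads(maxpat_json_text)["patcher"]
    return [entry["box"].get("maxclass", "?") for entry in patcher.get("boxes", [])]

# Fabricated two-object demo patch (a [metro 100] and a toggle).
demo = json.dumps({
    "patcher": {
        "boxes": [
            {"box": {"maxclass": "newobj", "text": "metro 100"}},
            {"box": {"maxclass": "toggle"}},
        ],
        "lines": [],
    }
})
```

Treating the patch itself as data in this way is what makes it possible to capture, transform, and regenerate patcher structure programmatically.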
18 March 2021, 30 min
My musical collaboration with percussionist Irwin took an unplanned turn when we started working remotely. Over the past year we've developed a workflow that allows us to perform together in real time using instruments that I write in Pure Data…
18 March 2021, 41 min
OMChroma: new tutorials and documentation. This presentation will focus on my new set of tutorial patches, videos and documentation for the OpenMusic library OMChroma. After a five-year teaching experience at the HMDK Stuttgart, I collected…
17 March 2021, 19 min
The SkataRT environment is built at the intersection of research and development work on concatenative corpus-based sound synthesis (CataRT) and a European research project on the issue of voice imitation as a tool for sketching and rapid…
17 March 2021, 28 min
This presentation introduces two software developments, TS2 and Analyse, heirs to the tools offered by the AudioSculpt environment. The IRCAM Lab TS2 is sound-processing software developed within IRCAM's IMR department. Built around the…
17 March 2021, 33 min
In this session, we will present the software developments made in 2020 around Spat (for Max) and Panoramix (standalone). These developments concern the addition of new functionalities (notably for the manipulation and decoding of Ambisonic…
17 March 2021, 28 min
During the past decade, new object-based immersive audio content formats and creation tools were developed for cinematic and musical production. These technologies free the music creator from the constraints of normalized loudspeaker configurations.
17 March 2021, 26 min
During this session, 5 new devices distributed by the Forum will be introduced: MarblesLFO and PendulumsLFO use the physics functions of Max to emulate two physical systems (marbles falling in a box and a double pendulum) and transform their…
17 March 2021, 13 min
SpeaK is a sound lexicon that offers definitions of the main sound properties. Each term in the lexicon is illustrated by sound examples that were created or recorded on purpose, in order to highlight the given property. The tool is embedded…
1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43
Open Monday to Friday, 9:30am to 7pm
Closed Saturdays and Sundays
Hôtel de Ville, Rambuteau, Châtelet, Les Halles
Institut de Recherche et de Coordination Acoustique/Musique
Copyright © 2022 Ircam. All rights reserved.