The goal of this residency is to propose real-time gestural control of Artificial Intelligence models. The idea is to develop a dedicated control interface (hardware and software) offering novel and innovative ways of generating electroacoustic sounds with the most recent deep neural network models. A dedicated electronic interface makes it possible to develop instrumental gestures precisely linked to sound, engaging the body while producing expressive electroacoustic material. AI systems have great potential to generate expressive and highly musical sound. Combining these two approaches to contemporary sound generation would lend machines a fascinating and unexpected expressiveness.