From the same archive

Ada's Song: Making machine-learning processes visible and tangible

March 31, 2023 24 min

Media-Specific Performance: screen-mediated production during the pandemic

March 31, 2023 34 min

AI, networked performance and aesthetic judgment

March 31, 2023 27 min

The World of Freedom

March 31, 2023 22 min

ISMM team (IRCAM) - Presentation of the latest projects for Max: MuBu, CataRT, SkataRT, Gesture&Sound Toolkit.

March 31, 2023 30 min

Family Life - recomposed

March 31, 2023 31 min

Conclusions

March 31, 2023 24 min

Shallow Steps - Spatial Cognitive Sonification of Generative Visuals

March 31, 2023 27 min

Max/MSP Spat library, sensors, and Unreal Engine: a workflow for a real-time generative VR project

March 31, 2023 32 min

Update on MacIntel and Forum software - Carlos Amado Agon, Riccardo Borghesi, Karim Haddad, Nicholas Ellis

November 29, 2006 20 min

New features in AudioSculpt 2.7 and SuperVP 2.91 - Xavier Rodet, Alain Lithaud, Niels Bogaards, Axel Roebel

November 29, 2006 1 h 07 min

New features in OpenMusic - Gérard Assayag, Jean Bresson, Carlos Amado Agon, Karim Haddad

November 29, 2006 59 min

Update on the Spatialisateur - Olivier Warusfel, Rémy Muller, Terence Caulkins

November 29, 2006 12 min

New features in Modalys - Joël Bensoam, Nicholas Ellis, Jean Lochard

November 29, 2006 50 min

Mlys - a control interface for Modalys in Max/MSP - Manuel Poletti

November 29, 2006 47 min

Welcome - Andrew Gerzso

November 29, 2006 18 min

Recent developments from the Real-Time Applications team - Diemo Schwarz, Riccardo Borghesi, Norbert Schnell

November 29, 2006 51 min

Gestural-Based Sound Spatialization & Synthesis Strategies in 3D Virtual Environment in Interactive Audiovisual Composition


This presentation/demo is based on my final doctoral project, which explores the artistic potential of computer game technology, particularly by adapting the hand gestures of the Sabetan technique of Indonesian Wayang Kulit to create performative strategies for interactive audiovisual composition. In the presentation, I demonstrate how the acquired gestural information, processed through a machine-learning model, controls spatialization and synthesis parameters as well as the locomotion and behaviour of visual objects in the virtual environment.
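The pipeline described above (gesture features in, spatialization and synthesis parameters out) can be sketched in miniature. The following is a hypothetical illustration only, not the author's implementation: a nearest-neighbour lookup stands in for the machine-learning model, and the gesture names, feature vectors, and parameter values are all invented for the example.

```python
import math

# Hypothetical gesture "model": each prototype feature vector maps to
# spatialization parameters (azimuth in degrees, distance in metres).
# A trained classifier/regressor would replace this lookup in practice.
GESTURE_MAP = {
    "sweep_left":  ((0.9, 0.1, 0.2), (-60.0, 2.0)),
    "sweep_right": ((0.1, 0.9, 0.2), (60.0, 2.0)),
    "raise":       ((0.2, 0.2, 0.9), (0.0, 0.5)),
}

def classify(features):
    """Return the label of the nearest stored gesture prototype."""
    return min(
        GESTURE_MAP,
        key=lambda label: math.dist(GESTURE_MAP[label][0], features),
    )

def spatial_params(features):
    """Map raw gesture features to (label, (azimuth, distance))."""
    label = classify(features)
    return label, GESTURE_MAP[label][1]

label, (azimuth, distance) = spatial_params((0.85, 0.15, 0.25))
print(label, azimuth, distance)  # sweep_left -60.0 2.0
```

In a real-time setup, the resulting parameters would be streamed each frame to the spatializer and to the engine driving the visual objects, rather than printed.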


information

Type
Conference series, symposium, congress
performance location
Ircam, Salle Igor-Stravinsky (Paris)
duration
28 min
date
March 31, 2023

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43

opening times

Monday through Friday 9:30am-7pm
Closed Saturday and Sunday

subway access

Hôtel de Ville, Rambuteau, Châtelet, Les Halles

Institut de Recherche et de Coordination Acoustique/Musique

Copyright © 2022 Ircam. All rights reserved.