Investigating the shared neural processing of music and speech with data-driven modeling of brain data


Information

Type
Seminar / Conference
Location
Ircam, Salle Igor-Stravinsky (Paris)
Date
February 18, 2026

In this presentation, I will give an overview of my work on music perception, with a particular focus on how the brain processes music compared to speech in naturalistic scenarios. I will discuss how data-driven modeling can be used in this context to link continuous, complex sounds to multivariate neural activity, complementing more traditional paradigms that rely on discrete, controlled stimuli. Such modeling makes it possible to probe underlying cognitive and neural processes that are otherwise hard to access, as they operate on the natural unfolding of musical and linguistic structures over time (e.g., predictive mechanisms) and are modulated by complex internal states (e.g., attention). Within this framework, we were able to identify signatures of shared and distinct neural processing of music and speech, as well as how factors such as attention, structure, and context shape their representations in the brain.
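As an illustration of what such data-driven modeling can look like in practice, below is a minimal sketch of one common tool from this literature: a temporal response function (TRF) estimated by ridge regression, mapping a time-lagged stimulus feature (here a simulated acoustic envelope) onto multichannel neural recordings. The shapes, sampling rate, latency, and regularization value are illustrative assumptions, not details of the speaker's actual pipeline.

import numpy as np

def lagged_design(stimulus, n_lags):
    # Design matrix whose columns are the stimulus delayed by 0..n_lags-1 samples.
    n = len(stimulus)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:n - lag]
    return X

def fit_trf(stimulus, neural, n_lags=32, alpha=1.0):
    # Ridge regression: neural (time x channels) ~ time-lagged stimulus.
    X = lagged_design(stimulus, n_lags)
    XtX = X.T @ X + alpha * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ neural)  # shape: (n_lags, n_channels)

# Toy data: 10 s at 64 Hz, 8 simulated channels, each a delayed, scaled
# copy of the stimulus envelope plus noise (a crude stand-in for EEG/MEG).
rng = np.random.default_rng(0)
fs, n_channels, n_samples = 64, 8, 10 * 64
envelope = np.abs(rng.standard_normal(n_samples))   # stand-in acoustic envelope
delay = int(0.1 * fs)                               # simulate a 100 ms neural latency
gains = rng.standard_normal(n_channels)
neural = np.roll(envelope, delay)[:, None] * gains
neural += 0.5 * rng.standard_normal((n_samples, n_channels))

trf = fit_trf(envelope, neural)
peak_lag = np.abs(trf).mean(axis=1).argmax() / fs
print(f"estimated peak latency: {peak_lag:.3f} s")  # should land near the simulated 100 ms

In real analyses this family of models is typically fit with cross-validated regularization on continuous listening data, and the same machinery can be run in the decoding direction (reconstructing stimulus features from neural activity); the sketch above only shows the encoding direction.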


Biography:
Giorgia Cantisani is a CNRS researcher working at the intersection of auditory neuroscience and machine learning. She earned her PhD at Télécom Paris, where she worked on decoding brain data for music-related brain-computer interfaces (BCIs). She has since worked at École Normale Supérieure and is now at IRCAM, investigating how the brain processes complex sounds such as music and speech, and how the two interact in song.

Speaker
Giorgia Cantisani


