information

Type
Seminar / Conference
performance location
Ircam, Salle Igor-Stravinsky (Paris)
duration
01 h 20 min
date
June 8, 2016

Several features of the auditory environment are analysed and predicted automatically and irrepressibly, even before attention intervenes, in order to facilitate responses to salient and potentially dangerous events. Music capitalises on variations of “low-level” spectrotemporal features common to other auditory signals, and is also characterised by “high-level” sound schemata based on conventional agreement between members of a given musical culture. In this talk I will review my recent neurophysiological and neuroimaging studies on the attentional resources required for encoding and predicting “low-” vs. “high-level” sound features, either in isolation or in a realistic musical context.


Emotional Archetypes: Music and Neuroscience, symposium of June 8, 2016

Music holds tremendous power over our emotions. Through a particularly touching phrase, a forceful chord or even a single note, musical sounds trigger powerful subjective reactions. For neuroscientists, these strong reactions are vexing facts, because such emotional reactions are typically understood as survival reflexes: our increased heart rates, suddenly sweaty hands or deeper breath are responses preparing our organism to, for example, fight or run away if we stumble into a bear in the woods. Stumbling into music, be it a violin or a flute, a C or a C#, hardly seems a similar matter of life or death. In the past decade or so, experimentalists have tried to dissect musical sounds to see what exactly makes our brains deem them worthy of such strong reactions – perhaps because they mimic the dissonant roar of a predator, reproduce the accents and prosody of emotional speech, or echo the spectral patterns of certain environmental sounds.
For music composers, sonic events able to drive us into such Darwinian reactions are also the topic of an endless quest. With careful workmanship, the art of the composer is to sculpt sounds – how they’re written, how they’re performed, how they’re heard – that are optimally significant for the listening audience. For a certain school of contemporary creation in particular, music-making proceeds by deliberately reducing and rarefying its sonic material to the point of imitating our most minimal biological acts, e.g. in voice (crying, shouting, breathing) or movement (brushing, sliding, springing).
With this symposium, featuring invited contributions by some of the most influential voices in the worlds of music neuroscience and contemporary music, our aim is to explore and confront the views of both scientists and composers on a single question: what are the origins of musical emotions?


From the same archive

How do we process complex sensory signals when judging high-level attributes? - Emmanuel Ponsot

Part #1: Vocal Archetypes in Music. One of the most natural ways to explain emotions created by musical sounds is their proximity to language, and to expressive speech in particular. Our first guest talk, the workshop’s keynote address by

June 8, 2016 26 min

Video

Music, Language, Emotion, and the Brain: a Cognitive Neuroscience Perspective - Aniruddh D. Patel

Speech and instrumental music are very ancient, with the earliest known instruments dating to at least 40,000 years ago. These two forms of expression have many salient differences, including their acoustic structure, the way in which they

June 8, 2016 01 h 22 min

Video

Time perception and neural oscillations modulated by speech rate - Pablo Arias


June 8, 2016 20 min

Video

Voice Synthesis Technologies in Contemporary Music Creation - Grégory Beller

Expressivity and emotion can be precisely analyzed through our modern sound processing technologies. It is possible to modify various parameters of the voice, altering how it is perceived. Real-time algorithms can even change the way a spea

June 8, 2016 01 h 01 min

Video

Studio Report #1: The “6months” Project - Jean-Julien Aucouturier


June 8, 2016 09 min

Video

