Media related to this event

Setting a Musical Structure in Time: The Compositional Activity of Philippe Leroux's Voi(rex) - Nicolas Donin, Jacques Theureau

14 April 2005, 01 h 01 min

Setting a Musical Structure in Time: The Compositional Activity of Philippe Leroux's Voi(rex) - Nicolas Donin, Jacques Theureau

14 April 2005, 24 min

Multiple Fundamental Frequency Estimation

12 May 2005, 52 min

The Electroacoustic Harp

4 February 2005, 01 h 18 min

Using Modalys for the VoxStruments Project: Intuitive and Expressive Digital Lutherie - Nicholas Ellis, Joël Bensoam

17 October 2007, 49 min

Presentation of the PdS Team's Work within the European Project CLOSED: "Closing the Loop of Sound Evaluation and Design" - Olivier Houix

27 June 2007, 01 h 12 min

Sparse overcomplete methods, matching pursuit and basis pursuit - Bob L. Sturm

11 July 2007, 48 min

Transformations of Voice Type and Nature - Snorre Farner, Axel Roebel, Xavier Rodet

12 September 2007, 01 h 07 min

Automatic Segmentation and Recognition of Voice Phonemes, Offline and in Real Time - Pierre Lanchantin, Julien Bloit, Xavier Rodet

19 September 2007, 01 h 13 min

Text-to-Speech Synthesis and Construction of a Database of Voice Units - Christophe Veaux, Grégory Beller, Xavier Rodet

26 September 2007, 01 h 00 min

ECOUTE Project - Jerome Barthelemy, Nicolas Donin, Geoffroy Peeters, Samuel Goldszmidt

3 October 2007, 01 h 12 min

MusicDiscover Project - David Fenech Saint Genieys

10 October 2007, 01 h 10 min

CASPAR Project - Jerome Barthelemy, Alain Bonardi

24 October 2007, 50 min

CONSONNES Project, Part 1 - René Caussé, Vincent Freour, David Roze

21 November 2007, 57 min

Cortical Representation of Complex Sounds

speakers

Shihab Shamma

information

Type
Scientific and/or technical conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
duration
01 h 15 min
date
13 June 2012

Cortical Representation of Musical Timbre

Complex acoustic signals such as music are usually composed of multiple sound streams emanating from numerous sources that simultaneously change their loudness, timbre, pitch, and rhythm. Humans effortlessly integrate the multitude of acoustic cues arriving at the ears and derive coherent percepts and judgments about the attributes of the sound. This facility to analyze an auditory scene is conceptually based on a multi-stage process in which sound is first analyzed in terms of a relatively small set of perceptually significant attributes (the alphabet of auditory perception), followed by higher-level cortical integrative processes that organize and group the extracted attributes according to specific context-sensitive rules (the syntax of auditory perception). In this talk, I shall outline a mathematical model of this process based on physiological and psychoacoustical studies that have revealed a multiresolution representation of sound in the cortex, as well as a variety of adaptive mechanisms that actively organize our perceptual space.
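To make the multiresolution idea concrete, here is a minimal Python sketch, under stated assumptions, of a cortical-like representation: a log-magnitude spectrogram is filtered by 2-D Gabor kernels tuned to several temporal rates (Hz) and spectral scales (cycles/octave), in the spirit of spectro-temporal modulation analysis. This is not the model presented in the talk; the function names (gabor_strf, cortical_representation) and all parameter values are illustrative assumptions.

import numpy as np
from scipy.signal import fftconvolve, spectrogram

def gabor_strf(rate_hz, scale_cpo, fps, bins_per_octave, size=(33, 33)):
    # One spectro-temporal modulation filter: a 2-D Gabor kernel, i.e. a
    # Gaussian envelope times a cosine carrier oriented by (rate, scale).
    t = (np.arange(size[1]) - size[1] // 2) / fps              # seconds
    f = (np.arange(size[0]) - size[0] // 2) / bins_per_octave  # octaves
    T, F = np.meshgrid(t, f)
    envelope = np.exp(-(T * rate_hz) ** 2 - (F * scale_cpo) ** 2)
    carrier = np.cos(2.0 * np.pi * (rate_hz * T + scale_cpo * F))
    return envelope * carrier

def cortical_representation(x, sr, fps=100, bins_per_octave=12,
                            rates=(2, 4, 8, 16), scales=(0.5, 1, 2)):
    # Early stage: a crude auditory spectrogram. A cochlear filterbank
    # would be more faithful; a log-magnitude STFT stands in here.
    hop = int(sr / fps)
    _, _, S = spectrogram(x, fs=sr, nperseg=2 * hop, noverlap=hop)
    A = np.log1p(S)
    # Multiresolution stage: one 2-D filtering per (rate, scale) pair,
    # yielding a (rate, scale, frequency, time) array of modulation
    # energies, i.e. the sound analyzed at many resolutions at once.
    out = np.empty((len(rates), len(scales)) + A.shape)
    for i, r in enumerate(rates):
        for j, s in enumerate(scales):
            k = gabor_strf(r, s, fps, bins_per_octave)
            out[i, j] = np.abs(fftconvolve(A, k, mode="same"))
    return out

# Usage: a 1 s, 440 Hz tone with a 4 Hz amplitude modulation produces
# energy concentrated around the 4 Hz rate filters.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t) * (1 + 0.5 * np.sin(2 * np.pi * 4 * t))
print(cortical_representation(x, sr).shape)  # (4, 3, freq bins, frames)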

Biography:
Shihab Shamma is a Professor in the Department of Electrical and Computer Engineering and the Institute for Systems Research at the University of Maryland. His research deals with auditory perception, cortical physiology, the role of attention and behavior in learning and plasticity, computational neuroscience, and neuromorphic engineering. One focus has been the computational principles underlying the processing and recognition of complex sounds (speech and music) in the auditory system, and the relationship between auditory and visual processing. Another aspect of the research deals with how behavior induces rapid adaptive changes in neural selectivity and responses, and with the mechanisms that facilitate and control these changes. Finally, signal processing algorithms inspired by data from these neurophysiological and psychoacoustic experiments have been developed and applied in a variety of systems, such as speech and voice recognition, diagnostics in industrial manufacturing, and underwater and battlefield acoustics. Other research interests include analog VLSI (aVLSI) implementations of auditory processing algorithms and the development of robotic systems for the detection and tracking of multiple simultaneous sound sources.

IRCAM

1, place Igor-Stravinsky
75004 Paris
+33 1 44 78 48 43

opening hours

Monday to Friday, 9:30 am to 7 pm
Closed Saturday and Sunday

public transport access

Hôtel de Ville, Rambuteau, Châtelet, Les Halles

Institut de Recherche et de Coordination Acoustique/Musique

Copyright © 2022 Ircam. All rights reserved.