Cortical Representation of Musical Timbre

Information

Type
Scientific and/or technical conference
Venue
Ircam, Salle Igor-Stravinsky (Paris)
Duration
1 h 15 min
Date
13 June 2012

Complex acoustic signals such as music are usually composed of multiple sound streams emanating from numerous sources that simultaneously change their loudness, timbre, pitch, and rhythm. Humans effortlessly integrate the multitude of acoustic cues arriving at the ears and derive coherent percepts and judgments about the attributes of these sounds. This facility to analyze an auditory scene is conceptually based on a multi-stage process in which sound is first analyzed in terms of a relatively small set of perceptually significant attributes (the alphabet of auditory perception), followed by higher-level cortical integrative processes that organize and group the extracted attributes according to specific context-sensitive rules (the syntax of auditory perception). In this talk, I shall outline a mathematical model of this process based on physiological and psychoacoustical studies that have revealed a multiresolution representation of sound in the cortex, as well as a variety of adaptive mechanisms that actively organize our perceptual space.
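To give a concrete sense of what a "multiresolution representation" of sound can look like, here is a minimal, hypothetical Python sketch, not the speaker's actual model: it filters an auditory-like spectrogram with Gabor-shaped spectrotemporal kernels at several temporal rates and spectral scales, in the spirit of cortical modulation filterbanks. The helper names gabor_strf and modulation_representation, and all parameter values, are illustrative assumptions.

```python
# Illustrative sketch only: a toy multiresolution spectrotemporal analysis.
# Assumes a log-frequency, auditory-like spectrogram is already available.
import numpy as np
from scipy.signal import fftconvolve

def gabor_strf(rate_hz, scale_cpo, dur_s=0.25, span_oct=2.0,
               frame_rate=100.0, bins_per_oct=24):
    """Separable Gabor-like spectrotemporal kernel.
    rate_hz: temporal modulation rate (Hz); scale_cpo: spectral scale (cycles/octave)."""
    t = np.arange(0, dur_s, 1.0 / frame_rate)               # time axis (s)
    f = np.arange(-span_oct, span_oct, 1.0 / bins_per_oct)  # log-frequency axis (octaves)
    temporal = np.cos(2 * np.pi * rate_hz * t) * np.hanning(t.size)
    spectral = np.cos(2 * np.pi * scale_cpo * f) * np.hanning(f.size)
    return np.outer(spectral, temporal)                      # (freq, time) kernel

def modulation_representation(spectrogram, rates=(2, 4, 8, 16), scales=(0.5, 1, 2, 4)):
    """Filter a (freq x time) spectrogram at several rates and scales,
    returning a 4-D array indexed by (scale, rate, freq, time)."""
    out = np.empty((len(scales), len(rates)) + spectrogram.shape)
    for i, s in enumerate(scales):
        for j, r in enumerate(rates):
            out[i, j] = fftconvolve(spectrogram, gabor_strf(r, s), mode="same")
    return out

# Example with a random stand-in for an auditory spectrogram (128 channels x 500 frames)
spec = np.abs(np.random.randn(128, 500))
rep = modulation_representation(spec)
print(rep.shape)  # (4, 4, 128, 500)
```

Each (scale, rate) slice of the output highlights spectrotemporal modulations at one resolution; attributes such as timbre and rhythm are reflected in different regions of this modulation space.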

Biography:
Shihab Shamma is a Professor in the Department of Electrical and Computer Engineering and the Institute for Systems Research. His research deals with auditory perception, cortical physiology, the role of attention and behavior in learning and plasticity, computational neuroscience, and neuromorphic engineering. One focus has been on studying the computational principles underlying the processing and recognition of complex sounds (speech and music) in the auditory system, and the relationship between auditory and visual processing. Another aspect of the research deals with how behavior induces rapid adaptive changes in neural selectivity and responses, and the mechanisms that facilitate and control these changes. Finally, signal processing algorithms inspired by data from these neurophysiological and psychoacoustic experiments have been developed and applied in a variety of systems such as speech and voice recognition, diagnostics in industrial manufacturing, and underwater and battlefield acoustics. Other research interests include aVLSI implementations of auditory processing algorithms and the development of robotic systems for the detection and tracking of multiple simultaneous sound sources.


Speakers

Shihab Shamma