Feindel Brain and Mind Seminar Series: Neural Dynamics and Computations Constraining Speech and Music Processing
The Feindel Brain and Mind Seminar Series continues the legacy of Dr. William Feindel (1918-2014), Director of The Neuro from 1972 to 1984, of maintaining a constant link between clinical practice and research. Presentations cover the latest advances and discoveries in neuropsychology, cognitive neuroscience and neuroimaging.
Talks are given by scientists at The Neuro, as well as by colleagues and collaborators from the community and around the world. The series is intended as a virtual forum for researchers and trainees to foster interdisciplinary exchange on the mechanisms, diagnosis and treatment of brain and cognitive disorders.
To attend in person,
To watch via Vimeo,
Benjamin Morillon
Research Director, Aix Marseille Université, INSERM, INS, Institut de Neurosciences des Systèmes, Marseille, France
Host: robert.zatorre [at] mcgill.ca (Robert Zatorre)
Abstract: Benjamin Morillon will describe the neural dynamics underlying music perception and speech comprehension, emphasizing time scales and adaptive processes. First, he will explore why humans spontaneously dance to music, presenting behavioral and neuroimaging evidence that motor dynamics reflect predictive timing during music listening. While auditory regions track the rhythm of melodies, intrinsic neural dynamics at delta (1.4 Hz) and beta (20-30 Hz) frequencies in the dorsal auditory pathways encode the wanting-to-move experience, or "groove." These neural dynamics are organized along the pathway in a spectral gradient, with the left sensorimotor cortex coordinating groove-related delta and beta activity. Predictions from a neurodynamic model suggest that spontaneous motor engagement during music listening arises from predictive timing, driven by interactions of neural dynamics along the dorsal auditory pathway. Second, to investigate speech comprehension, he will present a framework based on the concept of channel capacity, which examines how various acoustic and linguistic features influence the comprehension of compressed speech. Results demonstrate that comprehension is independently affected by each feature, with varying degrees of impact and a clear dominance of the syllabic rate. Complementing this framework, human intracranial recordings reveal how neural dynamics in the auditory cortex adapt to different acoustic features, facilitating parallel processing of speech at syllabic and phonemic time scales. These findings underscore the dynamic adaptation of neural processes to temporal characteristics in speech and music, enhancing our understanding of language and music perception.
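For readers unfamiliar with band-limited neural dynamics of the kind described above, the sketch below illustrates one common way such signals are extracted: band-pass filtering a recorded time series and taking its amplitude envelope. This is a generic illustration, not the speaker's method; the sampling rate, band edges (a delta band around 1.4 Hz, beta at 20-30 Hz, following the abstract) and the simulated signal are all assumptions.

```python
# Illustrative sketch only: band-limited envelope extraction from a neural
# time series, in the spirit of the delta/beta dynamics mentioned in the
# abstract. The data here are simulated noise; fs and band edges are assumed.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 250.0                              # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)            # 60 s of data
rng = np.random.default_rng(0)
signal = rng.standard_normal(t.size)    # stand-in for one recorded channel

def band_envelope(x, low, high, fs, order=4):
    """Band-pass filter x and return its amplitude envelope (Hilbert transform)."""
    b, a = butter(order, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, x)        # zero-phase band-pass filtering
    return np.abs(hilbert(filtered))    # instantaneous amplitude

delta_env = band_envelope(signal, 1.0, 2.0, fs)    # band around ~1.4 Hz (delta)
beta_env = band_envelope(signal, 20.0, 30.0, fs)   # 20-30 Hz (beta)
print(delta_env.mean(), beta_env.mean())
```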