QLS Seminar Series - Pouya Bashivan
Artificial neural networks in visual neuroscience: towards a quantitative explanation of visual object recognition in the brain
Pouya Bashivan
Tuesday, January 18, 12-1pm
Zoom Link:
Abstract: Within tens of milliseconds, the neuronal networks in the primate brain process the patterns of light that strike the eyes in a series of six interconnected cortical areas called the ventral visual pathway. These areas form a necessary substrate for our ability to recognize objects and their relationships in the world. One of the core scientific questions in visual neuroscience concerns the visual patterns that neurons at each level of processing along the ventral stream represent. In recent years, artificial neural network (ANN) models have become increasingly common in neuroscience studies that aim to provide quantitative explanations for the patterns of neuronal responses observed in animal brains. Despite their many shortcomings in achieving human-level visual perception, current ANN models are by far our best quantitative models of the computations underlying biological vision. In this talk, I will review some of our recent work using ANN models to predict neuronal responses in the human and nonhuman primate visual cortex, and discuss how we have adopted these models to facilitate more quantitative explanations of how population activity across the visual cortex gives rise to our perception of the world.
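For readers unfamiliar with how ANN models are used to "predict neuronal responses," the sketch below illustrates the common encoding-model recipe often used in this line of work: fit a regularized linear map from a pretrained network layer's activations to recorded responses, then score predictions on held-out images. This is a minimal illustration, not the speaker's actual pipeline; the arrays `ann_features` and `neural_responses` are synthetic placeholders standing in for real stimulus features and recordings.

```python
# Minimal sketch of an ANN-based encoding model: predict recorded neuronal
# responses from the activations of a pretrained network layer via a
# regularized linear map. All data below are synthetic placeholders.
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_images, n_features, n_neurons = 500, 1024, 50

# Placeholder for ANN activations to each image (e.g., a late model layer).
ann_features = rng.standard_normal((n_images, n_features))

# Placeholder for recorded responses of each neuron to the same images,
# simulated here as a noisy linear readout of the features.
true_weights = rng.standard_normal((n_features, n_neurons)) * 0.05
neural_responses = ann_features @ true_weights + 0.5 * rng.standard_normal((n_images, n_neurons))

X_train, X_test, y_train, y_test = train_test_split(
    ann_features, neural_responses, test_size=0.2, random_state=0
)

# Cross-validated ridge regression mapping ANN features to neuronal responses.
model = RidgeCV(alphas=np.logspace(-2, 4, 13))
model.fit(X_train, y_train)

# Score each neuron by the correlation between predicted and held-out responses,
# a common measure of how well the ANN layer "explains" that neuron.
pred = model.predict(X_test)
per_neuron_r = [np.corrcoef(pred[:, i], y_test[:, i])[0, 1] for i in range(n_neurons)]
print(f"median held-out correlation: {np.median(per_neuron_r):.2f}")
```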
Relevant publications: