Conscious processing, inductive biases and generalization in deep learning
DIC-ISC-CRIA Seminar at UQAM
Speaker: Yoshua Bengio, Université de Montréal
Abstract:
Humans are very good at “out-of-distribution” generalization (compared to current AI systems). It would be useful to determine the inductive biases they exploit and translate them into machine-learning architectures, training frameworks and experiments. I will discuss several of these hypothesized inductive biases. Many exploit notions in causality and connect abstractions in representation learning (perception and interpretation) with those in reinforcement learning (abstract actions). Systematic generalization may arise from efficient factorization of knowledge into recomposable pieces. This is partly related to symbolic AI (as seen in the errors and limitations of reasoning in humans, as well as in our ability to learn to do this at scale, with distributed representations and efficient search). Sparsity of the causal graph and locality of interventions -- observable in the structure of sentences -- may reduce the computational complexity of both inference (including planning) and learning. This may be why evolution incorporated this as “consciousness.” I will also suggest some open research questions to stimulate further research and collaborations.
Bio:
Recognized worldwide as one of the leading experts in artificial intelligence, Yoshua Bengio is best known for his pioneering work in deep learning, which earned him the 2018 A.M. Turing Award, “the Nobel Prize of Computing,” with Geoffrey Hinton and Yann LeCun.
He is a Full Professor at Université de Montréal and the Founder and Scientific Director of Mila – Quebec AI Institute. He co-directs the CIFAR Learning in Machines & Brains program as Senior Fellow and acts as Scientific Director of IVADO. He is also an alumnus of the Centre for Intelligent Machines.