
Gido van de Ven

KU Leuven

Gido van de Ven is a postdoctoral researcher at KU Leuven, Belgium. In his research, he uses insights and intuitions from neuroscience to make the behavior of deep neural networks more human-like. In particular, he is interested in the problem of continual learning, and generative models are his principal tool for addressing this problem. Previously, during his doctoral research at the University of Oxford, he used optogenetics and electrophysiological recordings in mice to study the role of replay in memory consolidation in the brain.

Three types of incremental learning: a framework for continual learning

Incrementally learning new information from a non-stationary stream of data, referred to as ‘continual learning’, is a key feature of natural intelligence, but an open challenge for deep learning. For example, standard deep neural networks tend to catastrophically forget previous tasks or data distributions when trained on a new one. Enabling these networks to incrementally learn, and retain, information from different contexts has become a topic of intense research. In this presentation I put forward the view that there are three fundamental types, or ‘scenarios’, of continual learning: task-incremental, domain-incremental and class-incremental learning. I argue that each of these scenarios has its own set of challenges, and I demonstrate that there are substantial differences between them in terms of difficulty and in terms of the effectiveness of different computational strategies.
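The distinction between the three scenarios can be made concrete with a toy example. The sketch below (an assumption for illustration, using the common Split MNIST setup in which the ten digits are divided into five contexts of two digits each) shows what the model is asked to predict in each scenario: with task-incremental learning the context identity is provided at test time, with domain-incremental learning only the within-context label must be predicted, and with class-incremental learning the model must also infer the context itself.

```python
# Illustrative sketch (assumption: Split MNIST, digits 0-9 grouped into
# five contexts of two digits each) of the prediction target in each of
# the three continual-learning scenarios.

CONTEXTS = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]

def context_of(digit):
    """Index of the context a digit belongs to."""
    return digit // 2

def task_incremental_target(digit):
    """Task-IL: the context identity is given at test time; the model
    only predicts the within-context label (0 or 1)."""
    context = context_of(digit)  # provided to the model as input
    within_context_label = digit % 2
    return context, within_context_label

def domain_incremental_target(digit):
    """Domain-IL: the context identity is NOT given; the model predicts
    the within-context label (0 or 1), whichever context the input is from."""
    return digit % 2

def class_incremental_target(digit):
    """Class-IL: the context identity is NOT given; the model predicts the
    global label (0-9), which requires implicitly inferring the context."""
    return digit

if __name__ == "__main__":
    for d in (3, 8):
        print(d, task_incremental_target(d),
              domain_incremental_target(d),
              class_incremental_target(d))
```

Class-incremental learning is the hardest of the three: the model must discriminate between classes that were never seen together during training, which is why methods that work well in the task-incremental setting can fail badly there.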