Timothée Masquelier

CNRS

Timothée Masquelier is a researcher in AI & computational neuroscience. His research is highly interdisciplinary – at the interface between biology, computer science, and physics. He uses numerical simulations and analytical calculations to understand how the brain works, and more specifically how neurons process, encode, and transmit information through action potentials (a.k.a. spikes), in particular in the visual and auditory modalities. He is also interested in bio-inspired computer vision and audition. He is convinced that spiking neural networks (SNNs) can be an appealing alternative to the conventional artificial neural networks commonly used in AI, especially if implemented on low-power neuromorphic chips.
He was trained at Ecole Centrale Paris (Ingénieur 1999), MIT (M.Sc. 2001), and Univ. Toulouse 3 (PhD 2008). He joined the CNRS in 2012.

Learning delays with backprop in SNNs - a new method based on Dilated Convolutions with Learnable Spacings

Biological neurons use short, stereotyped electrical impulses called “spikes” to compute and transmit information. Spike times, in addition to spike rates, are known to play an important role in how neurons process information. Spiking neural networks (SNNs) are thus more biologically realistic than the real-valued artificial neural networks used in deep learning, and are arguably the most viable option if one wants to understand how the brain computes at the neuronal level of description. But SNNs are also appealing for AI technology, because they can be implemented efficiently on low-power neuromorphic chips. SNNs have been studied for several decades, yet interest in them has surged recently thanks to a major breakthrough: surrogate gradient learning (SGL). SGL makes it possible to train SNNs with backpropagation, just like real-valued networks, and outperforms other training methods by a large margin.

In my group, we have demonstrated that SGL enables the learning of not only connection weights but also connection delays. A delay is the time a spike needs to travel from the emitting neuron to the receiving one. Delays matter because they shift spike arrival times, and arriving spikes must be roughly synchronous to trigger an output spike; plastic delays therefore significantly enhance the expressivity of SNNs. While this is well established theoretically, efficient algorithms for learning delays have been lacking. We proposed one such algorithm, based on a new convolution method known as dilated convolution with learnable spacings (DCLS), which outperforms previous proposals. Our results show that learning delays with DCLS in fully connected and convolutional SNNs strongly increases accuracy on several temporal vision and audition tasks, setting new states of the art. Our approach is relevant for the neuromorphic engineering community, as most existing neuromorphic chips support programmable delays. In the long term, our research could also provide insights into the role of myelin plasticity in the brain.
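To make the surrogate-gradient idea concrete, here is a minimal PyTorch sketch (not code from the talk): the spike non-linearity is a hard threshold in the forward pass, and its undefined derivative is replaced by a smooth surrogate in the backward pass. The fast-sigmoid surrogate and its slope are arbitrary illustrative choices.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Spike non-linearity: hard threshold forward, smooth surrogate backward."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        # Emit a spike (1.0) when the membrane potential crosses the threshold (0 here)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Fast-sigmoid surrogate derivative; the slope (10.0) is an arbitrary choice
        surrogate = 1.0 / (1.0 + 10.0 * v.abs()) ** 2
        return grad_output * surrogate

spike = SurrogateSpike.apply  # usable like any differentiable activation
```

This is what lets backpropagation flow through spiking neurons even though the spike itself is non-differentiable.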
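The delay-learning idea can be sketched in the same spirit. In DCLS, a synaptic delay corresponds to the position of a non-zero element inside a 1D temporal convolution kernel, relaxed to a smooth bump so that the position is differentiable. The toy module below is a conceptual simplification in plain PyTorch, with hypothetical names and a fixed Gaussian width; it is not the authors' DCLS implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableDelaySynapses(nn.Module):
    """Illustrative sketch: each synapse has a weight and a real-valued delay.
    The delay is the centre of a Gaussian bump inside a causal temporal kernel,
    so it can be trained with backprop (a simplified version of the DCLS idea)."""

    def __init__(self, in_channels, out_channels, max_delay=25, sigma=2.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_channels, in_channels) * 0.1)
        # One learnable delay per (output, input) pair, in time steps
        self.delay = nn.Parameter(torch.rand(out_channels, in_channels) * max_delay)
        self.register_buffer("t", torch.arange(max_delay).float())
        self.sigma = sigma
        self.max_delay = max_delay

    def forward(self, x):
        # x: (batch, in_channels, time) spike trains
        pos = self.delay.clamp(0, self.max_delay - 1).unsqueeze(-1)    # (out, in, 1)
        # Gaussian bump centred on each learnable delay position
        kernel = torch.exp(-0.5 * ((self.t - pos) / self.sigma) ** 2)  # (out, in, K)
        kernel = kernel / kernel.sum(-1, keepdim=True)
        kernel = kernel * self.weight.unsqueeze(-1)
        # Causal convolution: left padding + kernel flip so index d acts as a delay of d steps
        x = F.pad(x, (self.max_delay - 1, 0))
        return F.conv1d(x, kernel.flip(-1))
```

In the published method, the width of the bump is typically narrowed over the course of training so that each synapse converges toward a single discrete delay, which maps naturally onto the programmable delays of neuromorphic hardware.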