Biological neurons use short, stereotyped electrical impulses called “spikes” to compute and transmit information. Spike times, in addition to spike rates, are known to play an important role in how neurons process information. Spiking neural networks (SNNs) are thus more biologically realistic than the real-valued artificial neural networks used in deep learning, and are arguably the most viable option for understanding how the brain computes at the neuronal level of description. But SNNs are also appealing for AI technology, because they can be implemented efficiently on low-power neuromorphic chips. SNNs have been studied for several decades, yet interest in them has surged recently thanks to a major breakthrough: surrogate gradient learning (SGL). SGL allows SNNs to be trained with backpropagation, just like real-valued networks, and outperforms other training methods by a large margin. In my group, we have demonstrated that SGL enables the learning not only of connection weights but also of connection delays. These delays represent the time a spike needs to travel from the emitting neuron to the receiving one. Delays matter because they shift spike arrival times, which typically must be synchronous to trigger an output spike; plastic delays therefore significantly enhance the expressivity of SNNs. Although this fact is well established theoretically, efficient algorithms to learn these delays have been lacking. We proposed one such algorithm, based on a new convolution method known as dilated convolution with learnable spacings (DCLS), which outperforms previous proposals. Our results show that learning delays with DCLS in fully connected and convolutional SNNs strongly increases accuracy on several temporal vision and audition tasks, setting new states of the art. Our approach is relevant to the neuromorphic engineering community, as most existing neuromorphic chips support programmable delays. In the long term, our research could also shed light on the role of myelin plasticity in the brain.
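
To give a rough idea of how surrogate gradient learning works, here is a minimal PyTorch sketch (an illustration only, not the implementation used in the studies above): the forward pass emits a spike through a non-differentiable Heaviside step, while the backward pass substitutes a smooth surrogate derivative so that backpropagation can flow through the spiking nonlinearity. The `scale`, `beta`, and `threshold` values are arbitrary placeholders.

```python
# Illustrative sketch of surrogate gradient learning (not the authors' code).
import torch

class SurrogateSpike(torch.autograd.Function):
    scale = 10.0  # steepness of the surrogate derivative (placeholder value)

    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        # Heaviside step: 1 = spike, 0 = no spike (non-differentiable)
        return (membrane_potential > 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # Smooth surrogate derivative, 1 / (1 + scale*|u|)^2, replaces the
        # undefined derivative of the step function in the backward pass.
        surrogate = 1.0 / (1.0 + SurrogateSpike.scale * membrane_potential.abs()) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply

def lif_step(input_current, v, beta=0.9, threshold=1.0):
    """One leaky integrate-and-fire (LIF) time step using the surrogate spike."""
    v = beta * v + input_current          # leaky integration of input current
    spikes = spike_fn(v - threshold)      # step forward, surrogate gradient backward
    v = v - spikes * threshold            # soft reset after a spike
    return spikes, v
```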
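
The principle behind learning delays can likewise be sketched with standard PyTorch operations. The sketch below conveys the idea only and does not reproduce the DCLS package's API: each synaptic delay is a continuous, learnable position of that synapse's weight along the time axis of a 1D convolution kernel, and a Gaussian interpolation spreads the weight over neighbouring time bins so that gradients reach the delay parameter as well as the weight. `max_delay` and `sigma` are placeholder hyperparameters.

```python
# Illustrative sketch of delay learning via interpolated kernel positions
# (plain PyTorch; not the DCLS package itself).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DelayedSynapses(nn.Module):
    def __init__(self, n_in, n_out, max_delay=25, sigma=0.5):
        super().__init__()
        self.max_delay = max_delay
        self.sigma = sigma
        self.weight = nn.Parameter(torch.randn(n_out, n_in) * 0.1)
        # One learnable delay (in time steps) per connection, in [0, max_delay)
        self.delay = nn.Parameter(torch.rand(n_out, n_in) * max_delay)

    def temporal_kernel(self):
        # Build a dense causal kernel: a Gaussian bump of each weight,
        # centred on the kernel index corresponding to its continuous delay.
        t = torch.arange(self.max_delay, device=self.weight.device,
                         dtype=self.weight.dtype)
        d = self.delay.clamp(0, self.max_delay - 1).unsqueeze(-1)   # (n_out, n_in, 1)
        centre = (self.max_delay - 1) - d                           # index of delay d
        bump = torch.exp(-0.5 * ((t - centre) / self.sigma) ** 2)
        bump = bump / (bump.sum(dim=-1, keepdim=True) + 1e-8)
        return self.weight.unsqueeze(-1) * bump                     # (n_out, n_in, kernel)

    def forward(self, spikes):
        # spikes: (batch, n_in, time); left padding keeps the convolution causal,
        # so each input spike train is shifted into the past by its learned delay.
        kernel = self.temporal_kernel()
        padded = F.pad(spikes, (self.max_delay - 1, 0))
        return F.conv1d(padded, kernel)                             # (batch, n_out, time)
```

Because the Gaussian interpolation is differentiable with respect to the delay, ordinary backpropagation updates both `weight` and `delay` in this sketch; the sharpness parameter `sigma` controls how far each weight is smeared across neighbouring time bins.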