Xavier Hinaut

Inria

Xavier Hinaut has been a Research Scientist in Bio-inspired Machine Learning and Computational Neuroscience at Inria, Bordeaux, France since 2016. He received an MSc and an Engineering degree from Compiègne University of Technology (UTC), France in 2008, an MSc in Cognitive Science & AI from EPHE, France in 2009, and his PhD from Lyon University, France in 2013. He is a member (Vice Chair) of the IEEE CIS Task Force on Reservoir Computing. His work lies at the frontier of neuroscience, machine learning, robotics and linguistics: from the modeling of human sentence processing to the analysis of birdsongs and their neural correlates. He both uses reservoirs for machine learning (e.g. birdsong classification) and builds models with them (e.g. sensorimotor models of how birds learn to sing). He manages the “DeepPool” ANR project on human sentence modeling with Deep Reservoir architectures and the Inria Exploratory Action “BrainGPT” on Reservoir Transformers.
He leads the development of ReservoirPy, the most up-to-date Python library for Reservoir Computing: https://github.com/reservoirpy/reservoirpy. He is also involved in public outreach, notably by organising hackathons that have produced fun reservoir projects (e.g. the ReMi project, in which reservoirs generate MIDI and sounds).

Tutorial: Reservoir Computing & recent works

Reservoir Computing (RC) is an alternative machine learning paradigm for time series processing that is particularly efficient in terms of data and computational requirements. It generally relies on the partial training of a recurrent neural network: the recurrent part stays random and fixed, and only the readout is trained, which simplifies the learning process. Its advantages lie in its low energy consumption and its ability to learn from limited data. Reservoir Computing is comparable to a temporal Support Vector Machine (SVM) in its use of high-dimensional random projections. Surprisingly, other physical substrates can act as reservoirs, for example light, enabling computation across various physical systems. We are developing ReservoirPy, a Python library, to make the RC paradigm accessible and reproducible. ReservoirPy allows for the easy design of complex RC architectures and their use in time series prediction or classification tasks (a minimal usage sketch is given below). In this presentation, I will introduce the philosophy and intuitions behind Reservoir Computing, followed by a demonstration of how to use ReservoirPy to apply this paradigm to different types of problems.

Language involves several levels of abstraction, from small sound units like phonemes to contextual sentence-level understanding. Large Language Models (LLMs) have shown an impressive ability to predict human brain recordings. For instance, while a subject listens to a book chapter from Harry Potter, LLMs can predict parts of the brain imaging activity (recorded by functional Magnetic Resonance Imaging or Electroencephalography) at the phoneme or word level. These striking results are likely due to their hierarchical architectures and massive training data. Despite these feats, LLMs differ significantly from how our brains work and provide little insight into the brain’s language processing. We will see how simple recurrent neural networks like Reservoir Computing can model language acquisition from limited and ambiguous contextual data better than LSTMs. Building on these results, the BrainGPT project explores various architectures inspired by both reservoirs and LLMs, combining random projections and attention mechanisms to build models that can be trained faster, with less data and greater biological insight.
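To make the "partial training" point concrete, below is the standard leaky-integrator echo state network formulation common in the RC literature (a generic textbook sketch; the notation is assumed here and may differ from the talk):

```latex
% Reservoir state update: W_in and W are random and fixed,
% \alpha is the leak rate; only the readout W_out is trained
% (typically by ridge regression on the collected states x(t)).
x(t+1) = (1 - \alpha)\, x(t) + \alpha \tanh\!\big(W_{\mathrm{in}}\, u(t+1) + W\, x(t)\big)
y(t) = W_{\mathrm{out}}\, x(t)
```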
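And here is a minimal ReservoirPy sketch of that pipeline on a toy one-step-ahead prediction task (the hyperparameter values are illustrative, not tuned for this task):

```python
# Minimal echo state network with ReservoirPy (illustrative hyperparameters).
import numpy as np
from reservoirpy.nodes import Reservoir, Ridge

# Toy task: predict a sine wave one step ahead.
X = np.sin(np.linspace(0, 6 * np.pi, 300)).reshape(-1, 1)
X_train, y_train = X[:-101], X[1:-100]   # inputs and one-step-ahead targets
X_test, y_test = X[-101:-1], X[-100:]

reservoir = Reservoir(100, lr=0.3, sr=1.25)  # 100 random units; lr = leak rate, sr = spectral radius
readout = Ridge(ridge=1e-6)                  # ridge-regression readout, the only trained part

esn = reservoir >> readout        # chain the two nodes into an echo state network
esn = esn.fit(X_train, y_train)   # fits only the readout weights
y_pred = esn.run(X_test)          # reservoir state carries over, fine for a contiguous series

print("test MSE:", float(np.mean((y_pred - y_test) ** 2)))
```

The `>>` operator chains nodes into a model; the same composition style is how ReservoirPy builds the more complex architectures (e.g. deep reservoirs) mentioned above.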