Alessandro Londei

Sony CSL – Rome

Alessandro Londei is a physicist with a PhD in Electronic Engineering and Computer Science from Rome, Italy. In his scientific career, he has mainly worked on theoretical models of artificial cognition in connectionist and symbolic domains. He has worked on the development of innovative methods for the analysis of Functional Magnetic Resonance Imaging (fMRI) sequences, as well as on the identification of functional brain circuits using entropic techniques. He has also developed several ‘What-If’ models in European projects to study social and political conflict resolution in coastal areas and European youth mobility. From 2001 to 2010, he taught Artificial Intelligence and Neural Networks at the Faculty of Psychology, Sapienza University of Rome. Since 2018, he has been a researcher at Sony CSL, first in Paris and from 2022 in Rome. His current research concerns supporting human creativity through innovative AI and machine learning methods, with recent applications to studying body movement and dance.

Exploring the Adjacent Possible in AI as a metaphor for human creativity

A crucial element in the ability of artificial systems to support anticipation, inclusion and novelty search is the capacity to efficiently explore the space of the Adjacent Possible, introduced by Stuart Kauffman. Such a search requires exploring the space of the possible using the information currently stored in the artificial system, together with a generative mechanism for including and selecting novel elements, so as to improve the learning of a non-stationary reference system. To this end, we introduce the Dreaming Learning technique, which equips a neural network with an explorative tool for anticipating and predicting potential new items or sequences consistent with previously stored knowledge. This method offers many advantages in terms of resilience to unexpected paradigm shifts, non-stationary sequences, and auto-correlation; the latter is especially relevant for improving textual coherence in language models.

However, this technique is limited to one-dimensional, discrete-valued time sequences. To overcome this problem, we sought to exploit the nature of chaotic systems, whose trajectories diverge along specific manifolds while remaining dissipative. We then constructed artificial neural networks capable of controlling the spectrum of Lyapunov exponents, used as a regularization technique coupled with traditional learning algorithms based on backpropagation. The results are encouraging, both in defining chaotic attractors through their macroscopic features and in making neural-network learning of multi-dimensional non-stationary sequences far more receptive to including and anticipating novelties.
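The abstract mentions controlling the spectrum of Lyapunov exponents as a regularizer coupled with backpropagation. As a rough illustration only, not the authors' actual method, the sketch below estimates the Lyapunov spectrum of a small driven tanh recurrent map using the standard QR re-orthonormalisation procedure, and forms a hypothetical penalty measuring the distance of that spectrum from a desired target. The RNN form, the function names, and the penalty shape are all assumptions made for the example.

```python
import numpy as np

def lyapunov_spectrum(W, xs, h0, n_exp=None):
    """Estimate the Lyapunov spectrum of the driven tanh map
    h_{t+1} = tanh(W h_t + x_t) along one trajectory, by evolving a
    set of tangent vectors and re-orthonormalising them with QR."""
    n = W.shape[0]
    n_exp = n_exp or n
    h = h0
    Q = np.eye(n)[:, :n_exp]          # tangent basis
    log_r = np.zeros(n_exp)
    for x in xs:
        h = np.tanh(W @ h + x)
        J = (1.0 - h**2)[:, None] * W  # Jacobian of the map at this step
        Q, R = np.linalg.qr(J @ Q)
        log_r += np.log(np.abs(np.diag(R)) + 1e-12)
    return log_r / len(xs)             # average exponential growth rates

def lyapunov_penalty(W, xs, h0, target):
    """Hypothetical regulariser: squared distance between the estimated
    spectrum and a target spectrum (e.g. mildly chaotic or contractive)."""
    lam = lyapunov_spectrum(W, xs, h0, n_exp=len(target))
    return float(np.sum((lam - np.asarray(target)) ** 2))

rng = np.random.default_rng(0)
n = 8
W = 0.5 * rng.standard_normal((n, n)) / np.sqrt(n)  # contractive regime
xs = 0.1 * rng.standard_normal((500, n))
lam = lyapunov_spectrum(W, xs, np.zeros(n))
print(lam[0] < 0)  # small weights give a negative largest exponent
```

In a full training loop, such a penalty would be added to the task loss so that backpropagation shapes the network's chaotic behaviour alongside its predictive accuracy; differentiating through the QR steps (or using a surrogate estimate) would be required, which this sketch does not attempt.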