The &inCSL seminar series is hosted by the Sony CSL Paris laboratories. &inCSL (étincelle) means "spark" in French, and we hope these meetings will ignite new ideas and collaborations. You can watch the seminars on our YouTube page, or you can participate actively by joining the meetings through Microsoft Teams. To do so, and/or to receive updates about the calendar, please register on this webpage.
Sony R&D Center Europe-Stuttgart Laboratory 1 (SL1)
Vrije Universiteit Amsterdam
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
ENSIIE and LISN
Sony Computer Science Laboratories Paris
Leibniz IPHT Jena
Sony Computer Science Laboratories Paris
Sony CSL Tokyo
RWTH Aachen University
University of Memphis
Enrico Fermi Center
University of Cambridge, Bio-Inspired Robotics Laboratory
Sony Computer Science Laboratories Paris
MSC-University of Paris
Université Paris III-Sorbonne Nouvelle
Sony CSL Kyoto
Vrije Universiteit Brussel
University of the Arts London
ICREA Barcelona
Complexity Science Hub Vienna
University of Southern California
Université Paris Diderot
Wageningen University & Research
Sony CSL Paris
University of Catania
University of Tokyo
Tallinn University of Technology
Former researcher at Sony CSL Paris
UN World Food Programme
Sony Computer Science Laboratories Paris
University of the Philippines
Sony Computer Science Laboratories Paris
Technical University of Denmark
UC San Diego
Sony Computer Science Laboratories Paris
Institute of Intelligent Systems and Robotics (ISIR), Sorbonne University, Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Sony Computer Science Laboratories Paris
Fritz Höhl earned his Ph.D. in Computer Science at the University of Stuttgart, Germany. Since 2000, he has worked at Stuttgart Lab 1 of the Stuttgart Technology Center, which hosts two corporate-level research labs of Sony. For the last 10 years, Fritz has worked in the area of Natural Language Processing.
Frank van Harmelen has a PhD in Artificial Intelligence from Edinburgh University, and has been professor of AI at the Vrije Universiteit Amsterdam since 2001, where he leads the research group on Knowledge Representation. He was one of the designers of the knowledge representation language OWL, which is now in use by companies such as Google, the BBC, the New York Times, Amazon, Uber, Airbnb, Elsevier and Springer Nature, among others. He co-edited the standard reference work in his field (The Handbook of Knowledge Representation), and received the Semantic Web 10-year impact award for his work on the Sesame RDF triple store. He is a Fellow of the European Association for Artificial Intelligence, a member of the Dutch Royal Academy of Sciences (KNAW), of The Royal Holland Society of Sciences and Humanities (KHMW) and of the Academia Europaea, and is adjunct professor at Wuhan University and Wuhan University of Science and Technology in China.
Massimo Vergassola holds a joint position as Professor at the Ecole Normale Supérieure in Paris and CNRS Directeur de Recherche. He is the Director of the ENS-PSL QBio initiative on Quantitative Biology, which was selected to be part of the PariSanté Val-de-Grâce Campus. After his education in Italy and France and his postdoc at Princeton University, Vergassola received a tenured research position at the French CNRS for work on the statistical physics of fluids. His CNRS position was held with joint appointments at the Ecole Polytechnique, and at the Pasteur Institute as the head of the Physics of Biological Systems group. In 2013-19, Vergassola was a Professor at the University of California San Diego and a founding member of the Qbio initiative at the UCSD campus. Vergassola was a visiting scientist at Rockefeller University, KITP, IAS, LANL, and IHES; he was a plenary speaker at Statphys25; and he served on the boards and in the leadership of a variety of professional journals and associations. He was Chair of the Biological Physics Division of the American Physical Society. Vergassola’s awards include the Grand Prix EADS from the French Academy of Sciences, the Fellowship and the Outstanding Referee awards from the American Physical Society, the CNRS Bronze Medal, the Biomedical prize Thérèse Lebrasseur from the Fondation de France, the Accademia dei Lincei student award, and grant awards from the Simons Foundation and the Fondation Recherche Médicale.
Humans constantly create narratives to provide explanations for how and why something happens. Sherlock Holmes is known for his observation and logical reasoning skills, and is called upon to uncover the identity and motivations of culprits. In other words, we constantly attempt to make sense of different inputs and to come up with a coherent story. In my research, I investigate how to computationally build structured narratives with knowledge graphs as inputs. The objective is to find and implement a meaningful knowledge representation, and to find relevant inputs for the output graph. I am particularly interested in dynamic representations and in reasoning over patterns in sequences of events. By adding this reasoning step, we will be able to form new hypotheses and, therefore, to discover new knowledge.
I have always been fascinated by how simple fundamental laws of nature can generate the complexity of our world. Complexity science is one of the modern tools of science for understanding biological and social interactions. Simple structures mix and combine to give rise to higher-order objects, and many parts of this long process are still unknown and hard to discover. In my research, I want to explore different topics to discover why seemingly unrelated systems actually work in similar ways. In particular, I am interested in the causes of human behaviour, and in how complex interactions are born and evolve from the biological and social needs of living creatures. I believe understanding these drivers can help create a better and more harmonious world. After finishing my PhD in network science, I joined Sony CSL in Rome to work in the creativity team, in particular on topics of urban mobility. Here we work on exciting new projects at the forefront of science, with a focus on research that can be useful for building a better future and society. Every day there is an opportunity for me to learn and discover something new.
Ryota Kanai is the founder & CEO of Araya, Inc. After graduating from the Faculty of Science at Kyoto University in 2000, he received his PhD (Cum Laude) in 2005 from Utrecht University in the Netherlands, where he studied human visual information processing mechanisms. After working as a researcher at the California Institute of Technology in the U.S. and University College London in the U.K., and as a JST PRESTO researcher and Associate Professor of Cognitive Neuroscience at the University of Sussex in the U.K., he founded Araya, Inc. and has worked there full time since 2015. He is engaged in research on the principles of consciousness in the brain and the implementation of consciousness in AI through the fusion of neuroscience and information theory. He has also been working on the practical application of AI and neuroscience in industry. He has received many awards, including the Young Scientist Award from the Ministry of Education, Culture, Sports, Science and Technology, the JEITA Venture Award (2020), and the ET/IoT Technology Award (2019), among others, the latter received as Araya, Inc. Since 2020, he has been working on the practical application of brain-machine interfaces as a project manager of the Moonshot Project in the Cabinet Office. Vittorio Loreto, PhD, is a Full Professor of Physics at Sapienza University in Rome and the director of Sony Computer Science Laboratories (CSL) in Paris. He recently founded Sony CSL in Rome. His research activity focuses on complexity science and its interdisciplinary applications. Over the past years, he has been active in several fields, from granular media to complexity and information theory, from social dynamics to sustainability. His recent interests focus on unfolding the dynamics of creativity, novelties, and innovation.
The key to this endeavour is to grasp the structure and dynamics of the "space of possibilities" to develop solid mathematical modelling of how human and artificial systems - biological, technological, social - explore the new at the individual and collective levels. This knowledge can help conceive the next generation of Artificial Intelligence algorithms able to cope with the occurrence of novelties, bridging, in this way, the gap between inference and unexpected events. An important application of all this is related to the Sustainable Development Goals (SDG) and how humanity can conceive new sustainable solutions to long-standing challenges. Through the newly founded CSL Rome, I'm addressing these challenges in the relevant areas of sustainable cities and information and social dialogue. Jun Rekimoto received his Ph.D. in Information Science from the Tokyo Institute of Technology in 1996. Since 1994 he has worked for Sony Computer Science Laboratories (Sony CSL). In 1999 he formed and directed the Interaction Laboratory within Sony CSL. Since 2007 he has been a professor in the Interfaculty Initiative in Information Studies at The University of Tokyo. Since 2011 he has also been Deputy Director of Sony CSL. Rekimoto’s research interests include human-computer interaction, computer augmented environments, human augmentation, and human-AI-integration. He invented various innovative interactive systems and sensing technologies, including NaviCam (a hand-held AR system), Pick-and-Drop (a direct-manipulation technique for inter-appliance computing), CyberCode (the world’s first marker-based AR system), Augmented Surfaces, HoloWall, and SmartSkin (two of the earliest multi-touch systems). He is a member of the ACM SIGCHI Academy, is very widely published, and has won numerous research and design awards for his research. Lana Sinapayen is an Artificial Life and Artificial Intelligence researcher at Sony Computer Science Laboratories in Japan.
She specialises in predictive coding (the role of prediction in intelligence), artificial perception (sensory illusions in neural networks), and measures of complexity. She has a keen interest in all forms of intelligence, especially the unexpected ones. She is an Associate Editor for the Journal of Artificial Life and is involved in outreach and equity for the International Society for Artificial Life. She is also a member of the Early Career Advisory Group for the eLife Journal, and is currently working on a web platform for collaborative open science called "Mimosa". Olaf Witkowski is the founding director of Cross Labs, a research institute based in Kyoto that focuses on the fundamental principles of natural and artificial intelligence, connecting research in academia and industry. He also leads projects as an executive officer at the AI company Cross Compass Ltd., serves as a lecturer at the University of Tokyo, and is a researcher at the Tokyo Institute of Technology. He is also the vice president of the International Society for Artificial Life, and recently co-founded ALife Japan.
Mathilde Marengo is an Australian-French-Italian architect with a PhD in Urbanism, whose research focuses on the contemporary urban phenomenon, its integration with technology, and its implications for the future of our planet. Within today’s critical environmental, social and economic framework, she investigates the responsibility of designers in answering these challenges through circular and metabolic design. She is Head of Studies, Faculty and PhD Supervisor at the Institute for Advanced Architecture of Catalonia’s Advanced Architecture Group (AAG), an interdisciplinary research group investigating emerging technologies of information, interaction and manufacturing for the design and transformation of cities, buildings and public spaces. Within this context, Mathilde researches, designs and experiments with innovative educational formats based on holistic, multi-disciplinary and multi-scalar design approaches, oriented towards materialization, within the AAG agenda of redefining the paradigm of design education in the Information and Experience Age. Her investigation is also carried out through her role in several national and EU-funded research projects, among them Innochain, Knowledge Alliance for Advanced Urbanism, BUILD Solutions, Active Public Space, Creative Food Cycles, and more. Her work has been published internationally and exhibited at, among others, the Venice Biennale, the Shenzhen Bi-City Biennale, Beijing Design Week, and MAXXI Rome. About IAAC: The Institute for Advanced Architecture of Catalonia (IAAC) is a centre for research, education, production and outreach, with the mission of envisioning the future habitat of our society and building it in the present. (iaac.net) About AAG: IAAC's Advanced Architecture Group (AAG) is an interdisciplinary research group investigating emerging technologies of information, interaction and manufacturing for the design and transformation of cities, buildings and public spaces.
Big & small data, responsiveness, smart energy systems, artificial intelligence, robotics, advanced materials, and additive manufacturing are a few of the key topics developed by the AAG to explore how technologies can contribute to activating, socialising and establishing new responsive inhabitation models. (iaac.net/research-departments/advanced-architecture-group/)
Anne-Laure Ligozat is an associate professor in computer science at ENSIIE and LISN (Paris-Saclay & CNRS). Her research interests concern the environmental impacts of Information and Communication Technologies, in particular of Artificial Intelligence. She is also the sustainable development representative for her laboratory and her institution.
We often relate to innovation as if it could be an answer to our questions and needs. But what if innovations are actually the questions? Each advance raises an increasing number of open questions. On one side, innovations unlock a wider range of opportunities, possibly leading to new discoveries (at both the collective and individual level). On the other, each creative exploit can have a potentially huge impact on humans at the cultural, societal, and psychological level. The focus of my research is twofold: the study of the exploration of the space of the unknown (in particular the so-called Adjacent Possible), and the assessment of the impact of this exploration on humans. I try to understand exploration behaviours in cultural systems, how creativity can emerge in such processes, and the impact of innovations. The aim is to learn how we can use technology to "augment" our exploration and to improve our creativity.
After completing an apprenticeship as an electrician, Benedict Diederich studied electrical engineering at the University of Applied Sciences Cologne. A specialisation in optics and an internship at Nikon Microscopy Japan led him to the interdisciplinary field of microscopy. After working for Zeiss, he completed his PhD in the Heintzmann Lab at the Leibniz IPHT Jena, where he focused on bringing cutting-edge research to everybody through tailored image processing and low-cost optical setups. Part of his PhD programme took place in the Tian Lab at the Photonics Center at Boston University. A recent contribution is the open-source optical toolbox UC2 (You-See-Too), which aims to democratise science by making cutting-edge research affordable and available to everyone, everywhere.
We are usually unaware of the enormous computing power needed by our brain when listening to music. When trying to make sense of music, we constantly have to classify, sort, remember, structure, and connect a vast number of musical events. Moreover, these events do not only consist of notes, chords, and rhythms but are also characterized by "colors of sound." These ever-changing frequencies, resulting in complex soundscapes, are at the heart of our musical experiences. I use computer models to simulate the cognitive processes involved when listening to music, to create better tools for music production and music analysis. Creating compositions, musical arrangements, and unique sounds using machine learning and artificial intelligence will lead to a streamlined music production workflow and to entirely different ways to engage with music as a whole.
Alexis André is an artist, researcher and designer aiming to redefine entertainment. In this golden age of computation and data overflow, why is our entertainment still designed to be consumed passively? A few media offer interactive experiences, but none of them are designed specifically for you. Alexis is working towards a future where you can enjoy unique experiences tailored to your preferences, where the power of generative systems is leveraged to offer individually custom-created pieces. As a first implementation of this concept, he created the robot toy platform “toio”, which has gathered various design awards (iF Design, Red Dot, Good Design…). His generative art pieces have been showcased all around the world (Siggraph, Tokyo, Art Basel Miami, COP26 in Glasgow…) and auctioned at Sotheby’s and Christie’s. Engineer (Information Science and Energy, Supélec, 2003), M.S., Ph.D. (Computer Science, Tokyo Institute of Technology, 2004, 2009), Researcher at Sony Computer Science Laboratories, Inc., Tokyo since 2009.
Matuszyńska is a Junior Professor in Computational Life Science at RWTH Aachen University, Germany. She received her engineering and master’s degrees in mathematics from the Gdańsk University of Technology in Poland and earned a second master’s degree in drug discovery from Aberdeen University in Scotland. In 2016, Matuszyńska completed her doctoral degree in computational biology at Heinrich-Heine University Düsseldorf, Germany. Her group develops mechanistic, computational models of plant primary and secondary metabolism, with a particular focus on photosynthesis and light regulation in the context of agricultural limitations, plant bioenergetics and interactions between organisms.
Andrew M. Olney presently serves as Professor in both the Institute for Intelligent Systems and Department of Psychology at the University of Memphis. Dr. Olney received a B.A. in Linguistics with Cognitive Science from University College London in 1998, an M.S. in Evolutionary and Adaptive Systems from the University of Sussex in 2001, and a Ph.D. in Computer Science from the University of Memphis in 2006. His primary research interests are in natural language interfaces. Specific interests include vector space models, dialogue systems, unsupervised grammar induction, robotics, and intelligent tutoring systems.
Since January 2021 he has served as Scientific Director of the “Enrico Fermi Center” in Rome and, since 2019, he has been Associate Professor of Physics at the Engineering Department of "Roma Tre" University in Rome. He was previously a tenured researcher at the Istituto dei Sistemi Complessi (ISC) of the Italian National Research Council (CNR), c/o the Physics Department of the Sapienza University of Rome, Italy. His research activity covers complex systems, stochastic processes and network theory, with applications to physical, social, economic and biological systems.
Fumiya Iida is a Professor of Robotics at the Department of Engineering, University of Cambridge, the director of the Bio-Inspired Robotics Laboratory, and the deputy director of the EPSRC Centre for Doctoral Training in Agri-Food Robotics. He received his bachelor's and master's degrees in mechanical engineering at the Tokyo University of Science (Japan, 1999), and his Dr. sc. nat. in Informatics at the University of Zurich (2006). In 2004 and 2005, he was also engaged in biomechanics research on human locomotion at the Locomotion Laboratory, University of Jena (Germany). From 2006 to 2009, he worked as a postdoctoral associate at the Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, in the USA. In 2006, he was awarded the Fellowship for Prospective Researchers from the Swiss National Science Foundation, and in 2009 the Swiss National Science Foundation Professorship, which he held as an assistant professor at ETH Zurich from 2009 to 2015. He was a recipient of the IROS2016 Fukuda Young Professional Award, the Royal Society Translation Award in 2017, and the Tokyo University of Science Award in 2021. His research interests include biologically inspired robotics, embodied artificial intelligence, and biomechanics, and he has been involved in a number of research projects related to dynamic legged locomotion, dexterous and adaptive manipulation, human-machine interactions, and evolutionary robotics.
The unique mixture of sustainability, research, and software engineering immediately attracted me to Sony CSL. I am an engineer at heart. I love solving problems and making things work. Well.
I have worked on a wide range of software development projects: writing music for video games, mission control software for an environmental monitoring satellite, ticketing systems, motor control, ultra-low power vehicle tracking devices and more. From design and development of embedded devices to large-scale networking systems, my mission is to deliver quality, well designed software. Yeah, I'm a geek!
Sustainability plays an important part in my family life. It influences the decisions we make daily from the food we eat, to the products we buy, and the modes of transport we adopt. So I am happy to bring my experience to the Robotics for Microfarms project to build novel tools that can help multi-crop organic micro-farms stay economically viable in a world where cost is often more important than consequence. The work that the sustainability team does is important to all of us, and I'm proud to be a part of it. Plus... I build ROBOTS!
Ada Altieri obtained her Ph.D. in Theoretical Physics in February 2018 from both the University of Rome “Sapienza” and the University Paris-Sud XI, co-advised by Giorgio Parisi and Silvio Franz. During her Ph.D. she focused on renormalization group techniques in disordered systems as well as on the connections between continuous constraint satisfaction problems and zero-temperature glassy phases in high dimensions. In November 2017 she moved to the Laboratoire de Physique Théorique of the École Normale Supérieure in Paris, obtaining a postdoctoral fellowship to work with Francesco Zamponi on the rheology of amorphous solids under shear deformations. One year later she joined the team of Giulio Biroli, dealing with various topics on which the Simons Collaboration “Cracking the Glass Problem” is based. In 2020 she was awarded the L’Oréal-UNESCO Fellowship for Women in Science for her research on “Ecosystems’ complexity through the prism of statistical physics”. Since December 2020 she has been an Associate Professor in the lab “Matière et Systèmes Complexes” at the University of Paris.
Quentin Feltgen studied Statistical Physics at the Ecole Normale Supérieure de Paris. He devoted his PhD thesis to the topic of language change, in the Laboratoire de Physique Statistique at the ENS, under the joint supervision of Jean-Pierre Nadal and Benjamin Fagard. Since then, he has worked at the ICM (Paris Brain Institute) on decision-making, and at the Université Sorbonne Nouvelle on the ANR research project Pro-Text, studying the statistical regularities of bursts of language production in typing tasks. He is also working on the dynamical systemic organization of language over time, to which he will devote an upcoming three-year project at Ghent University.
Artificial Life and Artificial Intelligence researcher at Sony Computer Science Laboratories in Japan. She specialises in predictive coding (the role of prediction in intelligence), artificial perception (sensory illusions in neural networks), and measures of complexity. She has a keen interest in all forms of intelligence, especially the unexpected ones. She is an Associate Editor for the Journal of Artificial Life and is involved in outreach and equity for the International Society for Artificial Life. She is also a member of the Early Career Advisory Group for the eLife Journal, and is currently working on a web platform for collaborative open science called "Mimosa".
Lara Verheyen completed a Master in Linguistics at KU Leuven (Belgium). After graduating in 2018, she completed an Advanced Master in Artificial Intelligence with a specialization in Speech and Language Technology. Currently, Lara is a PhD student at the Artificial Intelligence Laboratory at the Vrije Universiteit Brussel (VUB) under the supervision of Prof. Dr. Katrien Beuls. Her PhD research focusses on building truly intelligent systems that interact with humans about their shared environment. Specifically, the goal of these systems is to hold coherent and meaningful conversations with humans. To achieve this, the systems build up knowledge during the conversation and ground the conversation in the shared environment and the acquired knowledge. Lara is particularly interested in operationalizing these systems through a hybrid approach that combines symbolic and subsymbolic techniques.
Pierre Baudot graduated in 1998 from the École Normale Supérieure (Ulm) magister of biology, and completed his PhD on the electrophysiology of visual perception, studying learning and information coding in natural conditions. He started to develop information topology with Daniel Bennequin at the Complex Systems Institute and the Mathematical Institute of Jussieu from 2006 to 2013, and then at the Max Planck Institute for Mathematics in the Sciences in Leipzig. He then joined Inserm in Marseille to develop data applications, notably in transcriptomics. Since 2018, he has worked at Median Technologies, a medical imaging AI company, to detect and predict cancers from CT scans. He received the K2 trophy (mathematics and applications, 2017) and the best entropy paper prize 2019 for his contributions to topological information data analysis.
David Rousseau received the M.S. degree in physics and signal processing from the Institut de Recherche Coordination Acoustique et Musique (IRCAM), Paris, France, in 1996 and the Ph.D. degree in signal and image processing from Université d’Angers, Angers, France, in 2004. From 2010 to 2017, he was a Full Professor of image processing applied to bioimaging with CREATIS, Université Lyon 1, France. Since 2018, he has headed a Bioimaging Research Group at Université d’Angers. His research interests currently include computational instrumentation, machine-learning-based computer vision, and their applications to the life sciences. Contact: email@example.com.
Dr Rebecca Fiebrink makes new accessible and creative technologies. As a Reader at the Creative Computing Institute at University of the Arts London, her teaching and research focus largely on how machine learning and artificial intelligence can change human creative practices. Fiebrink is the developer of the Wekinator creative machine learning software, which is used around the world by musicians, artists, game designers, and educators. She is the creator of the world’s first online class about machine learning for music and art. Much of her work is driven by a belief in the importance of inclusion, participation, and accessibility: she works frequently with human-centred and participatory design processes, and she is currently working on projects related to creating new accessible technologies with people with disabilities, and designing inclusive machine learning curricula and tools. Dr. Fiebrink previously taught at Goldsmiths University of London and Princeton University, and she has worked with companies including Microsoft, Smule, and Imagine Research. She holds a PhD in Computer Science from Princeton University.
Luc Steels studied linguistics at the University of Antwerp (Belgium) and computer science at the Massachusetts Institute of Technology (USA). His main research field is Artificial Intelligence, covering a wide range of intelligent abilities, including vision, robotic behavior, conceptual representations and language. His work has found applications in knowledge-based systems, autonomous robots and digital community memories for the management of a commons based on community engagement and citizen science. In 1983 he founded the Artificial Intelligence Laboratory at the University of Brussels (VUB) and became a professor of computer science. In 1990 he co-founded the computer science department at the VUB and served as its first chairman (from 1990 until 1995). He founded the Sony Computer Science Laboratory in Paris in 1996 and was its first director, until 2014. Currently he is ICREA research professor at the Institute for Evolutionary Biology (CSIC, UPF) in Barcelona. ICREA is the Catalan Institution for Research and Advanced Studies. Steels has participated in dozens of large-scale European projects, and more than 30 PhD theses have been granted under his direction. He has produced over 200 articles and edited 15 books directly related to his research. During the past decade he has focused on theories of the origins and evolution of language, using computer simulations and robotic experiments to discover and test them. More recently he has been pushing the boundaries of AI in the direction of a proper handling of meaning and understanding, with applications in the management of social media, the interpretation of artworks, and the study of computational creativity.
Niklas Reisz is a PhD candidate at the Complexity Science Hub Vienna (CSH). He joined the CSH in April 2018 to research information flow processes in complex systems. He graduated with a master’s degree in physics from the University of Technology Vienna in 2018 where he specialized in simulations of particle beams. He is also currently completing his bachelor programme “Business, Economics and Social Sciences” with an emphasis on innovation and cryptoeconomics. Niklas’ research interests include scientometric and collaborative systems, agent-based modeling and distributed ledger technology. He works in close collaboration with Sony CSL Paris to monitor, understand and improve information flows in large collaborative environments such as large companies, conferences or open source software projects.
Barath Raghavan joined USC as an assistant professor of computer science in 2018 after many years in engineering and research. Previously he led the engineering team at Nefeli Networks, was a senior staff researcher at ICSI Berkeley, was CTO of a social-impact nonprofit, developed networked systems at Google, and taught complexity theory at Williams College. His work spans an equally diverse range of areas including Internet architecture, network function virtualization, digital agriculture, network security and privacy, rural Internet access, network troubleshooting and testing, and computing for urban resilience. He received his PhD in Computer Science from UC San Diego in 2009 and his BS in Electrical Engineering and Computer Science from UC Berkeley in 2002. He has received a number of paper awards including from ACM SIGCOMM, USENIX/ACM NSDI, ACM DEV, ACM CHI, and the IRTF.
Maks Ovsjanikov is a Professor at Ecole Polytechnique in France. He works on 3D shape analysis with emphasis on shape matching and correspondence. He has received a Eurographics Young Researcher Award in 2014 "in recognition of his outstanding contributions to theoretical foundations of non-rigid shape matching". In 2017 he received an ERC Starting Grant from the European Commission and in 2018 a Bronze Medal from the French National Center for Scientific Research (CNRS) for research contributions in Computer Science. His main research topics include 3D shape comparison and deep learning on 3D data.
Loic Landrieu is a machine learning researcher at IGN, the French mapping agency. His research focuses on developing new optimization and learning methods to exploit the structure of remote sensing data (spatial, temporal, spectral, and multi-modal) for improved precision and speed.
Julien Derr has a multidisciplinary background. Trained at ESPCI Paris, he started his career in solid-state physics (working on quantum phase transitions and silicon nanocrystals) before turning to biophysics. He is broadly interested in the concepts of self-organization, and has in particular worked on the self-organization of proteins and RNA bases. Since 2010, he has been a maître de conférences at Université Paris Diderot, where he studies morphogenesis. He seeks to understand pattern-formation mechanisms in living organisms, from plants to geophysical contexts.
Rick van de Zedde is project manager of NPEC, the new phenotyping facility at WUR. In addition, he is a senior scientist and business developer in Phenomics and Automation at the Wageningen Plant Science Group, where he has worked since 2004. His background is in Artificial Intelligence with a focus on imaging and robotics. The Netherlands Plant Eco-phenotyping Centre (NPEC) is an integrated, national research facility housed by Wageningen University & Research and Utrecht University, and is co-funded by the Netherlands Organisation for Scientific Research (NWO). More info at www.npec.nl. Pieter de Visser is a senior scientist in the Crop Physiology team of the Greenhouse Horticulture business unit at the Wageningen Plant Science Group. Since 2001 he has developed into an expert in novel crop simulation models, in particular 3D crop models of architecture and physiology, and self-learning models linked to plant sensors. The models are applied in decision-support systems for horticulture, with a focus on crop production, energy use and climate-related crop diseases.
Gaëtan Hadjeres graduated from the École Polytechnique (France) and obtained a master's degree in Pure Mathematics from Paris 6 University (Sorbonne Universités). He joined Sony CSL Paris in 2014 to do a Ph.D. thesis on music generation under the supervision of François Pachet and Frank Nielsen. In 2018, Gaëtan successfully defended his dissertation, entitled "Interactive Deep Generative Models for Symbolic Music", and is now a permanent member of the Sony CSL Paris Music Team. In parallel to this scientific background, Gaëtan studied music composition at the Conservatoire de Paris (CNSMDP). He is also a pianist and a double bass player. His main research interests involve generative modelling and self-supervised learning with a strong focus on human-computer interaction. Github: https://github.com/Ghadjeres/piano-inpainting-application LinkedIn: https://www.linkedin.com/in/ga%C3%ABtan-hadjeres-a01a67a7/ Twitter: https://twitter.com/gaetan_hadjeres
Alessandro Pluchino is an associate professor of theoretical physics and mathematical methods and models at the Department of Physics and Astronomy "E. Majorana" of the University of Catania (Italy), and also holds the qualification of full professor in Theoretical Physics of fundamental interactions. He is also a research delegate at INFN. Author of more than 100 scientific publications and several books, his research activity mainly focuses on mathematical and computational models of complex systems, with applications to biological, geological, ecological, economic and social systems; he has also addressed fundamental physics issues, statistical mechanics and optimization methods. More info at www.pluchino.it.
Vittoria Colizza completed her undergraduate studies in Physics at the University of Rome Sapienza, Italy, in 2001 and received her PhD in Statistical and Biological Physics at the International School for Advanced Studies in Trieste, Italy, in 2004. She then spent three years at the Indiana University School of Informatics in Bloomington, IN, USA, first as a postdoc and then as a Visiting Assistant Professor. In 2007 she joined the ISI Foundation in Turin, Italy, where she started a new lab after being awarded a Starting Independent Career Grant in Life Sciences by the European Research Council Ideas Program (more info on the EpiFor project webpage). In 2011 Vittoria joined INSERM (the French National Institute for Health and Medical Research) in Paris, where she leads the EPIcx lab within the Equipe 1 Surveillance and modeling of communicable diseases of the Pierre Louis Institute of Epidemiology and Public Health (IPLESP). She works on the characterization and modeling of the spread of emerging infectious diseases, integrating methods of complex systems with statistical physics approaches, computational sciences, geographic information systems, and mathematical epidemiology. In 2017 she was promoted to Research Director at INSERM. Since 2020, she has also been a Visiting Professor at the Tokyo Institute of Technology.
Wei Guo is an Assistant Professor at The University of Tokyo, Japan. In 2017, as a core member, he established the "International Field Phenomics Research Laboratory", the first plant phenomics laboratory in Japan. His research focuses on field-based phenotyping using advanced sensing platforms and technologies such as drones and ground robots, image processing, and machine learning approaches.
Alekos (Alexandros) Pantazis is a Core Member of the P2P Lab, an interdisciplinary research collective focused on the commons, a Junior Research Fellow at the Ragnar Nurkse Department of Innovation and Governance, Tallinn University of Technology, and a visiting lecturer on the Master's Degree in Political Ecology, Degrowth and Environmental Justice at the Autonomous University of Barcelona. He holds a PhD in Technology Governance from Tallinn University of Technology (2021), a five-year B.Sc. in environmental engineering, an M.Sc. in nautical and marine science, and a certificate of pedagogy and educational proficiency. Alekos has 20 years of involvement in international social movements, focusing on degrowth, agrarian populations and the commons. He has also worked as a scientific assistant on European projects in the areas of the commons, education and marine conservation. Currently, his work focuses on the convergence of convivial technologies, the commons, and peer-to-peer education. He is participating in the COSMOLOCALISM (ERC), Smooth (H2020) and ComPra (Erasmus+) projects. Alekos speaks English, Spanish, French, and Greek.
Dr. Tianmin Shu is a postdoctoral associate in the Department of Brain and Cognitive Sciences at Massachusetts Institute of Technology, working with Joshua Tenenbaum and Antonio Torralba. He studies social AI and computational social cognition, with the goal of building socially intelligent agents that can understand and interact with humans. His work has received the 2017 Cognitive Science Society Computational Modeling Prize in Perception/Action, a Best Paper Award at the Cooperative AI workshop at NeurIPS 2020, and a Best Paper Award at the Shared Visual Representations in Human and Machine Intelligence workshop at NeurIPS 2020. He received his Ph.D. degree from University of California, Los Angeles in 2019.
Annette Werth is an Italian engineer and researcher passionate about technologies that can help accelerate the adoption of renewable energies and prevent climate change. She worked in the Energy Group at Sony CSL Paris, and before that at Sony CSL Tokyo, always looking at microgrids. Her most recent research focused on off-grid microgrids in collaboration with startups such as Okra in the Philippines and SOLshare in Bangladesh. She has also worked in, or with, utilities and was a founding member of the startup TRENDE Inc., an online retailer offering free roof-top solar subscriptions in Japan. She holds a PhD from the Systems Innovation Department of the University of Tokyo, where she is still a visiting researcher.
Melissa Roemmele is a research scientist at SDL in Los Angeles, working on interactive applications that use NLP to facilitate content understanding and creation. She completed her PhD in 2018 in the Department of Computer Science at University of Southern California. There she worked at the USC Institute for Creative Technologies in the Narrative Group led by her advisor Andrew Gordon, which pursues research at the intersection of artificial intelligence and storytelling. Her thesis explored machine learning approaches for interactively predicting “what happens next” in text-based stories, both in a commonsense reasoning framework as well as for human authoring support.
Elisa Omodei is the Lead Data Scientist of the Hunger Monitoring Unit at the UN World Food Programme's Research, Assessment and Monitoring division. She also serves as Vice-President Secretary of the Complex Systems Society. She holds a BSc and an MSc in Physics from the Universities of Padua and Bologna, respectively, and a PhD in Applied Mathematics for the Social Sciences from the École Normale Supérieure in Paris. After her PhD, she spent two years as a postdoctoral researcher at Rovira i Virgili University in Tarragona, Spain. She then joined the United Nations in 2017, first at UNICEF's Office of Innovation in New York and now at the World Food Programme in Rome. Elisa is passionate about technological innovation for social good, and in her work she explores how to apply complexity science, data science and AI to development and humanitarian action.
Is the creative process a prerogative of the human mind? Exploring new concepts, new solutions to a given problem, or, more simply, taking into account an upcoming event never observed before currently seems a challenging task for an artificial system. An efficient approach, inspired by the distinctively human way of addressing the concept of the "new", should combine hierarchical, abstract and interconnected conceptual levels, processed by an adaptively "fluid" artificial neural machine. Such an approach would break several present constraints in the field of neural networks and deep learning, where static architectures and training algorithms limit the development of more promising neural topologies based on natural cognitive mechanisms, while also allowing the system to deal with incomplete knowledge of the perceived external world. In my research, I explore this unknown domain, looking for new neural architectures and efficient unsupervised training mechanisms driven by changing, non-stationary environments, aimed at the identification and comprehension of an ecological artificial mind.
Daryl Peralta is a researcher and lecturer at the Electrical and Electronics Engineering Institute of the University of the Philippines. He received his master’s degree from the same university where he worked on the problem of 3D reconstruction under the supervision of Prof. Rowel Atienza and Prof. Rhandley Cajote. During his MS, he developed the Scan-RL algorithm in the paper “Next-Best View Policy for 3D Reconstruction”. His research interests are computer vision, machine learning and robotics.
Driven by a strong interest in experimental music as well as electronic dance music, I have attended many live music performances over the last years, looking for stimulating new ways of performing music. Often, I find computer interaction on stage unsatisfactory. It feels like the musician loses freedom once the computer gets into the loop, leading to very formulaic, constrained, pre-structured music. This is where I think generative probabilistic models can play an exciting role, by enabling the computer to be not just a static tool producing predictable, repeated outputs for the same inputs, but a creative force on stage, presenting the musician with ever-changing propositions. In this context, the musician can again be more than just an operator and indeed be a creator. Yet a generative model in a creative context is only as powerful as the interface through which its user interacts with it. This calls for a constant back-and-forth between the design of powerful deep learning models and the design of the associated interfaces, in order to build meaningful – and useful – new systems.
Donn Healy is an electronic music producer from Ireland. Throughout the year 2020, Donn has been producing music with Sony CSL A.I. Music technology. Sony CSL is a fundamental research laboratory based in Tokyo and Paris whose “Music and Artificial Intelligence” team has an exclusively artist-centric vision: the development of a new generation of music production tools based on Artificial Intelligence, which increase creativity and are beneficial to the music creation process.
Dr Diego Garlaschelli is Associate Professor at the IMT School for Advanced Studies Lucca (IT), where he directs the Networks research unit, and at the Lorentz Institute for Theoretical Physics of Leiden University (NL), where he leads the Econophysics and Network Theory group. His research interests are strongly interdisciplinary and include network theory, statistical physics, financial complexity, information theory, social dynamics and biological systems. He teaches courses in Complex Networks, Econophysics, and Complex Systems. He holds a four-year master's degree in theoretical physics from the University of Rome III (2001) and a PhD in Physics from the University of Siena (2005). He held postdoctoral positions at the Australian National University in Canberra (Australia), the University of Siena (Italy), the University of Oxford (UK) and the Sant'Anna School of Advanced Studies in Pisa (Italy). He has given more than 50 invited talks at international conferences, workshops, and scientific schools. He is the author of more than 100 publications in peer-reviewed international journals and peer-reviewed book chapters, and of one co-authored monograph.
Dr Laura Alessandretti is Assistant Professor in Modelling of Human Dynamics at the Technical University of Denmark. She researches aspects of Human Behaviour through the statistical analysis and modelling of large-scale digital datasets, largely collected from smartphones. Topics of interest include: Human Mobility, Smartphone applications usage, Digital assets.
Dr Eva Wittenberg is Assistant Professor of linguistics at UC San Diego, where she directs the Language Comprehension Lab. Her research has two main objectives: advancing linguistic theory with the help of psycholinguistic data from a broad range of languages and varieties, and advancing psycholinguistic data by developing methods, paradigms, and research instruments. She is interested in language comprehension broadly as it speaks to linguistic architecture: What is language, so that our brain can process it? Before joining Linguistics, Dr Wittenberg was a postdoctoral researcher at the Center for Research in Language at UCSD, where she worked with Roger Levy (now MIT) and Victor Ferreira (UCSD Psychology). She received her Ph.D. from Potsdam University under the supervision of Heike Wiese, and she also closely worked with Ray Jackendoff, Gina Kuperberg and Jesse Snedeker.
The Obvious Collective is a group of friends, artists, and researchers driven by a common sensibility regarding questions tied to the rise of Artificial Intelligence and Machine Learning. One of their goals is to explain and democratize these advances through their artworks. Their project began a year ago with the discovery of Generative Adversarial Networks (GANs), machine learning algorithms that generate images. This technology allows them to experiment with the notion of creativity for a machine.
Sebastian Groh is a 2013 Stanford Ignite Fellow from Stanford Graduate School of Business and holds a PhD from Aalborg University and the Postgraduate School Microenergy Systems at TU Berlin, where he wrote his doctoral thesis on the role of energy in development processes, energy poverty and technical innovations, with a special focus on Bangladesh. He has published a book and multiple journal articles on decentralized electrification in the Global South. Dr Groh started his career, and received his professional DNA, at MicroEnergy International, a Berlin-based consultancy firm working on microfinance and decentralized energy. In 2014, Dr Groh founded SOLshare, acting as its CEO since then. He is also an Associate Professor in the Brac Business School at BRAC University in Dhaka (Bangladesh). On behalf of SOLshare, he has received numerous awards, among them Tech Pioneer '18 by the World Economic Forum and best energy startup in the world by Free Electrons.
Sinan Haliyo is an Associate Professor at the Institute of Intelligent Systems and Robotics (ISIR), Sorbonne University, Paris, where he leads the 'Multiscale Interactions' Lab. He has been active in the field of microrobotics since 1999, working on topics including control and design, physical interactions and user interfaces for microscale applications in assembly, characterization and user training. He also takes a particular interest in human-computer interaction in remote handling and teleoperation, especially with haptics and multimodal interfaces.
AnneMarie Maes is an artist who studies the close interactions and co-evolution within urban ecosystems. Her research practice combines art and science, with a keen interest in DIY technologies and biotechnology. She works with a range of biological, digital and traditional media, including live organisms. Her artistic research is materialised in techno-organic objects inspired by factual/fictional stories; in artefacts that combine digital fabrication and craftsmanship; in installations that reflect both the problem and the (possible) solution; in multispecies collaborations; and in polymorphic forms and models created from eco-data. On the rooftop of her studio in Brussels (BE), she has created an open-air lab and experimental garden where she studies the processes that nature employs to create form. This research provides an ongoing source of inspiration for her artworks. The Bee Agency, as well as the Laboratory for Form and Matter – in which she experiments with bacteria and living textiles – provide a framework that has inspired a wide range of installations, sculptures, photography works, objects and books, all at the intersection of art, science and technology. In 2017, she received an Honorary Mention in the Hybrid Art category at Ars Electronica for the Intelligent Guerrilla Beehive project.
Cities are the social and economic innovation core of modern nations. Despite their importance, they still suffer from many problems: social segregation, accessibility inequality, overcrowding, pollution and infrastructure malfunctions are only a few examples. In my research, I exploit the tools of Complex Systems Physics and Machine Learning to find new approaches to studying urban environments, looking for solutions to their sustainability problems. I am also interested in the modelling of techno-social systems in general (Music Production, Railway Systems, Innovation Dynamics), involving citizens in the research process through their engagement in gamified social experiments.
Valentino Catricalà (PhD) is a scholar and contemporary art curator specialised in the relationship of artists with new technologies and media. He is currently the director of the Art Section of Maker Faire – The European Edition, the biggest faire on creativity and innovation in Europe, an art consultant at Sony CSL Paris, and a professor at the Lecce Academy of Fine Art.
Today, powerful suites of natural language processing tools can be used to construct rich, spaghetti-like networks capturing the knowledge and semantics of text. Somewhere among these hotchpotch instantiations of various linguistic theories is a wealth of elaborate interconnected structures that capture something fundamental about natural language. Everything is connected, but which connections are important and for what? In my research, I explore how to automatically extract what is useful from such representations towards solving structure generation problems while attempting to find solutions that are agnostic to particular linguistic theories or problem domains. I am particularly interested in the problem of generating globally semantically coherent text, as this seems to be what is currently out of reach from the current state of the art and where I feel rich representations are crucial.
Alexei Grinbaum is a philosopher and physicist. Researcher at CEA-Saclay, he is a specialist in quantum information. Since 2003, he has been interested in ethical issues related to new technologies, including nanotechnologies, artificial intelligence and robotics. A member of the Research Commission on the Ethics of Research in Digital Science and Technology (CERNA), he recently published "Robots and Evil" (Desclée de Brouwer, 2019).
It has never been easier than today to become a music producer. All you need is a cheap computer and a bit of imagination. This drastic change in the music industry has logically led to an explosion in the number of songs produced throughout the world over the past decades. While the variety and efficiency of the tools available for music production increase, the key aspect for a music producer is to focus on creativity in order to keep the inspiration going. In that sense, I believe that we can create innovative tools based on A.I. to enhance artists’ creativity. The goal will never be to replace artists, but to give them extra tools that will help them to explore further than ever before. These tools need to be very easy to use, interactive and almost invisible to fit in any artist workflow. In order to achieve this task, my job is to gather and clean data on one hand, and insert the tools we create in a music production context on the other hand to ensure their compatibility with modern production methods.
I develop new technologies for music, but I am also passionate about technology enhanced learning and new artistic experiences based on innovation. In these domains, I like to see myself at the edge of an ancient world where I feel comfortable and a new exciting world to explore. On one hand, my strong classical music education led me to value traditions, time-consuming activities, and skills acquired through hard work. On the other hand, whatever the problem at hand, I cannot prevent myself from trying to find technological ways to solve it, optimize its resolution, make it easier and more efficient. I think that technology can help us get rid of tedious tasks in order to focus on creativity, but I also think that this process can eventually open up new paths for expressivity and some sort of innovative crafts based on technologies.
Human languages have evolved many fascinating solutions to complex communicative problems through the use of words and grammatical structures. And they keep on evolving: language is an open system, a unique ability that brings infinite variety to the ways in which we communicate with others about our experiences in life. How is this possible? Can we understand this linguistic creativity? In my research, I try to answer these questions by developing powerful cognitive language technologies, which can be used to study open-ended and robust language processing, to explore innovative linguistic applications, and to function in large open collaborative communities.
The dynamics of life is a fascinating puzzle unfolding at multiple scales. The biological systems involved in perception and behavior also underlie our ability to grasp meaning from the world. They amaze me with their robustness in sensing their environment, for example for photosynthesis or vision, and they inspire me to develop new technologies and discover new scientific perspectives on animals, plants and microbes in their respective ecosystems. I currently develop artificial intelligence and robotic systems to help people manage micro-farms. Farmers and plant biologists are both experts at growing plants, although they rarely exchange their knowledge. I aim to bring them together around new technologies so that their knowledge can be transferred. I design new hardware to intervene on crops and acquire in-field data. These data are then integrated through mathematical modeling to produce a useful description of the field and to point farmers to the possible actions they can take. I also hope to surprise biologists and farmers with how machines might teach them about the biology of wild plants.
Music production has recently become essentially digital, drastically increasing the scope of possibilities in terms of sound synthesis and textures. The sound design process has thus become increasingly free and complex, given the overwhelming number of parameters provided by modern synthesizers. Methods allowing easy and rich fine-tuning of sounds therefore become a key requirement in music production, especially for non-expert users. Moreover, at a time when home studios are becoming the norm, it is essential to develop simple and lightweight tools for users who create music on a single computer. Passionate about urban music since childhood, I have always looked for music that makes me nod my head. Today, I have identified that rhythm and percussive sounds are particularly important to me. That is why I focus on drums, developing AI-based solutions that help artists easily design original drum sounds and rhythms, with the final goal of enlarging the horizon of possibilities while always trying to avoid constraining artists' creativity.
Automated computer vision tasks are a key factor to provide high-throughput phenotyping of plants in the field and could lead to more efficient and sustainable agriculture. This requires analysis of data acquired in the field and development of both plant models and new computer vision methods to extract useful traits from crops, across space and time. Machine learning techniques can also help us find useful information in the data and help the farmers in their decision-making process. The roughness of in-field conditions compared to the lab's perfectly controlled environment, as well as the diversity of crops in small market farms, make for a great challenge, especially when we want to keep the costs of tools and sensors accessible to everyone. Making farmers part of these experiments is key to the approach's success, and this is why I think it is important to develop free and open-source software and to keep the data open.
In our digital and interconnected world we constantly leave digital traces logging our daily activities: the movies we watch, the routes we follow on our daily commutes, the friendships we maintain in online social networks, and so on. Despite this wealth of data, we still lack a solid theoretical ground and a comprehensive modelling framework to characterize the mechanisms shaping the evolution of the socio-technical systems surrounding us. My research aims to unveil and describe these underlying processes using Complex Systems techniques in combination with machine learning tools. The results are interesting not only from a theoretical point of view but can also provide valuable decision-support tools to understand, control and dynamically forecast the sustainability of such systems. I am also engaged in the development of digital platforms for sustainability, creating playful environments aimed at assessing the present situation and allowing citizens to participate in the decision process to conceive and simulate future scenarios.
Putting composers back in the loop
Applying the latest deep learning techniques to music composition is appealing for AI researchers; but for composers, this intrusion of machines in their domain of expertise could be perceived as a threat. This fear of being replaced is legitimate: indeed, many recent generative models for music tend to produce infinite numbers of scores without the need for human intervention. I think that this behavior is not desirable and that AI algorithms should instead be used by artists as assistants during the compositional process. By creating a fruitful discussion between a composer and the machine, the artist can then focus on the development of their musical ideas and let the AI do the technical parts. Professional composers can benefit from these tools to become more productive and explore uncharted regions of musical creation while amateur musicians can use these innovative tools to express themselves in an intuitive way. By putting composers back in the loop, we will go from automatic music composition to AI-augmented composition and redefine the way people compose music.
Any music you can(not) imagine
Globalization has led popular music to become a part of universal world culture. The conventions of music have departed from local preferences to a universally shared language. Additionally, cheap personal computers have brought million-dollar studios to people's homes. Everybody's a music producer. A plethora of virtual instruments, audio processes, and sample libraries are readily available. A universal language, loads of data: it's time to look for an understanding of the universal principles of music. Can we define universal principles of well-formedness in music? The path to such knowledge goes through the augmentation of human capacities of observation using cutting-edge machine learning algorithms. When algorithms of automatic music generation grow to possess this knowledge, then we can generate music from any material, music from the strangest idea, any music you can(not) imagine.
My research activity focuses on complexity science and its interdisciplinary applications. Over the past years, I have been active in several fields, from granular media to complexity and information theory, from social dynamics to sustainability. My recent KREYON project (www.kreyon.net) concerned "Unfolding the dynamics of creativity, novelties and innovation". In this context, I am interested in understanding and modelling how the "new" enters our lives in its multiform instantiations: personal novelties or global innovations. To this end I blend, in a unitary interdisciplinary effort, three main activities: web-based experiments, data science and theoretical modeling. Key to this endeavor is grasping the structure and dynamics of the "space of possibilities" in order to arrive at a solid mathematical modelling of the way systems - biological, technological, social - explore the new at the individual and collective levels. Exploiting knowledge of the way the space of possibilities is explored can help conceive the next generation of Artificial Intelligence algorithms able to cope with the occurrence of novelties, bridging the gap between inference and unanticipated events.
In this presentation we argue that the community lacks a common deep-semantic representation scheme, which drives it to use approaches that either do not require deep semantics or that (being neural-based) are expected to create appropriate internal structures automatically themselves. To overcome this lack, we propose a meta scheme for human-creatable deep-semantic representations based on an understanding of concepts as single aspects of semantics and a notion of entities, where both concepts and entities are nodes in a generalised SRL-like Semantic Graph structure. Our conceptually language- and modality-independent representation also offers a method to define the concepts used in these graphs recursively, using a Semantic Graph of more basic concepts.
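As a purely illustrative sketch (not the authors' actual scheme; all class names and labels here are hypothetical), the idea of "both concepts and entities as nodes of one semantic graph, with SRL-like roles as labelled edges" could look like this:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Node:
    label: str                 # concept name or entity identifier
    is_concept: bool = False   # concepts and entities share one node type

@dataclass
class SemanticGraph:
    # Each edge is a (source, role, target) triple, in the spirit of
    # SRL-style semantic roles such as "agent" or "patient".
    edges: list = field(default_factory=list)

    def add(self, source: Node, role: str, target: Node) -> None:
        self.edges.append((source, role, target))

# "The dog chased the cat": two entity nodes linked to a concept node.
dog, cat = Node("dog#1"), Node("cat#1")
chase = Node("chase", is_concept=True)
g = SemanticGraph()
g.add(chase, "agent", dog)
g.add(chase, "patient", cat)
```

A concept node such as `chase` could itself be defined by a further Semantic Graph over more basic concepts, which is how the recursive definition in the abstract could be realised.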
After the amazing breakthroughs of machine learning (deep learning or otherwise) in the past decade, the shortcomings of machine learning are also becoming increasingly clear: unexplainable results, data hunger and limited generalisability are all becoming bottlenecks. In this talk we will look at how the combination with symbolic AI (in the form of very large knowledge graphs) can give us a way forward, towards machine learning systems that can explain their results, that need less data, and that generalise better outside their training set.
Living systems face the challenge of navigating natural environments shaped by non-trivial physical mechanisms. Notable examples are long-distance orientation using airborne olfactory cues transported by turbulent flow, the tracking of surface-bound trails of odor cues, and flight in the lowest layers of the atmosphere. Terrestrial animals, insects, and birds have evolved navigation strategies that accomplish these tasks with an efficiency that is often surprising and yet unmatched by human technology. Indeed, robotic applications such as olfactory sniffers and unmanned aerial vehicles face similar challenges in the automated location of explosives and of chemical or toxic leaks, as well as in biodiversity monitoring, surveillance, disaster relief, cargo transport, and agriculture. The interdisciplinary interplay between biology, physics, and robotics is key to jointly advancing fundamental understanding and technology. I shall review the above natural phenomena, discuss the physics that constrains and shapes the navigation tasks, show how machine-learning methods are brought to bear on those tasks, and conclude with the relevant behavioral strategies and open issues.
Every complex system that involves two different kinds of agents interacting with each other can be described as a bipartite network, whose analysis can yield new information about the system itself. Examples of this kind of system can be found in many different areas of science, such as ecology, economics and social science. Moreover, every graph can be remapped to a bipartite nodes-links graph. As this framework gains more and more attention, we review the existing bipartite null models and projection methods, and how to deal with networks of large size and density. We focus on a case study of the Twitter debate on Brexit during the UK elections of 2019, where we are able to build several (bipartite) networks of interactions between users and to characterize the presence and activity of automated accounts. Among the results of this study, we find that malicious users are injected into the debate at crucial times, that there is a class of suspicious users who manage to avoid suspension by Twitter by maintaining a low profile, and that there are bots polluting the Brexit discourse with other populist topics.
Bruno et al., Brexit and bots: characterizing the behaviour of automated accounts on Twitter during the UK election, 2022
Vallarano et al., Fast and scalable likelihood maximization for Exponential Random Graph Models with local constraints, 2021
Saracco et al., Inferring monopartite projections of bipartite networks: an entropy-based approach, 2017
Python package: https://github.com/mat701/BiCM
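As a minimal illustration of the bipartite framework discussed above (using networkx rather than the BiCM package linked here, and made-up toy data rather than the Brexit corpus), one can build a user-hashtag graph and project it onto the user layer:

```python
# Minimal sketch of a bipartite user-hashtag network and its
# monopartite projection onto users (toy data, not the Brexit corpus).
import networkx as nx
from networkx.algorithms import bipartite

B = nx.Graph()
users = ["u1", "u2", "u3"]
hashtags = ["#brexit", "#ukelection", "#nhs"]
B.add_nodes_from(users, bipartite=0)
B.add_nodes_from(hashtags, bipartite=1)
B.add_edges_from([
    ("u1", "#brexit"), ("u1", "#ukelection"),
    ("u2", "#brexit"), ("u2", "#nhs"),
    ("u3", "#ukelection"),
])

# Project onto users: two users are linked if they share a hashtag;
# edge weights count the shared hashtags.
P = bipartite.weighted_projected_graph(B, users)
for u, v, d in P.edges(data=True):
    print(u, v, d["weight"])
```

Note that this naive projection keeps every co-occurrence; the entropy-based approach of Saracco et al. instead validates projected links against a bipartite null model, which is what the BiCM package implements.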
Every one of our speakers has created a research lab in industry, and all those labs have non-traditional outputs as a measure of their own success.
For example, in academia the expected output is papers, and in industry the expected output is products, but all of these unconventional labs also value different kinds of impact, which raises new issues: how to establish credibility? How to define the value of our outputs? How to find and foster talent?
The talk will showcase some of the projects developed by IAAC’s Advanced Architecture Group to create tools and processes to plan and design responsive and inclusive cities, from visualising to calculating and simulating changes in the urban environment, allowing us to shape the future of cities with informed decision making.
As individuals and researchers, we are concerned with environmental challenges and deeply aware of the necessity to be more sustainable. But in practice, what does this entail? How can a researcher’s activity be “sustainable,” and how do we integrate sustainable practices into research projects? Where do we start? In this presentation, I will propose a list of actionable rules to facilitate anyone’s contribution towards sustainable research. These rules address a variety of topics, such as training on environmental issues or comprehensively evaluating the impacts of a research project, with different levels of potential impact, to illustrate the breadth of possible actions.
Disinformation is as old as lies, and it has been used as a weapon in different shapes, evolving as Information Technologies did throughout the ages. Despite the dramatic improvement of information availability, it can have a dangerous impact on our societies, even when it involves only relatively small minorities. Democracies are currently struggling to find a way to deal with the problem without hindering the defining values of democracy itself.
But while we cannot improve information quality without entering the battlefield of opinions, we can do a lot to enhance information accessibility. We can redesign Information Technologies to make social dialogue more transparent, understandable, and healthy. The “Infosphere” research line at Sony Computer Science Labs brings together the research efforts of the Paris and Rome Labs to tackle these challenges: (i) the detection of unmet news demand that might trigger disinformation production; (ii) the building of bridges between polarised factions through new recommender systems and a transparent, shared-value reputation system for news outlets; (iii) the visualization of the social dialogue to improve citizens’ awareness of the different points of view; (iv) the study of “divisive news” instead of “fake news”, whose definition can always be questioned.
Our final aim is to improve our societies’ information dynamics through new IT tools shaped around human information processing, at both the individual and the collective level, to counteract those features that result in dangers for our democracies.
With UC2 (“You. See. Too”) and “cellSTORM”, the team around Benedict Diederich at the Leibniz Institute for Photonic Technology Jena in Germany has successfully demonstrated that cutting-edge microscopy can be realised for a fraction of the cost of commercial devices using open-source hard- and software. Establishing quality standards and encouraging other researchers to use open source in their research is one of the key aspects of his work.
Science lives from the curiosity to get to the bottom of problems and from the subsequent discussion, where scientists exchange knowledge and opinions to finally come up with new questions. However, as a recent study showed, the vast majority of published experiments can often only be replicated partially, if at all, which contributes to society's rising disbelief in scientific practice. The high level of exclusivity of scientific experiments, often due to a lack of available instruments and knowledge of their use, as well as their high cost, makes it impossible for many researchers to replicate them. Particularly in high-resolution microscopy, which is an essential tool for many scientific disciplines such as cell biology and biochemistry, this is a problem to be solved if we aim for realistic interdisciplinary scientific exchange. Our ever-growing open-source optical toolbox UC2 (“You.See.Too.”) shows that this is not only important but also possible. With UC2, we are trying to democratise optics and microscopy in particular. To achieve this, UC2 relies on widely available components and 3D-printed parts so that it can be easily built by anyone, anywhere. Through online platforms such as GitHub, we enable anyone to use, replicate, and customise it for individual purposes under open-source licenses.
Additionally, we invite users from around the world to share their designs with the community in order to create an iterative and decentralised optimisation loop. This way, completely new collaborations can be created, from the field of education to the realm of cutting-edge biology. During the ongoing Corona pandemic, we were able to show that state-of-the-art microscopic imaging can be realised even where access to such equipment is very limited, but no less urgently necessary. We were also able to detect and even optically resolve the SARS-CoV-2 coronavirus in a high-safety biological laboratory. The open-source nature of UC2 allows connecting with other open projects to unite the expertise of scientists from around the world in approaching the goal of making cutting-edge tools available to all. Additionally, by scaling up the production of the UC2 components and organising interdisciplinary workshops, we hope to lower the entry barriers to getting creative with optics and thinking science easily “out of the box”.
 Baker, M. 1,500 scientists lift the lid on reproducibility. Nature 533, 452–454 (2016).
Diederich, B., Lachmann, R., Carlstedt, S. et al. A versatile and customizable low-cost 3D-printed open standard for microscopic imaging. Nat Commun 11, 5979 (2020).
UC2 GitHub Repository: https://github.com/bionanoimaging/UC2-GIT
Considering the impressive success of Generative Adversarial Networks (GANs) in image generation in recent years, it is only natural to apply these models to audio generation and restoration, too. Over the last three years, we therefore performed several experiments on audio synthesis with GANs, involving drum sample generation, tonal synthesis, and MP3 restoration. In DrumGAN, we examined how neural synthesis can improve artistic processes in music production, using perceptual features for intuitive user control. In DarkGAN, we exploit the principle of dark knowledge in neural networks and distill the knowledge of an audio classifier into a generative GAN architecture. In VQ-CPC GAN, we tackle the problem of generating variable-length audio content with GANs, resulting in non-autoregressive sequence generation. Finally, we studied the problem of audio transformation and restoration with GANs by restoring MP3-compressed popular music to its high-quality version. In my talk, I will give an overview of the underlying principles of our work, and I will show some audio examples and a live demo of the DrumGAN prototype.
This talk will explore via various examples how automated (generative, procedural) processes can and should reflect the artist’s intent and the audience the process is targeting, and the technology needed to support that vision.
Plants, algae and cyanobacteria are the only organisms capable of producing their own food through photosynthesis. By employing complex biophysical processes, which act on multiple temporal and spatial scales, they perform highly efficient energy-converting reactions. The basic machinery behind these reactions consists of two parts: the photosynthetic electron transport chain (PETC) and the Calvin-Benson-Bassham (CBB) Cycle. The photosynthetic activity is driven by the light availability at the site of the PETC and is pulled by the energy demand on the CBB side. Hence, the photosynthetic system can, and in fact should, be treated as an integrated supply-demand system.
During this talk, I will present our most recent mechanistic model of photosynthesis developed for C3 plants to study the dynamics of balancing the energy supply under stress. Next, thanks to the modular construction of our computational models, I will show how a highly simplified version of the PETC model could guide us to a better understanding of the dynamics of photosynthesis in diatoms. Finally, I will present preliminary results from our most recent work on capturing photosynthesis in cyanobacteria.
Most recent publications:
Matuszyńska et al., Physiologia Plantarum, https://doi.org/10.1111/ppl.12962
Saadat et al., Front. Plant Sci., 12:750580, https://doi.org/10.3389/fpls.2021.750580
Seydoux et al., bioRxiv, accepted in New Phytologist, https://doi.org/10.1101/2021.09.06.459119
Intelligent tutoring systems (ITS) are very effective but require a great deal of expertise and time to produce, making them expensive and difficult to scale. Dialogue-based ITS are particularly tractable for automated authoring because their textual representations allow a range of NLP approaches to be applied. This talk describes four approaches to automated authoring of dialogue-based ITS, ranging from knowledge-poor NLP to knowledge-rich logical and deep learning approaches. Errors in automated authoring may be checked by experts or crowdsourced to students as a learning task.
In this talk we present the results of an analysis of fMRI data of the human brain at rest, using different methods of complex network theory, ranging from maximum spanning trees to percolation and allometry.
This approach makes it possible to detect a clear hierarchical functional organization of the human brain cortex, and significant differences in this organization between populations of normal individuals and schizophrenic patients.
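The maximum-spanning-tree step mentioned above can be sketched in a few lines; here the "regional" signals are random stand-in data, not fMRI, and the workflow (correlation matrix, absolute-value weights, tree extraction with networkx) is an illustrative assumption rather than the exact pipeline of the talk:

```python
# Sketch: maximum spanning tree of a correlation matrix, the kind of
# backbone extraction applied to functional brain networks.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 8))          # 200 "time points", 8 "regions"
C = np.corrcoef(X, rowvar=False)       # region-by-region correlations

G = nx.Graph()
n = C.shape[0]
for i in range(n):
    for j in range(i + 1, n):
        G.add_edge(i, j, weight=abs(C[i, j]))

# The maximum spanning tree keeps the strongest-correlation backbone.
T = nx.maximum_spanning_tree(G)
print(T.number_of_nodes(), T.number_of_edges())   # 8 nodes, 7 edges
```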
Soft robotics research has made considerable progress in many areas of robotics technologies based on deformable functional materials, including locomotion, manipulation, and other morphological adaptations such as self-healing, self-morphing, and mechanical growth. While these technologies open up many new robotics applications, they also introduce a number of challenging problems in terms of sensing, modelling, planning and control. Because of the general complexity of systems based on flexible and continuum mechanics, and the large diversity of system-environment interactions, conventional methods are often not applicable, and new approaches based on state-of-the-art machine learning techniques are necessary. In this talk, I will introduce some of the research projects in our laboratory that make use of soft robotics and machine learning techniques to address these complexity problems in robotic applications.
These days there is more demand and sometimes more pressure to turn research into a product or startup.
This can often be a very complex and time-consuming task, since software in research is often written with a singular goal in mind, with little thought given to future use, maintainability, or quality. This situation is actually more common than many realise.
Additionally, many students finishing their studies who choose to enter industry often have little or no knowledge of best practices.
In this short seminar he will introduce some basic concepts of Test Driven Development (TDD), and show how a change in how we think about the software development process can lead to more modular, maintainable, self-documenting, and stable software.
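A minimal sketch of the TDD cycle described above: the test is written first ("red"), and the simplest implementation that makes it pass is written second ("green"). The `slugify` helper is hypothetical, invented purely for illustration:

```python
# Test-driven development in miniature. The tests are written before the
# implementation and drive its design.

def test_slugify():
    # 1. Red: these assertions exist before slugify is implemented,
    #    so running them first would fail.
    assert slugify("Hello World") == "hello-world"
    assert slugify("a  b") == "a-b"          # repeated spaces collapse

# 2. Green: the simplest implementation that makes the tests pass.
def slugify(title):
    return "-".join(title.lower().split())

test_slugify()
print("all tests pass")
```

A third "refactor" step would then improve the implementation while keeping the tests green; this is the habit that yields the modularity and stability the seminar argues for.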
Cases in which the number of interacting components is very large are becoming of general interest in disparate fields, such as ecology and biology (e.g. bacterial communities), as well as in complex economies where many agents trade and interact simultaneously. Many of these systems often appear to be poised at the edge of stability, hence displaying enormous responses to external perturbations. This feature, known in physics as marginal stability, is usually related to the complex underlying network of interactions, which might induce critical behavior.
In this talk, I will present the problem of ecological complexity by focusing on a reference model in theoretical ecology, the disordered Lotka-Volterra model with random interactions and finite demographic noise. Employing advanced statistical physics techniques, I will unveil a complex and rich structure for the organization of the equilibria and I will relate critical features and a slow relaxation dynamics to the appearance of disordered glassy-like phases.
Finally, I will discuss the generalization of these results to non-logistic growth functions in the dynamics of the species abundances, which turn out to be of great interest for modeling intra-specific mutualistic effects.
A. Altieri, F. Roy, C. Cammarota, G. Biroli, Phys. Rev. Lett. 126, 258301 (2021);
A. Altieri, G. Biroli, arXiv:2105.04519 (2021), to appear in SciPost Physics.
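As an illustrative sketch only (the parameters, the Euler integration scheme, and the absence of demographic noise are my simplifying assumptions, not those of the cited papers), a disordered Lotka-Volterra community with logistic self-regulation and random Gaussian interactions can be simulated in a few lines:

```python
# Toy integration of a disordered Lotka-Volterra system:
#   dN_i/dt = N_i (1 - N_i - sum_j A_ij N_j)
# with random Gaussian interactions A_ij (parameters are illustrative).
import numpy as np

rng = np.random.default_rng(0)
S = 50                                   # number of species
mu, sigma = 4.0 / S, 0.5 / np.sqrt(S)    # mean / std of interactions
A = rng.normal(mu, sigma, (S, S))
np.fill_diagonal(A, 0.0)                 # self-regulation is the -N_i term

N = rng.uniform(0.1, 1.0, S)             # initial abundances
dt = 0.01
for _ in range(20000):
    growth = N * (1.0 - N - A @ N)       # logistic growth + interactions
    N = np.clip(N + dt * growth, 1e-12, None)

print(f"surviving species: {(N > 1e-6).sum()} / {S}")
```

Varying sigma in such a sketch shows the qualitative phenomenon the talk analyses rigorously: weak disorder yields a unique stable equilibrium, while stronger disorder pushes the community towards marginally stable, glassy-like organization.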
Ever since the Naming Game, agent-based modelling has beautifully illustrated how shared and complex linguistic conventions can emerge out of multiple inter-speaker interactions with shared communication goals. However, how a community can switch from one linguistic convention to another – and therefore enact a language change – has remained troublesome, and usually requires additional mechanisms: a selective advantage of the new variant, a change endorsed by an influential sub-community, or an amplification of ongoing trends. What has been neglected, however, is that this sociolinguistic account does not necessarily map onto the available evidence of the change – namely, the frequency of use as recorded in diachronic corpora. In this talk, I will review this evidence on actual case studies, and introduce the model of language change I developed during my PhD. This model focuses not on social interactions, but on the cognitive organization of language, which is assumed to drive the change. I will then discuss the limitations of this model, and how it can be expanded in the future, especially to be reconciled with a sociolinguistic account.
Human augmentation, Citizen-Built Cities, and Artificial Life: Research activity of Sony Computer Science Laboratories Kyoto. The talk will present the history of how and why we created this new lab and what the different members have been working on, including some of the more Kyoto-focused projects.
Holding a coherent, meaningful and multi-turn conversation with a human interlocutor is one of the main challenges for current intelligent agents. Especially when conversations span multiple turns, agents lack the capabilities to remember what has been said and to ground their answers in the conversational context. If we want truly intelligent agents that can communicate with humans about their environment, these agents need to possess certain cognitive capabilities. They must be able to perceive and categorize the world, to understand and produce utterances, and to possess sufficient reasoning skills to integrate these sources of information. In this talk, I present a novel methodology that allows an intelligent agent to hold multi-turn, coherent conversations with humans. Concretely, the agent maps utterances to a representation of their meaning. This semantic representation consists of the reasoning operations that are required to understand the utterance in terms of the environment and the discourse context. These reasoning operations are executed in a hybrid way: those related to discourse understanding are executed symbolically, whereas those that interact with the environment are executed subsymbolically. To keep track of what has been said, the agent possesses a conversation memory, a representation of the conversational context that is updated with the necessary information after each turn. The proposed intelligent agent is validated on the task of visual dialog, which consists of modelling an agent that can answer a series of questions about an image. The agent requires both the image and the conversational context to answer these questions correctly. Applied to two benchmark datasets, namely MNIST Dialog and CLEVR Dialog, the agent achieves an accuracy of 97.18% and 95.94%, respectively.
The methodology proposed in this talk paves the way for intelligent agents that hold coherent and multi-turn conversations with humans. Moreover, the applied technologies ensure that the system is explainable and interpretable by design.
Information theory, probability and statistical dependencies, and algebraic topology provide different views of a unified theory currently in development, where uncertainty goes as deep as Galois's theory of ambiguity, topos and motives. I will review some foundations, led notably by Bennequin and Vigneaux, that uniquely characterize entropy as the first group of cohomology on complexes of random variables and probability laws. This framework allows us to retrieve most of the usual information functions, like KL divergence, cross entropy, Tsallis entropies, and differential entropy, in settings of different generality. Multivariate interaction/mutual information terms (I_k and J_k) appear as coboundaries, and their negative minima, also called synergy, correspond to homotopical link configurations, which, in the image of Borromean links, illustrate what purely collective interactions or emergence can be. These functions refine and characterize statistical independence in the multivariate case, in the sense that (X1,…,Xn) are independent iff all the I_k = 0 (with 1 < k ≤ n).
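As a concrete, standard illustration of negative interaction information (synergy): for X and Y independent fair bits and Z = X xor Y, every pair of variables is independent, yet the triple is fully dependent, and the inclusion-exclusion (coboundary) form of I_3 comes out negative. The sketch below is illustrative and not code from the talk:

```python
# Interaction information of the XOR triple (X, Y, Z = X xor Y):
# pairwise independent variables whose triple interaction I_3 is negative,
# the "Borromean" textbook example of synergy.
from itertools import product
from math import log2

def H(joint):
    """Shannon entropy (bits) of a distribution given as {outcome: prob}."""
    return -sum(p * log2(p) for p in joint.values() if p > 0)

# Uniform X, Y in {0, 1}; Z is their deterministic XOR.
p = {(x, y, x ^ y): 0.25 for x, y in product((0, 1), repeat=2)}

def marginal(idx):
    """Marginal distribution over the coordinates listed in idx."""
    out = {}
    for outcome, prob in p.items():
        key = tuple(outcome[i] for i in idx)
        out[key] = out.get(key, 0.0) + prob
    return out

# Inclusion-exclusion form of the interaction information I_3.
I3 = (H(marginal((0,))) + H(marginal((1,))) + H(marginal((2,)))
      - H(marginal((0, 1))) - H(marginal((0, 2))) - H(marginal((1, 2)))
      + H(p))
print(I3)   # -1.0 bit: purely collective, triple-wise dependence
```

Each single variable carries 1 bit, each pair 2 bits, and the triple only 2 bits, so I_3 = 3 - 6 + 2 = -1: the dependence exists only at the level of all three variables together, exactly like the Borromean link.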
In this talk I will illustrate how the concept of digital twins can be used to reduce the cost of instrumentation design or the labelling effort in supervised machine learning. This will be illustrated with various recent bioimaging use cases [1-5] developed in my group ImHorPhen at Université d’Angers, France (https://www.youtube.com/channel/UCsd9Dt6N7O-fydynsWEfkww).
Douarre, C., Crispim-Junior, C. F., Gelibert, A., Germain, G., Tougne, L., & Rousseau, D. (2021). CTIS-Net: a neural network architecture for compressed learning based on Computed Tomography Imaging Spectrometers. IEEE Transactions on Computational Imaging.
 Turgut, K., Dutagaci, H., Galopin, G., & Rousseau, D. (2020). Segmentation of structural parts of rosebush plants with 3D point-based deep learning methods. arXiv preprint arXiv:2012.11489.
 Ahmad, A., Frindel, C., & Rousseau, D. (2020). Detecting differences of fluorescent markers distribution in single cell microscopy: textural or pointillist feature space?. Frontiers in Robotics and AI, 7, 39.
 Debs, N., Rasti, P., Victor, L., Cho, T. H., Frindel, C., & Rousseau, D. (2020). Simulated perfusion MRI data to boost training of convolutional neural networks for lesion fate prediction in acute stroke. Computers in biology and medicine, 116, 103579.
Computer scientists typically think about machine learning as a set of powerful algorithms for modeling data in order to make decisions or predictions, or to better understand some phenomenon. In this talk, I’ll invite you to consider a different perspective, one in which machine learning algorithms function as live and interactive human-machine interfaces, akin to a musical instrument. These “instruments” can support a rich variety of activities, including creative, embodied, and exploratory interactions with computers and media. They can also enable a broader range of people—from software developers to children to music therapists—to create interactive digital systems. Drawing on a decade of research on these topics, I’ll discuss some of our most exciting findings about how machine learning can support human creative practices, for instance by enabling faster prototyping and exploration of new technologies (including by non-programmers), by supporting greater embodied engagement in design, and by changing the ways that creators are able to think about the design process and about themselves. I’ll discuss how these findings inform new ways of thinking about what machine learning is good for, how to make more useful and usable creative machine learning tools, how to teach creative practitioners about machine learning, and what the future of human-computer collaboration might look like.
Today AI is very much in the news with achievements in many areas of science, engineering and application. But how far has AI advanced in comparison to human intelligence? Can we already speak about computational creativity? To find the scope and limitations of the current state of AI, I propose to look at art as one of the highest achievements of human intelligence, a domain in which creativity is highly valued. Can we somehow emulate the experience that a human has when looking at a painting? Can we emulate the act of creating new artistic work? This requires that we not only address issues of computer vision, pattern recognition and computer graphics (in the case of visual arts) but also semantic issues related to meaning and understanding. This talk is based on a case study that I carried out over the past year, culminating in an exhibition starting on 3 April 2021 at the BOZAR cultural center in Brussels. The subject was the world-renowned Flemish painter Luc Tuymans, and specifically one of his paintings, ‘Secrets’, which was last shown in his solo exhibition at the Palazzo Grassi in Venice during 2019-2020. Based on extensive discussions with the artist and with the help of some computer vision specialists, I made an AI model in the form of a transient narrative network that is fed with input from computer vision, language processing of text from the catalog, queries to semantic resources such as knowledge graphs, thesauri and dictionaries, as well as further inferences based on computational ontologies. The conclusion of this experiment is that using AI algorithms to investigate the computational nature of art interpretation is very illuminating, not only because it helps us to look more intently and to grasp more deeply the cultural and intrinsic meanings of an art work, but also because it shows us the remarkable richness of the human mind – making all claims that superhuman artificial intelligence will soon be reached sound hollow.
The experiment also throws light on the nature of creativity and what it will take for AI to become creative the way human painters are. This challenge is equally considered to be far in the future – if ever reachable.
Most businesses, research institutions, and organizations rely heavily on an intangible asset: their employees’ knowledge. Despite its importance, little is known about where knowledge is found and how it is transmitted. The Kouzan app aims to change this by making knowledge flows visible and permanently recording them. For more than two years, a prototype of the Kouzan app has been running at the Complexity Science Hub Vienna and the Sony Computer Science Labs Paris. In this talk, we present our analysis and findings on knowledge transfer in these institutions. Furthermore, we will announce the next version of the application, which has undergone extensive rework. We are introducing a new social experiment with the relaunch of the application: Can we model how knowledge and ideas are created and transferred in our organizations?
Most of us live in urban settings, and when we look at a map, the primary things we see are concrete and wood: the roads and the buildings that make a city a city. But what if we invert it, and focus on the dirt, the soil, the ground, like a negative image. Then the concrete and wood is the other stuff that just happens to be there. Welcome to life in the agroecosystem: where we conceive of our primary residence as being in an ecosystem, a landbase, that then also happens to have places of dwelling and the like. How can we reconceive of cities as agroecosystems, and what would it mean if we did?
In this talk I will describe several recent works for accurate non-rigid 3D shape matching and comparison. I will highlight several recent architectures, focusing especially on spectral methods, that are well adapted to computing dense correspondences across a variety of settings. My ultimate goal will be to show that these techniques are becoming remarkably robust and universally applicable and useful.
Over the last few years, advances in graph, kernel, and sparse convolutions have helped establish deep networks as the predominant methods for 3D point cloud analysis. In this talk, I will first present the very dynamic landscape of 3D deep learning, and introduce the superpoint graph approach for scaling memory-intensive algorithms to very large point clouds. Finally, I will present some of the latest developments on the subject, including our recent work on unsupervised and interpretable summarization of shape sets.
In this presentation, I will talk about plant growth from the provocative point of view of the physicist. The plant will be seen as a physical system, where all the biology is abusively reduced to miscellaneous active phenomena (known or not). I will focus on the particular aspect of the motions induced by plant growth. We call them morphogenetic motions because they are linked to the development and growth of the plant. The first part of the talk will be an introduction to plant growth and morphogenetic motions, together with some observations of the phenomena. In the second part we will investigate in more detail the development of leaves, whether simple or compound. We will show how these rich morphogenetic motions are strongly related to posture-regulation mechanisms.
Rick van de Zedde & Pieter de Visser, Wageningen University & Research.
At Wageningen University & Research, a digital twin (DT) will be operational from April 2021 onwards. The DT will digitally represent a tomato crop of individual, virtual plants in their local greenhouse environment, grown simultaneously with the real crop. The DT will feature real-time updating of plant parameters and environmental variables based on high-tech sensor equipment available in the Netherlands Plant Eco-phenotyping Centre (NPEC) facilities. In the DT, each tomato plant in the crop will be modelled in 3D, integrating a set of traits that correspond to model parameters. Thereby, the DT enables us to predict crop response (growth, development and production) to the greenhouse and management conditions that affect production efficiency: light intensity and quality, CO2 dosing, nutrient availability and leaf pruning. Thus, the DT can support greenhouse management in real time.
This will be the first ever 3D simulation model of individual plants growing in greenhouses that is updated by sensor data and delivers updated predictions as the real plants grow. In that sense it is a true digital twin, which does not yet exist for plants. This is an important extension of the plant and greenhouse modelling that exists today. The DT also allows for hypothesis testing and in silico experiments. As a scientific aim, we will develop and study novel methods, e.g. deep learning, for processing sensor data to transform the raw data into plant traits. Moreover, novel methods for Bayesian inference of the state parameters of the plant and greenhouse models will be addressed, allowing efficient model updating and optimizing the accuracy of the model predictions.
Scientific issues that will be addressed include processing the high-dimensional sensor data, further refinement of the plant and greenhouse model, estimating the model parameters and using them to make decisions about control. Furthermore, with our systematic and process-based approach we can analyse the whole system and investigate possible bottlenecks in sensing, modelling, and control, and in what way or to what extent they hinder optimal performance, for example by simulating how small errors in each of the modules propagate through the system and influence its performance. Subsequent investigation can then be targeted efficiently to find remedies, e.g. improved sensing equipment or algorithms, improved model accuracy, or a different type of controller.
The constructed DT can be used to predict growth and development of tomato plants in response to real-time environmental factors and management decisions. This allows for more informed decisions regarding the agronomic management in commercial practice, as well as the selection pressure applied by breeders to specific traits.
More info: https://www.npec.nl/news/wur-is-working-on-digital-twins-for-tomatoes-food-and-farming/
In this talk, I will present the inner workings of the Piano Inpainting Application (PIA), an A.I.-powered Ableton Live plugin meant to assist music creators during their compositional process. We will see how PIA makes the composition of piano pieces both easy and playful while creating novel ways to approach music composition for amateurs and professionals alike. We hope this presentation will shed new light on the benefits of A.I.-assisted music composition, from further democratizing computer music to boosting artists’ creativity.
PIA is freely available at https://ghadjeres.github.io/piano-inpainting-application/.
Presentation video: https://youtu.be/HSn2NGAR-ro
In this talk, I present a recently proposed data-driven framework for assessing the a-priori epidemic risk of a geographical area and for identifying high-risk areas within a country. A risk index is introduced and evaluated as a function of three components: the hazard of the disease, the exposure of the area and the vulnerability of its inhabitants. As an application, we discuss the case of the COVID-19 outbreak in Italy. We characterize each of the twenty Italian regions using available historical data on air pollution, human mobility, winter temperature, housing concentration, health care density, and population size and age. We find that the epidemic risk is higher in some of the Northern regions than in Central and Southern Italy. The corresponding risk index correlates with the available official data on the number of infected individuals, patients in intensive care and deceased patients, and can help explain why regions such as Lombardia, Emilia-Romagna, Piemonte and Veneto have suffered much more than the rest of the country. Although the COVID-19 outbreak started in both Northern (Lombardia) and Central Italy (Lazio) at almost the same time, when the first cases were officially certified at the beginning of 2020, the disease spread faster and with heavier consequences in regions with higher epidemic risk. Our framework can be extended and tested on other epidemic data, such as those on seasonal flu, and applied to other countries. I will also briefly address a policy model connected with our methodology, which might help policy-makers make informed decisions.
A. Pluchino, A. E. Biondo, N. Giuffrida, G. Inturri, V. Latora, R. Le Moli, A. Rapisarda, G. Russo, C. Zappalà,
“A Novel Methodology for Epidemic Risk Assessment: the Case of COVID-19 Outbreak in Italy”, Scientific Reports 11, 5304 (2021)
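As a toy illustration of how the three components of the index can be combined, the sketch below builds a composite risk score for a few fictitious regions. The multiplicative form, the normalisation and all numbers are illustrative assumptions, not the exact formula of the paper.

```python
import numpy as np

# Hypothetical composite epidemic risk index: risk combines hazard,
# exposure and vulnerability. All values below are made up for illustration.
regions = ["A", "B", "C"]
hazard = np.array([0.8, 0.5, 0.3])         # e.g. pollution, winter temperature
exposure = np.array([0.9, 0.4, 0.6])       # e.g. mobility, housing density
vulnerability = np.array([0.6, 0.7, 0.2])  # e.g. population age, health care

risk = hazard * exposure * vulnerability
risk /= risk.max()  # rescale so the riskiest region scores 1

for name, r in sorted(zip(regions, risk), key=lambda t: -t[1]):
    print(f"region {name}: risk index {r:.2f}")
```

The ranking, not the absolute value, is what such an index is used for: here region A would be flagged as the highest-risk area.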
Years of preparedness and scientific progress have been put to a harsh test by the most difficult health crisis of the last 100 years. Following the path of our experience in this first year, I will discuss pitfalls, challenges, and opportunities to improve modeling for outbreak response.
Deep neural networks have shown impressive performance enhancements on plant phenotyping tasks, such as organ detection, disease identification, etc. However, these achievements are often difficult to translate into real-world applications because they usually require extensive manually labeled training datasets. Preparing such training datasets for plant phenotyping is labor-intensive and time-consuming, due to the high density of objects of a single category within each picture, and requires a substantial amount of domain knowledge. Moreover, most training datasets are prepared for a specific task in a particular domain, so they must be relabeled when the task or domain changes. In this presentation, I will introduce several approaches that we have developed, or plan to develop, to overcome the problem of limited labeled training datasets.
What if globally designed products could radically change how we work, produce, and consume? Through practical examples from communities of practice, we’ll go through glimpses of what a sustainable economy based on the commons could look like and what its benefits might be.
No other species on Earth can play a symphony, form systems of government, and debate scientific issues with one another quite like humans can. Social intelligence — including our capacities to understand each other’s minds and actions, and to collaborate and compete with each other — has played a key role in our progress as intelligent beings. Despite recent interest in building socially intelligent agents, there has been little work on benchmarks that systematically and rigorously evaluate the social intelligence of machine agents. In this talk, I will introduce two social AI challenges that we recently developed, each of which proposes a set of cognitively inspired tasks evaluating different aspects of machine social intelligence.
Annette Werth (PhD), former researcher at Sony CSL Paris, and Giulio Prevedello (PhD), researcher at Sony CSL Paris:
Microgrids are a key technology to grant universal access to affordable and clean energy. In this seminar we provide an overview of the technology and business models and then dive deep into a case study on a decentralized off-grid microgrid in the Philippines. We will present a data-driven approach to evaluate implemented solutions and energy sharing benefits from interconnected Solar Home Systems.
AI has long envisioned using computers to automatically write stories. While a lot of work focuses on systems that perform this task autonomously, another objective is for applications to collaborate with human authors in augmenting their story writing abilities. In this talk, I’ll discuss the foundations and challenges of automated assistance for story writing, focusing in particular on the issue of supporting creativity. Motivated by observations from my own research on this endeavor, I’ll propose that it requires a different paradigm from that of other generation tasks, one that centers on the goals of the human author.
In a rapidly changing world, severely affected by extreme weather events, epidemic outbreaks, economic shocks and conflicts, it is of fundamental importance to understand where the most vulnerable people are, how many they are, and to identify what it is that makes them more vulnerable than others to these threats. During the last decade, research has shown that data such as digital traces, phone metadata and satellite imagery carry relevant information beyond their original purpose and can be used as a proxy to measure socio-economic characteristics and detect vulnerabilities when traditional data is not available. Following an overview of these studies, the talk will deep dive into the UN World Food Programme’s original work on predicting food security. We will then conclude by discussing challenges and limitations, but also opportunities, that come with these approaches.
Assessing the effectiveness of non-pharmaceutical interventions (NPIs) in mitigating the spread of SARS-CoV-2 is critical to inform future preparedness response plans. We propose a modeling approach that combines four computational techniques, merging statistical, inference, and artificial intelligence tools, to evaluate the impact of NPIs on the spread of the COVID-19 pandemic. Our results indicate that a suitable combination of NPIs is necessary to curb the spread of the virus, and that less disruptive and costly NPIs can be as effective as more intrusive, drastic ones (for example, a national lockdown). Using country-specific “what-if” scenarios, we assess how the effectiveness of NPIs depends on the local context, such as the timing of their adoption, opening the way to forecasting the effectiveness of future interventions.
Manually selecting viewpoints, or using commonly available flight planners such as a circular path, for large-scale 3D reconstruction with drones often results in incomplete 3D models. Recent works have relied on hand-engineered heuristics such as information gain to select the next best views. In this work, we present a learning-based algorithm called Scan-RL that learns a Next-Best View (NBV) policy. To train and evaluate the agent, we created Houses3K, a dataset of 3D house models. Our experiments show that using Scan-RL, the agent can scan houses in fewer steps and over a shorter distance than our circular-path baseline. Experimental results also demonstrate that a single NBV policy can be used to scan multiple houses, including ones not seen during training.
Sound synthesis, the generation of sound through analog electrical current or digital software, has been a vibrant field since the second half of the 20th century. New synthesizers have expanded the scope of sounds that can be generated to previously unexplored reaches. But with this expansion also comes complexity: modern synthesizers, sometimes with over 100 continuous-valued parameters, can be daunting to program! Our work is an attempt at bridging the gap in sound synthesis between flexibility and ease of use. Thanks to recent advances in neural-network-based techniques, we provide users with the ability to edit and transform sounds through simple operations inspired by image processing software à la Paint. Namely, we frame the sound synthesis process as an interactive inpainting task, in which portions of a sound are selectively transformed by the user. At each of these steps, our models are tasked with proposing new sonic content for the selected zones by analyzing their surrounding context. In this talk, I will present both a novel machine learning architecture for performing inpainting on spectrograms, developed with Gaëtan Hadjeres (Sony CSL Paris), and a new, open-source interactive web interface, NOTONO, that allows musicians to readily use these models in creative settings.
As an electronic music producer, I have been producing music with CSL A.I. Music technology throughout the year 2020. In my talk, I will present the results of this collaboration, and describe the impact of A.I. on the music production workflow.
In several disciplines, cascades of failures or distress can propagate across a large networked system, possibly leading to the collapse of a significant number of its components. In order to correctly estimate the risk of such cascades, a detailed knowledge of the structure of the entire network is in principle required. However, due to data limitedness or confidentiality, the network may be largely unobservable, e.g. only aggregate node-specific information may be available. Is it possible to statistically reconstruct the hidden structure of a network and reliably infer its large-scale properties? In this talk, I will present a maximum-entropy approach to the problem of network reconstruction from local information. I will illustrate the power of the method when applied to the inference of network properties and systemic risk in various economic and financial systems. Then, as a counter-example, I will show how in certain circumstances the real network may deviate significantly from its reconstructed counterpart, thereby highlighting anomalous structural patterns. If such anomalies increase systematically over time, they may serve as early-warning signals of approaching critical events.
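As a minimal sketch of the maximum-entropy approach, the snippet below reconstructs a small network ensemble from degrees alone, fitting one hidden variable per node so that expected degrees match the observations. The functional form is the standard configuration-model ansatz; the data, sizes and the simple fixed-point scheme are illustrative assumptions, not the method of the cited works.

```python
import numpy as np

# Toy maximum-entropy reconstruction: only node degrees are observed.
# Generate "observed" degrees from a known probabilistic network, so the
# fitting problem is guaranteed to be feasible.
rng = np.random.default_rng(0)
x_true = rng.uniform(0.3, 1.5, size=6)
xx = np.outer(x_true, x_true)
p_true = xx / (1 + xx)
np.fill_diagonal(p_true, 0.0)
degrees = p_true.sum(axis=1)  # the only information we keep

# Fit hidden variables x_i so that expected degrees match observations:
# p_ij = x_i x_j / (1 + x_i x_j),  <k_i> = sum_{j != i} p_ij
x = np.ones(6)
for _ in range(5000):
    xx = np.outer(x, x)
    p = xx / (1 + xx)
    np.fill_diagonal(p, 0.0)
    expected = p.sum(axis=1)
    x *= (degrees / expected) ** 0.5  # damped multiplicative update

print("max degree error:", np.abs(p.sum(axis=1) - degrees).max())
```

Once the hidden variables are fitted, the matrix `p` defines a whole ensemble of plausible networks from which large-scale properties (and systemic-risk estimates) can be computed.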
There is a contradiction at the heart of our current understanding of mobility patterns. On one hand, a highly influential stream of literature driven by analyses of massive empirical datasets finds that human movements show no evidence of characteristic spatial scales. There, human mobility is described as scale-free. On the other hand, in geography, the concept of scale, referring to meaningful levels of description from individual buildings through neighborhoods, cities, regions, and countries, is central. Here, we resolve this apparent paradox by showing that human mobility does indeed contain meaningful scales, corresponding to spatial containers restricting mobility behavior. The scale-free results arise from aggregating displacements across containers. We present a simple model, which given a person’s trajectory, infers their neighborhoods, cities and so on. We find that the containers characterizing the trajectories of more than 700,000 individuals worldwide do indeed have typical sizes. We show that our description improves on the state-of-the-art in modeling, and allows us to better understand effects due to socio-demographic differences and the built environment.
Big data uses linguistic information to understand almost every corner of human existence: what people do, what they want, how they feel, or what they think about. In this talk, I show how behavioral experiments can be a crucial complement to big data if we want to understand how people use language: How can we leverage effects of the current pandemic to inform our understanding of how language is stored in the brain? What do people actually imagine when they read about events? And how do you know that someone is stressed, based on how they write?
AI artwork sells for $432,500 — nearly 45 times its high estimate — as Christie’s becomes the first auction house to offer a work of art created by an algorithm. Is Artificial Intelligence set to become art’s next medium? This seminar gives insights into the exploration of the interface between art and Artificial Intelligence and the different ways machine learning algorithms can catalyze natural human creativity.
In this talk, I will explain the mechanisms of transformation and invariance learning for symbolic music and audio, and I will describe different models that are based on this principle. Transformation Learning (TL) provides us with a novel way of musical representation learning. To that end, we do not aim to learn the musical patterns themselves, but some “rules” defining how a given pattern can be transformed into another pattern.
TL was initially proposed for image processing and had not yet been applied to music. In this talk, I summarize our experiments in TL for music. The models used throughout our work are based on Gated Autoencoders (GAE) which learn orthogonal transformations between data pairs. We show that a GAE can learn chromatic transposition, tempo-change, and the retrograde movement in music, but also more complex musical transformations, like diatonic transposition.
Transformation Learning (TL) provides us with a different view on music data, and yields features complementary to other music descriptors (e.g., such as obtained by autoencoder learning or hand-crafted features). There are different possible research directions regarding TL in music. They involve using the transformation features themselves, using transformation-invariant features computed from TL models, and using TL models for music generation.
I will particularly focus on DrumNet, a convolutional variant of a Gated Autoencoder, and will show how TL leads to time and tempo-invariant representations of rhythm. Importantly, learning transformations and learning invariances are two sides of the same coin (as specific invariances are defined with respect to specific transformations). I will introduce the Complex Autoencoder, a model derived from a Gated Autoencoder, which learns both a transformation-invariant, and a transformation-variant feature space. Using transposition- and time-shift invariant features, we obtain improved performance for audio alignment tasks.
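For concreteness, here is a toy sketch of the multiplicative gating at the heart of a Gated Autoencoder: mapping units respond to the relation between an input pair (x, y) rather than to the patterns themselves. The dimensions and the random (untrained) weights below are illustrative assumptions; in the models discussed in the talk these weights are learned from data pairs.

```python
import numpy as np

# Minimal Gated Autoencoder forward pass (untrained, illustrative weights).
rng = np.random.default_rng(1)
n_in, n_fac, n_map = 12, 8, 4
U = rng.standard_normal((n_fac, n_in))   # factor filters on input x
V = rng.standard_normal((n_fac, n_in))   # factor filters on input y
W = rng.standard_normal((n_map, n_fac))  # mapping layer

def mapping(x, y):
    # Multiplicative gating: factor responses of x and y are multiplied,
    # so the mapping code depends on the *relation* between x and y.
    return np.tanh(W @ ((U @ x) * (V @ y)))

def apply_transform(x, m):
    # Given x and a mapping code m, predict the transformed pattern y.
    return V.T @ ((U @ x) * (W.T @ m))

x = rng.standard_normal(n_in)
y = np.roll(x, 2)          # "transposition" as a circular shift of x
m = mapping(x, y)          # code for the shift-by-2 transformation

x2 = rng.standard_normal(n_in)
y2_pred = apply_transform(x2, mapping(x2, np.roll(x2, 2)))
print("mapping code:", m.round(2))
```

After training, the same mapping code extracted from one pair can be applied to a new input, which is what makes the learned transformations (e.g. transposition or tempo change) transferable.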
In this seminar, I will talk about the 5D’s fuelling our energy future: Decentralization, Decarbonization, Disruption, Democratization and Digitization. In line with these beliefs, together with my colleagues at SOLshare, we installed the world’s first cyber-physical P2P solar sharing grid in a remote area of Bangladesh.
Manufacturing at small scales is a challenge that in most cases requires a human operator in the loop. However, the operator's perception of the task is seriously impaired by the poor quality of the available feedback. It can be improved considerably by providing additional sensory modalities: the haptic sense, notably, is a key element of human dexterity. In this talk, I will present some approaches to implementing this sensory coupling between the microscale and the operator. These techniques lend themselves naturally to coupling manual control with automation, as in many successful applications of robotic technologies such as surgical robotics and robotic space exploration.
In this seminar, I will talk about my artistic research and how my practice combines art and science. I work with a range of biological, digital and traditional media, including live organisms. I will explain how my research is materialised in techno-organic objects inspired by factual/fictional stories; in artefacts that combine digital fabrication and craftsmanship; in installations that reflect both the problem and the (possible) solution; in multispecies collaborations; and in polymorphic forms and models created by eco-data.
In this talk, I will discuss some of my recent works connected with social experiments on collective creativity and learning about urban sustainability. In the first part, I will show the results of an experiment carried out some years ago during the Kreyon Days open event at PalaExpo in Rome. During this event, visitors could take part in an open-ended experiment in which they were asked to collectively build LEGO artworks. RFID sensors given to the participants allowed for the reconstruction of the dynamical social network and the identification of the teams contributing to a specific artwork. These data allowed us to identify some of the characteristics of the most efficient building teams. For those interested, the work can be found here: https://www.pnas.org/content/116/44/22088
In the second part, I will discuss an ongoing experiment taking place during the “AI: More than Human” exhibition in London and Groningen. This experiment, dubbed “Kreyon City,” aims at understanding how individuals relate to complex sustainability problems. I will discuss the experience and some preliminary results from the analysis of the first tranche of collected data.
In this seminar, I will talk about the relationship between art and technology, which is today a central theme in the contemporary art debate. A brief terminological analysis shows how technology-based terms such as “artificial intelligence”, posthumanism, machine learning, blockchain, etc. are increasingly present and pervasive. This is also reflected in the growing interest in the arts among companies in the technology sector: Microsoft, Google, Facebook and Adobe, among others, are creating artists’ residencies that allow artists to work inside the companies. A new market is growing that cultural institutions should engage with. The practice of artists is never just a matter of mere experimentation with technology. It is important to mix arts and innovation, artists and companies, ethically orienting the deterministic idea of technological development by reflecting on the experimentation of our contemporary era.
The seminar gives an overview of the topics presented at the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, with a focus on language processing and a strong emphasis on deep learning.
I will present a recently released neural-network-based audio processing toolbox called nnAudio. This toolbox leverages 1D convolutional neural networks for real-time spectrogram generation (time-domain to frequency-domain conversion). This enables us to generate spectrograms on-the-fly without the need to store any of the spectrograms on the disk when training neural networks for audio related tasks. In this talk, I will discuss one of the possible applications of nnAudio, namely, the exploration of suitable input representations for automatic music transcription (AMT).
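The core idea behind computing spectrograms inside a network can be sketched as follows: a magnitude spectrogram is the waveform filtered by windowed Fourier-basis kernels, which is why it can be expressed as a 1D convolution layer. This numpy version mirrors those kernels (nnAudio itself implements them as PyTorch Conv1d layers); all sizes below are illustrative.

```python
import numpy as np

# Build windowed Fourier-basis kernels, one pair (cos, sin) per frequency bin.
n_fft, hop = 64, 16
window = np.hanning(n_fft)
k = np.arange(n_fft // 2 + 1)[:, None]    # frequency bin indices
n = np.arange(n_fft)[None, :]             # sample indices within a frame
cos_kernels = window * np.cos(2 * np.pi * k * n / n_fft)   # real part
sin_kernels = -window * np.sin(2 * np.pi * k * n / n_fft)  # imaginary part

def conv_spectrogram(signal):
    # Convolution with stride = hop, written as frame-wise dot products.
    frames = np.lib.stride_tricks.sliding_window_view(signal, n_fft)[::hop]
    real = frames @ cos_kernels.T
    imag = frames @ sin_kernels.T
    return np.sqrt(real**2 + imag**2).T   # (freq_bins, time_frames)

t = np.arange(1024)
sig = np.sin(2 * np.pi * t * 8 / n_fft)   # pure tone at DFT bin 8
spec = conv_spectrogram(sig)
print("spectrogram shape:", spec.shape)
```

Because the kernels are ordinary convolution weights, the whole front end is differentiable and can even be fine-tuned jointly with the downstream transcription network.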
In this talk, I will share our empirical results on learning disentangled representations using Gaussian mixture variational autoencoders (GMVAEs) for music instrument sounds. Specifically, we achieve disentanglement of note timbre and pitch, respectively, represented as latent timbre and pitch variables, by learning separate neural network encoders. The distributions of the two latent variables are regularized by distinct Gaussian mixture distributions. A neural network decoder is used to synthesize sounds with the desired timbre and pitch, which takes a concatenation of the timbre and pitch variables as input. The performance of the disentanglement network is evaluated by both qualitative and quantitative approaches, which further demonstrate the model’s applicability in both controllable sound synthesis and many-to-many timbre transfer.
Domestic robots act as informants; conversational agents insult their interlocutors. Worse still, computer systems take part in human conflicts and sometimes even provoke them. On March 18, 2018, an autonomous Uber vehicle killed a woman who was crossing the street in an Arizona city: the first pedestrian death caused by an algorithm. Who is responsible? The answer to this question is one of the most urgent challenges in our relationship to digital technologies. But it is not about knowing how to make artificial intelligence benevolent. It is a question of making sure that it does not replace humans as moral agents. Only recourse to chance, built in from the machine's very conception, can relieve it of the responsibility we would otherwise make it carry.
The seminar will explore Behavioural Objects. The focus will be on studying and experimenting with non-figurative robotic artistic objects that show behavioural traits.
The seminar will be an introduction to beer making with a focus on identifying how varying the beer making process/ingredients results in the beers you know and love. There will be an accompanying dégustation for empirical investigation.
Being a major form of online innovation, social media platforms emerge and diffuse across large populations, while many of them lose the competition for users and sometimes eventually disappear. In this paper, rich empirical patterns of spatial diffusion and churn are illustrated at an unprecedented scale and over the full life-cycle of a social media app. For the first time in the literature, we evaluate the spatial accuracy of a theory-driven, network-based innovation diffusion model against empirical data. To improve the predictive power of contagion models at the local level, we combine the most recent empirical approaches of city science and geographical social networks in laying out directions for model correction.
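A minimal network-contagion model of the kind such diffusion studies build on can be sketched in a few lines; the graph, seed set and adoption probability below are toy assumptions, not the calibrated model of the paper.

```python
import numpy as np

# Toy innovation-diffusion model: adoption of an app spreads along social
# ties with probability p per adopting neighbour per step.
rng = np.random.default_rng(3)
n, p_adopt = 200, 0.1
adj = rng.random((n, n)) < 6 / n   # Erdos-Renyi ties, mean degree ~6
adj = np.triu(adj, 1)
adj = adj | adj.T                  # symmetric friendship matrix

adopted = np.zeros(n, dtype=bool)
adopted[:5] = True                 # seed users
for step in range(30):
    exposed = (adj & adopted[None, :]).sum(axis=1)  # adopting neighbours
    newly = (rng.random(n) < 1 - (1 - p_adopt) ** exposed) & ~adopted
    adopted |= newly

print(f"adopters after 30 steps: {adopted.sum()}/{n}")
```

Evaluating the *spatial* accuracy of such a model, as the talk discusses, additionally requires embedding the nodes geographically and comparing where, not just how many, adopters appear.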
Metre is a music-theoretic notion instructing music performers how to count beats. Music cognition research suggests that all listeners, musically trained or not, feel or perceive some correlate of metre, and that this percept forms a basic part of music experience. In fact, the experience of beat and metre appears so universal among humans, that the topic has recently started gaining attention from researchers in other fields such as biology. In search of potential evolutionary origins of musicality, these researchers became interested in which nonhuman animals share our proclivity for pulse. But since long before this widespread attention, music cognition researchers have attempted to understand beat and metre perception in empirical and modelling studies. Resulting models propose hypothetical mechanisms by which listening to a rhythm may result in the sensation of beat and metre. Little modelling work, however, has addressed metre perception’s susceptibility to perceptual learning and shaping by cultural exposure (enculturation). In this talk, I discuss some classic models of metre perception and the perspectives on cognition that they represent, before discussing some of my own work on modelling enculturation in metre perception.
In the last 10 years, the availability of time-resolved data in many fields has led to the extension of the field of networks to the study of temporal networks. In a static network, nodes represent elements of the system and links between nodes encode the fact that an interaction exists between the corresponding elements. Links are then fixed and no information on the timing of these interactions is available. In a temporal network instead, links are replaced by temporal series of interactions, each with its starting time and duration.
Taking temporality into account has important consequences for analysis and modelling. Finding relevant structures in temporal networks is, in particular, a challenging task, and I will present two recently developed methods: the extraction of a backbone of significant ties on the one hand, and the temporal core decomposition on the other, which allows us to identify dense structures together with their temporal span.
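The shift from static links to timed interactions described above can be sketched with a toy data structure (all names and numbers are illustrative):

```python
# A temporal network replaces the static edge set with time-stamped
# interactions; aggregating over time recovers the static network while
# discarding all timing information.
static_edges = {("a", "b"), ("b", "c")}

temporal_contacts = [
    # (node_i, node_j, start_time, duration)
    ("a", "b", 0, 20),
    ("a", "b", 60, 5),
    ("b", "c", 45, 30),
]

aggregated = {tuple(sorted(c[:2])) for c in temporal_contacts}
assert aggregated == static_edges
print("static projection:", aggregated)
```

The backbone and temporal-core methods of the talk operate on the contact list itself, exploiting exactly the timing information that the static projection throws away.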
The Audio Escape Room is an interactive installation and the first audio escape game ever. It was created as a student project by Amaury Delort at the Université Jean Monnet in Saint-Étienne, France. Blindfolded, the player moves around a virtual room and must interact with the right sound in the right place to find a way out. This experience is an invitation to rediscover the power of hearing; since no visuals are involved at all, blind people can fully experience the game as well.
Incorporating novelties into AI systems is challenging: a truly efficient online training algorithm for deep learning systems has still not been introduced. We propose a general approach to training on different temporal sequences based on Markov chains, exploiting the ability of neural networks to share the same weight-parameter region among different temporal data. I will show some results concerning training meta-parameters and the relative entropy between sequences.
In this talk, I will describe some of our projects on HCI using advanced computer vision and dynamic projection mapping, including an interactive display on water, image stabilization for a camera-embedded ball, and interactive spherical displays with 360° omnidirectional cameras. Then I will talk about our recent projects on skill transfer in sports and music.
The Ising model is a graphical model whose parameters can be tuned in order to describe stationary distributions of binary variables. In many practical problems in different domains – e.g. physics, biology, neuroscience, finance, sociology – the topology of the graph and the values of the couplings are unknown and they need to be reconstructed from the data. The inverse Ising problem aims to find the parameters of the model that best fit the data. We propose a new algorithm to learn the network of the interactions of a pairwise Ising model, based on the pseudo-likelihood method. Our present implementation is particularly suitable to address the case of sparse underlying topologies and it is based on a careful search of the most important parameters in their high dimensional space.
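A toy version of the pseudo-likelihood idea can be sketched as follows: each spin is regressed on all the others, since its conditional distribution is logistic in the local field. The data here are Gibbs-sampled from a known 3-spin model (all parameters illustrative, and without the sparse-search machinery of the talk) so the recovered couplings can be checked against the truth.

```python
import numpy as np

# Ground truth: only spins 0 and 1 are coupled.
rng = np.random.default_rng(0)
J_true = np.array([[0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0],
                   [0.0, 0.0, 0.0]])

# Gibbs-sample configurations s in {-1, +1}^3 from the Ising model.
s = rng.choice([-1, 1], size=3)
samples = []
for t in range(20000):
    i = t % 3
    h = J_true[i] @ s                       # local field on spin i
    s[i] = 1 if rng.random() < 1 / (1 + np.exp(-2 * h)) else -1
    if t > 1000:                            # discard burn-in
        samples.append(s.copy())
S = np.array(samples, dtype=float)

# Maximise the pseudo-likelihood of spin 0 by gradient ascent on J_0j:
# P(s_0 | s_rest) is logistic in h = sum_j J_0j s_j.
J_est = np.zeros(3)
for _ in range(200):
    h = S[:, 1] * J_est[1] + S[:, 2] * J_est[2]
    grad = np.mean((S[:, 0] - np.tanh(h))[:, None] * S, axis=0)
    grad[0] = 0.0                           # no self-coupling
    J_est += 0.5 * grad

print("estimated couplings of spin 0:", J_est.round(2))
```

The pseudo-likelihood objective is concave, so this per-spin regression reliably recovers the strong 0-1 coupling and drives the absent 0-2 coupling toward zero.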
This presentation aims at introducing myself and my previous work to the lab. After a short introduction about Brussels and Belgian beers, I will describe the academic work that I carried out during my PhD at IRCAM (Paris), on the modelling and control of virtual violins based on physical modelling, and during my post-doctorate at IPEM (Ghent University, Belgium), on the monitoring of music performance. Then, I will introduce my subsequent work as a freelancer, developing personal projects (FingerFiddle, a virtual instrument for mobile devices) and working for clients (e.g. the Talenschool series of apps for the baroque orchestra Les Talens Lyriques). Finally, I will present my activity at Sony CSL, within the music team, and some personal thoughts about the relation between AI and music production.
Construction grammar grew out of the need to model the whole of language instead of distinguishing core linguistic expressions from peripheral ones and has since then established itself as the grammatical embodiment of cognitive-functional linguistics. Its central claim that all linguistic knowledge can be represented as form-meaning mappings — called constructions — has been embraced in all areas of linguistics.
As is often the case, however, it takes time before the potential of an innovation is fully explored and understood. Early movies, for example, strongly mimicked theatre and used long and static shots before film makers developed their own cinematic “grammar”. A similar process happens in science, and while construction grammar is already too mature to be directly compared to early cinema, the formal and computational properties of its most important data structure are not yet completely worked out. As a result, construction grammar has become an umbrella term for all linguistic studies that roughly agree on what Bill Croft dubbed “vanilla” construction grammar, but more precision is needed in order to prevent a babelesque confusion from installing itself in the field and thereby impeding much-needed breakthroughs.
In this presentation, I will try to offer a more precise perspective on what constructions are and what they can do. More specifically, I will look at the representational and algorithmic properties of constructions. The goal of the presentation is therefore not to favor one or the other analysis, but simply to elicit more clarity about which analyses are possible and which criticisms on constructional analyses are valid concerns and which are not. In order to substantiate my claims, all analyses are accompanied by a concrete computational implementation in Fluid Construction Grammar, an open-source computational platform for exploring issues in constructional language processing and learning.
What are the elements that make a communication strategy useful? Communicating is increasingly important in an ever-connected world, both for sharing and receiving information. Nevertheless, the incredible amount of exchanged data calls for targeted communication measures that can attract the receivers’ attention while passing the message effectively. Communicating has a fundamental role in every aspect of relational life. During the seminar, I introduce the objectives of the new communication strategy devised for Sony CSL Paris, with specific reference to social media strategy and website contents; the procedure for publishing articles; and project drafting. The seminar also includes a section dedicated to how to write a good project for applying to calls for proposals, besides offering insights into possible trajectories for fundraising.
What does it take for a robot to substitute for a farmer? I present examples we implemented that enable the robot to observe, interpret, and intervene in the field. I also present state-of-the-art deep learning architectures for semantic segmentation and how we use them on the robot.
Modern synthesizers are getting increasingly powerful and now provide an overwhelming number of parameters with which to carve a sound spectrum. This increases creative freedom but can also complicate the sound design process. In parallel, recent generative learning models have been developed for audio synthesis.
Here, we aim at providing intuitive control over sound synthesis with deep learning models, through synthesis by learning. Only a limited number of approaches have been proposed to deal with this new type of synthesis, which allows learning a synthesizer directly from audio sample examples. One of the most important proposals relies on the framework of variational autoencoders, which allows generating sounds from a parameter latent space, by simultaneously learning inference and generation networks from existing data.
In this work, we develop generative models for audio synthesis that are able to handle complex temporal information, allowing them to generate a wide variety of sounds. Our model is based on a combination of variational autoencoders and convolutional neural networks. We also collected and labeled a dataset representing a variety of percussive sounds to train the model.
In a world with an ever-increasing demand for food, plant phenotyping in real-world conditions is key to understanding the influence of the environment on plant growth. Computer vision methods will help evaluate traits with more precision and efficiency. I present a method for the 3D reconstruction of plants in a lab setting and explore some of the difficulties to overcome in transposing it to the field. I present a specific case where the method can be applied, namely the measurement of angles between successive organs in Arabidopsis thaliana.
We present the results of our work on modelling the evolution of social networks. We start from simple data-driven models and gradually converge to a more general model that exploits the theory of the adjacent possible to describe how people engage in social interactions.
I present the Variation Network (VarNet), a generative model providing means to manipulate the high-level attributes of a given input. The originality of our approach is that VarNet is not only capable of handling pre-defined attributes but can also learn the relevant attributes of the dataset by itself. These two settings can easily be combined, which makes VarNet applicable to a wide variety of tasks. Further, VarNet has a sound probabilistic interpretation, which grants us a novel way to navigate the latent spaces as well as means to control how the attributes are learned. We demonstrate experimentally that the model is capable of performing interesting input manipulations and that the learned attributes are relevant and interpretable.
The DeepBach model provides a novel way to compose Bach chorales in an interactive manner. In this seminar, we discuss how to extend this model so that it handles other music genres such as traditional folk tunes or jazz songs.
Theoretical models of critical mass have shown how minority groups can initiate social change dynamics in the emergence of new social conventions. Here, we study an artificial system of social conventions in which human subjects interact to establish a new coordination equilibrium. The findings provide a direct empirical demonstration of the existence of a tipping point in the dynamics of changing social conventions.
When minority groups reached the critical mass (that is, the critical group size for initiating social change), they were consistently able to overturn the established behavior. The size of the required critical mass is expected to vary based on theoretically identifiable features of a social setting. Our results show that the theoretically predicted dynamics of critical mass do in fact emerge as expected within an empirical system of social coordination.
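The tipping-point dynamics described above can be caricatured in a few lines. The sketch below is a deliberately simplified voter model with a committed minority, not the coordination experiment itself: non-committed agents imitate randomly chosen peers, while committed agents never switch. All parameter values are illustrative:

```python
import numpy as np

def committed_minority_flip(n=100, committed=10, steps=300_000, seed=1):
    """Well-mixed voter model with a committed minority.

    State 0 = established convention, 1 = the minority's alternative.
    The first `committed` agents never change their state.
    """
    rng = np.random.default_rng(seed)
    state = np.zeros(n, dtype=int)
    state[:committed] = 1                  # committed agents hold state 1
    for _ in range(steps):
        i = rng.integers(committed, n)     # pick a non-committed updater
        j = rng.integers(0, n)             # it copies a random other agent
        state[i] = state[j]
    return state.mean()                    # fraction holding the new norm

frac = committed_minority_flip()
print(frac)
```

Because the committed agents are the only agents that never yield, the sole absorbing state is universal adoption of the new convention; varying `committed` gives a feel for how convergence depends on minority size.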
Is it possible to capture the socio-economic footprint of human behavior in our cities or neighborhoods? Nowadays, almost every human activity, from the people we call and the places we visit to the things we eat and the products we buy, generates data. This data can be analyzed over long periods to paint a comprehensive portrait of human behavior within city boundaries. These geolocated digital traces, when combined with other information streams from the national census or Google APIs, can be used to extract information about the potential needs and routines in the collective behavior of different groups of citizens. We will analyze this data to understand the extent to which the urban activities of different population groups or communities are driven by both socio-economic differences and the structure of cities. This new quantitative approach will provide new insights for more inclusive policies to help future urban development.
In large musical catalogs, such as those of streaming companies, manual curation is costly and the amount of data is considerable, with tens of thousands of records delivered every week.
Automatic systems trained directly on audio data help streaming companies describe the recordings in their catalogs and create relations between them.
We will take a look at what the Deezer R&D team does in this domain using machine learning techniques, especially representation learning.
The outstanding supervised classification performance obtained by CNNs indicates that they can create invariants relevant for classification. We show that this can be achieved through the progressive incorporation of invariance, as well as via perfectly invertible architectures. Illustrations are given through Hybrid Scattering Networks, based on a geometric representation, and $i$-RevNets, a class of invertible CNNs. We make explicit several empirical properties, such as progressive linear separability, in order to shed light on the inner mechanisms implemented by CNNs.
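The invertibility claim can be made concrete with an additive coupling block, the building unit of $i$-RevNet-style architectures. The toy below (numpy, illustrative dimensions; the real networks use convolutional residual functions) shows that the block is exactly invertible regardless of what the residual function F computes:

```python
import numpy as np

rng = np.random.default_rng(0)

# A small nonlinear residual function; invertibility of the block does not
# depend on F itself (this toy F stands in for a convolutional sub-network).
W1 = rng.normal(size=(16, 16))
def F(x):
    return np.tanh(W1 @ x)

def forward(x1, x2):
    # Additive coupling: the block is bijective by construction.
    y1 = x2
    y2 = x1 + F(x2)
    return y1, y2

def inverse(y1, y2):
    x2 = y1
    x1 = y2 - F(y1)
    return x1, x2

x1, x2 = rng.standard_normal(16), rng.standard_normal(16)
y1, y2 = forward(x1, x2)
x1_rec, x2_rec = inverse(y1, y2)
print(np.allclose(x1, x1_rec), np.allclose(x2, x2_rec))
```

Since the inverse subtracts the very same F(x2) that the forward pass added, reconstruction is exact, which is what allows such networks to discard no information while still building useful representations.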
Music mixing is the process of combining multitrack recordings into a final product. Sony CSL is involved in music mixing through the AutoMix and DAWGen projects. Beyond making the music merely audible, what is the purpose of mixing? Citing many examples within five categories, we explore a variety of aspects music mixing can address.
LEGO bricks are among the most popular toys for children (and adults), and they are also tools capable of fostering individual creativity and problem-solving skills.
In recent years, several scientific works have exploited LEGO bricks for a wide variety of purposes, from measuring cognitive effects on problem-solving in the social sciences to representing molecular structures; in some cases, the bricks even became part of the experimental apparatus.
In this presentation, I will talk about my past work with LEGO bricks, starting from the first experiments on collective creativity during free-building events that took place in Rome. From these experiments, we began to develop a new interactive experience in which the "free building" task is replaced by the task of finding sustainable solutions to problems related to urban environments. This new experiment requires a realistic modeling framework for the dynamics of cities. I will conclude by presenting the current issues and research questions related to it.
Nowadays, working in R&I implies not only looking after the scientific value of our work but also keeping clearly in mind the effects of what we are doing on society in a broad perspective: from policymakers to enterprises, and on to citizens. Research and innovation should therefore be able to come out of the lab and build bridges with their surrounding context. In this regard, the challenge is to create economic, social, cultural and environmental impact, viewing these as a measurement of "change". In the seminar, we will share and discuss some tools, such as the logical framework matrix, and some concepts, such as Responsible Research and Innovation, that address this issue.
The workshop looks at the way A.I. is reshaping the music business, in terms of creativity, promotion and distribution.
About fifty years ago, linguistics played a central role in cognitive science and its insights were highly influential for developments in models and applications of natural language processing. Today, language is still seen as a major issue, but all recent breakthroughs in language studies – particularly in the fields of computational linguistics and artificial intelligence – have been achieved without influence from developments in linguistics. That is unfortunate, because the most powerful language technologies today are still incapable of understanding natural language, and they would greatly benefit from more linguistic sophistication.
In this presentation, I will show how "constructional approaches" to language can put linguistics back on the map of cognitive science and help it lay claim to the position of the science of natural language processing. More specifically, I will present our work on Fluid Construction Grammar, the world's most advanced computational platform for constructional language processing, which aims to achieve both deep semantic parsing and adequate production using the same linguistic inventories.
Possible issues with FFT:
• Choice of network architecture?
• 1D input data (1 input vector = 1 window) or 2D input data (1 input vector = several reshaped windows)?
• Recurrent or non-recurrent network? A non-recurrent network is possible in the 2D case.
• In the case of 2D input data, does the reshaping matter? (e.g., its consequences for a convolutional network)
Possible issues with CWT:
• Given the high dimensionality of the CWT, is it feasible to use it as input to a neural network?
• If it is feasible, what reshaping should be used?
• Can we imagine a recurrent NN architecture with different time scales [Alpay 2016] suited to CWT input?
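To make the 1D-vs-2D question above concrete, here is a minimal numpy sketch of both input shapings for FFT features. The signal, window length and hop size are illustrative placeholders, not values from the project:

```python
import numpy as np

def stft_frames(signal, win=256, hop=128):
    """Magnitude spectra of Hann-windowed frames (a minimal STFT)."""
    window = np.hanning(win)
    n_frames = 1 + (len(signal) - win) // hop
    frames = np.stack([signal[i * hop : i * hop + win] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))   # (n_frames, win//2 + 1)

sig = np.sin(2 * np.pi * 440 * np.arange(4096) / 16000)  # toy 440 Hz tone
spec = stft_frames(sig)

# 1D formulation: one input vector = one window's spectrum, suited to a
# recurrent network consuming frames one at a time.
x_1d = spec[0]

# 2D formulation: one input = several consecutive windows stacked, suited
# to a non-recurrent (e.g. convolutional) network.
context = 8
x_2d = spec[:context]            # shape (context, win//2 + 1)
x_2d_flat = x_2d.reshape(-1)     # or flattened, if the net expects a vector

print(x_1d.shape, x_2d.shape, x_2d_flat.shape)
```

Whether the 2D block is kept as an image-like array or flattened is exactly the reshaping question raised above, since a convolutional network can exploit the time-frequency layout while a dense layer cannot.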
Creativity and innovation are key elements in many different areas and disciplines, since they represent the primary motor for exploring new solutions in ever-changing and unpredictable environments. New biological traits and functions, new technological artefacts, new social, linguistic and cultural structures, and new meanings are very often triggered by mutated external conditions. Unfortunately, the detailed mechanisms through which humans, societies and nature express their creativity and innovate are largely unknown. The common intuition that one new thing often leads to another is captured, mathematically, by the notion of the adjacent possible, introduced by Stuart Kauffman. Originally introduced in the framework of biology, the adjacent possible metaphor has since expanded its scope to include all those things (ideas, linguistic structures, concepts, molecules, genomes, technological artefacts, etc.) that are one step away from what actually exists, and hence can arise from incremental modifications and recombination of existing material. In this talk I'll present a mathematical framework describing the expansion of the adjacent possible, whose predictions are borne out in several data sets drawn from social and technological systems. Finally, I'll discuss how games could represent an extraordinary framework to experimentally investigate the basic mechanisms at play whenever we learn, create and innovate. I'll present a few examples recently developed in the framework of the KREYON project (www.kreyon.net).
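One concrete model of this kind reported in the literature is the Polya urn with triggering, in which drawing a never-before-seen color unlocks brand-new colors, i.e. expands the adjacent possible. The sketch below is illustrative (the parameters, and possibly the exact model presented in the talk, may differ):

```python
import random

def urn_with_triggering(steps=5000, rho=4, nu=8, seed=0):
    """Polya urn with triggering: each draw adds `rho` copies of the drawn
    color (reinforcement); the first draw of a color adds `nu + 1` brand-new
    colors (expansion of the adjacent possible)."""
    rng = random.Random(seed)
    urn = list(range(nu + 1))       # initial colors in the urn
    next_color = nu + 1
    seen = set()
    distinct = []                   # distinct colors drawn so far, over time
    for _ in range(steps):
        c = rng.choice(urn)
        urn.extend([c] * rho)       # reinforcement of the drawn color
        if c not in seen:           # a novelty: trigger new adjacent colors
            seen.add(c)
            urn.extend(range(next_color, next_color + nu + 1))
            next_color += nu + 1
        distinct.append(len(seen))
    return distinct

D = urn_with_triggering()
print(D[-1])
```

Tracking `D` against time reproduces the sublinear or linear novelty-growth regimes (Heaps'-law-like behavior) depending on the balance between reinforcement `rho` and triggering `nu`.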