AI artwork sells for $432,500, nearly 45 times its high estimate, as Christie's becomes the first auction house to offer a work of art created by an algorithm. Is Artificial Intelligence set to become art's next medium? This seminar explores the interface between art and Artificial Intelligence and the different ways machine learning algorithms can catalyze natural human creativity.
The Obvious Collective is a group of friends, artists, and researchers driven by a shared sensibility regarding the questions raised by the rapid advance of Artificial Intelligence and Machine Learning. One of their goals is to explain and democratize these advances through their artworks. Their project began a year ago with the discovery of Generative Adversarial Networks (GANs), machine learning models that generate images. This technology allows them to experiment with the notion of machine creativity.
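The adversarial idea behind a GAN can be stated compactly: a generator G maps noise to candidate samples while a discriminator D scores how real a sample looks, and the two are trained against each other. The toy functions below are purely illustrative stand-ins (not the collective's actual models); they only evaluate the standard GAN value function on a one-dimensional example.

```python
import numpy as np

# Illustrative sketch of the GAN objective (toy 1D models, not a real GAN).
# G maps noise z to a "fake" sample; D scores how real a sample looks (0..1).
rng = np.random.default_rng(0)

def G(z, w):            # toy linear "generator"
    return w * z

def D(x, a, b):         # toy logistic "discriminator"
    return 1.0 / (1.0 + np.exp(-(a * x + b)))

# GAN value function: E[log D(x_real)] + E[log(1 - D(G(z)))].
# The discriminator ascends this value; the generator descends it.
x_real = rng.normal(2.0, 0.5, size=1000)   # "real" data distribution
z = rng.normal(size=1000)                  # noise fed to the generator
a, b, w = 1.0, -1.0, 0.1                   # toy parameters

value = np.mean(np.log(D(x_real, a, b))) + np.mean(np.log(1 - D(G(z, w), a, b)))
```

In actual training, gradient steps alternate between the two networks until the generator's samples become indistinguishable from the data.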
In this talk, I will explain the mechanisms of transformation and invariance learning for symbolic music and audio, and I will describe different models based on this principle. Transformation Learning (TL) provides a novel approach to musical representation learning: rather than learning the musical patterns themselves, we learn "rules" that define how a given pattern can be transformed into another.
TL was initially proposed for image processing and had not previously been applied to music. In this talk, I summarize our experiments with TL for music. The models used throughout our work are based on Gated Autoencoders (GAEs), which learn orthogonal transformations between data pairs. We show that a GAE can learn chromatic transposition, tempo change, and retrograde movement in music, as well as more complex musical transformations such as diatonic transposition.
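The key mechanism of a Gated Autoencoder is a multiplicative "gating" between two inputs: the element-wise product of two factor projections encodes the relation between a pair (x, y) rather than their content. The shapes and the absence of training below are illustrative assumptions, not the authors' exact model; a chromatic transposition is mimicked here as a circular shift.

```python
import numpy as np

# Sketch of a Gated Autoencoder's factored gating for transformation
# learning between a data pair (x, y).  Dimensions are illustrative.
rng = np.random.default_rng(0)
n, f, m = 12, 8, 4                  # input dim, factors, mapping units

U = rng.normal(size=(f, n))         # factor weights for x
V = rng.normal(size=(f, n))         # factor weights for y
W = rng.normal(size=(m, f))         # mapping-unit weights

x = rng.normal(size=n)              # e.g. a pitch-class snippet
y = np.roll(x, 3)                   # its "transposition" (a circular shift)

# Mapping code: the product of the two factor projections captures the
# *relation* between x and y, largely independent of their content.
mapping = np.tanh(W @ ((U @ x) * (V @ y)))

# Reconstruction of y given x and the inferred mapping code.
y_hat = V.T @ ((W.T @ mapping) * (U @ x))
```

Training minimizes the reconstruction error of y_hat, so the mapping units come to encode the transformation itself.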
TL provides a different view on music data and yields features complementary to other music descriptors (e.g., those obtained by autoencoder learning or hand-crafted features). There are several possible research directions for TL in music: using the transformation features themselves, using transformation-invariant features computed from TL models, and using TL models for music generation.
I will focus in particular on DrumNet, a convolutional variant of a Gated Autoencoder, and show how TL leads to time- and tempo-invariant representations of rhythm. Importantly, learning transformations and learning invariances are two sides of the same coin, as specific invariances are defined with respect to specific transformations. I will introduce the Complex Autoencoder, a model derived from the Gated Autoencoder, which learns both a transformation-invariant and a transformation-variant feature space. Using transposition- and time-shift-invariant features, we obtain improved performance on audio alignment tasks.
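The split into an invariant and a variant feature space can be illustrated with a familiar special case: a complex projection separates a signal into a magnitude (invariant under a transformation) and a phase (which encodes the transformation). Here the "learned" complex basis is simply the Fourier basis, for which time-shift invariance of the magnitude is exact; the Complex Autoencoder learns such bases from data rather than fixing them.

```python
import numpy as np

# Toy illustration of the invariance idea: project a signal and its
# time-shifted copy onto a complex (here: Fourier) basis.  The magnitude
# is the transformation-invariant part; the phase is the variant part.
rng = np.random.default_rng(1)
x = rng.normal(size=64)
x_shift = np.roll(x, 5)                  # time-shifted version of the same signal

z, z_shift = np.fft.fft(x), np.fft.fft(x_shift)
mag, mag_shift = np.abs(z), np.abs(z_shift)       # invariant features
phase_diff = np.angle(z_shift) - np.angle(z)      # variant part: encodes the shift
```

The magnitudes of the two signals coincide despite the shift, which is exactly the property exploited for shift-robust audio alignment.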
Sebastian Groh is passionate about driving the 5Ds fuelling our energy future: Decentralization, Decarbonization, Disruption, Democratization and Digitization. In line with these beliefs, together with his colleagues at SOLshare, he installed the world's first cyber-physical peer-to-peer (P2P) solar sharing grid in a remote area of Bangladesh.
Sebastian Groh is a 2013 Stanford Ignite Fellow from Stanford Graduate School of Business and holds a PhD from Aalborg University and the Postgraduate School Microenergy Systems at TU Berlin, where he wrote his doctoral thesis on the role of energy in development processes, energy poverty, and technical innovations, with a special focus on Bangladesh. He has published a book and multiple journal articles on decentralized electrification in the Global South. Dr Groh started his career, and acquired his professional DNA, at MicroEnergy International, a Berlin-based consultancy working on microfinance and decentralized energy. In 2014, Dr Groh founded SOLshare and has served as its CEO since then. He is also an Associate Professor at the Brac Business School at BRAC University in Dhaka, Bangladesh. On behalf of SOLshare, he has received numerous awards, among them Technology Pioneer 2018 from the World Economic Forum and Best Energy Startup in the World from Free Electrons.
SOLshare also received the prestigious UN DESA "Powering the Future We Want" USD 1M energy grant, along with Grameen Shakti. Dr Groh became an Ashoka Fellow in 2018 and a UBS Global Visionary in 2019, and received the 2019 Unilever Young Entrepreneurs Award.
Manufacturing at small scales is a challenge that in most cases requires a human operator in the loop. However, the operator's perception of the task is seriously impaired by the poor quality of the available feedback. This can be improved appreciably by providing additional sensory modalities: the haptic sense, notably, is a key element of human dexterity. In this talk, I'll present some approaches to implementing this sensory coupling between the microscale and the operator. These techniques lend themselves naturally to coupling manual control with automation, as in many successful applications of robotic technologies such as surgical robotics and robotic space exploration.
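A minimal form of such a coupling is bilateral scaling: the operator's hand motion is scaled down into a tool command, and the tiny interaction forces at the tool are scaled up so they can be felt on a haptic device. The gains and function names below are hypothetical, a sketch of the principle rather than any specific system from the talk.

```python
# Sketch of scaled bilateral coupling between a haptic device (operator
# side) and a micro-manipulator.  Gains are illustrative assumptions.
POS_SCALE = 1e-4      # metres of tool motion per metre of hand motion
FORCE_SCALE = 1e6     # newtons rendered per newton sensed at the tool

def to_tool(hand_pos_m):
    """Down-scale the operator's hand position into a tool command (m)."""
    return POS_SCALE * hand_pos_m

def to_operator(tool_force_n):
    """Amplify the micro-scale interaction force so it is perceptible (N)."""
    return FORCE_SCALE * tool_force_n

# Example: a 1 cm hand motion commands a 1 micrometre tool displacement,
# and a 2 microNewton contact force is rendered as 2 N on the device.
tool_cmd = to_tool(0.01)
felt_force = to_operator(2e-6)
```

Real implementations add stability safeguards (passivity, filtering), since large force gains can easily make the coupled loop unstable.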
Sinan Haliyo is currently an Associate Professor at the Institute of Intelligent Systems and Robotics (ISIR), Sorbonne University, Paris, where he leads the 'Multiscale Interactions' Lab. He has been active in the field of microrobotics since 1999 on topics including control and design issues, physical interactions and user interfaces for microscale applications in assembly, characterization and user training. He also takes a particular interest in human-computer interaction issues in remote handling and teleoperation, especially with haptics and multimodal interfaces.
The seminar gives an overview of the topics presented at the 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, with a focus on language processing and a strong emphasis on deep learning.
AnneMarie Maes is an artist who studies the close interactions and co-evolution within urban ecosystems. Her research practice combines art and science, with a keen interest in DIY technologies and biotechnology. She works with a range of biological, digital and traditional media, including live organisms. Her artistic research materialises in techno-organic objects inspired by factual and fictional stories; in artefacts that combine digital fabrication and craftsmanship; in installations that reflect both a problem and its (possible) solution; in multispecies collaborations; and in polymorphic forms and models created from eco-data. On the rooftop of her studio in Brussels (BE), she has created an open-air lab and experimental garden where she studies the processes nature employs to create form. This research provides an ongoing source of inspiration for her artworks. The Bee Agency, as well as the Laboratory for Form and Matter (in which she experiments with bacteria and living textiles), provide a framework that has inspired a wide range of installations, sculptures, photographic works, objects and books, all at the intersection of art, science and technology. In 2017, she received an Honorary Mention in the Hybrid Art category at Ars Electronica for the Intelligent Guerrilla Beehive project.
In this talk, I will discuss some of my recent work on social experiments studying collective creativity and learning about urban sustainability. In the first part, I will show results from an experiment conducted some years ago during the Kreyon Days open event at the PalaExpo in Rome. During this event, visitors could take part in an open-ended experiment in which they were asked to collectively build LEGO artworks. RFID sensors given to the participants allowed for the reconstruction of the dynamical social network and the identification of the teams contributing to a specific artwork. These data allowed us to identify some characteristics of the most efficient building teams. For those interested, the work can be found here: https://www.pnas.org/content/116/44/22088
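The reconstruction step can be sketched simply: proximity events from RFID badges are binned into time windows, each window yields a contact graph, and co-building groups show up as connected components. The event data, window length and badge names below are hypothetical, a schematic of the pipeline rather than the study's actual processing.

```python
from collections import defaultdict

# Hypothetical RFID proximity log: (time in seconds, badge A, badge B).
events = [
    (10, "A", "B"), (12, "B", "C"), (14, "D", "E"), (75, "A", "D"),
]

WINDOW = 60  # seconds per network snapshot (illustrative choice)

# Bin the proximity events into time-windowed contact graphs.
snapshots = defaultdict(lambda: defaultdict(set))
for t, a, b in events:
    g = snapshots[t // WINDOW]
    g[a].add(b)
    g[b].add(a)

def teams(graph):
    """Connected components = groups in face-to-face contact."""
    seen, comps = set(), []
    for node in graph:
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            stack.extend(graph[n])
        seen |= comp
        comps.append(comp)
    return comps

first_window_teams = teams(snapshots[0])  # components in the first minute
```

Tracking how these components split and merge across windows gives the dynamical social network from which team characteristics can be measured.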
In the second part, I will discuss an ongoing experiment taking place during the “AI: More than Human” exhibition in London and Groningen. This experiment, dubbed “Kreyon City,” aims to understand how individuals relate to complex sustainability problems. I will discuss the experience and some preliminary results from the analysis of the first tranche of collected data.
The relationship between art and technology is today a central theme in the contemporary art debate. A brief terminological analysis shows how technology-based terms, such as “artificial intelligence”, posthumanism, machine learning, blockchain, etc., are increasingly present and pervasive. This is also reflected in the growing interest in the arts shown by companies in the technology sector. Microsoft, Google, Facebook and Adobe, among others, are all creating new artists' residencies that allow artists to work inside these companies. A new market is growing in which cultural institutions should engage. Artistic practice is never merely an experiment with technology. It is important to bring together arts and innovation, artists and companies, so as to ethically orient the otherwise deterministic course of technological development, reflecting on our contemporary era through experimentation.
Valentino Catricalà (PhD) is a scholar and contemporary art curator specialised in the relationship of artists with new technologies and media. He is currently the director of the Art Section of the Maker Faire - The European Edition, the biggest fair on creativity and innovation in Europe; an art consultant at Sony CSL in Paris; and a professor at the Lecce Academy of Fine Arts.
I will present nnAudio, a recently released neural-network-based audio processing toolbox. nnAudio leverages 1D convolutional neural networks for real-time spectrogram generation (time-domain to frequency-domain conversion). This makes it possible to compute spectrograms on the fly while training neural networks for audio-related tasks, without storing any spectrograms on disk. In this talk, I will discuss one possible application of nnAudio: exploring suitable input representations for automatic music transcription (AMT).
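The core trick that makes this possible is that an STFT frame is just a dot product of the windowed signal with fixed sine/cosine kernels, so the whole transform can be expressed as a 1D convolution and run on the GPU alongside the network. The numpy stand-in below illustrates that idea only; it is not nnAudio's actual API, and the parameter values are illustrative.

```python
import numpy as np

# Spectrogram-as-convolution: build fixed Fourier "kernels" and slide them
# over the signal, exactly the formulation a conv1d layer can execute.
sr, n_fft, hop = 8000, 256, 128
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 1000 * t)          # 1 kHz test tone, 1 second

n = np.arange(n_fft)
win = np.hanning(n_fft)
k = np.arange(n_fft // 2 + 1)[:, None]    # frequency bins 0..n_fft/2
cos_k = win * np.cos(2 * np.pi * k * n / n_fft)   # real-part kernels
sin_k = win * np.sin(2 * np.pi * k * n / n_fft)   # imaginary-part kernels

# Frame the signal (this is what the convolution's stride does implicitly).
frames = np.stack([x[i:i + n_fft] for i in range(0, len(x) - n_fft, hop)])
spec = np.sqrt((frames @ cos_k.T) ** 2 + (frames @ sin_k.T) ** 2).T  # (freq, time)

peak_bin = spec.mean(axis=1).argmax()
peak_hz = peak_bin * sr / n_fft           # should recover the 1 kHz tone
```

Because the kernels live in the compute graph, they can even be made trainable, which is one way to search for better input representations for AMT.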
In this talk, I will share our empirical results on learning disentangled representations of music instrument sounds using Gaussian mixture variational autoencoders (GMVAEs). Specifically, we disentangle note timbre and pitch, represented as latent timbre and pitch variables, by learning separate neural network encoders. The distributions of the two latent variables are regularized by distinct Gaussian mixture distributions. A neural network decoder, which takes the concatenation of the timbre and pitch variables as input, synthesizes sounds with the desired timbre and pitch. The performance of the disentanglement network is evaluated by both qualitative and quantitative approaches, which further demonstrate the model's applicability to controllable sound synthesis and many-to-many timbre transfer.
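The architecture described above can be sketched schematically: two separate encoders produce a timbre code and a pitch code, and the decoder consumes their concatenation, so swapping one code while keeping the other realizes timbre transfer. All networks below are stand-in linear maps, and the Gaussian-mixture prior regularization is omitted; this is a shape-level illustration, not the actual model.

```python
import numpy as np

# Schematic GMVAE-style disentanglement: separate encoders for timbre and
# pitch, a decoder over their concatenation.  Purely illustrative.
rng = np.random.default_rng(0)
d_in, d_timbre, d_pitch = 32, 8, 4        # illustrative dimensions

W_t = rng.normal(size=(d_timbre, d_in))   # stand-in "timbre encoder"
W_p = rng.normal(size=(d_pitch, d_in))    # stand-in "pitch encoder"
W_dec = rng.normal(size=(d_in, d_timbre + d_pitch))  # stand-in "decoder"

def encode(x):
    mu_t, mu_p = W_t @ x, W_p @ x               # separate encoders
    z_t = mu_t + rng.normal(size=d_timbre)      # reparameterized samples
    z_p = mu_p + rng.normal(size=d_pitch)
    return z_t, z_p

def decode(z_t, z_p):
    return W_dec @ np.concatenate([z_t, z_p])   # decoder sees both codes

# Timbre transfer: keep note_a's pitch code, swap in note_b's timbre code.
note_a, note_b = rng.normal(size=d_in), rng.normal(size=d_in)
_, z_p_a = encode(note_a)
z_t_b, _ = encode(note_b)
transferred = decode(z_t_b, z_p_a)
```

In the real model, the two latent spaces are kept distinct by their separate Gaussian-mixture priors, which is what makes this code-swapping controllable.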