Driven by a strong interest in experimental music as well as electronic dance music, I have attended many live music performances over the last years, looking for stimulating new ways of performing music. Often, I find the interaction with computers on stage unsatisfactory: it feels as if the musician loses freedom once the computer enters the loop, leading to formulaic, constrained, pre-structured music.
This is where I believe generative probabilistic models can play an exciting role: by enabling the computer to be not just a static tool, producing predictable, repeatable outputs for identical inputs, but rather a creative force on stage, presenting the musician with ever-changing propositions. In this context, the musician can again be more than a mere operator and truly be a creator. Yet a generative model in a creative context is only as powerful as the interface through which its user interacts with it. This calls for a constant back-and-forth between the design of powerful deep learning models and the design of the associated interfaces, in order to build meaningful – and useful – new systems.
Inpainting-based generative modeling allows for stimulating human-machine interaction by letting users perform stylistically coherent local edits to an object using a statistical model. We present NONOTO, a new interface for interactive music generation based on inpainting models. It is aimed both at researchers, by offering a simple and flexible API that allows them to connect their own models to the interface, and at musicians, by providing industry-standard features such as audio playback, real-time MIDI output and straightforward synchronization with DAWs via Ableton Link.
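To make the inpainting interaction concrete, here is a toy sketch of what a model plugged into such an interface must do: resample a user-selected region of a sequence while conditioning on the surrounding, untouched context. The function names and the stand-in "model" below are hypothetical illustrations, not NONOTO's actual API.

```python
import random


def inpaint(sequence, start, end, sample_note):
    """Resample the region [start, end) of a note sequence.

    `sample_note` is a stand-in for a trained generative model: it maps
    (left_context, right_context) to a newly sampled note, so repeated
    calls on the same selection yield ever-changing propositions.
    """
    left, right = sequence[:start], sequence[end:]
    regenerated = [sample_note(left, right) for _ in range(end - start)]
    # Only the selected region changes; the surrounding context is kept.
    return left + regenerated + right


def toy_model(left, right):
    # Hypothetical "model": sample a MIDI pitch between the two
    # neighbouring notes, defaulting to middle C / C5 at the edges.
    lo = left[-1] if left else 60
    hi = right[0] if right else 72
    return random.randint(min(lo, hi), max(lo, hi))


melody = [60, 62, 64, 65, 67, 69, 71, 72]  # C major scale, as MIDI pitches
edited = inpaint(melody, 2, 5, toy_model)  # regenerate notes 2, 3 and 4
```

A real model would replace `toy_model` with a neural network conditioned on the musical context; the interface only needs this resample-a-region contract to drive the interaction.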