Aucouturier, J.-J., Pachet, F., Roy, P. and Beurivé, A. Signal + Context = Better Classification. Proceedings of ISMIR 07, pages 425-430, Vienna, Austria, 2007

Sony CSL authors: Jean-Julien Aucouturier, Anthony Beurivé, François Pachet, Pierre Roy

Abstract

Typical signal-based approaches to extracting musical descriptions from audio signals have only limited precision. A possible explanation is that they take no account of context, which provides important cues in human cognitive processing of music: e.g., electric guitar is unlikely in 1930s music, children's choirs rarely perform heavy metal, etc. We propose an architecture to train a large set of binary classifiers simultaneously, for many different kinds of musical metadata (genre, instrument, mood, etc.), in such a way that correlations between metadata reinforce each individual classifier. The system is iterative: it uses the classification decisions it made on some metadata problems as new features for new, harder classification problems. It is also hybrid: it uses a signal classifier based on timbre similarity to bootstrap symbolic reasoning with decision trees. While further work is needed, the approach seems to outperform signal-only algorithms by 5% precision on average, and sometimes by up to 15% for traditionally difficult problems such as cultural and subjective categories.
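The iterative idea described above can be sketched in a few lines: a first classifier is trained on signal features alone for an "easy" attribute, and its decisions are then appended as an extra symbolic feature when training a decision tree for a "harder", culturally correlated attribute. This is only an illustrative sketch under assumed synthetic data and scikit-learn classifiers; the feature dimensions, attribute names, and correlation structure are invented for the example, not taken from the paper.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Stand-in for timbre-similarity features of 200 songs (10 dims each).
X_signal = rng.normal(size=(200, 10))

# "Easy" binary attribute (e.g. instrument = electric guitar), tied to
# the signal features so a first-pass classifier can learn it.
y_easy = (X_signal[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(int)

# "Hard" cultural attribute (e.g. genre = 1930s), correlated with the
# easy attribute rather than directly with the raw signal (electric
# guitar is unlikely in 1930s music).
y_hard = 1 - y_easy

# Pass 1: signal-only classifier for the easy attribute.
clf_easy = DecisionTreeClassifier(max_depth=3, random_state=0)
clf_easy.fit(X_signal, y_easy)
easy_decisions = clf_easy.predict(X_signal).reshape(-1, 1)

# Pass 2: append the pass-1 decisions as a new symbolic feature and
# train a decision tree for the harder attribute on the hybrid input.
X_hybrid = np.hstack([X_signal, easy_decisions])
clf_hard = DecisionTreeClassifier(max_depth=3, random_state=0)
clf_hard.fit(X_hybrid, y_hard)
print(clf_hard.score(X_hybrid, y_hard))
```

In this toy setting the pass-2 tree can simply split on the appended decision column, which is the mechanism by which metadata correlations reinforce each classifier in the architecture described above.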

Keywords: timbre, feature extraction


BibTeX entry

@INPROCEEDINGS{aucouturier:07e,
  AUTHOR    = "Aucouturier, J.-J. and Pachet, F. and Roy, P. and Beurivé, A.",
  TITLE     = "Signal + Context = Better Classification",
  BOOKTITLE = "Proceedings of ISMIR 07",
  PAGES     = "425-430",
  ADDRESS   = "Vienna, Austria",
  YEAR      = "2007",
}