Invariances and Data Augmentation for Supervised Music Transcription

Citation:

J. Thickstun, Z. Harchaoui, D. Foster, and S. M. Kakade, "Invariances and Data Augmentation for Supervised Music Transcription," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), 2018.

Abstract:

This paper explores a variety of models for frame-based music transcription, with an emphasis on the methods needed to reach state-of-the-art on human recordings. The translation-invariant network discussed in this paper, which combines a traditional filterbank with a convolutional neural network, was the top-performing model in the 2017 MIREX Multiple Fundamental Frequency Estimation evaluation. This class of models shares parameters in the log-frequency domain, which exploits the frequency invariance of music to reduce the number of model parameters and avoid overfitting to the training data. All models in this paper were trained with supervision by labeled data from the MusicNet dataset, augmented by random label-preserving pitch-shift transformations.
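Below is a minimal sketch, not the authors' implementation, of the two ideas the abstract describes: a convolutional model that shares parameters along the log-frequency axis (so a pattern learned at one pitch applies at every pitch), and label-preserving pitch-shift augmentation, which on a log-frequency representation amounts to a translation along the frequency axis. The layer sizes, bin counts, and function names here are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

N_FREQ_BINS = 288   # assumed: log-spaced filterbank bins (e.g., 24 bins per octave)
N_FRAMES = 25       # assumed: context frames around the labeled frame
N_PITCHES = 128     # MIDI pitch range for frame-level labels

class TranslationInvariantTranscriber(nn.Module):
    """2-D convolutions over (log-frequency, time): the same filters are applied
    at every log-frequency offset, exploiting the pitch invariance of music."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=(48, 3), padding=(24, 1)),  # filters span 2 octaves of bins
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=(3, 3), padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d((N_FREQ_BINS, 1)),  # pool away the time axis
        )
        self.classifier = nn.Linear(64 * N_FREQ_BINS, N_PITCHES)

    def forward(self, x):  # x: (batch, 1, N_FREQ_BINS, N_FRAMES), log-frequency filterbank features
        h = self.features(x)
        return self.classifier(h.flatten(1))  # multi-label pitch logits


def pitch_shift_augment(spectrogram, labels, max_shift_semitones=5, bins_per_semitone=2):
    """Label-preserving pitch shift: translate the log-frequency axis by a random
    number of semitones and shift the pitch labels by the same amount.
    (torch.roll wraps around at the edges, a simplification for this sketch.)"""
    semitones = torch.randint(-max_shift_semitones, max_shift_semitones + 1, (1,)).item()
    shifted = torch.roll(spectrogram, shifts=semitones * bins_per_semitone, dims=-2)
    shifted_labels = torch.roll(labels, shifts=semitones, dims=-1)
    return shifted, shifted_labels


if __name__ == "__main__":
    model = TranslationInvariantTranscriber()
    x = torch.randn(4, 1, N_FREQ_BINS, N_FRAMES)   # stand-in filterbank features
    y = (torch.rand(4, N_PITCHES) > 0.95).float()  # stand-in frame-level pitch labels
    x_aug, y_aug = pitch_shift_augment(x, y)
    logits = model(x_aug)
    loss = nn.functional.binary_cross_entropy_with_logits(logits, y_aug)
    print(logits.shape, float(loss))
```

Because the first convolution is applied at every frequency offset, the number of trainable parameters is independent of the number of pitches being detected, which is the parameter-sharing benefit the abstract refers to.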

