FMA: A Dataset For Music Analysis

Overview

FMA: A Dataset For Music Analysis

Michaël Defferrard, Kirell Benzi, Pierre Vandergheynst, Xavier Bresson.
International Society for Music Information Retrieval Conference (ISMIR), 2017.

We introduce the Free Music Archive (FMA), an open and easily accessible dataset suitable for evaluating several tasks in MIR, a field concerned with browsing, searching, and organizing large music collections. The community's growing interest in feature and end-to-end learning is however restrained by the limited availability of large audio datasets. The FMA aims to overcome this hurdle by providing 917 GiB and 343 days of Creative Commons-licensed audio from 106,574 tracks from 16,341 artists and 14,854 albums, arranged in a hierarchical taxonomy of 161 genres. It provides full-length and high-quality audio, pre-computed features, together with track- and user-level metadata, tags, and free-form text such as biographies. We here describe the dataset and how it was created, propose a train/validation/test split and three subsets, discuss some suitable MIR tasks, and evaluate some baselines for genre recognition. Code, data, and usage examples are available at https://github.com/mdeff/fma.

Data

All metadata and features for all tracks are distributed in fma_metadata.zip (342 MiB). The tables below can be used with pandas or any other data analysis tool; a minimal loading sketch follows the list. See the paper or the usage.ipynb notebook for a description.

  • tracks.csv: per track metadata such as ID, title, artist, genres, tags and play counts, for all 106,574 tracks.
  • genres.csv: all 163 genres with name and parent (used to infer the genre hierarchy and top-level genres).
  • features.csv: common features extracted with librosa.
  • echonest.csv: audio features provided by Echonest (now Spotify) for a subset of 13,129 tracks.
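
The tables can be loaded with plain pandas. The snippet below is a minimal sketch of what utils.load in this repository does; the paths assume fma_metadata.zip was extracted into ./data/, and the column names ('track', 'genre_top', 'title', 'parent') are those documented in usage.ipynb. For full fidelity (categorical dtypes, parsed dates), prefer utils.load.

    import pandas as pd

    # tracks.csv and features.csv use multi-level column headers.
    tracks = pd.read_csv('data/fma_metadata/tracks.csv', index_col=0, header=[0, 1])
    genres = pd.read_csv('data/fma_metadata/genres.csv', index_col=0)
    features = pd.read_csv('data/fma_metadata/features.csv', index_col=0, header=[0, 1, 2])

    # Columns of tracks.csv are grouped under 'album', 'artist', 'set', and 'track'.
    print(tracks['track', 'genre_top'].value_counts().head())
    print(genres[['title', 'parent']].head())
    print(features.shape)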

MP3-encoded audio is then distributed in four archives of increasing size (a subset-selection sketch follows the list):

  1. fma_small.zip: 8,000 tracks of 30s, 8 balanced genres (GTZAN-like) (7.2 GiB)
  2. fma_medium.zip: 25,000 tracks of 30s, 16 unbalanced genres (22 GiB)
  3. fma_large.zip: 106,574 tracks of 30s, 161 unbalanced genres (93 GiB)
  4. fma_full.zip: 106,574 untrimmed tracks, 161 unbalanced genres (879 GiB)
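
The subsets are not separate metadata files: tracks.csv marks, for every track, the smallest subset it belongs to and its train/validation/test split under the 'set' columns. A hedged sketch, assuming the column layout described in usage.ipynb:

    import pandas as pd

    tracks = pd.read_csv('data/fma_metadata/tracks.csv', index_col=0, header=[0, 1])

    # Select the tracks of fma_small and the proposed split.
    small = tracks[tracks['set', 'subset'] == 'small']
    train = small[small['set', 'split'] == 'training']
    test = small[small['set', 'split'] == 'test']
    print(len(small), len(train), len(test))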

See the wiki (or #41) for known issues (errata).

Code

The following notebooks, scripts, and modules have been developed for the dataset; a short audio-loading sketch follows the list.

  1. usage.ipynb: shows how to load the datasets and develop, train, and test your own models with them.
  2. analysis.ipynb: exploration of the metadata, data, and features. Creates the figures used in the paper.
  3. baselines.ipynb: baseline models for genre recognition, both from audio and features.
  4. features.py: feature extraction from the audio (used to create features.csv).
  5. webapi.ipynb: query the web API of the FMA. Can be used to update the dataset.
  6. creation.ipynb: creation of the dataset (used to create tracks.csv and genres.csv).
  7. creation.py: creation of the dataset (long-running data collection and processing).
  8. utils.py: helper functions and classes.
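
As a quick illustration of how the audio archives are laid out, the sketch below resolves a track ID to its MP3 path with utils.get_audio_path and decodes it with librosa, as features.py does. Track ID 2 and the fma_small path are assumptions chosen for the example.

    import os
    import librosa
    import utils  # utils.py from this repository

    AUDIO_DIR = os.environ.get('AUDIO_DIR', './data/fma_small/')

    # Track IDs map to nested paths such as <AUDIO_DIR>/000/000002.mp3.
    filename = utils.get_audio_path(AUDIO_DIR, 2)

    # Decode the MP3 (requires ffmpeg or another audioread backend).
    x, sr = librosa.load(filename, sr=None, mono=True)
    print(filename, x.shape, sr)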

Usage

Click the Binder badge to play with the code and data from your browser without installing anything.

  1. Clone the repository.

    git clone https://github.com/mdeff/fma.git
    cd fma
  2. Create a Python 3.6 environment.
    # with https://conda.io
    conda create -n fma python=3.6
    conda activate fma
    
    # with https://github.com/pyenv/pyenv
    pyenv install 3.6.0
    pyenv virtualenv 3.6.0 fma
    pyenv activate fma
    
    # with https://pipenv.pypa.io
    pipenv --python 3.6
    pipenv shell
    
    # with https://docs.python.org/3/tutorial/venv.html
    python3.6 -m venv ./env
    source ./env/bin/activate
  3. Install dependencies.

    pip install --upgrade pip setuptools wheel
    pip install numpy==1.12.1  # workaround resampy's bogus setup.py
    pip install -r requirements.txt

    Note: you may need to install ffmpeg or graphviz depending on your usage.
    Note: install CUDA to train neural networks on GPUs (see TensorFlow's instructions).

  4. Download some data, verify its integrity, and uncompress the archives.

    cd data
    
    curl -O https://os.unil.cloud.switch.ch/fma/fma_metadata.zip
    curl -O https://os.unil.cloud.switch.ch/fma/fma_small.zip
    curl -O https://os.unil.cloud.switch.ch/fma/fma_medium.zip
    curl -O https://os.unil.cloud.switch.ch/fma/fma_large.zip
    curl -O https://os.unil.cloud.switch.ch/fma/fma_full.zip
    
    echo "f0df49ffe5f2a6008d7dc83c6915b31835dfe733  fma_metadata.zip" | sha1sum -c -
    echo "ade154f733639d52e35e32f5593efe5be76c6d70  fma_small.zip"    | sha1sum -c -
    echo "c67b69ea232021025fca9231fc1c7c1a063ab50b  fma_medium.zip"   | sha1sum -c -
    echo "497109f4dd721066b5ce5e5f250ec604dc78939e  fma_large.zip"    | sha1sum -c -
    echo "0f0ace23fbe9ba30ecb7e95f763e435ea802b8ab  fma_full.zip"     | sha1sum -c -
    
    unzip fma_metadata.zip
    unzip fma_small.zip
    unzip fma_medium.zip
    unzip fma_large.zip
    unzip fma_full.zip
    
    cd ..

    Note: if you get decompression errors, try 7zip; the archives may use a compression method your unzip tool does not support.

  5. Fill a .env configuration file (at the repository's root) with the following content; a sketch of how it is read from Python follows this list.

    AUDIO_DIR=./data/fma_small/  # the path to a decompressed fma_*.zip
    FMA_KEY=MYKEY  # only if you want to query the freemusicarchive.org API
    
  6. Open Jupyter or run a notebook.

    jupyter notebook
    make usage.ipynb
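
The notebooks read this configuration from the environment. A minimal sketch, assuming python-dotenv is available (as in requirements.txt):

    import os
    import dotenv  # python-dotenv; assumed installed via requirements.txt

    # Load AUDIO_DIR and FMA_KEY from the .env file at the repository root.
    dotenv.load_dotenv(dotenv.find_dotenv())

    AUDIO_DIR = os.environ.get('AUDIO_DIR')
    FMA_KEY = os.environ.get('FMA_KEY')
    print(AUDIO_DIR)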

Impact, coverage, and resources

  • 100+ research papers (full list on Google Scholar)
  • 2 derived works
  • ~10 posts
  • 5 events
  • ~10 dataset lists

Contributing

Contribute by opening an issue or a pull request. Let this repository be a hub around the dataset!

History

2017-05-09 pre-publication release

  • paper: arXiv:1612.01840v2
  • code: git tag rc1
  • fma_metadata.zip sha1: f0df49ffe5f2a6008d7dc83c6915b31835dfe733
  • fma_small.zip sha1: ade154f733639d52e35e32f5593efe5be76c6d70
  • fma_medium.zip sha1: c67b69ea232021025fca9231fc1c7c1a063ab50b
  • fma_large.zip sha1: 497109f4dd721066b5ce5e5f250ec604dc78939e
  • fma_full.zip sha1: 0f0ace23fbe9ba30ecb7e95f763e435ea802b8ab
  • known issues: see #41

2016-12-06 beta release

  • paper: arXiv:1612.01840v1
  • code: git tag beta
  • fma_small.zip sha1: e731a5d56a5625f7b7f770923ee32922374e2cbf
  • fma_medium.zip sha1: fe23d6f2a400821ed1271ded6bcd530b7a8ea551

Acknowledgments and Licenses

We are grateful to the Swiss Data Science Center (EPFL and ETHZ) for hosting the dataset.

Please cite our work if you use our code or data.

@inproceedings{fma_dataset,
  title = {{FMA}: A Dataset for Music Analysis},
  author = {Defferrard, Micha\"el and Benzi, Kirell and Vandergheynst, Pierre and Bresson, Xavier},
  booktitle = {18th International Society for Music Information Retrieval Conference (ISMIR)},
  year = {2017},
  archiveprefix = {arXiv},
  eprint = {1612.01840},
  url = {https://arxiv.org/abs/1612.01840},
}
@inproceedings{fma_challenge,
  title = {Learning to Recognize Musical Genre from Audio},
  subtitle = {Challenge Overview},
  author = {Defferrard, Micha\"el and Mohanty, Sharada P. and Carroll, Sean F. and Salath\'e, Marcel},
  booktitle = {The 2018 Web Conference Companion},
  year = {2018},
  publisher = {ACM Press},
  isbn = {9781450356404},
  doi = {10.1145/3184558.3192310},
  archiveprefix = {arXiv},
  eprint = {1803.05337},
  url = {https://arxiv.org/abs/1803.05337},
}