ARCA23K Baseline System

Overview

This is the source code for the baseline system associated with the ARCA23K dataset. Details about ARCA23K and the baseline system can be found in our DCASE2021 paper [1].

Requirements

This software requires Python >=3.8. To install the dependencies, run:

poetry install

or:

pip install -r requirements.txt

You are also free to use another package manager (e.g. Conda).
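
For example, a Conda-based setup might look like the following (the environment name is arbitrary):

conda create -n arca23k python=3.8
conda activate arca23k
pip install -r requirements.txt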

The ARCA23K and FSD50K datasets are also required. For convenience, bash scripts are provided to download the datasets automatically; they depend on bash, curl, and unzip. Run the following commands from the root directory of the project:

$ scripts/download_arca23k.sh
$ scripts/download_fsd50k.sh

This will download the datasets to a directory called _datasets/. When running the software, the --arca23k_dir and --fsd50k_dir options (refer to the Usage section) can be used to specify the location of the datasets. This is only necessary if the dataset paths are different from the default.
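
For example, if the datasets are stored somewhere other than _datasets/, the paths can be passed explicitly (the paths below are placeholders):

python baseline/train.py arca23k --arca23k_dir /data/ARCA23K --fsd50k_dir /data/FSD50K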

Usage

The general usage pattern is:

python <script> [-f PATH] <args...> [options...]

The command-line options can also be specified in configuration files. The path of a configuration file can be specified to the program using the --config_file (or -f) command-line option. This option can be used multiple times. Options that are passed in the command-line override those in the config file(s). See default.ini for an example of a config file. Note that default.ini does not need to be specified in the command line and should not be modified.
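
For instance, frequently used options could be collected in a config file and selectively overridden on the command line (my_config.ini and the override value are hypothetical):

python baseline/train.py arca23k -f my_config.ini --batch_size 64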

Training

To train a model, run:

python baseline/train.py DATASET [-f FILE] [--experiment_id ID] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--frac NUM] [--sample_rate NUM] [--block_length NUM] [--hop_length NUM] [--features SPEC] [--cache_features BOOL] [--model {vgg9a,vgg11a}] [--weights_path PATH] [--label_noise DICT] [--n_epochs N] [--batch_size N] [--lr NUM] [--lr_scheduler SPEC] [--partition SPEC] [--seed N] [--cuda BOOL] [--n_workers N] [--overwrite BOOL]

The DATASET argument accepts the following values:

  • arca23k - Train using the ARCA23K dataset.
  • arca23k-fsd - Train using the ARCA23K-FSD dataset.
  • mixed-p - Train using a mixture of ARCA23K and ARCA23K-FSD. Replace p with a fraction representing the percentage of ARCA23K examples in the training set (see the example below).
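
For instance, assuming p is written as a decimal fraction (check how train.py parses the dataset name), a run on a 50/50 mixture might look like:

python baseline/train.py mixed-0.5 --experiment_id mixed_example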

The --experiment_id option is used to differentiate experiments. It determines where the output files are saved relative to the path given by the --work_dir option. When running multiple trials, use the --seed option to specify a different random seed for each trial, or set it to a negative number to disable seeding. Otherwise, the learned models will be identical across trials.

Example:

python baseline/train.py arca23k --experiment_id my_experiment
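
To run multiple trials of the same experiment with different random seeds, one might use something like the following (the experiment IDs and seed values are arbitrary):

python baseline/train.py arca23k --experiment_id trial_1 --seed 1
python baseline/train.py arca23k --experiment_id trial_2 --seed 2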

Prediction

To compute predictions, run:

python baseline/predict.py DATASET SUBSET [-f FILE] [--experiment_id ID] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--output_name FILE_NAME] [--clean BOOL] [--sample_rate NUM] [--block_length NUM] [--features SPEC] [--cache_features BOOL] [--weights_path PATH] [--batch_size N] [--partition SPEC] [--n_workers N] [--seed N] [--cuda BOOL]

The SUBSET argument must be one of training, validation, or test.

Example:

python baseline/predict.py arca23k test --experiment_id my_experiment
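
Predictions for other subsets, or with a custom output file name, can be computed in the same way (the output name below is a placeholder):

python baseline/predict.py arca23k validation --experiment_id my_experiment --output_name val_predictions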

Evaluation

To evaluate the predictions, run:

python baseline/evaluate.py DATASET SUBSET [-f FILE] [--experiment_id LIST] [--work_dir DIR] [--arca23k_dir DIR] [--fsd50k_dir DIR] [--output_name FILE_NAME] [--cached BOOL]

The SUBSET argument must be one of training, validation, or test.

Example:

python baseline/evaluate.py arca23k test --experiment_id my_experiment

Citing

If you wish to cite this work, please cite the following paper:

[1] T. Iqbal, Y. Cao, A. Bailey, M. D. Plumbley, and W. Wang, “ARCA23K: An audio dataset for investigating open-set label noise”, in Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021), 2021, Barcelona, Spain, pp. 201–205.

BibTeX:

@inproceedings{Iqbal2021,
    author = {Iqbal, T. and Cao, Y. and Bailey, A. and Plumbley, M. D. and Wang, W.},
    title = {{ARCA23K}: An audio dataset for investigating open-set label noise},
    booktitle = {Proceedings of the Detection and Classification of Acoustic Scenes and Events 2021 Workshop (DCASE2021)},
    pages = {201--205},
    year = {2021},
    address = {Barcelona, Spain},
}