Starter kit for the Music Demixing Challenge.


Music Demixing Challenge - Starter Kit

👉 Challenge page

Discord

This repository is the Music Demixing Challenge Submission template and Starter kit!

Clone the repository to compete now!

This repository contains:

  • Documentation on how to submit your models to the leaderboard
  • Best practices and information on how your submission is evaluated
  • Starter code for you to get started!

Table of Contents

  1. Competition Procedure
  2. How to access and use the dataset
  3. How to start participating
  4. How do I specify my software runtime / dependencies?
  5. What should my code structure be like?
  6. How to make a submission
  7. Other concepts
  8. Important links

Competition Procedure

The Music Demixing (MDX) Challenge is an opportunity for researchers and machine learning enthusiasts to test their skills by creating a system able to perform audio source separation.

In this challenge, you will train your models locally and then upload them to AIcrowd (via git) to be evaluated.

The following is a high-level description of how this process works:

  1. Sign up to join the competition on the AIcrowd website.
  2. Clone this repo and start developing your solution.
  3. Train your models for audio separation and write your prediction code in test.py.
  4. Submit your trained models to AIcrowd GitLab for evaluation (full instructions below). The automated evaluation setup will run your submission against the test dataset and report the resulting metrics on the competition leaderboard.

How to access and use the dataset

You may train your system either exclusively on the training part of the MUSDB18-HQ dataset or on data of your choice. Depending on the data you use, your submission will be eligible for different leaderboards.

👉 Download MUSDB18-HQ dataset

If you are using an external dataset, please declare it in your aicrowd.json:

{
  [...],
  "external_dataset_used": true
}

The MUSDB18 dataset contains 150 songs (100 songs in train and 50 songs in test), each provided with its separated stems in the following structure:

|
├── train
│   ├── A Classic Education - NightOwl
│   │   ├── bass.wav
│   │   ├── drums.wav
│   │   ├── mixture.wav
│   │   ├── other.wav
│   │   └── vocals.wav
│   └── ANiMAL - Clinic A
│       ├── bass.wav
│       ├── drums.wav
│       ├── mixture.wav
│       ├── other.wav
│       └── vocals.wav
[...]

Here, mixture.wav is the original mix on which you need to perform audio source separation,
while bass.wav, drums.wav, other.wav and vocals.wav are the ground-truth stems for your training purposes.
Please note again: to be eligible for Leaderboard A, you are only allowed to train on the songs in train.
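
As a quick sanity check (a minimal sketch assuming the soundfile package and a local copy of the dataset under data/; adjust the path to your setup), you can load a song and verify that the mixture is roughly the sum of its four stems:

    import numpy as np
    import soundfile as sf

    song_dir = "data/train/A Classic Education - NightOwl"  # hypothetical local path

    # Each file is a stereo wav; sf.read returns (samples, samplerate)
    mixture, sample_rate = sf.read(f"{song_dir}/mixture.wav")
    stems = {name: sf.read(f"{song_dir}/{name}.wav")[0]
             for name in ("bass", "drums", "other", "vocals")}

    # The mixture is (approximately) the sum of the four stems
    print(sample_rate, mixture.shape, np.abs(mixture - sum(stems.values())).max())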

How to start participating

Setup

  1. Add your SSH key to AIcrowd GitLab

You can add your SSH keys to your GitLab account by going to your profile settings here. If you do not have SSH keys, you will first need to generate one.

  2. Clone the repository

    git clone git@gitlab.aicrowd.com:AIcrowd/music-demixing-challenge-starter-kit.git
    
  3. Install the competition-specific dependencies!

    cd music-demixing-challenge-starter-kit
    pip3 install -r requirements.txt
    
  4. Try out the random-prediction baseline in test.py.

How do I specify my software runtime / dependencies?

We accept submissions with custom runtimes, so you are free to use whichever libraries and frameworks you prefer.

The configuration files typically include requirements.txt (PyPI packages), environment.yml (conda environment), apt.txt (apt packages) or even your own Dockerfile.
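
For example, a minimal requirements.txt (illustrative only; list and pin whatever packages your model actually needs) could look like:

    numpy
    soundfile
    torch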

You can find detailed information about this in the 👉 RUNTIME.md file.

What should my code structure be like?

Please follow the example structure of the starter kit. The different files and directories have the following meaning:

.
├── aicrowd.json           # Submission meta information - like your username
├── apt.txt                # Packages to be installed inside the docker image
├── data                   # Your local dataset copy - you don't need to upload it (read DATASET.md)
├── requirements.txt       # Python packages to be installed
├── test.py                # IMPORTANT: Your testing/prediction code, must be derived from MusicDemixingPredictor (example in test.py)
└── utility                # Utility scripts for a smoother experience
    ├── docker_build.sh
    ├── docker_run.sh
    ├── environ.sh
    └── verify_or_download_data.sh
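
The sketch below is a minimal, hypothetical predictor that simply copies the mixture into every stem; the import path and method names are assumptions modeled on the baseline examples shipped in test.py, so mirror that file and the MusicDemixingPredictor interface exactly:

    import soundfile as sf

    # Assumed import path - check test.py in this kit for the actual one
    from evaluator.music_demixing import MusicDemixingPredictor

    class PassthroughPredictor(MusicDemixingPredictor):
        def prediction_setup(self):
            # Load your trained model / weights here
            pass

        def prediction(self, mixture_file_path, bass_file_path, drums_file_path,
                       other_file_path, vocals_file_path):
            # Naive baseline: write the unmodified mixture out as every stem
            mixture, sample_rate = sf.read(mixture_file_path)
            for stem_path in (bass_file_path, drums_file_path,
                              other_file_path, vocals_file_path):
                sf.write(stem_path, mixture, sample_rate)

    if __name__ == "__main__":
        PassthroughPredictor().run()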

Finally, you must specify an AIcrowd submission JSON in aicrowd.json to be scored!

The aicrowd.json of each submission should contain the following content:

{
  "challenge_id": "evaluations-api-music-demixing",
  "authors": ["your-aicrowd-username"],
  "description": "(optional) description about your awesome agent",
  "external_dataset_used": false
}

This JSON is used to map your submission to the challenge - so please remember to use the correct challenge_id as specified above.

How to make a submission

👉 SUBMISSION.md

Best of Luck 🎉 🎉

Other Concepts

Time constraints

You need to make sure that your model can perform audio separation on each song within 4 minutes, otherwise the submission will be marked as failed.
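
For a rough local check of this budget (illustrative only; separate_song() here is a hypothetical wrapper around your own prediction code, and the evaluator's own timing is what counts), you can time a single song like this:

    import time

    start = time.perf_counter()
    separate_song("data/train/A Classic Education - NightOwl/mixture.wav")  # hypothetical helper
    elapsed = time.perf_counter() - start
    print(f"Separation took {elapsed:.1f}s (budget: 240 s per song)")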

Local Run

👉 LOCAL_RUN.md

Contributing

๐Ÿ™ You can share your solutions or any other baselines by contributing directly to this repository by opening merge request.

  • Add your implementation as test_<approach-name>.py
  • Test it out using python test_<approach-name>.py
  • Add any documentation for your approach at the top of your file.
  • Import it in predict.py
  • Create a merge request! 🎉 🎉 🎉

Contributors

📎 Important links

💪 Challenge Page: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021

🗣️ Discussion Forum: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021/discussion

🏆 Leaderboard: https://www.aicrowd.com/challenges/music-demixing-challenge-ismir-2021/leaderboards
