---
license: cc-by-nc-sa-4.0
pipeline_tag: audio-classification
tags:
  - autrainer
  - audio
  - ecoacoustic-tagging
---

# Baseline model for audio classification of Orthoptera and Hemiptera

This baseline model, used in the ECOSoundSet paper (link will follow), was trained to tag audio files with one or more of 86 species from the insect orders Orthoptera and Hemiptera.

## Installation

To use the model, install autrainer, e.g. via pip:

```bash
pip install autrainer
```

For more information about autrainer, please refer to: https://autrainer.github.io/autrainer/index.html

## Usage

The model can be applied to all WAV files present in a folder (`<data-root>`), with the results stored in another folder (`<output-root>`):

```bash
autrainer inference hf:AlexanderGbd/insects-base-cnn10-96k-t -r <data-root> <output-root>
```
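
The same command can also be invoked programmatically, e.g. when scripting batch inference over several folders. A minimal sketch; the folder paths are illustrative placeholders:

```python
# Sketch: run the documented autrainer CLI from Python.
# The paths below are placeholders for your own folders.
import subprocess

data_root = "recordings"     # folder containing the .wav files
output_root = "predictions"  # folder where the results will be written

subprocess.run(
    [
        "autrainer", "inference",
        "hf:AlexanderGbd/insects-base-cnn10-96k-t",
        "-r", data_root, output_root,
    ],
    check=True,
)
```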

For the available inference settings (e.g. sliding-window inference) and all usable parameters, please have a look at the autrainer documentation.

## Training

### Pretraining

The model was originally trained on AudioSet by Kong et al. (https://ieeexplore.ieee.org/abstract/document/9229505).

### Dataset

The model was then fine-tuned on 4 s long audio segments of the ECOSoundSet dataset, which is soon to be submitted and will be referenced here as soon as it is publicly available.
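
A minimal sketch of cutting a recording into such 4 s segments (the file name and the non-overlapping windowing are illustrative assumptions, not the dataset's official preprocessing):

```python
# Sketch: split a recording into non-overlapping 4 s segments.
# "recording.wav" and the windowing policy are illustrative assumptions.
import torchaudio

SEGMENT_SECONDS = 4

waveform, sr = torchaudio.load("recording.wav")  # (channels, samples)
segment_len = SEGMENT_SECONDS * sr
segments = [
    waveform[:, start:start + segment_len]
    for start in range(0, waveform.shape[1] - segment_len + 1, segment_len)
]
print(f"{len(segments)} segments of {SEGMENT_SECONDS} s each")
```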

### Features

The audio recordings were resampled to 96 kHz, as we wanted to avoid losing too much high-frequency information from the species. Log-Mel spectrograms were then extracted using torchlibrosa.
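
A minimal sketch of this feature pipeline, assuming torchaudio for loading and resampling; the STFT and mel parameters below are illustrative placeholders, while the values actually used are stored in conf/config.yaml:

```python
# Sketch: resample to 96 kHz and extract a log-Mel spectrogram with
# torchlibrosa. Hyperparameters are placeholders; see conf/config.yaml
# for the values used in training.
import torchaudio
from torchlibrosa.stft import Spectrogram, LogmelFilterBank

TARGET_SR = 96_000  # resampling rate stated above

# "recording.wav" is a placeholder file name.
waveform, sr = torchaudio.load("recording.wav")  # (channels, samples)
waveform = waveform.mean(dim=0, keepdim=True)    # downmix to mono
if sr != TARGET_SR:
    waveform = torchaudio.functional.resample(waveform, sr, TARGET_SR)

spectrogram = Spectrogram(n_fft=2048, hop_length=960)
logmel = LogmelFilterBank(sr=TARGET_SR, n_fft=2048, n_mels=128,
                          fmin=50, fmax=48_000)

features = logmel(spectrogram(waveform))  # (batch, 1, time_steps, n_mels)
print(features.shape)
```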

### Training process

The model was trained for 30 epochs. At the end of each epoch, it was evaluated on our validation set, and we release the state that achieved the best performance on this set. All training hyperparameters can be found in conf/config.yaml in the model folder. The train, dev, and test splits can be accessed in the splits folder.
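
The released configuration can be inspected programmatically, for example with huggingface_hub and PyYAML; a minimal sketch, assuming both packages are installed:

```python
# Sketch: download and inspect the released training configuration.
import yaml
from huggingface_hub import hf_hub_download

config_path = hf_hub_download(
    repo_id="AlexanderGbd/insects-base-cnn10-96k-t",
    filename="conf/config.yaml",
)
with open(config_path) as f:
    config = yaml.safe_load(f)

print(config)
```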

## Evaluation

The performance on the test set reached a macro F1-score of 0.568.
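
For reference, a macro F1-score for multi-label predictions can be computed with scikit-learn; the label arrays below are made-up illustrations, not ECOSoundSet data:

```python
# Sketch: macro F1 over multi-label (species-tagging) predictions.
# y_true/y_pred are illustrative binary indicator arrays.
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1], [0, 1, 0]])  # ground-truth species tags
y_pred = np.array([[1, 0, 0], [0, 1, 1]])  # thresholded model outputs

print(f1_score(y_true, y_pred, average="macro"))
```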

## Acknowledgments

Please acknowledge the work that produced the original model and the ECOSoundSet dataset. We would also appreciate an acknowledgment of autrainer.