Baseline model for audio classification of Orthoptera and Hemiptera
This is the baseline model used in the ECOSoundSet paper to tag audio files with one or more of 86 species belonging to the insect orders Orthoptera and Hemiptera.
Installation
To use the model, you have to install autrainer, e.g. via pip:
pip install autrainer
Usage
The model can be applied to all WAV files in a folder (`<data-root>`), with the outputs stored in another folder (`<output-root>`):
autrainer inference hf:autrainer/edansa-2019-cnn10-32k-t <data-root> <output-root>
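For example, assuming your recordings are stored in a (hypothetical) folder named recordings and the predictions should be written to predictions:
autrainer inference hf:autrainer/edansa-2019-cnn10-32k-t recordings predictions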
Training
Pretraining
The model was originally pretrained on AudioSet by Kong et al.
Dataset
The model was then fine-tuned on audio segments from the ECOSoundSet dataset, which will soon be submitted for publication and will be referenced here as soon as it is publicly available.
Features
The audio recordings were resampled to 96 kHz to avoid losing too much of the high-frequency content produced by the species. Log-Mel spectrograms were then extracted using torchlibrosa.
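A minimal sketch of this feature-extraction step, assuming torchaudio for resampling; the STFT and Mel parameters below (n_fft, hop_length, n_mels, fmin, fmax) are placeholders, the values actually used by the model are defined in conf/config.yaml:

```python
# Illustrative feature-extraction sketch; parameter values are placeholders,
# the exact settings for this model are given in conf/config.yaml.
import torch
import torchaudio
from torchlibrosa.stft import Spectrogram, LogmelFilterBank

TARGET_SR = 96_000  # recordings are resampled to 96 kHz

waveform, sr = torchaudio.load("example.wav")      # (channels, samples)
waveform = waveform.mean(dim=0, keepdim=True)      # mix down to mono
if sr != TARGET_SR:
    waveform = torchaudio.functional.resample(waveform, sr, TARGET_SR)

# Power spectrogram followed by a log-Mel filter bank, as provided by torchlibrosa.
spectrogram = Spectrogram(n_fft=1024, hop_length=320)
logmel = LogmelFilterBank(sr=TARGET_SR, n_fft=1024, n_mels=64,
                          fmin=50, fmax=TARGET_SR // 2)

with torch.no_grad():
    spec = spectrogram(waveform)   # (batch, 1, time_steps, freq_bins)
    features = logmel(spec)        # (batch, 1, time_steps, mel_bins)
print(features.shape)
```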
Training process
The model was trained for 30 epochs. At the end of each epoch, the model was evaluated on our validation set, and we release the state that achieved the best performance on this set. All training hyperparameters can be found in `conf/config.yaml` inside the model folder. The train, dev, and test splits can be accessed in the `splits` folder.
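The released configuration can be inspected directly; a minimal sketch, assuming the model repository has been downloaded locally so that conf/config.yaml is available on disk:

```python
# Sketch: inspect the training hyperparameters shipped with the model.
# Assumes the repository has been downloaded (e.g. via huggingface_hub)
# and the working directory is the model folder.
import yaml

with open("conf/config.yaml") as f:
    config = yaml.safe_load(f)

# Print the top-level configuration entries (e.g. number of epochs, optimizer, ...).
for key, value in config.items():
    print(key, ":", value)
```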
Evaluation
The performance on the test set reached a macro F1-score of 0.56.
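The macro F1-score averages the per-species F1-scores, so each of the 86 species contributes equally regardless of how often it occurs. A minimal sketch of this metric using scikit-learn (not part of this repository) on hypothetical multi-label predictions:

```python
# Sketch: macro F1-score for multi-label species tagging with scikit-learn.
# y_true and y_pred are hypothetical binary indicator matrices of shape
# (num_clips, num_species).
import numpy as np
from sklearn.metrics import f1_score

y_true = np.array([[1, 0, 1], [0, 1, 0], [1, 1, 0]])  # ground-truth tags
y_pred = np.array([[1, 0, 0], [0, 1, 0], [1, 0, 0]])  # model predictions

print(f1_score(y_true, y_pred, average="macro", zero_division=0))
```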
Acknowledgments
Please acknowledge the work that produced the original model and the ECOSoundSet dataset. We would also appreciate an acknowledgment of autrainer.