---
pretty_name: DataSynthSELD
size_categories:
  - 100B<n<1T
task_categories:
  - audio-classification
---

# PSELDNets: Pre-trained Neural Networks on a Large-scale Synthetic Dataset for Sound Event Localization and Detection

  1. This repo contains 67,000 1-minute clips (approximately 1,117 hours) for training and 3,060 1-minute clips (roughly 51 hours) for testing.
  2. The dataset covers an ontology of 170 sound classes and is generated by convolving sound event clips from FSD50K with simulated SRIRs (for training) or SRIRs collected in the TAU-SRIR DB (for testing); a minimal convolution sketch follows this list.
  3. The datasets are generated using this tool.
  4. The pre-trained SELD checkpoints on the large-scale synthetic dataset are also publicly available.
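
For illustration, the sketch below shows the core spatialization step described in item 2: convolving a mono sound event clip with a multichannel SRIR. This is not the project's actual generation pipeline; the file names, sample-rate handling, and 4-channel (FOA) layout are assumptions.

```python
import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

# Hypothetical inputs: a mono FSD50K event clip and a 4-channel (FOA) SRIR.
event, fs = sf.read("fsd50k_event.wav")      # shape: (num_samples,)
srir, fs_ir = sf.read("srir_foa.wav")        # shape: (ir_samples, 4)
assert fs == fs_ir, "event and SRIR must share the same sample rate"

# Convolve the event with every SRIR channel to place it at the SRIR's
# source position; the result is a spatialized multichannel clip.
spatialized = np.stack(
    [fftconvolve(event, srir[:, ch]) for ch in range(srir.shape[1])],
    axis=1,
)

# Peak-normalize and save. A full pipeline would additionally scale events
# to target SNRs, position them in time, and mix them into 1-minute clips.
spatialized /= np.max(np.abs(spatialized)) + 1e-8
sf.write("spatialized_event.wav", spatialized, fs)
```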

## New Updates

- (2025-05-22) We have released the EINV2-HTSAT-AGG1-0.514.ckpt and SEDDOA-HTSAT-AGG1-0.531.ckpt checkpoints; the corresponding method is described here. A minimal loading sketch is shown below.
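
The released files appear to be PyTorch Lightning-style checkpoints. The sketch below only inspects a locally downloaded checkpoint; the path and the key layout are assumptions.

```python
import torch

# Hypothetical local path to one of the released checkpoints.
ckpt_path = "EINV2-HTSAT-AGG1-0.514.ckpt"

# weights_only=False is needed for Lightning-style checkpoints that store
# extra (pickled) metadata alongside the tensors (assumption about format).
ckpt = torch.load(ckpt_path, map_location="cpu", weights_only=False)

# Lightning checkpoints usually keep model weights under "state_dict".
state_dict = ckpt.get("state_dict", ckpt)
print(f"{len(state_dict)} entries in the checkpoint")
for name, tensor in list(state_dict.items())[:5]:
    print(name, tuple(tensor.shape))
```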

## Download
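
As a hedged example, the dataset files can likely be fetched with the huggingface_hub client; the repo_id below is a placeholder and should be replaced with the identifier shown at the top of this dataset page.

```python
from huggingface_hub import snapshot_download

# Placeholder repo_id: substitute the dataset identifier from this page.
local_dir = snapshot_download(
    repo_id="<namespace>/<dataset-name>",
    repo_type="dataset",
)
print("Dataset files downloaded to:", local_dir)
```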

## Citation

Please cite the paper below if you use the datasets, code, or models of PSELDNets.

[1] Jinbo Hu, Yin Cao, Ming Wu, Fang Kang, Feiran Yang, Wenwu Wang, Mark D. Plumbley, and Jun Yang, "PSELDNets: Pre-trained Neural Networks on Large-scale Synthetic Datasets for Sound Event Localization and Detection," arXiv preprint arXiv:2411.06399, 2024. [Online]. Available: https://arxiv.org/abs/2411.06399