ReplayDF

ReplayDF is a dataset for evaluating the impact of replay attacks on audio deepfake detection systems. It features re-recorded bona-fide and synthetic speech derived from M-AILABS and MLAAD v5, using 109 unique speaker-microphone combinations across six languages and four TTS models in diverse acoustic environments.

ReplayDF shows that such replays significantly degrade the performance of state-of-the-art detectors: audio deepfakes become much harder to detect once they have been played over a loudspeaker and re-recorded through a microphone. The dataset is provided for non-commercial research to support the development of robust and generalizable deepfake detection systems.
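As a rough intuition for the replay channel: each recording setup in aux/ includes a room impulse response (RIR) derived from a sine sweep, and the acoustic path from loudspeaker to microphone can be approximated by convolving a clean signal with such an RIR. The sketch below is a simplified linear model (function name and peak normalization are our own; it ignores loudspeaker nonlinearity and additive microphone noise):

```python
import numpy as np

def simulate_replay(dry: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Approximate a loudspeaker-to-microphone replay by convolving the
    dry signal with a room impulse response, then rescaling the result
    to the dry signal's peak level. This is a simplified linear model."""
    wet = np.convolve(dry, rir)[: len(dry)]  # keep original length
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet = wet * (np.max(np.abs(dry)) / peak)
    return wet
```

A detector that is robust to this kind of channel distortion should score the dry and "wet" versions of the same utterance similarly; ReplayDF provides real re-recordings rather than simulations for exactly this evaluation.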

πŸ“„ Paper

Replay Attacks Against Audio Deepfake Detection (Interspeech 2025)

πŸ”½ Download

sudo apt-get install git-lfs
git lfs install
git clone https://huggingface.co/datasets/mueller91/ReplayDF

πŸ“Œ Citation

@inproceedings{muller2025replaydf,
  title     = {Replay Attacks Against Audio Deepfake Detection},
  author    = {Nicolas MΓΌller and Piotr Kawa and Wei-Herng Choong and Adriana Stan and Aditya Tirumala Bukkapatnam and Karla Pizzi and Alexander Wagner and Philip Sperl},
  booktitle = {Interspeech 2025},
  year      = {2025},
}

πŸ“ Folder Structure

ReplayDF/
β”œβ”€β”€ aux/
β”‚   β”œβ”€β”€ <UID1>/                 # contains information such as setup, recorded sine sweep, RIR (derived from sine sweep)
β”‚   β”œβ”€β”€ <UID2>/
β”‚   └── ...
β”œβ”€β”€ wav/
β”‚   β”œβ”€β”€ <UID1>/
β”‚   β”‚   β”œβ”€β”€ spoof               # Re-recorded audio samples (spoofs)
β”‚   β”‚   β”œβ”€β”€ benign              # Re-recorded audio samples (benign)
β”‚   β”‚   └── meta.csv            # Metadata for this UID's recordings
β”‚   β”œβ”€β”€ <UID2>/
β”‚   β”‚   β”œβ”€β”€ spoof               
β”‚   β”‚   β”œβ”€β”€ benign              
β”‚   β”‚   └── meta.csv            
β”‚   └── ...
└── mos/
    β”œβ”€β”€ mos.png                 # MOS ratings plot
    └── mos_scores              # Individual MOS scores
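Assuming the layout above, the recordings can be enumerated with a few lines of standard-library Python. This is an illustrative sketch, not an official API; the columns inside meta.csv are not documented here, so the metadata loader simply returns the raw rows as dicts:

```python
# Sketch: iterate ReplayDF samples, assuming the folder layout above.
from pathlib import Path
import csv

def list_recordings(root):
    """Yield (uid, label, wav_path) for each re-recorded sample."""
    for uid_dir in sorted(Path(root, "wav").iterdir()):
        if not uid_dir.is_dir():
            continue
        for label in ("spoof", "benign"):
            for wav in sorted((uid_dir / label).glob("*.wav")):
                yield uid_dir.name, label, wav

def load_meta(root, uid):
    """Return the rows of one UID's meta.csv as dicts (columns as-is)."""
    with open(Path(root, "wav", uid, "meta.csv"), newline="") as f:
        return list(csv.DictReader(f))
```

For example, `for uid, label, wav in list_recordings("ReplayDF"): ...` walks every spoof and benign sample across all 109 speaker-microphone setups.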

πŸ“„ License

Attribution-NonCommercial-ShareAlike 4.0 International (CC BY-NC-SA 4.0): https://creativecommons.org/licenses/by-nc-sa/4.0/

Resources

Find the original resources (i.e. the audio files before replay) in the source datasets M-AILABS and MLAAD v5.

Mic/Speaker Matrix


πŸ“Š Mean Opinion Scores (MOS)

The scoring criteria for rating the audio files are outlined in the table below:

| Rating | Description | Speech Quality | Distortion (background noise, overdrive, etc.) |
|--------|-------------|----------------|------------------------------------------------|
| 5 | Excellent | Clear | Imperceptible |
| 4 | Good | Clear | Slightly perceptible, but not annoying |
| 3 | Fair | Understandable | Perceptible and slightly annoying |
| 2 | Poor | Understandable | Perceptible and annoying |
| 1 | Very Poor | Barely understandable | Very annoying and objectionable |
| e | Error | Inaudible | Heavy |