---
license: cc-by-4.0
multilinguality:
- monolingual
task_categories:
- audio-text-to-text
size_categories:
- 1K<n<10K
source_datasets:
- original
pretty_name: BLAB (Brutally Long Audio Bench)
tags:
- speech
- audio
- speech-llm
- audio-lm
- long-audio
- spoken-language-understanding
viewer: true
configs:
- config_name: word_localization
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: question
    dtype: string
  - name: answer_type
    dtype: string
  - name: groundtruth
    dtype: LargeList
    inner_dtype:
    - name: word
      dtype: string
    - name: start
      dtype: float32
    - name: end
      dtype: float32
- config_name: advertisement_localization
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: question
    dtype: string
  - name: answer_type
    dtype: string
  - name: groundtruth
    dtype: Struct
    fields:
    - name: ads_segment
      dtype: LargeList
      inner_dtype:
      - name: text
        dtype: string
      - name: start
        dtype: float32
      - name: end
        dtype: float32
    - name: word_timestamp
      dtype: LargeList
      inner_dtype:
      - name: word
        dtype: string
      - name: start
        dtype: float32
      - name: end
        dtype: float32
- config_name: named_entity_localization
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: question
    dtype: string
  - name: answer_type
    dtype: string
  - name: groundtruth
    dtype: Struct
    fields:
    - name: entities
      dtype: LargeList
      inner_dtype:
      - name: entity_type
        dtype: string
      - name: entity
        dtype: string
      - name: start
        dtype: float32
      - name: end
        dtype: float32
    - name: word_timestamp
      dtype: LargeList
      inner_dtype:
      - name: word
        dtype: string
      - name: start
        dtype: float32
      - name: end
        dtype: float32
- config_name: speaker_number_estimation
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: question
    dtype: string
  - name: groundtruth
    dtype: Sequence
    inner_dtype:
      dtype: int32
- config_name: entire_duration
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: question
    dtype: string
  - name: groundtruth
    dtype: float32
- config_name: event_duration
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: question
    dtype: string
  - name: answer_type
    dtype: string
  - name: groundtruth
    dtype: float32
- config_name: emotion_ranking
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: correct_option
    dtype: string
  - name: option_A
    dtype: string
  - name: option_B
    dtype: string
  - name: option_C
    dtype: string
  - name: option_D
    dtype: string
  - name: option_E
    dtype: string
  - name: correct_answer
    dtype: string
- config_name: emotion_reasoning
  features:
  - name: video_url
    dtype: string
  - name: audio
    dtype: string
  - name: type
    dtype: string
  - name: question
    dtype: string
  - name: correct_option
    dtype: string
  - name: option_A
    dtype: string
  - name: option_B
    dtype: string
  - name: option_C
    dtype: string
  - name: option_D
    dtype: string
  - name: correct_answer
    dtype: string
---
# BLAB: Brutally Long Audio Bench

## Dataset Summary

Brutally Long Audio Bench (BLAB) is a challenging long-form audio benchmark that evaluates audio LMs on localization, duration estimation, emotion, and counting tasks using audio segments averaging 51 minutes in length. BLAB consists of 833+ hours of diverse, full-length YouTube audio clips, each paired with human-annotated, text-based natural language questions and answers. Our audio data were collected from permissively licensed sources and underwent a human-assisted filtering process to ensure task compliance.

NB: This data should only be used for evaluation purposes and not for model training.
## Tasks Covered in BLAB

### Localization

* **Word Localization:** Locate the exact start and end times of specific words within the audio (see the scoring sketch after this list).
* **Named Entity Localization:** Detect and locate the exact start and end times of named entities (e.g., people, organizations, locations).
* **Advertisement Localization:** Locate and transcribe advertisement segments within a podcast.
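
For the localization tasks, `groundtruth` stores timestamped spans in seconds (for Word Localization, a list of `{word, start, end}` records). The snippet below is a minimal, illustrative sketch of how a predicted span could be compared against such an annotation using temporal intersection-over-union; this is an assumption for illustration, not the official BLAB scoring procedure (see the paper for the actual metrics).

```python
# Illustrative only -- NOT the official BLAB metric. Compares a predicted
# (start, end) span against a word_localization groundtruth entry using
# temporal intersection-over-union (IoU); times are in seconds.
def temporal_iou(pred: tuple[float, float], gold: tuple[float, float]) -> float:
    inter = max(0.0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union > 0 else 0.0

# Hypothetical groundtruth entry following the word_localization schema.
groundtruth = [{"word": "hello", "start": 12.4, "end": 12.9}]
predicted = (12.5, 13.0)  # hypothetical model output

gold_span = (groundtruth[0]["start"], groundtruth[0]["end"])
print(f"temporal IoU: {temporal_iou(predicted, gold_span):.2f}")
```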
### Counting

* **Speaker Number Estimation:** Determine the number of unique speakers present in the full audio segment.

### Duration

* **Event Duration:** Calculate the duration of specific acoustic events (e.g., laughter in a comedy special, question-and-answer segments in a panel session, or a particular speaker’s total speaking time in a meeting) within an audio sample.
* **Entire Duration:** Estimate the total duration of an audio file, expressed in seconds.

### Emotion

* **Emotion Reasoning:** Reason over emotional expressions conveyed in the audio.
* **Emotion Ranking:** Rank different emotional expressions of speech and non-verbal sound present in the audio (see the prompt-building sketch after this list).
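
The emotion tasks are multiple-choice: each example carries a `question`, options `option_A` through `option_E` (`option_A`–`option_D` for Emotion Reasoning), and a `correct_answer`. Below is a minimal sketch of how those fields might be assembled into a text prompt; the template itself is an assumption, not a prescribed format.

```python
# Minimal sketch (the prompt template is an assumption, not part of BLAB):
# turn the multiple-choice fields of an emotion example into a text prompt.
def build_prompt(example: dict) -> str:
    lines = [example["question"]]
    for letter in ["A", "B", "C", "D", "E"]:
        option = example.get(f"option_{letter}")
        if option:  # emotion_reasoning has options A-D, emotion_ranking A-E
            lines.append(f"{letter}. {option}")
    return "\n".join(lines)

# The model's choice can then be compared against example["correct_answer"].
```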
## Dataset Structure

To load a specific task from BLAB, specify its configuration name. Keep in mind that **BLAB provides URLs to the YouTube audio files, not the actual audio files themselves.** You'll need to download the audio from these URLs separately (a sketch of one way to do this follows the loading example below).
```python
from datasets import load_dataset

# Load the Word Localization task
word_localization_data = load_dataset("oreva/blab_long_audio", "word_localization")

# Load the Named Entity Localization task
named_entity_localization_data = load_dataset("oreva/blab_long_audio", "named_entity_localization")

# You can load any other task similarly:
# speaker_number_estimation_data = load_dataset("oreva/blab_long_audio", "speaker_number_estimation")
# entire_duration_data = load_dataset("oreva/blab_long_audio", "entire_duration")
# event_duration_data = load_dataset("oreva/blab_long_audio", "event_duration")
# emotion_reasoning_data = load_dataset("oreva/blab_long_audio", "emotion_reasoning")
# emotion_ranking_data = load_dataset("oreva/blab_long_audio", "emotion_ranking")
```
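
Since only `video_url` links are distributed, one possible way to fetch the corresponding audio locally is the third-party `yt-dlp` package together with `ffmpeg`. The sketch below is an assumption about tooling, not an official download script; the output path and audio format are illustrative, and individual videos may become unavailable over time.

```python
# Hypothetical download helper (not an official BLAB script).
# Requires `pip install yt-dlp` and a local ffmpeg installation.
from datasets import load_dataset
from yt_dlp import YoutubeDL

def download_audio(url: str, out_dir: str = "blab_audio") -> None:
    # Grab the best available audio stream and convert it to WAV via ffmpeg.
    ydl_opts = {
        "format": "bestaudio/best",
        "outtmpl": f"{out_dir}/%(id)s.%(ext)s",
        "postprocessors": [{"key": "FFmpegExtractAudio", "preferredcodec": "wav"}],
    }
    with YoutubeDL(ydl_opts) as ydl:
        ydl.download([url])

# Example: fetch the audio behind every example in the Word Localization task.
word_localization_data = load_dataset("oreva/blab_long_audio", "word_localization")
for split in word_localization_data.values():
    for example in split:
        download_audio(example["video_url"])
```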
## Citation

```bibtex
@misc{ahia2025blabbrutallylongaudio,
      title={BLAB: Brutally Long Audio Bench},
      author={Orevaoghene Ahia and Martijn Bartelds and Kabir Ahuja and Hila Gonen and Valentin Hofmann and Siddhant Arora and Shuyue Stella Li and Vishal Puttagunta and Mofetoluwa Adeyemi and Charishma Buchireddy and Ben Walls and Noah Bennett and Shinji Watanabe and Noah A. Smith and Yulia Tsvetkov and Sachin Kumar},
      year={2025},
      eprint={2505.03054},
      archivePrefix={arXiv},
      primaryClass={cs.AI},
      url={https://arxiv.org/abs/2505.03054},
}
```