LibriBrain MEG Preprocessed Dataset
Preprocessed magnetoencephalography (MEG) recordings with phoneme labels from the LibriBrain dataset, optimized for fast loading during machine learning model training.
This dataset was created for the LibriBrain 2025 Competition (now concluded).
Dataset Overview
MEG Recording Specifications
- Channels: 306 total (102 magnetometers + 204 gradiometers)
- Sampling Rate: 250 Hz
- Duration: ~52 hours of recordings
- Subject: Single English speaker listening to Sherlock Holmes audiobooks
- Phoneme Instances: ~1.5 million
Phoneme Inventory
39 ARPAbet phonemes with position encoding:
- Vowels (15): aa, ae, ah, ao, aw, ay, eh, er, ey, ih, iy, ow, oy, uh, uw
- Consonants (24): b, ch, d, dh, f, g, hh, jh, k, l, m, n, ng, p, r, s, sh, t, th, v, w, y, z, zh
- Special: oov (out-of-vocabulary)
Position markers: B (beginning), I (inside), E (end), S (singleton)
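The exact class indexing used inside the HDF5 files is not documented here; as a rough illustration of how the 39 phonemes and the B/I/E/S position markers combine into a label space, a hypothetical construction looks like this (the real ordering may differ):

```python
# Hypothetical reconstruction of the label space; the actual class ordering
# stored in the HDF5 files may differ.
VOWELS = ["aa", "ae", "ah", "ao", "aw", "ay", "eh", "er", "ey", "ih",
          "iy", "ow", "oy", "uh", "uw"]
CONSONANTS = ["b", "ch", "d", "dh", "f", "g", "hh", "jh", "k", "l", "m", "n",
              "ng", "p", "r", "s", "sh", "t", "th", "v", "w", "y", "z", "zh"]
POSITIONS = ["B", "I", "E", "S"]  # beginning, inside, end, singleton

# 39 phonemes x 4 positions, plus the out-of-vocabulary class.
labels = [f"{ph}_{pos}" for ph in VOWELS + CONSONANTS for pos in POSITIONS] + ["oov"]
label_to_index = {label: i for i, label in enumerate(labels)}
print(len(labels))  # 157 classes under this assumed scheme
```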
Signal Processing
All MEG data has been preprocessed through the following pipeline:
- Bad channel removal
- Signal Space Separation (SSS) for noise reduction
- Notch filtering for powerline noise removal
- Bandpass filtering (0.1-125 Hz)
- Downsampling to 250 Hz
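The distributed files already reflect these steps, so no further filtering is needed before training. For orientation only, a rough MNE-Python equivalent of such a pipeline is sketched below; the file name, bad-channel list, and notch frequencies are assumptions, not the scripts actually used to produce this dataset:

```python
import mne

# Load a raw MEG recording (hypothetical file name).
raw = mne.io.read_raw_fif("session_raw.fif", preload=True)

# Mark bad channels (the real list is session specific; this is a placeholder).
raw.info["bads"] = ["MEG 2443"]

# Signal Space Separation (Maxwell filtering) for noise reduction;
# this also reconstructs the channels marked as bad.
raw = mne.preprocessing.maxwell_filter(raw)

# Notch filter for powerline noise (50 Hz mains and first harmonic assumed).
raw.notch_filter(freqs=[50, 100])

# Bandpass filter 0.1-125 Hz, then downsample to 250 Hz.
raw.filter(l_freq=0.1, h_freq=125.0)
raw.resample(sfreq=250)
```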
Preprocessing and Grouping
This dataset contains pre-grouped and averaged MEG samples for significantly faster data loading during training. Instead of grouping samples on-the-fly during training (which is computationally expensive), samples have been pre-grouped and averaged at several fixed group sizes.
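The grouping rule itself is not spelled out on this card; assuming it averages N single-trial MEG segments that share the same phoneme label (a common way to boost signal-to-noise ratio), the on-the-fly computation it replaces would look roughly like the following sketch:

```python
import numpy as np

def group_and_average(segments: np.ndarray, group_size: int) -> np.ndarray:
    """Average consecutive segments in groups of `group_size`.

    segments: array of shape (n_segments, 306, time_points), all assumed to
    carry the same phoneme label. Trailing segments that do not fill a complete
    group are dropped. This is an assumed reconstruction of the grouping step,
    not the exact code used to build the dataset.
    """
    n_groups = len(segments) // group_size
    usable = segments[: n_groups * group_size]
    return usable.reshape(n_groups, group_size, *segments.shape[1:]).mean(axis=1)

# Example: 57 single trials of one phoneme averaged in groups of 10 -> 5 samples.
trials = np.random.randn(57, 306, 125)
averaged = group_and_average(trials, group_size=10)
print(averaged.shape)  # (5, 306, 125)
```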
Available Grouping Configurations
- grouped_5: 5 samples averaged together
- grouped_10: 10 samples averaged together
- grouped_15: 15 samples averaged together
- grouped_20: 20 samples averaged together
- grouped_25: 25 samples averaged together
- grouped_30: 30 samples averaged together
- grouped_35: 35 samples averaged together (partial - train only)
- grouped_45: 45 samples averaged together
- grouped_50: 50 samples averaged together
- grouped_55: 55 samples averaged together
- grouped_60: 60 samples averaged together
- grouped_100: 100 samples averaged together
Each configuration contains:
- train_grouped.h5: Training data
- validation_grouped.h5: Validation data
- test_grouped.h5: Test data
- paths.yaml: File path references
Why Use Grouped Data?
- Faster Loading: Pre-computed grouping eliminates runtime averaging overhead
- Memory Efficient: Smaller file sizes for higher grouping levels
- Flexible: Choose grouping level based on your accuracy vs. speed requirements
- Standardized: Consistent preprocessing across all configurations
Installation
This dataset requires the modified pnpl library for loading:
```bash
pip install git+https://github.com/September-Labs/pnpl.git
```
Usage
```python
from pnpl.datasets import GroupedDataset

# Load preprocessed data with 100-sample grouping
train_dataset = GroupedDataset(
    preprocessed_path="data/grouped_100/train_grouped.h5",
    load_to_memory=True  # Optional: load entire dataset to memory for faster access
)

val_dataset = GroupedDataset(
    preprocessed_path="data/grouped_100/validation_grouped.h5",
    load_to_memory=True
)

# Get a sample
sample = train_dataset[0]
meg_data = sample['meg']           # Shape: (306, time_points)
phoneme_label = sample['phoneme']  # Phoneme class index

# Use with PyTorch DataLoader
from torch.utils.data import DataLoader

dataloader = DataLoader(
    train_dataset,
    batch_size=32,
    shuffle=True,
    num_workers=4
)
```
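From here, a standard PyTorch training loop consumes the batches. The sketch below assumes the default collation keeps the dict structure shown above, that labels are integer class indices, and that the class count and window length are placeholders to read from your own data; the linear model is purely illustrative:

```python
import torch
import torch.nn as nn

# Illustrative baseline: flatten the MEG window and classify with a linear layer.
# n_times and n_classes are placeholders; take them from the actual data.
n_channels, n_times, n_classes = 306, 125, 39
model = nn.Sequential(nn.Flatten(), nn.Linear(n_channels * n_times, n_classes))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

for batch in dataloader:
    meg = batch['meg'].float()        # (batch, 306, time_points), assumed layout
    labels = batch['phoneme'].long()  # (batch,), assumed integer class indices
    optimizer.zero_grad()
    loss = criterion(model(meg), labels)
    loss.backward()
    optimizer.step()
```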
Data Structure
```
data/
├── grouped_5/
│   ├── train_grouped.h5
│   ├── validation_grouped.h5
│   ├── test_grouped.h5
│   └── paths.yaml
├── grouped_10/
│   ├── train_grouped.h5
│   ├── validation_grouped.h5
│   ├── test_grouped.h5
│   └── paths.yaml
├── ...
└── grouped_100/
    ├── train_grouped.h5
    ├── validation_grouped.h5
    ├── test_grouped.h5
    └── paths.yaml
```
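The internal layout of the HDF5 files is handled by GroupedDataset, but any of them can be inspected directly with h5py; this snippet simply lists whatever groups and datasets the file contains, without assuming a particular structure:

```python
import h5py

# Print every group and dataset stored in one of the preprocessed files,
# along with dataset shapes and dtypes.
with h5py.File("data/grouped_100/validation_grouped.h5", "r") as f:
    def describe(name, obj):
        if isinstance(obj, h5py.Dataset):
            print(f"{name}: shape={obj.shape}, dtype={obj.dtype}")
        else:
            print(f"{name}/")
    f.visititems(describe)
```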
File Sizes
| Grouping | Train | Validation | Test | Total |
|---|---|---|---|---|
| grouped_5 | 45.6 GB | 425 MB | 456 MB | ~47 GB |
| grouped_10 | 22.8 GB | 213 MB | 228 MB | ~24 GB |
| grouped_20 | 11.4 GB | 106 MB | 114 MB | ~12 GB |
| grouped_50 | 4.6 GB | 37 MB | 42 MB | ~4.7 GB |
| grouped_100 | 2.3 GB | 19 MB | 21 MB | ~2.4 GB |
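Since a single grouping level is usually enough, you can avoid downloading the full repository by filtering the snapshot with huggingface_hub; the repo id below is a placeholder for this dataset's actual Hub id, and the pattern may need adjusting to match the repository layout:

```python
from huggingface_hub import snapshot_download

# Download only the 100-sample grouping (placeholder repo id; substitute the
# actual Hub id of this dataset; adjust the pattern if the files live under a
# top-level data/ directory in the repo).
snapshot_download(
    repo_id="your-org/libribrain-meg-preprocessed",
    repo_type="dataset",
    allow_patterns=["grouped_100/*"],
    local_dir="data",
)
```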
Dataset Splits
- Train: 88 sessions (~51 hours)
- Validation: 1 session (~0.36 hours)
- Test: 1 session (~0.38 hours)
Citation
If you use this dataset, please cite the LibriBrain competition:
```bibtex
@misc{libribrain2025,
  title={LibriBrain: A Dataset for Speech Decoding from Brain Signals},
  author={Neural Processing Lab},
  year={2025},
  url={https://neural-processing-lab.github.io/2025-libribrain-competition/}
}
```
License
Please refer to the original LibriBrain dataset license terms.
Acknowledgments
This preprocessed version was created to facilitate faster training for the LibriBrain 2025 Competition. The original dataset and competition were organized by the Neural Processing Lab.