BMS Molecular Translation - WebDataset Shards
This dataset contains pre-processed WebDataset shards of the BMS Molecular Translation dataset, optimized for fast data loading during model training.
Dataset Summary
- Total Size: 3.8 GB
- Training shards: 236 files (3.7 GB) - 2.36M molecular structure images paired with SMILES strings
- Validation shards: 5 files (0.1 GB) - 48K samples for model validation
- Test shards: 3 files (<0.1 GB) - 24K held-out samples for final evaluation
Format
Shards are in WebDataset format:
- Sequential tar archives for fast I/O
- 10,000 samples per shard
- Training data pre-shuffled
- Val/test data in original order
- Tar files are preserved (not extracted) - perfect for WebDataset!
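As a quick sanity check, individual shards can be streamed directly with the `webdataset` library. The sketch below is minimal and makes assumptions: the shard filename and the `.png`/`.txt` member keys are not confirmed by this card, so inspect a shard (e.g. with `tar -tzf`) to verify the actual layout:

```python
import webdataset as wds

# Stream one shard; the filename and the png/txt key names below are
# assumptions -- adjust them to match the actual shard contents.
shard = ".data/webdataset_shards/train/shard-000000.tar.gz"

dataset = (
    wds.WebDataset(shard)
    .decode("pil")             # decode image bytes into PIL images
    .to_tuple("png", "txt")    # yield (image, SMILES string) pairs
)

image, smiles = next(iter(dataset))
print(image.size, smiles)
```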
Usage
Download the Dataset
```bash
# Install the HuggingFace Hub client
pip install huggingface_hub

# Download the entire dataset with the helper script from the repo
python download_shards_from_huggingface.py --username jeffdekerj
```

Or use the HuggingFace Hub API directly:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="jeffdekerj/bms-images-shards",
    repo_type="dataset",
    local_dir=".data/webdataset_shards",
)
```
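`snapshot_download` can also fetch a single split: its `allow_patterns` argument filters files by glob. The `val/` prefix below is inferred from the shard paths used elsewhere in this card, so confirm it against the repo's file listing:

```python
from huggingface_hub import snapshot_download

# Fetch only the validation shards instead of the full 3.8 GB.
snapshot_download(
    repo_id="jeffdekerj/bms-images-shards",
    repo_type="dataset",
    local_dir=".data/webdataset_shards",
    allow_patterns="val/*",  # glob relative to the repo root
)
```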
Load with WebDataset
```python
from transformers import AutoProcessor
from webdataset_loader import BMSWebDataset

processor = AutoProcessor.from_pretrained("lightonai/LightOnOCR-1B-1025")

train_dataset = BMSWebDataset(
    shard_dir=".data/webdataset_shards/train/",
    processor=processor,
    user_prompt="Return the SMILES string for this molecule.",
    shuffle_buffer=1000,
)
```
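If `BMSWebDataset` behaves like a standard PyTorch `IterableDataset` (an assumption; check `webdataset_loader` in the source repository), it can be wrapped in a `DataLoader` as usual. Shuffling already happens inside the pipeline via `shuffle_buffer`, so the loader itself should not shuffle:

```python
from torch.utils.data import DataLoader

# Assumes BMSWebDataset yields per-sample processor outputs; if it
# already emits batches, pass batch_size=None instead.
train_loader = DataLoader(
    train_dataset,   # from the snippet above
    batch_size=4,
    num_workers=2,   # workers read shards in parallel
)

for batch in train_loader:
    # Batch layout depends on the processor output (e.g. pixel values
    # plus tokenized SMILES labels).
    break
```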
Train Your Model
```bash
python finetune_lightocr.py \
    --train_shards .data/webdataset_shards/train/ \
    --val_shards .data/webdataset_shards/val/ \
    --per_device_train_batch_size 4 \
    --num_train_epochs 3 \
    --fp16
```
Benefits
- 2-5x faster data loading vs individual files
- Better I/O performance for network filesystems
- Lower overhead with sequential reads
- Built-in shuffling without memory overhead
- Tar files kept intact - no automatic extraction (as happens with Kaggle downloads)
Source Repository
GitHub: https://github.com/JeffDeKerj/lightocr
Complete documentation available in the repository:
- docs/WEBDATASET_GUIDE.md - Complete usage guide
- docs/HUGGINGFACE_GUIDE.md - HuggingFace-specific guide
- docs/FINETUNE_GUIDE.md - Fine-tuning guide
- README.md - Project overview
Original Dataset
Based on the BMS Molecular Translation competition dataset: https://www.kaggle.com/c/bms-molecular-translation
Citation
If you use this dataset, please cite both:
- The original BMS Molecular Translation competition
- The LightOnOCR model (if applicable to your work)
License
CC0: Public Domain. Free to use for any purpose.