---
license: cc0-1.0
task_categories:
- image-to-text
tags:
- chemistry
- molecular-structure
- smiles
- ocr
- computer-vision
- webdataset
- lightonocr
size_categories:
- 1M<n<10M
---
# BMS Molecular Translation - WebDataset Shards
This dataset contains pre-processed WebDataset shards of the BMS Molecular Translation dataset,
optimized for fast data loading during model training.
## Dataset Summary
- **Total Size**: 3.8 GB
- **Training shards**: 236 files (3.7 GB) - 2.36M molecular structure images with SMILES
- **Validation shards**: 5 files (0.1 GB) - 48K samples for model validation
- **Test shards**: 3 files (<0.1 GB) - 24K held-out samples for final evaluation
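Once the shards are downloaded (see Usage below), a quick sanity check against these counts is straightforward. This is a sketch; the `*.tar` glob assumes the shard filenames carry a `.tar` extension:

```python
import glob
import tarfile

# Expect 236 training shards per the summary above
shards = sorted(glob.glob(".data/webdataset_shards/train/*.tar"))
print(f"{len(shards)} training shards found")

# Each sample is stored as multiple tar members (e.g. image + SMILES text),
# so a 10,000-sample shard holds a multiple of 10,000 members
with tarfile.open(shards[0]) as tf:
    print(f"{len(tf.getnames())} members in first shard")
```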
## Format
Shards are in [WebDataset](https://github.com/webdataset/webdataset) format:
- Sequential tar archives for fast I/O
- 10,000 samples per shard
- Training data pre-shuffled
- Val/test data in original order
- **Tar files are preserved** (not extracted) - perfect for WebDataset!
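For reference, here is a minimal sketch of reading the shards directly with the [webdataset](https://github.com/webdataset/webdataset) library. The shard filename pattern and the per-sample keys (`png`/`txt`) are assumptions, so verify them against the downloaded files:

```python
import webdataset as wds

# Hypothetical shard pattern - check the actual filenames after download
urls = ".data/webdataset_shards/train/train-{000000..000235}.tar"

dataset = (
    wds.WebDataset(urls)
    .shuffle(1000)           # small in-memory buffer on top of pre-shuffled shards
    .decode("pil")           # decode image bytes to PIL images
    .to_tuple("png", "txt")  # assumed keys: image + SMILES string
)

for image, smiles in dataset:
    print(image.size, smiles)
    break
```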
## Usage
### Download the Dataset
```bash
# Install the HuggingFace Hub client
pip install huggingface_hub

# Download the entire dataset with the helper script
python download_shards_from_huggingface.py --username jeffdekerj
```

Or use the HuggingFace Hub directly from Python:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="jeffdekerj/bms-images-shards",
    repo_type="dataset",
    local_dir=".data/webdataset_shards",
)
```
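The same download can also be done from the command line with `huggingface-cli`, which ships with `huggingface_hub`:

```bash
huggingface-cli download jeffdekerj/bms-images-shards \
    --repo-type dataset \
    --local-dir .data/webdataset_shards
```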
### Load with WebDataset
```python
from webdataset_loader import BMSWebDataset
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("lightonai/LightOnOCR-1B-1025")

train_dataset = BMSWebDataset(
    shard_dir=".data/webdataset_shards/train/",
    processor=processor,
    user_prompt="Return the SMILES string for this molecule.",
    shuffle_buffer=1000,
)
```
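To batch samples for training, the dataset can be wrapped in a PyTorch `DataLoader`. This is a sketch that assumes `BMSWebDataset` is an iterable dataset yielding processor-ready samples; the batch size and worker count are illustrative:

```python
from torch.utils.data import DataLoader

# Shuffling is handled by shuffle_buffer above, so the loader does not shuffle
train_loader = DataLoader(train_dataset, batch_size=4, num_workers=2)

for batch in train_loader:
    # Batch structure depends on what BMSWebDataset yields per sample
    ...
    break
```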
### Train Your Model
```bash
python finetune_lightocr.py \
    --train_shards .data/webdataset_shards/train/ \
    --val_shards .data/webdataset_shards/val/ \
    --per_device_train_batch_size 4 \
    --num_train_epochs 3 \
    --fp16
```
## Benefits
- **2-5x faster** data loading vs individual files
- **Better I/O** performance for network filesystems
- **Lower overhead** with sequential reads
- **Built-in shuffling** without memory overhead
- **Tar files preserved** - shards are not auto-extracted, unlike Kaggle downloads
## Source Repository
GitHub: https://github.com/JeffDeKerj/lightocr
Complete documentation available in the repository:
- `docs/WEBDATASET_GUIDE.md` - Complete usage guide
- `docs/HUGGINGFACE_GUIDE.md` - HuggingFace-specific guide
- `docs/FINETUNE_GUIDE.md` - Fine-tuning guide
- `README.md` - Project overview
## Original Dataset
Based on the BMS Molecular Translation competition dataset:
https://www.kaggle.com/c/bms-molecular-translation
## Citation
If you use this dataset, please cite both:
1. The original BMS Molecular Translation competition
2. The LightOnOCR model (if applicable to your work)
## License
CC0: Public Domain. Free to use for any purpose.