---
license: cc0-1.0
task_categories:
- image-to-text
tags:
- chemistry
- molecular-structure
- smiles
- ocr
- computer-vision
- webdataset
- lightonocr
size_categories:
- 1M<n<10M
---

# BMS Molecular Translation - WebDataset Shards

This dataset contains pre-processed WebDataset shards of the BMS Molecular Translation dataset,
optimized for fast data loading during model training.

## Dataset Summary

- **Total Size**: 3.8 GB
- **Training shards**: 236 files (3.7 GB) - 2.36M molecular structure images paired with SMILES strings
- **Validation shards**: 5 files (0.1 GB) - 48K samples for model validation
- **Test shards**: 3 files (<0.1 GB) - 24K held-out samples for final evaluation
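
At 10,000 samples per shard (see Format below), the shard counts follow directly from the rounded sample counts. A quick sanity check:

```python
import math

# Stated sample counts per split (val/test rounded to the nearest 1K)
samples = {"train": 2_360_000, "val": 48_000, "test": 24_000}
per_shard = 10_000  # samples per shard

# Ceiling division: a partially filled final shard still counts as a file
shards = {split: math.ceil(n / per_shard) for split, n in samples.items()}
print(shards)  # {'train': 236, 'val': 5, 'test': 3}
```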

## Format

Shards are in [WebDataset](https://github.com/webdataset/webdataset) format:
- Sequential tar archives for fast I/O
- 10,000 samples per shard
- Training data pre-shuffled
- Val/test data in original order
- **Tar files are preserved** (not extracted) - perfect for WebDataset!

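In a WebDataset shard, files that share a basename key form one sample (e.g. `000000.png` plus `000000.txt`). A minimal stdlib sketch of that convention, writing and reading a tiny in-memory shard; the keys, extensions, and SMILES values here are made up for illustration:

```python
import io
import tarfile

# Write a tiny in-memory shard: one image file + one label file per key
buf = io.BytesIO()
with tarfile.open(fileobj=buf, mode="w") as tar:
    for key, smiles in [("000000", "CCO"), ("000001", "c1ccccc1")]:
        for ext, payload in [("png", b"<image bytes>"), ("txt", smiles.encode())]:
            info = tarfile.TarInfo(name=f"{key}.{ext}")
            info.size = len(payload)
            tar.addfile(info, io.BytesIO(payload))

# Read it back, grouping members by key -> one dict per sample
buf.seek(0)
samples = {}
with tarfile.open(fileobj=buf, mode="r") as tar:
    for member in tar.getmembers():
        key, ext = member.name.rsplit(".", 1)
        samples.setdefault(key, {})[ext] = tar.extractfile(member).read()

print(samples["000000"]["txt"])  # b'CCO'
```

A WebDataset loader performs the same grouping while streaming, which is why sequential tar reads are fast.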
## Usage

### Download the Dataset

```bash
# Install the HuggingFace Hub client
pip install huggingface_hub

# Download the entire dataset with the helper script from the source repository
python download_shards_from_huggingface.py --username jeffdekerj
```

Or use the HuggingFace Hub API directly:

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="jeffdekerj/bms-images-shards",
    repo_type="dataset",
    local_dir=".data/webdataset_shards",
)
```

### Load with WebDataset

```python
from webdataset_loader import BMSWebDataset
from transformers import AutoProcessor

processor = AutoProcessor.from_pretrained("lightonai/LightOnOCR-1B-1025")

train_dataset = BMSWebDataset(
    shard_dir=".data/webdataset_shards/train/",
    processor=processor,
    user_prompt="Return the SMILES string for this molecule.",
    shuffle_buffer=1000,
)
```

### Train Your Model

```bash
python finetune_lightocr.py \
    --train_shards .data/webdataset_shards/train/ \
    --val_shards .data/webdataset_shards/val/ \
    --per_device_train_batch_size 4 \
    --num_train_epochs 3 \
    --fp16
```

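The number of optimizer steps per epoch follows from the sample count and the effective batch size. A back-of-the-envelope sketch for the command above, assuming a single device and no gradient accumulation (both are assumptions, not flags taken from the command):

```python
num_samples = 2_360_000   # training samples (see Dataset Summary)
per_device_batch = 4      # --per_device_train_batch_size
num_devices = 1           # assumption: single GPU
grad_accum = 1            # assumption: no gradient accumulation

effective_batch = per_device_batch * num_devices * grad_accum
steps_per_epoch = num_samples // effective_batch
print(steps_per_epoch)  # 590000
```

With batches this small, most runs would raise the effective batch size via more devices or gradient accumulation, which divides the step count accordingly.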
## Benefits

- **2-5x faster** data loading vs individual files
- **Better I/O** performance on network filesystems
- **Lower overhead** thanks to sequential reads
- **Built-in shuffling** without memory overhead
- **Tar files preserved** - no auto-extraction (unlike Kaggle)

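"Built-in shuffling without memory overhead" refers to WebDataset's streaming buffer shuffle (the `shuffle_buffer` argument above): a fixed-size buffer fills from the stream, and each incoming sample swaps with a random buffered one. A stdlib sketch of the idea:

```python
import random

def buffered_shuffle(stream, bufsize=1000, seed=0):
    """Approximate streaming shuffle using O(bufsize) memory."""
    rng = random.Random(seed)
    buf = []
    for item in stream:
        buf.append(item)
        if len(buf) >= bufsize:
            # Swap a random buffered element to the end and emit it
            i = rng.randrange(len(buf))
            buf[i], buf[-1] = buf[-1], buf[i]
            yield buf.pop()
    rng.shuffle(buf)  # drain the remainder in random order
    yield from buf

out = list(buffered_shuffle(range(10), bufsize=4))
print(sorted(out) == list(range(10)))  # True: every sample seen exactly once
```

Larger buffers shuffle more thoroughly at the cost of memory; pre-shuffled shards mean even a modest buffer gives good mixing.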
## Source Repository

GitHub: https://github.com/JeffDeKerj/lightocr

Complete documentation is available in the repository:
- `docs/WEBDATASET_GUIDE.md` - Complete usage guide
- `docs/HUGGINGFACE_GUIDE.md` - HuggingFace-specific guide
- `docs/FINETUNE_GUIDE.md` - Fine-tuning guide
- `README.md` - Project overview

## Original Dataset

Based on the BMS Molecular Translation competition dataset:
https://www.kaggle.com/c/bms-molecular-translation

## Citation

If you use this dataset, please cite both:
1. The original BMS Molecular Translation competition
2. The LightOnOCR model (if applicable to your work)

## License

CC0 1.0 (Public Domain). Free to use for any purpose.