Update README.md
README.md
CHANGED
@@ -193,6 +193,16 @@ SmolVLM2 is built upon [SigLIP](https://huggingface.co/google/siglip-base-patch1
We release the SmolVLM2 checkpoints under the Apache 2.0 license.

## Citation information

You can cite us in the following way:

```bibtex
@misc{smolvlm2,
  title = {SmolVLM2: Bringing Video Understanding to Every Device},
  author = {Orr Zohar and Miquel Farré and Andi Marafioti and Merve Noyan and Pedro Cuenca and Cyril Zakka and Joshua Lochner},
  year = {2025},
  url = {https://huggingface.co/blog/smolvlm2}
}
```

## Training Data

SmolVLM2 was trained on 3.3M samples drawn from ten datasets: [LLaVA-OneVision](https://huggingface.co/datasets/lmms-lab/LLaVA-OneVision-Data), [M4-Instruct](https://huggingface.co/datasets/lmms-lab/M4-Instruct-Data), [MAmmoTH-VL](https://huggingface.co/datasets/MAmmoTH-VL/MAmmoTH-VL-Instruct-12M), [LLaVA-Video-178K](https://huggingface.co/datasets/lmms-lab/LLaVA-Video-178K), [FineVideo](https://huggingface.co/datasets/HuggingFaceFV/finevideo), [Video-STaR](https://huggingface.co/datasets/orrzohar/Video-STaR), [Vript](https://huggingface.co/datasets/Mutonix/Vript), [VISTA-400K](https://huggingface.co/datasets/TIGER-Lab/VISTA-400K), [MovieChat](https://huggingface.co/datasets/Enxin/MovieChat-1K_train) and [ShareGPT4Video](https://huggingface.co/datasets/ShareGPT4Video/ShareGPT4Video).
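All of these sources are hosted on the Hugging Face Hub, so any of them can be previewed directly. The snippet below is a minimal sketch, not part of SmolVLM2's training code; the chosen dataset, split name, and streaming setup are assumptions that may need adjusting per dataset.

```python
# Minimal sketch: previewing one of the training-mixture sources with the
# Hugging Face `datasets` library. The dataset ID comes from the list above;
# the split name and streaming setup are assumptions, and some of these
# datasets may require accepting their terms on the Hub first.
from datasets import load_dataset

# Stream so we can peek at a sample without downloading the whole dataset.
finevideo = load_dataset("HuggingFaceFV/finevideo", split="train", streaming=True)

sample = next(iter(finevideo))
print(sample.keys())  # inspect which fields (video, captions, metadata, ...) a sample carries
```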
In the following plots, we give a general overview of the samples across modalities and the datasets they come from.