Improve dataset card: Add paper and code links, sample usage, and refine tags
#3
by nielsr (HF Staff) - opened

README.md CHANGED

@@ -33,31 +33,57 @@ dataset_info:
 dataset_size: 2289772991
 tags:
 - visual
+- multimodal
+- vision-language-model
+- retrieval
 ---
 
 ## ABC Pretraining Data
 
-This the the pretraining data for ABC. This dataset is derived from Google's [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/) dataset.
-The each item in the dataset contain a URL where the corresponding image can be downloaded and mined negatives for each item. Full dataaset is ~300 GB of images. For a detailed description of how we mined the negatives please check out our ppaer ;).
-**Update** I have added the images to this repository, for an example of how to use and download this dataset see our [repository](https://github.com/TIGER-AI-Lab/ABC).
+This dataset contains the pretraining data for ABC, an open-source multimodal embedding model that uses a vision-language model backbone to deeply integrate image features with natural language instructions, advancing the state of visual embeddings with natural language control.
+
+This dataset is derived from Google's [Conceptual Captions](https://ai.google.com/research/ConceptualCaptions/) dataset.
+Each item in the dataset contains a URL where the corresponding image can be downloaded, along with mined negatives for that item. The full dataset is ~300 GB of images. For a detailed description of how we mined the negatives, please check out our paper.
+**Update**: The images have been added to this repository. For an example of how to use and download this dataset, see our [repository](https://github.com/TIGER-AI-Lab/ABC).
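+
+For example, a minimal sketch of loading the metadata and inspecting one item; the Hub repo id and the `url` field name below are assumptions for illustration, so check this card's actual id and features:
+```python
+from io import BytesIO
+
+import requests
+from PIL import Image
+from datasets import load_dataset
+
+# Assumed repo id for illustration; substitute this card's actual id.
+ds = load_dataset("TIGER-Lab/ABC-Pretraining-Data", split="train")
+
+item = ds[0]
+print(item.keys())  # inspect the available fields (caption, image URL, mined negatives, ...)
+
+# Fetch the image behind the item's URL ("url" is an assumed field name).
+img = Image.open(BytesIO(requests.get(item["url"], timeout=30).content))
+print(img.size)
+```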
+
+## Paper, Project Page, and Code
+
+- Paper: [ABC: Achieving Better Control of Multimodal Embeddings using VLMs](https://huggingface.co/papers/2503.00329)
+- Project Page: [https://tiger-ai-lab.github.io/ABC/](https://tiger-ai-lab.github.io/ABC/)
+- Code: [https://github.com/TIGER-AI-Lab/ABC](https://github.com/TIGER-AI-Lab/ABC)
+
+## Sample Usage
+
+### Quick Start
+
+First, install the necessary dependencies by cloning the repository and installing its requirements:
+
+```bash
+git clone https://github.com/TIGER-AI-Lab/ABC
+cd ABC
+pip install -r requirements.txt
+```
+
+Then you can start making multimodal embeddings by running the quick-start script:
+
+```bash
+python -i ./quick_start.py
+```
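+
+If you prefer to call the model from your own script, the flow looks roughly like the sketch below. The helper names are hypothetical stand-ins, not the repository's actual API; see `quick_start.py` for the real entry points:
+```python
+# Hypothetical helper names for illustration only; consult quick_start.py
+# in the ABC repository for the actual interface.
+from quick_start import load_model, embed  # assumed helpers
+
+model = load_model()
+
+# ABC combines image features with a natural-language instruction.
+image_vec = embed(model, image="photo.jpg", instruction="focus on the largest object")
+text_vec = embed(model, text="a red bicycle leaning against a fence")
+print(image_vec.shape, text_vec.shape)
+```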
+
+### Fetching Datasets from 🤗 Hub
+
+Our datasets are hosted on the Hugging Face Hub. The text data and dataset metadata can be fetched using HF's `load_dataset` utility.
+To fetch the images from our datasets, we provide scripts in the `fetch_datasets` directory.
+These scripts pull the pretraining/finetuning image data off the Hub and unpack it in your Hugging Face datasets cache, under a directory called `tigerlab` (see the sketch below for locating it).
+Run `python ./fetch_datasets/pretrain.py` to get the pretraining dataset and `python ./fetch_datasets/instruct.py` to get the finetuning dataset.
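+
+As a minimal sketch, you can check where the fetch scripts unpacked the images; the `tigerlab` directory name comes from the description above, while the exact layout beneath it is an assumption:
+```python
+import os
+
+from datasets import config
+
+# The fetch scripts unpack images under the Hugging Face datasets cache.
+cache_dir = os.path.join(config.HF_DATASETS_CACHE, "tigerlab")
+if os.path.isdir(cache_dir):
+    print(cache_dir, "->", os.listdir(cache_dir))
+else:
+    print("Images not fetched yet; run the fetch scripts first.")
+```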
+
+## Citation
+
+If you find any of our work helpful, please consider citing:
+
-```
+```bibtex
 @misc{schneider2025abcachievingbettercontrol,
-title={ABC: Achieving Better Control of Multimodal Embeddings using VLMs},
+      title={ABC: Achieving Better Control of Multimodal Embeddings using VLMs},
       author={Benjamin Schneider and Florian Kerschbaum and Wenhu Chen},
       year={2025},
       eprint={2503.00329},
       archivePrefix={arXiv},
       primaryClass={cs.CV},
-url={https://arxiv.org/abs/2503.00329},
+      url={https://arxiv.org/abs/2503.00329},
 }
 ```