Improve dataset card for MultiID-Bench: Add task categories, tags, intro, usage, and full license

#2
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +92 -58
README.md CHANGED
@@ -2,6 +2,15 @@
license: other
license_name: multiid-2m
license_link: LICENSE.md
dataset_info:
  features:
  - name: ID
@@ -33,7 +42,6 @@ configs:
      path: data/train-*
---

-
# MultiID-Bench in WithAnyone

[![arXiv](https://img.shields.io/badge/arXiv-2510.14975-b31b1b.svg)](https://arxiv.org/abs/2510.14975)
@@ -44,61 +52,43 @@ configs:
[![MultiID-2M](https://img.shields.io/badge/MultiID_2M-Dataset-Green.svg)](https://huggingface.co/datasets/WithAnyone/MultiID-2M)
[![Demo](https://img.shields.io/badge/HuggingFace-Demo-Yellow.svg)](https://huggingface.co/spaces/WithAnyone/WithAnyone_demo)

- **Please refer to [GitHub repo](https://github.com/Doby-Xu/WithAnyone) for the usage of this benchmark.**

- ## Download

- [HuggingFace Dataset](https://huggingface.co/datasets/WithAnyone/MultiID-Bench)

- ```
- huggingface-cli download WithAnyone/MultiID-Bench --repo-type dataset --local-dir <path to MultiID-Bench directory>
- ```

- ## Evaluation

- **Please refer to [GitHub repo](https://github.com/Doby-Xu/WithAnyone) for the usage of this benchmark.**

- ### Environment Setup
-
- Besides the `requirements.txt` in [GitHub repo](https://github.com/Doby-Xu/WithAnyone), you need to install the following packages:

```bash
- pip install aesthetic-predictor-v2-5
- pip install facexlib
- pip install colorama
- pip install pytorch_lightning
- git clone https://github.com/timesler/facenet-pytorch.git facenet_pytorch
-
- # in MultiID_Bench/
- mkdir pretrained
```

- You need the following models to run the evaluation:

- CLIP
- arcface
- aesthetic-v2.5
- adaface
- facenet
-
- For the first three models, they will be automatically downloaded when you run the evaluation script for the first time. Most of the models will be cached in the `HF_HOME` directory, which is usually `~/.cache/huggingface`. About 5GB of disk space is needed.
-
- For adaface, you need to download the model weights from [adaface_ir50_ms1mv2.ckpt](https://drive.google.com/file/d/1eUaSHG4pGlIZK7hBkqjyp2fc2epKoBvI/view?usp=sharing) (This is the original link provided by the authors of AdaFace) and put it in the `pretrained` directory.
-
- This repository includes code from [AdaFace](https://github.com/mk-minchul/AdaFace?tab=readme-ov-file). AdaFace is included in this codebase for merely easier import. You can also clone it separately from its original repository, and modify the import paths accordingly.
-
- ### Data to Evaluate
-
- By running:
```
- python hf2bench.py \
-     --dataset WithAnyone/MultiID-Bench \
-     --output <root directory to save the data> \
-     --from_hub
```
- you can arrange the generated images and the corresponding text prompts in the following structure:
```
root/
├── id1/
@@ -120,38 +110,42 @@ root/
│   └── meta.json
│
└── ...
- ```
-
- Or you can manually download the data by
- ```
- huggingface-cli download WithAnyone/MultiID-Bench --repo-type dataset --local-dir <root directory to save the data>
```
- and arrange the files:
- ```
- python hf2bench.py --dataset <root directory to save the data> --output <root directory to save the data>
- ```
-
- If you run the `infer_withanyone.py` script in this repository, the output directory will be in the correct format.
-
The `meta.json` file should contain the prompt used to generate the image, in the following format:
-
```json
{
    "prompt": "a photo of a person with blue hair and glasses"
}
```

### Run Evaluation

- You can run the evaluation script as follows:

```python
from eval import BenchEval_Geo

def run():
    evaler = BenchEval_Geo(
-       target_dir=<root directory mentioned above>,
-       output_dir=<output directory to save the evaluation results>,
        ori_file_name="ori.jpg", # the name of the ground truth image file
        output_file_name="out.jpg", # the name of the generated image file
        ref_1_file_name="ref_1.jpg", # the name of the first reference image file
@@ -167,3 +161,43 @@ if __name__ == "__main__":
    run()
```

license: other
license_name: multiid-2m
license_link: LICENSE.md
+ language:
+ - en
+ task_categories:
+ - text-to-image
+ tags:
+ - face-generation
+ - identity-consistent
+ - multi-person
+ - benchmark
dataset_info:
  features:
  - name: ID
      path: data/train-*
---

# MultiID-Bench in WithAnyone

[![arXiv](https://img.shields.io/badge/arXiv-2510.14975-b31b1b.svg)](https://arxiv.org/abs/2510.14975)
[![MultiID-2M](https://img.shields.io/badge/MultiID_2M-Dataset-Green.svg)](https://huggingface.co/datasets/WithAnyone/MultiID-2M)
[![Demo](https://img.shields.io/badge/HuggingFace-Demo-Yellow.svg)](https://huggingface.co/spaces/WithAnyone/WithAnyone_demo)

+ MultiID-Bench is a benchmark introduced in the paper [WithAnyone: Towards Controllable and ID Consistent Image Generation](https://huggingface.co/papers/2510.14975). It targets multi-person text-to-image scenarios and provides diverse reference images for each identity. The benchmark is designed to quantify "copy-paste" artifacts and to measure the trade-off between identity fidelity and variation, supporting models such as WithAnyone in achieving controllable, identity-consistent image generation.
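
For a quick look at the data, the benchmark can also be loaded with the `datasets` library. The snippet below is a minimal sketch: it relies only on the `train` split and the `ID` feature declared in the card metadata above; any other column names should be checked against the loaded dataset.

```python
from datasets import load_dataset

# Load the benchmark split declared in this card's config (data/train-*).
bench = load_dataset("WithAnyone/MultiID-Bench", split="train")

print(bench)            # column names and number of rows
print(bench[0]["ID"])   # "ID" is the feature declared in the metadata above
```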
 

+ ## Links

+ * **Paper:** [WithAnyone: Towards Controllable and ID Consistent Image Generation](https://huggingface.co/papers/2510.14975)
+ * **Project Page:** https://doby-xu.github.io/WithAnyone/
+ * **GitHub Repository:** https://github.com/Doby-Xu/WithAnyone
+ * **WithAnyone Model:** https://huggingface.co/WithAnyone/WithAnyone
+ * **WithAnyone Demo:** https://huggingface.co/spaces/WithAnyone/WithAnyone_demo

+ ## Sample Usage

+ This section provides instructions for downloading the MultiID-Bench dataset and preparing it for evaluation.

+ ### Download the Dataset

+ You can download the MultiID-Bench dataset using the Hugging Face CLI:

```bash
+ huggingface-cli download WithAnyone/MultiID-Bench --repo-type dataset --local-dir <path to MultiID-Bench directory>
```
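
Alternatively, the same download can be done from Python with `huggingface_hub` (a minimal sketch; the local directory is a placeholder of your choice):

```python
from huggingface_hub import snapshot_download

# Mirror of the CLI command above: fetch the whole dataset repository.
local_dir = snapshot_download(
    repo_id="WithAnyone/MultiID-Bench",
    repo_type="dataset",
    local_dir="MultiID-Bench",  # placeholder: choose any local path
)
print(f"Dataset downloaded to {local_dir}")
```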

+ ### Prepare Data for Evaluation

+ After downloading, if the dataset is stored as `parquet` files, you can convert it into a structured directory of images and JSON metadata using the `parquet2bench.py` script provided in the GitHub repository.

+ First, ensure you have cloned the GitHub repository:
+ ```bash
+ git clone https://github.com/Doby-Xu/WithAnyone
+ cd WithAnyone
```
+
+ Then, convert the downloaded parquet file:
+ ```bash
+ python MultiID_Bench/parquet2bench.py --parquet <path to downloaded parquet file> --output_dir <root directory to save the processed data>
```

+ The output directory will contain a structure like this, with subfolders for each ID and `meta.json` files containing prompts:
```
root/
├── id1/
│   └── meta.json
│
└── ...
```

The `meta.json` file should contain the prompt used to generate the image, in the following format:

```json
{
    "prompt": "a photo of a person with blue hair and glasses"
}
```
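
Before running the evaluation, it can be useful to sanity-check the converted directory. The sketch below is illustrative only: it assumes the layout shown above and the default file names (`ori.jpg`, `out.jpg`, `ref_1.jpg`, `meta.json`) used by the evaluation snippet further down; adjust the names if yours differ.

```python
import json
from pathlib import Path

def check_bench_dir(root: str) -> None:
    """Report ID folders that are missing the files the evaluator expects."""
    expected = ["ori.jpg", "out.jpg", "ref_1.jpg", "meta.json"]  # default names, adjust if needed
    for id_dir in sorted(p for p in Path(root).iterdir() if p.is_dir()):
        missing = [name for name in expected if not (id_dir / name).exists()]
        if missing:
            print(f"{id_dir.name}: missing {missing}")
            continue
        meta = json.loads((id_dir / "meta.json").read_text(encoding="utf-8"))
        if "prompt" not in meta:
            print(f"{id_dir.name}: meta.json has no 'prompt' field")

check_bench_dir("<root directory to save the processed data>")
```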

+ ### Environment Setup for Evaluation
+
+ To run the evaluation scripts, you need to install several packages. Besides the `requirements.txt` from the [GitHub repo](https://github.com/Doby-Xu/WithAnyone), install the following:
+
+ ```bash
+ pip install aesthetic-predictor-v2-5
+ pip install facexlib
+ pip install colorama
+ pip install pytorch_lightning
+ git clone https://github.com/timesler/facenet-pytorch.git facenet_pytorch
+
+ # in MultiID_Bench/
+ mkdir pretrained
+ ```
+
+ You will also need the following models to run the evaluation: CLIP, ArcFace, aesthetic-v2.5, AdaFace, and FaceNet. The first three are downloaded automatically the first time the evaluation script runs. For AdaFace, download `adaface_ir50_ms1mv2.ckpt` from [this link](https://drive.google.com/file/d/1eUaSHG4pGlIZK7hBkqjyp2fc2epKoBvI/view?usp=sharing) and place it in the `pretrained` directory.
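
As a quick sanity check of the setup (an illustrative sketch, not part of the original card), you can verify that the extra packages import and that the AdaFace checkpoint is where the evaluator expects it:

```python
import importlib.util
from pathlib import Path

# Packages installed in the setup step above (import names, not pip package names).
for module in ["facexlib", "colorama", "pytorch_lightning"]:
    found = importlib.util.find_spec(module) is not None
    print(f"{module}: {'ok' if found else 'MISSING'}")

# The AdaFace checkpoint must be downloaded manually into the `pretrained` directory
# created above (path relative to MultiID_Bench/).
ckpt = Path("pretrained/adaface_ir50_ms1mv2.ckpt")
print(f"AdaFace checkpoint present: {ckpt.exists()}")
```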

### Run Evaluation

+ You can run the evaluation script as follows, using the prepared data:

```python
from eval import BenchEval_Geo

def run():
    evaler = BenchEval_Geo(
+       target_dir="<root directory mentioned above>",
+       output_dir="<output directory to save the evaluation results>",
        ori_file_name="ori.jpg",  # the name of the ground truth image file
        output_file_name="out.jpg",  # the name of the generated image file
        ref_1_file_name="ref_1.jpg",  # the name of the first reference image file

    run()
```

+ ## License and Disclaimer
+
+ The **code** of WithAnyone is released under the [**Apache License 2.0**](https://www.apache.org/licenses/LICENSE-2.0), while the WithAnyone **model and associated datasets** are made available **solely for non-commercial academic research purposes**.
+
+ - **License Terms:**
+   The WithAnyone model is distributed under the [**FLUX.1 [dev] Non-Commercial License v1.1.1**](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md). All underlying base models remain governed by their respective original licenses and terms, which shall continue to apply in full. Users must comply with all such applicable licenses when using this project.
+
+ - **Permitted Use:**
+   This project may be used for lawful academic research, analysis, and non-commercial experimentation only. Any form of commercial use, redistribution for profit, or application that violates applicable laws, regulations, or ethical standards is strictly prohibited.
+
+ - **User Obligations:**
+   Users are solely responsible for ensuring that their use of the model and dataset complies with all relevant laws, regulations, institutional review policies, and third-party license terms.
+
+ - **Disclaimer of Liability:**
+   The authors, developers, and contributors make no warranties, express or implied, regarding the accuracy, reliability, or fitness of this project for any particular purpose. They shall not be held liable for any damages, losses, or legal claims arising from the use or misuse of this project, including but not limited to violations of law or ethical standards by end users.
+
+ - **Acceptance of Terms:**
+   By downloading, accessing, or using this project, you acknowledge and agree to be bound by the applicable license terms and legal requirements, and you assume full responsibility for all consequences resulting from your use.
+
+ ## Acknowledgement
+
+ We thank the following prior works for their excellent open-source contributions:
+ - [PuLID](https://github.com/ToTheBeginning/PuLID)
+ - [UNO](https://github.com/bytedance/UNO)
+ - [UniPortrait](https://github.com/junjiehe96/UniPortrait)
+ - [InfiniteYou](https://github.com/bytedance/InfiniteYou)
+ - [DreamO](https://github.com/bytedance/DreamO)
+ - [UMO](https://github.com/bytedance/UMO)
+
+ ## Citation
+
+ If you find this project useful in your research, please consider citing:
+
+ ```bibtex
+ @article{xu2025withanyone,
+   title={WithAnyone: Towards Controllable and ID-Consistent Image Generation},
+   author={Hengyuan Xu and Wei Cheng and Peng Xing and Yixiao Fang and Shuhan Wu and Rui Wang and Xianfang Zeng and Gang Yu and Xinjun Ma and Yu-Gang Jiang},
+   journal={arXiv preprint arXiv:2510.14975},
+   year={2025}
+ }
+ ```