---
dataset_info:
features:
- name: image
dtype: image
- name: prompt
dtype: string
- name: id
dtype: string
- name: promptid
dtype: string
- name: width
dtype: uint16
- name: height
dtype: uint16
- name: seed
dtype: uint32
- name: grid
dtype: bool
- name: model
dtype: string
- name: nsfw
dtype: string
- name: subject
dtype: string
- name: modifier10
sequence: string
- name: modifier10_vector
sequence: float64
splits:
- name: train
num_bytes: 7270597392.368
num_examples: 49173
- name: test
num_bytes: 1765294302.142
num_examples: 12294
download_size: 5194348793
dataset_size: 9035891694.51
license: cc-by-4.0
task_categories:
- text-to-image
- image-to-text
language:
- en
tags:
- prompts
- engineering
- research paper
pretty_name: LexicaDataset
size_categories:
- 10K<n<100K
---
## Dataset Description
- **Repository:** [Github repository](https://github.com/verazuo/prompt-stealing-attack)
- **Distribution:** [LexicaDataset on HuggingFace](https://huggingface.co/datasets/vera365/lexica_dataset)
- **Paper:** [Prompt Stealing Attacks Against Text-to-Image Generation Models](https://arxiv.org/abs/2302.09923)
- **Point of Contact:** [Xinyue Shen]([email protected])
### LexicaDataset
LexicaDataset is a large-scale text-to-image prompt dataset shared in [[USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models](https://arxiv.org/abs/2302.09923).
It contains **61,467 prompt-image pairs** collected from [Lexica](https://lexica.art/).
All prompts are curated by real users, and all images are generated by Stable Diffusion.
Data collection details can be found in the paper.
### Data Splits
We randomly sample 80% of the dataset as the training set and the remaining 20% as the test set.
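The split sizes in the metadata above (49,173 train / 12,294 test) follow directly from an 80/20 cut of the 61,467 pairs. A minimal sketch of such a split over example IDs (illustrative only; the published split was fixed by the authors and should be loaded from the Hub rather than regenerated):

```python
import random

def split_80_20(ids, seed=0):
    """Shuffle example IDs and cut them 80/20 into train/test lists."""
    rng = random.Random(seed)
    ids = list(ids)
    rng.shuffle(ids)
    cut = int(len(ids) * 0.8)
    return ids[:cut], ids[cut:]

train_ids, test_ids = split_80_20(range(61467))
print(len(train_ids), len(test_ids))  # 49173 12294
```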
### Load LexicaDataset
You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from LexicaDataset.
```python
from datasets import load_dataset

# Download (or reuse the local cache of) each split from the Hub
trainset = load_dataset('vera365/lexica_dataset', split='train')
testset = load_dataset('vera365/lexica_dataset', split='test')
```
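Each split then behaves like a sequence of dicts keyed by the columns listed below, so standard filtering patterns apply. A sketch of one such pattern, run here on toy dicts that stand in for real rows (the `nsfw` string values are illustrative assumptions, not the dataset's exact label set):

```python
def filter_sfw_single_images(rows):
    """Keep rows that are neither grid images nor flagged NSFW."""
    return [r for r in rows if not r["grid"] and r["nsfw"] == "false"]

# Toy records standing in for real LexicaDataset rows
rows = [
    {"id": "a", "grid": False, "nsfw": "false"},
    {"id": "b", "grid": True,  "nsfw": "false"},
    {"id": "c", "grid": False, "nsfw": "true"},
]
print([r["id"] for r in filter_sfw_single_images(rows)])  # ['a']
```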
### Metadata Schema
`trainset` and `testset` share the same schema.
| Column | Type | Description |
| :------------------ | :--------- | :----------------------------------------------------------- |
| `image` | `image` | The generated image |
| `prompt` | `string` | The text prompt used to generate this image |
| `id` | `string` | Image UUID |
| `promptid` | `string` | Prompt UUID |
| `width` | `uint16` | Image width |
| `height` | `uint16` | Image height |
| `seed`              | `uint32`   | Random seed used to generate this image |
| `grid` | `bool` | Whether the image is composed of multiple smaller images arranged in a grid |
| `model` | `string` | Model used to generate the image |
| `nsfw` | `string` | Whether the image is NSFW |
| `subject`           | `string`   | The subject/object depicted in the image, extracted from the prompt |
| `modifier10`        | `sequence` | Modifiers in the prompt that appear more than 10 times in the whole dataset; we regard them as labels to train the modifier detector |
| `modifier10_vector` | `sequence` | One-hot vector of `modifier10` |
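The relationship between `modifier10` and `modifier10_vector` can be sketched as a binary indicator encoding over a fixed modifier vocabulary. The vocabulary below is a toy stand-in, not the dataset's actual modifier list:

```python
def encode_modifiers(modifiers, vocab):
    """Return a binary indicator vector over `vocab`: 1.0 where the
    prompt contains that vocabulary modifier, 0.0 otherwise."""
    present = set(modifiers)
    return [1.0 if m in present else 0.0 for m in vocab]

# Toy vocabulary of frequent modifiers (illustrative only)
vocab = ["highly detailed", "trending on artstation", "8k", "octane render"]
vec = encode_modifiers(["8k", "highly detailed"], vocab)
print(vec)  # [1.0, 0.0, 1.0, 0.0]
```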
## Ethics & Disclosure
According to the [terms and conditions of Lexica](https://lexica.art/terms), images on the website are available under the Creative Commons Noncommercial 4.0 Attribution International License. We strictly followed Lexica’s Terms and Conditions, utilized only the official Lexica API for data retrieval, and disclosed our research to Lexica. We also responsibly disclosed our findings to related prompt marketplaces.
## License
The LexicaDataset dataset is available under the [CC-BY 4.0 License](https://creativecommons.org/licenses/by/4.0/).
## Citation
If you find this dataset useful in your research, please consider citing:
```bibtex
@inproceedings{SQBZ24,
author = {Xinyue Shen and Yiting Qu and Michael Backes and Yang Zhang},
title = {{Prompt Stealing Attacks Against Text-to-Image Generation Models}},
booktitle = {{USENIX Security Symposium (USENIX Security)}},
publisher = {USENIX},
year = {2024}
}
```