vera365 committed
Commit b8e6299
Parent: f7fa027

Update README.md

- **Paper:** [Prompt Stealing Attacks Against Text-to-Image Generation Models](https://arxiv.org/abs/2302.09923)
- **Point of Contact:** [Xinyue Shen]([email protected])

### LexicaDataset

LexicaDataset is a large-scale text-to-image prompt dataset shared in [[USENIX'24] Prompt Stealing Attacks Against Text-to-Image Generation Models](https://arxiv.org/abs/2302.09923).
It contains **61,467 prompt-image pairs** collected from [Lexica](https://lexica.art/).
Data collection details can be found in the paper.

We randomly sample 80% of the dataset as the training set and use the remaining 20% as the test set.

### Load LexicaDataset

You can use the Hugging Face [`Datasets`](https://huggingface.co/docs/datasets/quickstart) library to easily load prompts and images from LexicaDataset.
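
A minimal loading sketch: the `testset` line appears verbatim in the README's original snippet; the matching `train` call is an assumption based on the 80/20 split described above.

```python
from datasets import load_dataset

# 80% training split and 20% test split, as described above
trainset = load_dataset('vera365/lexica_dataset', split='train')
testset = load_dataset('vera365/lexica_dataset', split='test')
```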

| Field | Type | Description |
|:--|:--|:--|
| `modifier10` | `sequence` | Modifiers in the prompt that appear more than 10 times in the whole dataset. We regard them as labels to train the modifier detector |
| `modifier10_vector` | `sequence` | One-hot vector of `modifier10` |
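
For illustration, these fields can be read like any other `datasets` columns. A short sketch; the field semantics are taken from the table above:

```python
from datasets import load_dataset

# Load the test split and inspect one example's modifier labels
testset = load_dataset('vera365/lexica_dataset', split='test')
example = testset[0]
print(example['modifier10'])              # modifiers in this prompt that occur >10 times in the dataset
print(len(example['modifier10_vector']))  # length of the one-hot vector encoding `modifier10`
```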

## Ethics & Disclosure

According to the [terms and conditions of Lexica](https://lexica.art/terms), images on the website are available under the Creative Commons Noncommercial 4.0 Attribution International License. We strictly followed Lexica’s Terms and Conditions, utilized only the official Lexica API for data retrieval, and disclosed our research to Lexica. We also responsibly disclosed our findings to related prompt marketplaces.

## License

LexicaDataset is available under the [CC-BY 4.0 License](https://creativecommons.org/licenses/by/4.0/).

## Citation

If you find this useful in your research, please consider citing:

```bibtex
@inproceedings{SQBZ24,
  author = {Xinyue Shen and Yiting Qu and Michael Backes and Yang Zhang},