Dataset Card for StyleCruxGen
A visual dataset exploring the interplay between style, object, and environment using diffusion-generated imagery.
Dataset Details
Dataset Description
StyleCruxGen is a synthetic multi-style image dataset generated with the Stable Diffusion XL (SDXL) model. It contains 4,245 high-resolution images (1024 px) covering 8 object–environment pairs rendered in a photorealistic style and in 10 artistic styles. Each image is guided by a structured prompt with style-placement variations (prepend, mid, append) to support prompt sensitivity studies. The dataset is accompanied by lower-resolution variants (512, 384, and 256 px) and a metadata CSV describing all generation parameters. The 10 artistic styles are: pixel art, watercolor, sketch, voxel art, charcoal drawing, embroidery, neon glow, mosaic art, graffiti, and glass painting. Two types of control images were also created:
- Style_only: these images have no object–environment pair; the placement_of_style column in the CSV holds the value style_only for them
- Content_only: these images have no style cue in their SDXL prompts; the placement_of_style column in the CSV holds the value content_only for them
- Curated by: Bodhisatta Maiti
- Funded by: None
- Shared by: Bodhisatta Maiti
- License: CC BY-NC-SA 4.0
Dataset Sources
- Repository:
- https://doi.org/10.5281/zenodo.15795551
- https://www.kaggle.com/datasets/bodhisattamaiti/stylecruxgen
- https://huggingface.co/datasets/bodhisattamaiti/StyleCruxGen
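If the Hub repository is set up for the datasets library, the images and their metadata can be pulled directly from it. Below is a minimal loading sketch, assuming a single train split and an image feature (both assumptions, not confirmed by the card):

```python
from datasets import load_dataset

# Repository ID taken from the card above; the "train" split name is an assumption.
ds = load_dataset("bodhisattamaiti/StyleCruxGen", split="train")

print(ds)             # features and number of rows
sample = ds[0]
print(sample.keys())  # inspect which metadata fields are exposed alongside the image
```

Alternatively, the Zenodo and Kaggle archives can be downloaded and read from disk as shown in the Dataset Structure section below.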
Uses
Direct Use
StyleCruxGen can be used for:
- Style classification and clustering (see the CLIP-based probe sketched after this list)
- Evaluating prompt faithfulness of diffusion models
- Analyzing style-conditioned image generation capabilities
- Prompt sensitivity analysis based on style cue placement
- Style disentanglement benchmarking
- Cross-style image retrieval
- Domain robustness testing and visual generalization
- Evaluation of style transfer models
- Vision-language model sensitivity to stylistic shifts
- Failure case analysis in diffusion-based generation
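As a concrete illustration of the style classification and cross-style retrieval uses above, the following sketch scores a StyleCruxGen image against the ten style names with an off-the-shelf CLIP model. It is only a sketch: the checkpoint choice, the prompt wording, and the image path are assumptions, not part of the dataset.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Off-the-shelf CLIP checkpoint; any comparable vision-language model would work.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

styles = ["pixel art", "watercolor", "sketch", "voxel art", "charcoal drawing",
          "embroidery", "neon glow", "mosaic art", "graffiti", "glass painting"]
texts = [f"an image in {s} style" for s in styles]  # prompt wording is illustrative

image = Image.open("example.png")  # hypothetical path to a StyleCruxGen image

inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds the similarity of the image to each style description.
probs = outputs.logits_per_image.softmax(dim=-1)
print(dict(zip(styles, probs[0].tolist())))
```

The same image and text embeddings can be reused for retrieval by ranking images against a fixed style query instead of ranking styles against a fixed image.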
Out-of-Scope Use
- Commercial use or redistribution of images without proper attribution
- Training downstream models for medical, biometric, or identity-sensitive tasks
- Any use violating the non-commercial terms of the license
Dataset Structure
Each image sample includes:
- filename (filename column in CSV): image file name
- object (object_name column in CSV): one of 8 base objects
- environment (env_name column in CSV): environment context in which the object is placed
- style (style column in CSV): one of photorealistic and the 10 artistic styles (e.g., watercolor, voxel art, charcoal drawing)
- prompt_structure (placement_of_style column in CSV): one of prepend, mid, append, content_only, or style_only
- guidance_scale (guidance_scale column in CSV): generation parameter
- style_description (prompt column in CSV): descriptive phrase used in the prompt
Images are stored in folders by resolution. All metadata is provided in a unified CSV file.
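To make the layout concrete, the snippet below reads the metadata CSV, tabulates images per style and placement, and resolves one row to a file in a resolution folder. The file name metadata.csv and the folder name 1024 are assumptions for illustration; use the names present in the downloaded archive.

```python
import os
import pandas as pd

# Paths are illustrative; adjust them to the downloaded archive's layout.
df = pd.read_csv("metadata.csv")

# Count images per style and per placement of the style cue.
print(df.groupby(["style", "placement_of_style"]).size().unstack(fill_value=0))

# Separate control images (style_only / content_only) from styled ones.
controls = df[df["placement_of_style"].isin(["style_only", "content_only"])]
styled = df.drop(controls.index)

# Resolve a metadata row to an image file in the 1024 px resolution folder.
path = os.path.join("1024", styled.iloc[0]["filename"])
print(path)
```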
Dataset Creation
Curation Rationale
The dataset was created to fill a gap in publicly available multi-style benchmarks that consider both content and prompt structure variation. It supports model evaluation, style fidelity analysis, and retrieval tasks under stylistic and contextual shifts.
Source Data
Data Collection and Processing
Images were generated using Stable Diffusion XL (base 1.0) via structured text prompts. Prompts vary the placement of style cues and maintain consistent object and environment pairings. Each prompt was used to generate 5 image variants. The code used to generate the dataset is open-source.
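The open-source generation code in the repository is authoritative; the sketch below only illustrates the recipe described here, SDXL base 1.0 driven by prompts that move the style cue. The prompt wording, guidance scale value, and output file names are assumptions.

```python
import torch
from diffusers import StableDiffusionXLPipeline

# SDXL base 1.0, the model named in the card.
pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")

# One object-environment pair with the style cue in three positions (wording is illustrative).
prompts = {
    "prepend": "watercolor painting, a cat sitting in a dense forest",
    "mid": "a cat, rendered as a watercolor painting, sitting in a dense forest",
    "append": "a cat sitting in a dense forest, watercolor painting",
}

for placement, prompt in prompts.items():
    # Five variants per prompt, mirroring the generation protocol described above.
    images = pipe(prompt, guidance_scale=7.5, num_images_per_prompt=5).images
    for i, img in enumerate(images):
        img.save(f"cat_forest_watercolor_{placement}_{i}.png")
```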
Who are the source data producers?
The content was synthesized using the SDXL model. Prompts and object–environment–style mappings were manually curated by the dataset author.
Annotations
Annotation process
No human annotations were added post-generation. Style labels and metadata are embedded at generation time.
Who are the annotators?
All annotations were programmatically generated by the dataset author.
Personal and Sensitive Information
No real-world identities, personal data, or biometric information are present. All content is fully synthetic.
Bias, Risks, and Limitations
- Some styles may fail to render clearly for certain objects.
- Style fidelity may vary across prompts and should not be assumed to be perfect.
- Diffusion models may occasionally produce artifacts such as signatures or unprompted background elements.
Recommendations
- Researchers using the dataset for style fidelity evaluation should apply additional style recognition filters.
- Caution is advised when attributing semantic meaning to stylistic differences without further validation.
Citation
Maiti, B. (2025). StyleCruxGen: A Visual Dataset Exploring Style, Object, and Environment Variation [Data set]. Zenodo. https://doi.org/10.5281/zenodo.15795551
Glossary
- Prompt placement: Location of the style cue in the sentence (prepend, mid, append)
- Style fidelity: The visual consistency between the intended style and the generated image
More Information
Please contact the dataset author for collaborations or dataset feedback.
Dataset Card Authors
Bodhisatta Maiti
Dataset Card Contact