Tags: Text-to-Image · Diffusers · TensorBoard · Safetensors · StableDiffusionPipeline · dreambooth · diffusers-training · stable-diffusion · stable-diffusion-diffusers
How to use choicow/sample with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# Switch "cuda" to "mps" for Apple-silicon devices
pipe = DiffusionPipeline.from_pretrained("choicow/sample", torch_dtype=torch.bfloat16)
pipe.to("cuda")

prompt = "Generate an image of sks person matching this pose: There is sks person in the image who is performing conditioning exercise, resistance training."
image = pipe(prompt).images[0]
```
DreamBooth - choicow/sample
This is a DreamBooth model fine-tuned from stable-diffusion-v1-5/stable-diffusion-v1-5. The weights were trained with DreamBooth on the instance prompt "Generate an image of sks person matching this pose: There is sks person in the image who is performing conditioning exercise, resistance training." Example images are shown below.

DreamBooth training for the text encoder was not enabled.
Intended uses & limitations
How to use
# TODO: add an example code snippet for running this diffusion pipeline
Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
Training details
[TODO: describe the data used to train the model]
Model tree for choicow/sample

- Base model: stable-diffusion-v1-5/stable-diffusion-v1-5