---
base_model: black-forest-labs/FLUX.1-schnell
base_model_relation: adapter
language:
- en
library_name: diffusers
license: apache-2.0
pipeline_tag: image-to-image
tags:
- image-to-image
- SVDQuant
- FLUX.1-schnell
- Diffusion
- Quantization
- ICLR2025
- sketch
---
# Model Card for nunchaku-flux.1-schnell-pix2pix-turbo

This repository contains [img2img-turbo](https://github.com/GaParmar/img2img-turbo) LoRAs for both the original and the Nunchaku-quantized [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell) models, translating user sketches into images guided by text prompts.
## Model Details
### Model Description
- **Developed by:** Nunchaku Team, CMU Generative Intelligence Lab
- **Model type:** image-to-image
- **License:** apache-2.0
- **Quantized from model:** [FLUX.1-schnell](https://huggingface.co/black-forest-labs/FLUX.1-schnell)
### Model Files
- [`sketch.safetensors`](./sketch.safetensors): Img2img sketch-to-image LoRA for the original FLUX.1-schnell model.
- [`svdq-int4-sketch.safetensors`](./svdq-int4-sketch.safetensors): Img2img sketch-to-image LoRA for the SVDQuant INT4 FLUX.1-schnell model.
### Model Sources
- **Inference Engine:** [nunchaku](https://github.com/nunchaku-tech/nunchaku)
- **Training Repo:** [img2img-turbo](https://github.com/GaParmar/img2img-turbo)
- **Paper:** [SVDQuant](http://arxiv.org/abs/2411.05007) | [Img2img-Turbo](https://arxiv.org/abs/2403.12036)
- **Demo:** [svdquant.mit.edu](https://svdquant.mit.edu)
## Usage
See the sketch application in the nunchaku repository: https://github.com/nunchaku-tech/nunchaku/tree/main/app/flux.1/sketch.
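
A minimal loading sketch is shown below. It assumes the nunchaku LoRA API (`NunchakuFluxTransformer2dModel.from_pretrained`, `update_lora_params`, `set_lora_strength`) and that the SVDQuant INT4 FLUX.1-schnell weights are available as `mit-han-lab/svdq-int4-flux.1-schnell`; the sketch conditioning and prompt handling themselves are implemented in the linked app, not in this snippet.

```python
import torch
from diffusers import FluxPipeline
from nunchaku import NunchakuFluxTransformer2dModel

# Load the SVDQuant INT4 FLUX.1-schnell transformer (assumed repo id).
transformer = NunchakuFluxTransformer2dModel.from_pretrained(
    "mit-han-lab/svdq-int4-flux.1-schnell"
)

# Apply the sketch-to-image LoRA from this repository
# (path to the downloaded `svdq-int4-sketch.safetensors`).
transformer.update_lora_params("svdq-int4-sketch.safetensors")
transformer.set_lora_strength(1.0)

# Build the standard FLUX pipeline around the quantized transformer.
pipeline = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-schnell",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
).to("cuda")
```

This only shows how the quantized transformer and LoRA weights are loaded; feeding the user's sketch image into the pipeline follows the code in the app directory above. The non-quantized `sketch.safetensors` LoRA is used the same way with the original FLUX.1-schnell transformer.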
## Citation
```bibtex
@inproceedings{li2024svdquant,
  title={SVDQuant: Absorbing Outliers by Low-Rank Components for 4-Bit Diffusion Models},
  author={Li*, Muyang and Lin*, Yujun and Zhang*, Zhekai and Cai, Tianle and Li, Xiuyu and Guo, Junxian and Xie, Enze and Meng, Chenlin and Zhu, Jun-Yan and Han, Song},
  booktitle={The Thirteenth International Conference on Learning Representations},
  year={2025}
}

@article{parmar2024one,
  title={One-Step Image Translation with Text-to-Image Models},
  author={Parmar, Gaurav and Park, Taesung and Narasimhan, Srinivasa and Zhu, Jun-Yan},
  journal={arXiv preprint arXiv:2403.12036},
  year={2024}
}
```