FLUX.1 [dev] FP8 Scaled SOTA model by ComfyUI – compatible with low-VRAM GPUs (~15 GB)
Note: this scaled FP8 model produces better results than the regular (non-scaled) FP8 model.
– Hashnimo