Quantization settings

  • vae (first_stage_model): kept in torch.float16, not quantized.
  • text_encoder, text_encoder_2 (conditioner.embedders):
    • NF4 with bitsandbytes
    • Target layers: ["self_attn", ".mlp."]
  • diffusion_model:
    • Int8 with bitsandbytes
    • Target layers: ["attn1", "attn2", ".ff."]
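The target-layer lists above read naturally as substring filters over dotted module paths: a linear layer is quantized only if its name contains one of the listed fragments. Below is a minimal, self-contained sketch of that selection logic; the helper function and the example module paths (modeled on typical SDXL naming) are illustrative assumptions, not taken from the actual quantization script.

```python
# Sketch: pick out which modules a target-substring list would quantize.
# In a real pipeline the selected nn.Linear layers would be swapped for
# bitsandbytes modules (e.g. bnb.nn.Linear4bit for NF4, bnb.nn.Linear8bitLt
# for Int8); here we only show the name-matching step.

def select_targets(module_names, targets):
    """Return the module paths that match any of the target substrings."""
    return [name for name in module_names if any(t in name for t in targets)]

# Hypothetical diffusion_model paths for illustration.
unet_modules = [
    "down_blocks.1.attentions.0.transformer_blocks.0.attn1.to_q",
    "down_blocks.1.attentions.0.transformer_blocks.0.attn2.to_k",
    "down_blocks.1.attentions.0.transformer_blocks.0.ff.net.0.proj",
    "down_blocks.1.resnets.0.conv1",  # no match: left in fp16
]

selected = select_targets(unet_modules, ["attn1", "attn2", ".ff."])
print(selected)  # the three attn1/attn2/ff projections, not the conv
```

Note that ".ff." is written with surrounding dots so it matches the feed-forward sub-module (`...transformer_blocks.0.ff.net...`) without accidentally catching unrelated names that merely contain the letters "ff"; ".mlp." in the text-encoder list serves the same purpose.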
Model tree for p1atdev/animagine-xl-4.0-bnb-nf4