FP8 quantized version of AuraFlow v0.3
All linear weights of the flow transformer were simply cast to `torch.float8_e4m3fn`, except for `t_embedder`, `final_linear`, and `modF`.
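
For reference, a minimal sketch of how such a cast could be reproduced with `safetensors` and PyTorch. This is an assumption about the conversion, not the exact script used: the file names are placeholders, and matching the excluded modules by key-name substring assumes those strings appear in the checkpoint's parameter names.

```python
import torch
from safetensors.torch import load_file, save_file

# Module names excluded from the fp8 cast, per the note above.
EXCLUDED = ("t_embedder", "final_linear", "modF")

state_dict = load_file("aura_flow_0.3.safetensors")  # placeholder path

quantized = {}
for name, tensor in state_dict.items():
    # Cast only 2-D linear weight matrices; keep biases, norm parameters,
    # and the excluded modules in their original dtype.
    if (
        name.endswith(".weight")
        and tensor.ndim == 2
        and not any(part in name for part in EXCLUDED)
    ):
        quantized[name] = tensor.to(torch.float8_e4m3fn)
    else:
        quantized[name] = tensor

save_file(quantized, "aura_flow_0.3_fp8.safetensors")  # placeholder path
```

Since no per-tensor scales are stored, the fp8 dtype here acts purely as a storage format: runtimes will typically upcast these weights back to fp16/bf16 at load time unless they have native fp8 matmul support.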
Base model: fal/AuraFlow-v0.3