Inquiry on Latest Update for flux1-dev-fp8 (4 replies) · #27 opened 15 days ago by torealise
fp8 inference (1 reply) · #26 opened 24 days ago by Melody32768
wrong model · #25 opened 27 days ago by sunhaha123
Update README.md · #24 opened 29 days ago by WBD8
"model weight dtype torch.float8_e4m3fn, manual cast: torch.bfloat16" with ROCM6.0 (2 replies) · #23 opened about 1 month ago by 12letter
quite slow to load the fp8 model (11 replies) · #21 opened about 2 months ago by gpt3eth
RuntimeError: "addmm_cuda" not implemented for 'Float8_e4m3fn' (1 reply) · #20 opened about 2 months ago by gradient-diffusion
How to load into VRAM? (2 replies) · #19 opened about 2 months ago by MicahV
What setting to use for flux1-dev-fp8 (2 replies) · #18 opened about 2 months ago by fullsoftwares
'float8_e4m3fn' attribute error (3 replies) · #17 opened about 2 months ago by Magenta6
Loading flux-fp8 with diffusers (1 reply) · #16 opened about 2 months ago by 8au
FP8 Checkpoint version size mismatch? (2 replies) · #15 opened about 2 months ago by Thireus
Can this model be used on Apple Silicon? (22 replies) · #14 opened about 2 months ago by jsmidt
How to use fp8 models + original flux repo? · #13 opened about 2 months ago by rolux
Quantization Method? (7 replies) · #7 opened about 2 months ago by vyralsurfer
ComfyUi Workflow (1 reply) · #6 opened about 2 months ago by Jebari
Can you make FP8 version of schnell as well please? (3 replies) · #5 opened about 2 months ago by MonsterMMORPG
Diffusers? (19 replies) · #4 opened about 2 months ago by tintwotin
Minimum vram requirements? (3 replies) · #3 opened about 2 months ago by joachimsallstrom
FP16 (1 reply) · #2 opened about 2 months ago by bsbsbsbs112321
Metadata lost from model (4 replies) · #1 opened about 2 months ago by mcmonkey