Update README.md
README.md CHANGED
@@ -9,8 +9,19 @@ tags:
 - imatrix
 ---
 
+# Support
+
+- ComfyUI-GGUF: TBC
+- Forge: TBC
+- stable-diffusion.cpp: [llama.cpp Feature-matrix](https://github.com/ggerganov/llama.cpp/wiki/Feature-matrix)
+
+# Alpha
+simple imatrix: 512x512, single image, 8/20 steps, q3_K_S, euler; data: `load_imatrix: loaded 314 importance matrix entries from imatrix.dat computed on 7 chunks`
+
 ## Experimental from q8
 
 | Filename | Quant type | File Size | Description |
 | -------- | ---------- | --------- | ----------- |
-| [flux1-dev-IQ1_S.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ1_S.gguf) | IQ1_S | 2.45GB | TBC |
+| [flux1-dev-IQ1_S.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ1_S.gguf) | IQ1_S | 2.45GB | TBC |
+| [flux1-dev-IQ2_XXS.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_XXS.gguf) | IQ2_XXS | 2.96GB | TBC |
+
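For readers who want to see what the "Quant type" column in the table above corresponds to tensor by tensor, here is a minimal sketch using the `gguf` Python package from the llama.cpp project (an assumption; this commit does not reference it), with the IQ1_S file from the table assumed to be downloaded locally:

```python
# Minimal sketch, assuming `pip install gguf` and a locally downloaded copy of the
# IQ1_S file from the table above (the path below is a local-path assumption).
from collections import Counter

from gguf import GGUFReader

reader = GGUFReader("flux1-dev-IQ1_S.gguf")

# Count how many tensors ended up in each quantization type; with an imatrix-guided
# quant, sensitive tensors are often kept at higher precision than the headline type.
counts = Counter(t.tensor_type.name for t in reader.tensors)
for qtype, n in counts.most_common():
    print(f"{qtype:8s} {n:5d} tensors")
```

The per-type breakdown is one quick way to compare the experimental files against each other, since the headline type (IQ1_S, IQ2_XXS) only describes the bulk of the weights, not every tensor in the file.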