Update README.md
README.md CHANGED
@@ -39,7 +39,7 @@ Using [llama.cpp quantize cae9fb4](https://github.com/ggerganov/llama.cpp/commit
 | - | IQ1_M | - | broken | - |
 | [flux1-dev-TQ1_0.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-TQ1_0.gguf) | TQ1_0 | 2.63GB | TBC | - |
 | [flux1-dev-TQ2_0.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-TQ2_0.gguf) | TQ2_0 | 3.19GB | TBC | - |
-| [flux1-dev-IQ2_XXS.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_XXS.gguf) | IQ2_XXS | 3.19GB | TBC | [Example](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/images/
+| [flux1-dev-IQ2_XXS.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_XXS.gguf) | IQ2_XXS | 3.19GB | TBC | [Example](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/images/output_test_IQ2_XXS_512_25_woman.png) |
 | [flux1-dev-IQ2_XS.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_XS.gguf) | IQ2_XS | 3.56GB | TBC | - |
 | [flux1-dev-IQ2_S.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_S.gguf) | IQ2_S | 3.56GB | TBC | - |
 | [flux1-dev-IQ2_M.gguf](https://huggingface.co/Eviation/flux-imatrix/blob/main/experimental-from-q8/flux1-dev-IQ2_M.gguf) | IQ2_M | 3.93GB | TBC | - |
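For reference, a downloaded quant from this table can be inspected with the `gguf` Python package that ships with llama.cpp. The snippet below is a minimal sketch, not part of this repo: the local filename is an assumption, and it simply prints how many tensors use each quantization type so you can confirm which variant you actually loaded.

```python
# Minimal sketch: inspect the quantization types inside a downloaded GGUF file.
# Assumes the IQ2_XXS file from the table above was saved locally under this name.
from collections import Counter

from gguf import GGUFReader  # pip install gguf

reader = GGUFReader("flux1-dev-IQ2_XXS.gguf")  # hypothetical local path

# Count tensors per quantization type; quantized checkpoints typically mix types
# (some tensors are kept at higher precision than the headline IQ2_XXS).
counts = Counter(tensor.tensor_type.name for tensor in reader.tensors)
for quant_type, count in counts.most_common():
    print(f"{quant_type}: {count} tensors")
```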