Update README.md
README.md CHANGED
@@ -23,7 +23,7 @@ tags:
  # Codestral-22B-v0.1-hf-IMat-GGUF
  _Llama.cpp imatrix quantization of bullerwins/Codestral-22B-v0.1-hf_

- Original model: [mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)
+ Original model: [mistralai/Codestral-22B-v0.1](https://huggingface.co/mistralai/Codestral-22B-v0.1)
  Quantized HF Model: [bullerwins/Codestral-22B-v0.1-hf](https://huggingface.co/bullerwins/Codestral-22B-v0.1-hf)
  Original dtype: `BF16` (`bfloat16`)
  Quantized by: llama.cpp [b3037](https://github.com/ggerganov/llama.cpp/releases/tag/b3037)
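For reference, a GGUF file produced by this kind of llama.cpp imatrix quantization can be loaded with `llama-cpp-python`. The sketch below is a minimal, hypothetical example: the repository namespace and quantized filename are placeholders and are not taken from this diff.

```python
# Minimal sketch: downloading and running one of the IMat GGUF quants.
# NOTE: repo_id and filename are hypothetical placeholders; check the model
# repository's file list for the actual quant you want (e.g. Q4_K_M).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

gguf_path = hf_hub_download(
    repo_id="your-namespace/Codestral-22B-v0.1-hf-IMat-GGUF",  # placeholder namespace
    filename="Codestral-22B-v0.1-hf.Q4_K_M.gguf",              # placeholder quant file
)

# Load the quantized model and run a short completion.
llm = Llama(model_path=gguf_path, n_ctx=4096)
out = llm("Write a Python function that reverses a string.", max_tokens=128)
print(out["choices"][0]["text"])
```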