This model was converted to GGUF format from [`unsloth/Mistral-Small-3.2-24B-Instruct-2506`](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.

Refer to the [original model card](https://huggingface.co/unsloth/Mistral-Small-3.2-24B-Instruct-2506) for more details on the model.

---

Mistral-Small-3.2-24B-Instruct-2506 is a minor update of Mistral-Small-3.1-24B-Instruct-2503.

Small-3.2 improves in the following categories:

- Instruction following: Small-3.2 is better at following precise instructions
- Repetition errors: Small-3.2 produces fewer infinite generations and repetitive answers
- Function calling: Small-3.2's function-calling template is more robust (see the original model card for details and examples)

In all other categories, Small-3.2 should match or slightly improve on Mistral-Small-3.1-24B-Instruct-2503.

---

## Use with llama.cpp

Install llama.cpp through brew (works on Mac and Linux):
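Below is a minimal sketch of the install and a sample run, assuming Homebrew is available; `<hf-repo>` and `<gguf-file>` are placeholders for this repository's ID and the quantized GGUF filename you want to use.

```bash
# Install llama.cpp via Homebrew (macOS and Linux)
brew install llama.cpp

# CLI: run a one-off prompt, pulling the GGUF directly from the Hugging Face Hub.
# Replace <hf-repo> and <gguf-file> with this repo's ID and the quant file you want.
llama-cli --hf-repo <hf-repo> --hf-file <gguf-file> -p "Write a haiku about llamas."

# Server: expose an OpenAI-compatible HTTP endpoint (default port 8080)
llama-server --hf-repo <hf-repo> --hf-file <gguf-file> -c 2048
```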