juliendenize and teven committed on
Commit 129d7a0 · verified · 0 Parent(s):

Super-squash branch 'main' using huggingface_hub

Co-authored-by: teven <[email protected]>

.gitattributes ADDED
@@ -0,0 +1,40 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Magistral-Small-2507-Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Magistral-Small-2507_BF16.gguf filter=lfs diff=lfs merge=lfs -text
+ Magistral-Small-2507-BF16.gguf filter=lfs diff=lfs merge=lfs -text
+ Magistral-Small-2507-Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+ Magistral-Small-2507-Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
Magistral-Small-2507-BF16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:df62af95072e1db16a9aed4d125df543a72676b23fed8acbe1e701985980ff5f
+ size 47154041792
Magistral-Small-2507-Q4_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:23a7a20e664389931dbff1c9d3b34ec59e6a38255cc2c91135a67a18454c8e48
+ size 14334432192
Magistral-Small-2507-Q5_K_M.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:87a2a57af0cd985f1e92699d1de1bfa26d7f91ac60a62928aea2479becd064e9
+ size 16764507072
Magistral-Small-2507-Q8_0.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:78bfc64c72de6540cbcf7597dc2dac5d37e79764a24aa1172157e7d76b008da3
+ size 25055302592
README.md ADDED
@@ -0,0 +1,193 @@
+ ---
+ language:
+ - en
+ - fr
+ - de
+ - es
+ - pt
+ - it
+ - ja
+ - ko
+ - ru
+ - zh
+ - ar
+ - fa
+ - id
+ - ms
+ - ne
+ - pl
+ - ro
+ - sr
+ - sv
+ - tr
+ - uk
+ - vi
+ - hi
+ - bn
+ license: apache-2.0
+ library_name: llama.cpp
+ inference: false
+ base_model:
+ - mistralai/Magistral-Small-2507
+ extra_gated_description: >-
+   If you want to learn more about how we process your personal data, please read
+   our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
+ pipeline_tag: text-generation
+ ---
+
38
+ > [!Note]
39
+ > At Mistral, we don't yet have too much experience with providing GGUF-quantized checkpoints
40
+ > to the community, but want to help improving the ecosystem going forward.
41
+ > If you encounter any problems with the provided checkpoints here, please open a discussion or pull request
42
+
43
+
44
+ # Magistral Small 1.1 (GGUF)
45
+
46
+ Building upon Mistral Small 3.1 (2503), **with added reasoning capabilities**, undergoing SFT from Magistral Medium traces and RL on top, it's a small, efficient reasoning model with 24B parameters.
47
+
48
+ Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
49
+
50
+ This is the GGUF version of the [Magistral-Small-2507](https://huggingface.co/mistralai/Magistral-Small-2507) model. We released the BF16 weights as well as the following quantized format:
51
+ - Q8_0
52
+ - Q5_K_M
53
+ - Q4_K_M
54
+ Our format **does not have a chat template** and instead we recommend to use [`mistral-common`](#usage).
55
+
56
+ ## Updates compared with [Magistral Small 1.0](https://huggingface.co/mistralai/Magistral-Small-2506)
57
+
58
+ Magistral Small 1.1 should give you about the same performance as Magistral Small 1.0 as seen in the [benchmark results](#benchmark-results).
59
+
60
+ The update involves the following features:
61
+ - Better tone and model behaviour. You should experiment better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
62
+ - The model is less likely to enter infinite generation loops.
63
+ - `[THINK]` and `[/THINK]` special tokens encapsulate the reasoning content in a thinking chunk. This makes it easier to parse the reasoning trace and prevents confusion when the '[THINK]' token is given as a string in the prompt.
64
+ - The reasoning prompt is now given in the system prompt.
65
+
66
+ ## Key Features
67
+ - **Reasoning:** Capable of long chains of reasoning traces before providing an answer.
68
+ - **Multilingual:** Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
69
+ - **Apache 2.0 License:** Open license allowing usage and modification for both commercial and non-commercial purposes.
70
+ - **Context Window:** A 128k context window, **but** performance might degrade past **40k**. Hence we recommend setting the maximum model length to 40k.
71
+
72
+ ## Usage
73
+
74
+ We recommend to use Magistral with [llama.cpp](https://github.com/ggml-org/llama.cpp/tree/master) along with [mistral-common >= 1.8.3](https://mistralai.github.io/mistral-common/) server. See [here](https://mistralai.github.io/mistral-common/usage/experimental/) for the documentation of `mistral-common` server.
75
+
76
+ ### Install
77
+
78
+ 1. Install `llama.cpp` following their [guidelines](https://github.com/ggml-org/llama.cpp/blob/master/README.md#quick-start).
79
+
80
+ 2. Install `mistral-common` with its dependencies.
81
+
82
+ ```sh
83
+ pip install mistral-common[server]
84
+ ```
85
+
86
+ 3. Download the weights from huggingface.
87
+
88
+ ```sh
89
+ pip install -U "huggingface_hub[cli]"
90
+
91
+ huggingface-cli download \
92
+ "mistralai/Magistral-Small-2507-GGUF" \
93
+ --include "Magistral-Small-2507-Q4_K_M.gguf" \
94
+ --local-dir "mistralai/Magistral-Small-2507-GGUF/"
95
+ ```
96
+
97
+ ### Launch the servers
98
+
99
+ 1. Launch the `llama.cpp` server
100
+
101
+ ```sh
102
+ llama-server -m mistralai/Magistral-Small-2507-GGUF/Magistral-Small-2507-Q4_K_M.gguf -c 0
103
+ ```
104
+
105
+ 2. Launch the `mistral-common` server and pass the url of the `llama.cpp` server.
106
+
107
+ This is the server that will handle tokenization and detokenization and call the `llama.cpp` server for generations.
108
+
109
+ ```sh
110
+ mistral_common serve mistralai/Magistral-Small-2507 \
111
+ --host localhost --port 6000 \
112
+ --engine-url http://localhost:8080 --engine-backend llama_cpp \
113
+ --timeout 300
114
+ ```
115
+
116
+ ### Use the model
117
+
118
+ 1. let's define the function to call the servers:
119
+
120
+ **generate**: call `mistral-common` that will tokenizer, call the `llama.cpp` server to generate new tokens and detokenize the output to an [`AssistantMessage`](https://mistralai.github.io/mistral-common/code_reference/mistral_common/protocol/instruct/messages/#mistral_common.protocol.instruct.messages.AssistantMessage) with think chunk and tool calls parsed.
121
+
122
+ ```python
123
+ from mistral_common.protocol.instruct.messages import AssistantMessage
124
+ from mistral_common.protocol.instruct.request import ChatCompletionRequest
125
+ from mistral_common.experimental.app.models import OpenAIChatCompletionRequest
126
+ from fastapi.encoders import jsonable_encoder
127
+ import requests
128
+
129
+ mistral_common_url = "http://127.0.0.1:6000"
130
+
131
+ def generate(
132
+ request: dict | ChatCompletionRequest | OpenAIChatCompletionRequest, url: str
133
+ ) -> AssistantMessage:
134
+ response = requests.post(
135
+ f"{url}/v1/chat/completions", json=jsonable_encoder(request)
136
+ )
137
+ if response.status_code != 200:
138
+ raise ValueError(f"Error: {response.status_code} - {response.text}")
139
+ return AssistantMessage(**response.json())
140
+ ```
141
+
142
+ 2. Tokenize the input, call the model and detokenize
143
+
144
+ ```python
145
+ from typing import Any
146
+ from huggingface_hub import hf_hub_download
147
+
148
+
149
+ TEMP = 0.7
150
+ TOP_P = 0.95
151
+ MAX_TOK = 40_960
152
+
153
+ def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
154
+ file_path = hf_hub_download(repo_id=repo_id, filename=filename)
155
+ with open(file_path, "r") as file:
156
+ system_prompt = file.read()
157
+
158
+ index_begin_think = system_prompt.find("[THINK]")
159
+ index_end_think = system_prompt.find("[/THINK]")
160
+
161
+ return {
162
+ "role": "system",
163
+ "content": [
164
+ {"type": "text", "text": system_prompt[:index_begin_think]},
165
+ {
166
+ "type": "thinking",
167
+ "thinking": system_prompt[
168
+ index_begin_think + len("[THINK]") : index_end_think
169
+ ],
170
+ "closed": True,
171
+ },
172
+ {
173
+ "type": "text",
174
+ "text": system_prompt[index_end_think + len("[/THINK]") :],
175
+ },
176
+ ],
177
+ }
178
+
179
+ SYSTEM_PROMPT = load_system_prompt("mistralai/Magistral-Small-2507", "SYSTEM_PROMPT.txt")
180
+
181
+ query = "Write 4 sentences, each with at least 8 words. Now make absolutely sure that every sentence has exactly one word less than the previous sentence."
182
+ # or try out other queries
183
+ # query = "Exactly how many days ago did the French Revolution start? Today is June 4th, 2025."
184
+ # query = "Think about 5 random numbers. Verify if you can combine them with addition, multiplication, subtraction or division to 133"
185
+ # query = "If it takes 30 minutes to dry 12 T-shirts in the sun, how long does it take to dry 33 T-shirts?"
186
+ messages = [SYSTEM_PROMPT, {"role": "user", "content": [{"type": "text", "text": query}]}]
187
+
188
+ request = {"messages": messages, "temperature": TEMP, "top_p": TOP_P, "max_tokens": MAX_TOK}
189
+
190
+ generated_message = generate(request, mistral_common_url)
191
+ print(generated_message)
192
+ ```
193
+