At Mistral, we don't yet have much experience providing GGUF-quantized checkpoints to the community, but we want to help improve the ecosystem going forward. If you encounter any problems with the checkpoints provided here, please open a discussion or pull request.
Magistral Small 1.1 (GGUF)
Building upon Mistral Small 3.1 (2503) with added reasoning capabilities, obtained through SFT on Magistral Medium traces followed by RL on top, Magistral Small is a small, efficient reasoning model with 24B parameters.
Magistral Small can be deployed locally, fitting within a single RTX 4090 or a 32GB RAM MacBook once quantized.
This is the GGUF version of the Magistral-Small-2507 model. We released the BF16 weights as well as the following quantized formats:
- Q8_0
- Q5_K_M
- Q4_K_M
Our format does not have a chat template; instead, we recommend using mistral-common.
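For example, here is a minimal sketch of tokenizing a conversation with mistral-common directly (the from_hf_hub loader is an assumption about your installed version; the server-based workflow described below is the recommended path):

```python
# Minimal sketch: build the prompt with mistral-common instead of a chat template.
# The from_hf_hub loader is an assumption about your installed mistral-common version.
from mistral_common.protocol.instruct.messages import UserMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.tokens.tokenizers.mistral import MistralTokenizer

tokenizer = MistralTokenizer.from_hf_hub("mistralai/Magistral-Small-2507")
tokenized = tokenizer.encode_chat_completion(
    ChatCompletionRequest(messages=[UserMessage(content="Hello, who are you?")])
)
print(len(tokenized.tokens))  # token ids ready to be passed to your inference engine
```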
Updates compared with Magistral Small 1.0
Magistral Small 1.1 should give you about the same performance as Magistral Small 1.0 as seen in the benchmark results.
The update involves the following features:
- Better tone and model behaviour. You should experience better LaTeX and Markdown formatting, and shorter answers on easy general prompts.
- The model is less likely to enter infinite generation loops.
- [THINK] and [/THINK] special tokens encapsulate the reasoning content in a thinking chunk. This makes it easier to parse the reasoning trace, as illustrated below, and prevents confusion when the '[THINK]' token is given as a string in the prompt.
- The reasoning prompt is now given in the system prompt.
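As an illustration (with a made-up generation string), the special tokens make the reasoning trace trivial to extract; in practice, mistral-common performs this parsing for you:

```python
# Illustration only: the [THINK]...[/THINK] special tokens delimit the reasoning
# trace, so a raw generation can be split with plain string operations.
raw = "[THINK]The user asks for X, so I should...[/THINK]Here is the final answer."
reasoning = raw.split("[THINK]", 1)[1].split("[/THINK]", 1)[0]
answer = raw.split("[/THINK]", 1)[1]
print(reasoning)
print(answer)
```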
Key Features
- Reasoning: Capable of long chains of reasoning traces before providing an answer.
- Multilingual: Supports dozens of languages, including English, French, German, Greek, Hindi, Indonesian, Italian, Japanese, Korean, Malay, Nepali, Polish, Portuguese, Romanian, Russian, Serbian, Spanish, Turkish, Ukrainian, Vietnamese, Arabic, Bengali, Chinese, and Farsi.
- Apache 2.0 License: Open license allowing usage and modification for both commercial and non-commercial purposes.
- Context Window: A 128k context window, but performance might degrade past 40k. Hence we recommend setting the maximum model length to 40k.
Usage
We recommend using Magistral with the llama.cpp server along with mistral-common >= 1.8.3. See here for the documentation of the mistral-common server.
Install
- Install llama.cpp following their guidelines.
- Install mistral-common with its dependencies.
pip install "mistral-common[server]"
- Download the weights from Hugging Face:
pip install -U "huggingface_hub[cli]"
huggingface-cli download \
"mistralai/Magistral-Small-2507-GGUF" \
--include "Magistral-Small-2507-Q4_K_M.gguf" \
--local-dir "mistralai/Magistral-Small-2507-GGUF/"
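Alternatively, the same file can be fetched from Python with huggingface_hub (also used later in this card); this is a sketch mirroring the CLI command above:

```python
# Python alternative to the CLI download above.
from huggingface_hub import hf_hub_download

gguf_path = hf_hub_download(
    repo_id="mistralai/Magistral-Small-2507-GGUF",
    filename="Magistral-Small-2507-Q4_K_M.gguf",
    local_dir="mistralai/Magistral-Small-2507-GGUF/",
)
print(gguf_path)
```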
Launch the servers
- Launch the llama.cpp server:
llama-server -m mistralai/Magistral-Small-2507-GGUF/Magistral-Small-2507-Q4_K_M.gguf -c 0
- Launch the mistral-common server and pass it the URL of the llama.cpp server. This is the server that will handle tokenization and detokenization and call the llama.cpp server for generations.
mistral_common serve mistralai/Magistral-Small-2507 \
--host localhost --port 6000 \
--engine-url http://localhost:8080 --engine-backend llama_cpp \
--timeout 300
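Optionally, before sending requests you can verify that the llama.cpp server has finished loading the model. The sketch below polls llama.cpp's /health endpoint, using the ports from the commands above:

```python
# Optional sanity check: poll llama.cpp's /health endpoint until the model is loaded.
# Ports match the launch commands above.
import time
import requests

llama_cpp_url = "http://localhost:8080"
for _ in range(60):
    try:
        if requests.get(f"{llama_cpp_url}/health", timeout=5).status_code == 200:
            print("llama.cpp server is ready")
            break
    except requests.ConnectionError:
        pass
    time.sleep(2)
```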
Use the model
- Let's define the function to call the servers: generate calls the mistral-common server, which tokenizes the request, calls the llama.cpp server to generate new tokens, and detokenizes the output into an AssistantMessage with the think chunk and tool calls parsed.
from mistral_common.protocol.instruct.messages import AssistantMessage
from mistral_common.protocol.instruct.request import ChatCompletionRequest
from mistral_common.experimental.app.models import OpenAIChatCompletionRequest
from fastapi.encoders import jsonable_encoder
import requests

# URL of the mistral-common server launched above.
mistral_common_url = "http://127.0.0.1:6000"


def generate(
    request: dict | ChatCompletionRequest | OpenAIChatCompletionRequest, url: str
) -> AssistantMessage:
    # The mistral-common server tokenizes the request, calls the llama.cpp
    # server for generation, and detokenizes the result into an AssistantMessage.
    response = requests.post(
        f"{url}/v1/chat/completions", json=jsonable_encoder(request)
    )
    if response.status_code != 200:
        raise ValueError(f"Error: {response.status_code} - {response.text}")
    return AssistantMessage(**response.json())
- Tokenize the input, call the model and detokenize
from typing import Any

from huggingface_hub import hf_hub_download

# Sampling parameters; the token limit stays within the 40k recommendation above.
TEMP = 0.7
TOP_P = 0.95
MAX_TOK = 40_960


def load_system_prompt(repo_id: str, filename: str) -> dict[str, Any]:
    # Download the system prompt shipped with the model and wrap the
    # [THINK]...[/THINK] section into a thinking chunk.
    file_path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(file_path, "r") as file:
        system_prompt = file.read()

    index_begin_think = system_prompt.find("[THINK]")
    index_end_think = system_prompt.find("[/THINK]")

    return {
        "role": "system",
        "content": [
            {"type": "text", "text": system_prompt[:index_begin_think]},
            {
                "type": "thinking",
                "thinking": system_prompt[
                    index_begin_think + len("[THINK]") : index_end_think
                ],
                "closed": True,
            },
            {
                "type": "text",
                "text": system_prompt[index_end_think + len("[/THINK]") :],
            },
        ],
    }


SYSTEM_PROMPT = load_system_prompt("mistralai/Magistral-Small-2507", "SYSTEM_PROMPT.txt")

query = "Write 4 sentences, each with at least 8 words. Now make absolutely sure that every sentence has exactly one word less than the previous sentence."
# or try out other queries
# query = "Exactly how many days ago did the French Revolution start? Today is June 4th, 2025."
# query = "Think about 5 random numbers. Verify if you can combine them with addition, multiplication, subtraction or division to 133"
# query = "If it takes 30 minutes to dry 12 T-shirts in the sun, how long does it take to dry 33 T-shirts?"

messages = [SYSTEM_PROMPT, {"role": "user", "content": [{"type": "text", "text": query}]}]
request = {"messages": messages, "temperature": TEMP, "top_p": TOP_P, "max_tokens": MAX_TOK}

generated_message = generate(request, mistral_common_url)
print(generated_message)
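To consume the result, you can separate the reasoning trace from the final answer. The sketch below assumes generated_message.content is a list of chunks exposing thinking and text fields, mirroring the chunk structure used to build SYSTEM_PROMPT above; treat the exact field names as an assumption for your mistral-common version.

```python
# Sketch: split the parsed AssistantMessage into reasoning and answer.
# Field names ("thinking", "text") mirror the chunk structure used for
# SYSTEM_PROMPT above and are an assumption about your mistral-common version.
reasoning_parts, answer_parts = [], []
content = generated_message.content
if isinstance(content, str):
    answer_parts.append(content)
else:
    for chunk in content:
        if getattr(chunk, "thinking", None) is not None:
            reasoning_parts.append(str(chunk.thinking))
        elif getattr(chunk, "text", None) is not None:
            answer_parts.append(chunk.text)
print("Reasoning:\n", "".join(reasoning_parts))
print("Answer:\n", "".join(answer_parts))
```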
Model tree for mistralai/Magistral-Small-2507-GGUF
Base model: mistralai/Mistral-Small-3.1-24B-Base-2503