DavidAU committed on
Commit 1b48c40 · verified · 1 Parent(s): ca5ff2e

Update README.md

Files changed (1):
  1. README.md +11 -3
README.md CHANGED
@@ -70,13 +70,21 @@ of Mistral's new reasoning model "Magistral-Small-2506":
 
 https://huggingface.co/mistralai/Magistral-Small-2506/
 
-These GGUFS are:
+About these GGUFs:
 - Quanted using the NEO Imatrix dataset
 - The output tensor is set at BF16 / 16-bit full precision.
 - Correct Jinja template, which includes a "System Prompt" embedded for reasoning.
-- 32K / 32,768 context max
+- 32K / 32,768 context max (the default, as set at the org repo)
+- Suggested minimum context of 4K-8K for reasoning/output.
 
-An additional repo of GGUFs set at 128k / 131,072 context will follow.
+An additional repo of GGUFs set at 128K / 131,072 context will follow, as per Mistral's notes that the model
+was trained at 128K max context.
+
+Please see the notes at:
+
+https://huggingface.co/mistralai/Magistral-Small-2506/
+
+for temp, top-k, top-p, and other suggested parameter settings.
 
 Special thanks to "MLX-Community" for the correct config/tokenizer files.
 
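The context and sampler settings the diff points to could be wired up as a llama.cpp invocation like the sketch below. This is an illustration only, not part of the commit: the quant filename and the temp/top-k/top-p values are assumptions, and the actual recommended values should be taken from Mistral's Magistral-Small-2506 model card.

```shell
# Hypothetical llama-cli run against one of these GGUFs.
# Filename and sampler values below are placeholders, not repo-confirmed;
# check the Magistral-Small-2506 model card for the recommended settings.
llama-cli \
  -m Magistral-Small-2506-Q4_K_M.gguf \
  -c 32768 \
  --temp 0.7 \
  --top-k 40 \
  --top-p 0.95 \
  -p "Your prompt here"
```

Here `-c 32768` matches the 32K context these GGUFs are set to; per the diff's suggestion, anything from 4096 up will work if memory is tight, at the cost of room for the reasoning trace.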