DavidAU committed (verified)
Commit 854317b · Parent: 529ba5e

Update README.md

Files changed (1): README.md (+105 -109)
README.md CHANGED
@@ -49,8 +49,6 @@ tags:
 license: apache-2.0
 ---
 
- (uploading...)
-
 <h2>Mistral-2x24B-MOE-Power-CODER-Magistral-Devstral-Reasoning-Ultimate-NEO-MAX-44B-gguf</h2>
 
 <img src="mags-devs1.jpg" style="float:right; width:300px; height:300px; padding:10px;">
@@ -67,7 +65,111 @@ Full reasoning/thinking which can be turned on or off.
 
 GGUFs enhanced using NEO Imatrix dataset, and further enhanced with output tensor at bf16 (16 bit full precision).
 
- Info on each model below AND info on this MOE model / settings etc below this, including system prompt to turn reasoning on.
 
 ---
 
@@ -176,109 +278,3 @@ For additional settings, usage information, benchmarks etc also see:
 https://huggingface.co/mistralai/Magistral-Small-2506
 
 
- ---
-
- <h2>Mistral-2x24B-MOE-Power-Devstral-Magistral-Reasoning-Ultimate-44B</h2>
-
- SETTINGS
-
- ---
-
- Max context is 128k/131072 ; for reasoning strongly suggest min 8k context window, if reasoning is on.
-
- REASONING SYSTEM PROMPT (optional):
-
- ```
- A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown and Latex to format your response. Write both your thoughts and summary in the same language as the task posed by the user.
-
- Your thinking process must follow the template below:
- <think>
- Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
- </think>
- ```
-
- <B>GENERAL:</B>
-
- All versions have default of 2 experts activated.
-
- Number of active experts can be adjusted in Lmstudio and other AI Apps.
-
- Suggest 2-4 generations, especially if using 1 expert (all models).
-
- Models will accept "simple prompt" as well as very detailed instructions ; however for larger projects
- I suggest using Q6/Q8 quants / optimized quants.
-
- <B>Suggested Settings : </B>
- - Temp .5 to .7 (or lower)
- - topk: 20, topp: .8, minp: .05
- - rep pen: 1.1 (can be lower)
- - Jinja Template (embedded) or CHATML template.
- - A System Prompt is not required. (ran tests with blank system prompt)
-
- For additional settings, usage information, benchmarks etc also see:
-
- https://huggingface.co/mistralai/Devstral-Small-2505
-
- and/or
-
- https://huggingface.co/mistralai/Magistral-Small-2506
-
- ---
-
- For more information / other Qwen/Mistral Coders / additional settings see:
-
- ---
-
- [ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]
-
- ---
-
- <H2>Help, Adjustments, Samplers, Parameters and More</H2>
-
- ---
-
- <B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
-
- See this document:
-
- https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
-
- <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
-
- In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern" ;
-
- Set the "Smoothing_factor" to 1.5
-
- : in KoboldCpp -> Settings->Samplers->Advanced-> "Smooth_F"
-
- : in text-generation-webui -> parameters -> lower right.
-
- : In Silly Tavern this is called: "Smoothing"
-
-
- NOTE: For "text-generation-webui"
-
- -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)
-
- Source versions (and config files) of my models are here:
-
- https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
-
- OTHER OPTIONS:
-
- - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")
-
- - If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing") just make the adjustment as noted.
-
- <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
-
- This a "Class 1" model:
-
- For all settings used for this model (including specifics for its "class"), including example generation(s) and for advanced settings guide (which many times addresses any model issue(s)), including methods to improve model performance for all use case(s) as well as chat, roleplay and other use case(s) please see:
-
- [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
-
- You can see all parameters used for generation, in addition to advanced parameters and samplers to get the most out of this model here:
-
- [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
-
 
@@ -67,7 +65,111 @@ Full reasoning/thinking which can be turned on or off.
 
 GGUFs enhanced using NEO Imatrix dataset, and further enhanced with output tensor at bf16 (16 bit full precision).
 
+ Info on each component model appears AFTER the info/settings for this MOE model, including the system prompt to turn reasoning on.
+
+ ---
+
+ <h2>Mistral-2x24B-MOE-Power-Devstral-Magistral-Reasoning-Ultimate-44B</h2>
+
+ SETTINGS
+
+ ---
+
+ Max context is 128k (131,072 tokens); if reasoning is on, a context window of at least 8k is strongly suggested.
+
+ REASONING SYSTEM PROMPT (optional):
+
+ ```
+ A user will ask you to solve a task. You should first draft your thinking process (inner monologue) until you have derived the final answer. Afterwards, write a self-contained summary of your thoughts (i.e. your summary should be succinct but contain all the critical steps you needed to reach the conclusion). You should use Markdown and Latex to format your response. Write both your thoughts and summary in the same language as the task posed by the user.
+
+ Your thinking process must follow the template below:
+ <think>
+ Your thoughts or/and draft, like working through an exercise on scratch paper. Be as casual and as long as you want until you are confident to generate a correct answer.
+ </think>
+ ```
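A minimal sketch of wiring this up, assuming an OpenAI-compatible local server (LM Studio's default http://localhost:1234 is used here; the model id and user prompt are placeholders):

```
import requests

# Save the reasoning system prompt quoted above to reasoning_prompt.txt first.
REASONING_PROMPT = open("reasoning_prompt.txt", encoding="utf-8").read()

resp = requests.post(
    "http://localhost:1234/v1/chat/completions",  # assumed local endpoint
    json={
        "model": "local-model",  # placeholder; many local servers ignore this field
        "messages": [
            {"role": "system", "content": REASONING_PROMPT},  # turns reasoning ON
            {"role": "user", "content": "Merge two sorted lists in Python."},
        ],
        "temperature": 0.6,
        "max_tokens": 2048,
    },
    timeout=600,
)
print(resp.json()["choices"][0]["message"]["content"])
```

Leaving the system message out (or sending a blank one) should leave reasoning off, per the notes above.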
+
+ <B>GENERAL:</B>
+
+ All versions have a default of 2 experts activated.
+
+ The number of active experts can be adjusted in LM Studio and other AI apps.
+
+ 2-4 generations are suggested, especially if using 1 expert (all models).
+
+ Models will accept a "simple prompt" as well as very detailed instructions; however, for larger projects
+ I suggest using Q6/Q8 quants / optimized quants.
+
+ <B>Suggested Settings (a sampler sketch follows the list):</B>
+ - Temp .5 to .7 (or lower)
+ - topk: 20, topp: .8, minp: .05
+ - rep pen: 1.1 (can be lower)
+ - Jinja template (embedded) or ChatML template.
+ - A system prompt is not required (tests were run with a blank system prompt).
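As a sketch of those samplers in llama-cpp-python (the library choice and GGUF filename are assumptions for illustration, not named by the card):

```
from llama_cpp import Llama

llm = Llama(
    model_path="models/Power-CODER-44B-Q6_K.gguf",  # placeholder filename
    n_ctx=8192,        # at least 8k suggested when reasoning is on
    n_gpu_layers=-1,   # offload all layers if VRAM allows
    verbose=False,
)

prompt = "Write a Python function that parses an INI file into a dict."

# 2-4 generations are suggested; sampling randomness varies each run.
for i in range(3):
    out = llm(
        prompt,
        max_tokens=1024,
        temperature=0.6,     # Temp .5 to .7
        top_k=20,
        top_p=0.8,
        min_p=0.05,
        repeat_penalty=1.1,  # rep pen 1.1
    )
    print(f"--- generation {i + 1} ---")
    print(out["choices"][0]["text"])
```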
+
+ For additional settings, usage information, benchmarks, etc., also see:
+
+ https://huggingface.co/mistralai/Devstral-Small-2505
+
+ and/or
+
+ https://huggingface.co/mistralai/Magistral-Small-2506
+
+ ---
+
+ For more information / other Qwen/Mistral coders / additional settings see:
+
+ [ https://huggingface.co/DavidAU/Qwen2.5-MOE-2x-4x-6x-8x__7B__Power-CODER__19B-30B-42B-53B-gguf ]
+
+ ---
+
+ <H2>Help, Adjustments, Samplers, Parameters and More</H2>
+
+ ---
+
+ <B>CHANGE THE NUMBER OF ACTIVE EXPERTS:</B>
+
+ See this document (a load-time sketch follows below):
+
+ https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
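One load-time approach, sketched with llama-cpp-python (an assumption here; the document above covers app-specific methods), is overriding the expert count in the GGUF metadata. The exact key depends on the architecture prefix, so verify it against your file's metadata:

```
from llama_cpp import Llama

llm = Llama(
    model_path="models/Power-CODER-44B-Q6_K.gguf",  # placeholder filename
    n_ctx=8192,
    # Assumed metadata key (Mixtral-style "llama" arch prefix); this model's
    # default is 2 active experts -- raise or lower the value to taste.
    kv_overrides={"llama.expert_used_count": 2},
)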
+
+ <B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>
+
+ In "KoboldCpp", "oobabooga/text-generation-webui", or "Silly Tavern":
+
+ Set the "Smoothing_factor" to 1.5 (an API sketch follows the list below).
+
+ : in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"
+
+ : in text-generation-webui -> parameters -> lower right.
+
+ : in Silly Tavern this is called: "Smoothing"
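For KoboldCpp specifically, the same setting can be passed through its local generate API; a sketch (port 5001 is KoboldCpp's default, and the smoothing_factor field is assumed from current builds -- if rejected, set Smooth_F in the UI instead):

```
import requests

payload = {
    "prompt": "Write a short story about a lighthouse keeper.",
    "max_length": 300,
    "smoothing_factor": 1.5,   # the value suggested above
    "rep_pen": 1.05,           # can stay low when smoothing is used
    "temperature": 0.7,
}
r = requests.post("http://localhost:5001/api/v1/generate", json=payload, timeout=600)
print(r.json()["results"][0]["text"])
```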
+
+ NOTE: For "text-generation-webui":
+
+ -> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model; see the download sketch below)
+
+ Source versions (and config files) of my models are here:
+
+ https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be
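A sketch of pulling those config files with huggingface_hub (the repo id and file list are placeholders; take the real ones from the source repo in the collection linked above):

```
from huggingface_hub import hf_hub_download

repo_id = "DavidAU/<source-repo-for-this-model>"  # placeholder; see the collection above
for fname in ["config.json", "generation_config.json", "tokenizer_config.json"]:
    path = hf_hub_download(repo_id=repo_id, filename=fname, local_dir="models/source-config")
    print("saved", path)
```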
+
+ OTHER OPTIONS:
+
+ - Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor").
+
+ - If the interface/program you are using to run AI models supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted.
+
+ <B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>
+
+ This is a "Class 1" model:
+
+ For all settings used for this model (including the specifics for its "class"), example generations, and an advanced settings guide (which often addresses model issues and covers methods to improve performance for all use cases, including chat and roleplay), please see:
+
+ [ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]
+
+ The same page also documents all parameters used for generation, plus advanced parameters and samplers to get the most out of this model.
 
 ---
 