---
license: apache-2.0
language:
- en
- fr
- zh
- de
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prose
- vivid writing
- moe
- mixture of experts
- 64 experts
- 8 active experts
- fiction
- roleplaying
- bfloat16
- rp
- qwen3
- horror
- finetune
- thinking
- reasoning
- qwen3_moe
base_model:
- kalomaze/Qwen3-16B-A3B
- huihui-ai/Moonlight-16B-A3B-Instruct-abliterated
pipeline_tag: text-generation
---

(quants uploading, examples to be added...)

<h2>Qwen3-18B-A3B-Stranger-Thoughts-Abliterated-Uncensored-GGUF</h2>

<img src="qwen-18b.jpg" style="float:right; width:300px; height:300px; padding:10px;">

A stranger, yet radically different version of Kalomaze's "Qwen/Qwen3-30B-A3B" (abliterated by "huihui-ai"), with the experts pruned to 64 (from 128) and then 4 layers added, expanding the model to 18B total parameters.

The goal: slightly alter the model to address some odd creative thinking and output choices AND de-censor (abliterate) the model.

Please note that the modifications affect the entire model's operation; roughly speaking, I adjusted the model to think a little "deeper" and "ponder" a bit - but this is a very rough description.

I also ran reasoning tests (non-creative) to ensure the model was not damaged and roughly matched the original model's performance.

That being said, reasoning and output generation will be altered regardless of your use case(s).

FOUR example generations appear below, with example 4 showing a complex prompt and impressive prose that showcases some of the changes in the model.

This is a MOE (Mixture of Experts) model with 8 of 64 experts activated by default, which works out to about 3B active parameters.

This allows use of this model (with 8 experts) on both CPU (20-35 T/S) and GPU (90+ T/S) at very good to extremely fast speeds.

Changing the number of experts used (see below for how) will affect both generation speed and generation reasoning/output quality.

You can use this model with as few as four experts activated.

Even the lowest quant - Q2_K - will operate very strongly too.

Model is set at:
- 8 Active experts (the default for the org model)
- 40k context (the default for the org model)
- CHATML or Jinja template (embedded OR see Jinja notes below)

QUANTS:

There are two sets of quants, regular and "MAX" (in the filename), with the output tensor set at float16 (16 bit - full precision) to enhance performance, including reasoning.

SYSTEM PROMPT:

You may or may not need to set this:

```
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```
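
If you run the GGUF through llama-cpp-python, a minimal sketch of wiring this system prompt into a chat call looks like the following. This sketch is mine, not part of the original setup: the quant filename is a placeholder, and the sampler values echo the suggested settings further below.

```
# Minimal sketch (not official): pass the "deep thinking" system prompt via
# llama-cpp-python's chat API. The model filename is a placeholder - point it
# at whichever quant you downloaded.
from llama_cpp import Llama

SYSTEM_PROMPT = (
    "You are a deep thinking AI, you may use extremely long chains of thought "
    "to deeply consider the problem and deliberate with yourself via systematic "
    "reasoning processes to help come to a correct solution prior to answering. "
    "You should enclose your thoughts and internal monologue inside <think> "
    "</think> tags, and then provide your solution or response to the problem."
)

llm = Llama(
    model_path="Qwen3-18B-A3B-Stranger-Thoughts-Q4_K_S.gguf",  # placeholder filename
    n_ctx=8192,        # 8k minimum context, per the suggested settings below
    n_gpu_layers=-1,   # offload all layers to GPU if available
)

result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write the opening scene of a storm at sea."},
    ],
    temperature=0.8,
    max_tokens=2048,
)
print(result["choices"][0]["message"]["content"])
```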

CHANGE THE NUMBER OF ACTIVE EXPERTS:

See this document (a code sketch also follows below):

https://huggingface.co/DavidAU/How-To-Set-and-Manage-MOE-Mix-of-Experts-Model-Activation-of-Experts
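
As a hedged illustration for llama.cpp-based runtimes: llama-cpp-python can override GGUF metadata at load time via kv_overrides, which includes the active-expert count. The metadata key below is my assumption for the Qwen3-MoE architecture - inspect your GGUF's metadata dump to confirm the exact key before relying on it.

```
# Sketch only: "qwen3moe.expert_used_count" is an assumed metadata key for
# Qwen3-MoE GGUFs; verify it against your file's metadata if the override
# appears to have no effect.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-18B-A3B-Stranger-Thoughts-Q4_K_S.gguf",  # placeholder filename
    n_ctx=8192,
    kv_overrides={"qwen3moe.expert_used_count": 6},  # try 4-12; 8 is this model's default
)
```

Fewer experts generally run faster; more can improve reasoning/output quality up to a point (see the special notes below).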

SUGGESTED SETTINGS (creative):

- temp .1 to 1.2, or as high as 2; over 2 you may need to "prompt" the model to output after "thinking"
- rep pen 1 to 1.1 (1.02 / 1.05 was tested; see notes below)
- topk 100, topp .95, min p .05
- rep pen range 64 (default in LMStudio)
- context of 8k minimum suggested

SPECIAL NOTES - Experts / REP PEN / TEMP:
- QWEN3s: rep pen drastically affects both performance and stability; LOWER is better.
- rep pen at 1.02 or 1.01 may be better suited for your use case(s).
- change rep pen slowly (ie 1.01, 1.02, 1.03), and regen a few times.
- Experts activated range from 4 to 12, but with too many experts activated, performance may suffer. This is unique to this pruned model.
- Temp range of .2 to 1.8 works very well; beyond this, manual activation of "output" after thinking may be required and/or the model may output the thinking "block" in regular text.

BEST SETTING:

I found the best results during testing (stable thinking, generation, minimal issues) were as follows (this applies to the org model 16B-A3B too; a runnable sketch follows the list):
- Jinja Template (corrected one below)
- the system prompt (above)
- rep pen 1.02
- topk 100, topp .95, min p .05
- rep pen range 64 (the model may perform better with this at a higher value, or off)
- temp range .6 to 1.2, with an outside max of 1.8
- context of 8k min
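
Mapping those values onto llama-cpp-python (a sketch under my own naming assumptions; "rep pen range" is an LMStudio/KoboldCpp-style setting with no direct equivalent in this API, so it is omitted):

```
# Sketch: the "BEST SETTING" samplers expressed as llama-cpp-python parameters.
# Reuses the `llm` and SYSTEM_PROMPT objects from the earlier sketch.
result = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Continue the scene: the lights go out."},
    ],
    temperature=0.8,       # best range .6 to 1.2, outside max 1.8
    repeat_penalty=1.02,   # QWEN3s: lower rep pen is better
    top_k=100,
    top_p=0.95,
    min_p=0.05,
    max_tokens=4096,
)
```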

You may find that Qwen's default parameters/samplers also work better for your use case(s).

Please refer to the original model repo for all default settings, templates, benchmarks, etc.:

https://huggingface.co/Qwen/Qwen3-30B-A3B

For more information on quants, using this model on CPU / GPU, and system prompts, please see this model card:

https://huggingface.co/DavidAU/Qwen3-128k-30B-A3B-NEO-MAX-Imatrix-gguf

SPECIAL NOTE:

As this model is pruned from 128 experts to 64, it may not have the "expert(s)" you need for your use case(s), or it may not perform as well as the full "128" expert version.

The full 128 expert version (33B) of "Stranger-Thoughts" is located here:

https://huggingface.co/DavidAU/Qwen3-33B-A3B-Stranger-Thoughts-GGUF

<B>NOTE - Jinja Template / Template to Use with this Model:</B>

If you are having issues with the Jinja "auto template", use the CHATML template.

OR (LMStudio users / option)

Update the Jinja Template: go to this site, then template -> copy the "Jinja template" -> paste.

[ https://lmstudio.ai/neil/qwen3-thinking ]

OR

copy the JINJA source from here:

```
{%- if tools %}
{{- '<|im_start|>system\n' }}
{%- if messages[0].role == 'system' %}
{{- messages[0].content + '\n\n' }}
{%- endif %}
{{- "# Tools\n\nYou may call one or more functions to assist with the user query.\n\nYou are provided with function signatures within <tools></tools> XML tags:\n<tools>" }}
{%- for tool in tools %}
{{- "\n" }}
{{- tool | tojson }}
{%- endfor %}
{{- "\n</tools>\n\nFor each function call, return a json object with function name and arguments within <tool_call></tool_call> XML tags:\n<tool_call>\n{\"name\": <function-name>, \"arguments\": <args-json-object>}\n</tool_call><|im_end|>\n" }}
{%- else %}
{%- if messages[0].role == 'system' %}
{{- '<|im_start|>system\n' + messages[0].content + '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- set ns = namespace(multi_step_tool=true, last_query_index=messages|length - 1) %}
{%- for message in messages[::-1] %}
{%- set index = (messages|length - 1) - loop.index0 %}
{%- set tool_start = "<tool_response>" %}
{%- set tool_start_length = tool_start|length %}
{%- set start_of_message = message.content[:tool_start_length] %}
{%- set tool_end = "</tool_response>" %}
{%- set tool_end_length = tool_end|length %}
{%- set start_pos = (message.content|length) - tool_end_length %}
{%- if start_pos < 0 %}
{%- set start_pos = 0 %}
{%- endif %}
{%- set end_of_message = message.content[start_pos:] %}
{%- if ns.multi_step_tool and message.role == "user" and not(start_of_message == tool_start and end_of_message == tool_end) %}
{%- set ns.multi_step_tool = false %}
{%- set ns.last_query_index = index %}
{%- endif %}
{%- endfor %}
{%- for message in messages %}
{%- if (message.role == "user") or (message.role == "system" and not loop.first) %}
{{- '<|im_start|>' + message.role + '\n' + message.content + '<|im_end|>' + '\n' }}
{%- elif message.role == "assistant" %}
{%- set content = message.content %}
{%- set reasoning_content = '' %}
{%- if message.reasoning_content is defined and message.reasoning_content is not none %}
{%- set reasoning_content = message.reasoning_content %}
{%- else %}
{%- if '</think>' in message.content %}
{%- set content = (message.content.split('</think>')|last).lstrip('\n') %}
{%- set reasoning_content = (message.content.split('</think>')|first).rstrip('\n') %}
{%- set reasoning_content = (reasoning_content.split('<think>')|last).lstrip('\n') %}
{%- endif %}
{%- endif %}
{%- if loop.index0 > ns.last_query_index %}
{%- if loop.last or (not loop.last and reasoning_content) %}
{{- '<|im_start|>' + message.role + '\n<think>\n' + reasoning_content.strip('\n') + '\n</think>\n\n' + content.lstrip('\n') }}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- else %}
{{- '<|im_start|>' + message.role + '\n' + content }}
{%- endif %}
{%- if message.tool_calls %}
{%- for tool_call in message.tool_calls %}
{%- if (loop.first and content) or (not loop.first) %}
{{- '\n' }}
{%- endif %}
{%- if tool_call.function %}
{%- set tool_call = tool_call.function %}
{%- endif %}
{{- '<tool_call>\n{"name": "' }}
{{- tool_call.name }}
{{- '", "arguments": ' }}
{%- if tool_call.arguments is string %}
{{- tool_call.arguments }}
{%- else %}
{{- tool_call.arguments | tojson }}
{%- endif %}
{{- '}\n</tool_call>' }}
{%- endfor %}
{%- endif %}
{{- '<|im_end|>\n' }}
{%- elif message.role == "tool" %}
{%- if loop.first or (messages[loop.index0 - 1].role != "tool") %}
{{- '<|im_start|>user' }}
{%- endif %}
{{- '\n<tool_response>\n' }}
{{- message.content }}
{{- '\n</tool_response>' }}
{%- if loop.last or (messages[loop.index0 + 1].role != "tool") %}
{{- '<|im_end|>\n' }}
{%- endif %}
{%- endif %}
{%- endfor %}
{%- if add_generation_prompt %}
{{- '<|im_start|>assistant\n' }}
{%- if enable_thinking is defined and enable_thinking is false %}
{{- '<think>\n\n</think>\n\n' }}
{%- endif %}
{%- endif %}
```
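
To inspect what this template actually renders (outside LMStudio), one hedged option is Hugging Face's apply_chat_template, which accepts an ad-hoc template string. Assumptions in the sketch: the template above has been saved to a local file, and "Qwen/Qwen3-30B-A3B" is used only as a convenient tokenizer with the right ChatML special tokens.

```
# Sketch: render the corrected Jinja template into the raw ChatML prompt
# string so you can see exactly what the model receives.
from transformers import AutoTokenizer

with open("qwen3_thinking.jinja") as f:   # the template above, saved locally
    template = f.read()

tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-30B-A3B")  # tokenizer only
prompt = tok.apply_chat_template(
    [
        {"role": "system", "content": "You are a deep thinking AI..."},
        {"role": "user", "content": "Hello!"},
    ],
    chat_template=template,      # override the tokenizer's built-in template
    tokenize=False,
    add_generation_prompt=True,  # appends '<|im_start|>assistant\n'
)
print(prompt)
```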

---

<B>Settings: CHAT / ROLEPLAY and/or SMOOTHER operation of this model:</B>

In "KoboldCpp" or "oobabooga/text-generation-webui" or "Silly Tavern":

Set the "Smoothing_factor" to 1.5

: in KoboldCpp -> Settings -> Samplers -> Advanced -> "Smooth_F"

: in text-generation-webui -> parameters -> lower right.

: in Silly Tavern this is called: "Smoothing"

NOTE: For "text-generation-webui"

-> if using GGUFs you need to use "llama_HF" (which involves downloading some config files from the SOURCE version of this model)

Source versions (and config files) of my models are here:

https://huggingface.co/collections/DavidAU/d-au-source-files-for-gguf-exl2-awq-gptq-hqq-etc-etc-66b55cb8ba25f914cbf210be

OTHER OPTIONS:

- Increase rep pen to 1.1 to 1.15 (you don't need to do this if you use "smoothing_factor")

- If the interface/program you are using to run AI MODELS supports "Quadratic Sampling" ("smoothing"), just make the adjustment as noted (a sketch of the transform follows below).
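
For the curious: "Quadratic Sampling" / smoothing reshapes the logits around the top token. The sketch below is my reading of the transform as commonly implemented (e.g. in text-generation-webui); treat the exact formula as an assumption and check your backend's source if precision matters.

```
# Sketch of the common quadratic "smoothing" transform (assumed formula, not
# taken from this card): the top logit is left unchanged, and every other
# logit is pulled down by the squared distance from the top, scaled by the
# smoothing factor. Larger factors suppress long-tail tokens more aggressively.
import numpy as np

def smooth_logits(logits: np.ndarray, smoothing_factor: float = 1.5) -> np.ndarray:
    top = logits.max()
    return top - smoothing_factor * (top - logits) ** 2

# e.g. with factor 1.5, a runner-up 0.5 below the top drops by 1.5 * 0.5**2
# = 0.375, while a logit 3.5 below the top drops by over 18.
print(smooth_logits(np.array([4.5, 4.0, 1.0])))  # -> [  4.5      4.125  -13.875]
```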

<B>Highest Quality Settings / Optimal Operation Guide / Parameters and Samplers</B>

This is a "Class 1" model:

For all settings used for this model (including specifics for its "class"), example generation(s), and an advanced settings guide (which often addresses model issue(s)) - including all parameters and samplers used for generation, plus methods to improve model performance for all use case(s), as well as chat, roleplay, and other use case(s) - please see:

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

<b>Optional Enhancement:</B>

The following can be used in place of the "system prompt" or "system role" to further enhance the model.

It can also be used at the START of a NEW chat, but you must make sure it is "kept" as the chat moves along.
In this case the enhancements do not have as strong an effect as using the "system prompt" or "system role".

Copy and paste EXACTLY as noted; DO NOT line wrap or break the lines, and maintain the carriage returns exactly as presented.

<PRE>
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional journey as is possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
</PRE>

You do not need to use this; it is only presented as an additional enhancement which seems to help scene generation and scene continue functions.

This enhancement WAS NOT used to generate the examples below.

---

<H2>EXAMPLES</H2>

Standard system prompt, rep pen 1.02, topk 100, topp .95, min p .05, rep pen range 64.

Tested in LMStudio, quant Q4_K_S, GPU (CPU output will differ slightly).

As this is the mid-range quant, expect better results from higher quants and/or with more experts activated.

NOTE: Some formatting was lost on copy/paste.

CAUTION:

Some horror / intense prose.

---

EXAMPLE #1 - temp 1.2

---

<B>
</B>

<P></P>

[[[thinking start]]]


[[[thinking end]]]

<p></p>

OUTPUT:

---

EXAMPLE #2 - temp 1.2

---

<B>
</B>

<P></P>

[[[thinking start]]]


[[[thinking end]]]

<p></p>

OUTPUT:

---

EXAMPLE #3 - temp 1.2

---

<B>
</B>

<P></P>

[[[thinking start]]]


[[[thinking end]]]

<p></p>

OUTPUT:

---

EXAMPLE #4 - temp 1.2

---

<B>
</B>

<P></P>

[[[thinking start]]]


[[[thinking end]]]

<p></p>

OUTPUT: