---
library_name: transformers
license: apache-2.0
language:
- en
tags:
- creative
- creative writing
- fiction writing
- plot generation
- sub-plot generation
- story generation
- scene continue
- storytelling
- fiction story
- science fiction
- romance
- all genres
- story
- writing
- vivid prosing
- vivid writing
- fiction
- roleplaying
- bfloat16
- swearing
- role play
- sillytavern
- backyard
- horror
- llama 3.1
- context 128k
- mergekit
- merge
- not-for-all-audiences
base_model:
- NousResearch/DeepHermes-3-Llama-3-8B-Preview
- Casual-Autopsy/L3-Super-Nova-RP-8B
pipeline_tag: text-generation
---

(examples to be added)

<B><font color="red">WARNING:</font> NSFW. Graphic HORROR. R/X-Rated. Swearing. UNCENSORED. </B>

<h2>L3.1-Dark-Reasoning-Super-Nova-RP-Hermes-R1-Uncensored-8B (full source)</h2>

<img src="dark-planet-reason.jpg" style="float:right; width:300px; height:300px; padding:10px;">

Context: 128k.

Required: Llama 3 Instruct template.
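For manual / raw-completion use, the Llama 3 Instruct template wraps each turn in special-token headers. A minimal sketch of building a single-turn prompt follows; most apps (and the quant's autoloaded Jinja template) apply this for you, so this is only needed when driving a raw completion endpoint yourself:

```python
def llama3_prompt(system: str, user: str) -> str:
    """Build a single-turn Llama 3 Instruct prompt string.

    Uses the standard Llama 3 special tokens; the model is then expected
    to generate the assistant turn and stop at <|eot_id|>.
    """
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = llama3_prompt(
    "You are a helpful AI assistant.",
    "Write a 500 word horror scene.",
)
```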

"Dark Reasoning" is a variable-control reasoning model that is uncensored, operates at all temps/settings, and is intended for creative use cases as well as general usage.

This is version 1. It is slightly, ahh... tamed.

Additional versions (different mixes, and/or different base models and MOE versions) will be added very quickly.

This version's "thinking"/"reasoning" has been "darkened" by the original Dark Planet model's DNA and will also be shorter and more compressed. Additional system prompts below take this a lot further - a lot darker, a lot more... evil.

Higher temps will result in deeper, richer "thoughts"... and frankly more interesting ones too.

The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "DeepHermes" model from NousResearch:

[ https://huggingface.co/NousResearch/DeepHermes-3-Llama-3-8B-Preview ]

This version retains all the functions and features of the original "DeepHermes" model at about 50%-67% of its original reasoning power. Please visit their repo for all information on features, test results and so on.

For more information about "Dark Planet 8B" please see:

[ https://huggingface.co/DavidAU/L3-Dark-Planet-8B-GGUF ]

---

<B>SOURCE / Full Precision:</B>

This repo contains the full precision source code, in "safe tensors" format, to generate GGUFs, GPTQ, EXL2, AWQ, HQQ and other formats. The source code can also be used directly.

Links to quants are below and also on the right menu under "model tree".

---

<B>IMPORTANT OPERATING INSTRUCTIONS:</B>

This is an instruct model with reasoning crafted onto the "Dark Planet" core model.

This is the type of model that LOVES temp - temps 1.2+, 2.2+ and so on.

STAND on it... lower temps will not produce the best content.

Likewise, as this is an instruct model, this model will perform best with medium to long prompts (see example #1 below).

Although short prompts will work, longer prompts with a bit of direction / instruction will really show what this model can do.

Reasoning is turned on/off via the System Prompts below.

You can also give the model "character", as shown in the "Evil" versions, which make the model think and reason like the "Joker" from Batman.

Note that the reasoning/thinking section is often a lot less "tame" than the final output.

In version 2, the output is just as "unhinged" as the "reasoning/thinking" blocks.

Suggest a minimum context of 4k, but 8k is better due to reasoning/output blocks.

MAX QUANTS:

There will be two max quants, IQ4XS and Q8 ("MAX" in the file name).

The thinking/output will be enhanced by the output tensor being enlarged to bf16.

KNOWN ISSUES:

- You may need to hit regen sometimes to get the thinking/reasoning to activate / get a good "thinking block".
- Sometimes the 2nd or 3rd generation is the best version. Suggest a minimum of 5 generations for specific creative uses.
- Sometimes the thinking block will end, and you need to manually prompt the model to "generate" the output.
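The "keep the best of 5 generations" suggestion above can be scripted. A minimal sketch: `generate` here is a stub standing in for whatever backend call you actually use (llama-cpp-python, KoboldCpp's API, etc.), and the length-based score is a placeholder for your own ranking:

```python
def best_of_n(prompt: str, generate, n: int = 5, score=len) -> str:
    """Run the same prompt n times and keep the highest-scoring draft."""
    drafts = [generate(prompt, seed) for seed in range(n)]
    return max(drafts, key=score)

# Stub backend for illustration only - replace with a real completion call.
def fake_generate(prompt: str, seed: int) -> str:
    return prompt + " draft" * (seed + 1)

best = best_of_n("Scene:", fake_generate, n=5)
```

Swap in a scorer that matches your use case (e.g. penalize drafts where the thinking block never closed) rather than raw length.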

<B>USE CASES:</B>

This model is for all use cases, but is designed for creative use cases specifically.

This model can also be used for solving logic puzzles, riddles, and other problems with the enhanced "thinking" systems.

This model can also solve problems/riddles/puzzles normally beyond the abilities of a Llama 3.1 model, due to the DeepHermes systems.

(It will not, however, have the same level of abilities, due to the Dark Planet core.)

This model WILL produce HORROR / NSFW / uncensored content in EXPLICIT and GRAPHIC DETAIL.

<B>Special Operation Instructions:</B>

TEMP/SETTINGS:

1. Set Temp between 0 and .8; higher than this, the "think" functions will activate differently. The most "stable" temp seems to be .6, with a variance of +/- 0.05. Lower it for more "logic" reasoning, raise it for more "creative" reasoning (max .8 or so). Also set context to at least 4096, to account for "thoughts" generation.
2. For temps 1+, 2+ etc., thought(s) will expand, and become deeper and richer.
3. Set "repeat penalty" to 1.02 to 1.07 (recommended).
4. This model requires a Llama 3 Instruct and/or Command-R chat template (see notes on "System Prompt" / "Role" below) OR the standard "Jinja Autoloaded Template" (this is contained in the quant and will autoload).

PROMPTS:

1. If you enter a prompt without implied "step by step" requirements (ie: generate a scene, write a story, give me 6 plots for xyz), "thinking" (one or more blocks) MAY activate AFTER the first generation. (IE: Generate a scene -> the scene will generate, followed by suggestions for improvement in "thoughts".)
2. If you enter a prompt where "thinking" is stated or implied (ie: a puzzle, riddle, "solve this", "brainstorm this idea", etc.), the "thoughts" process(es) from DeepHermes will activate almost immediately. Sometimes you need to regen to activate them.
3. You will also get a lot of variations - some will continue the generation, others will talk about how to improve it, and some (ie: generation of a scene) will cause the characters to "reason" about the situation. In some cases, the model will ask you to continue generation / thoughts too.
4. In some cases the model's "thoughts" may appear in the generation itself.
5. State the maximum word count IN THE PROMPT for best results, especially for activation of "thinking" (see examples below).
6. You may want to try your prompt once at "default" or "safe" temp settings, another at temp 1.2, and a third at 2.5, as an example. This will give you a broad range of "reasoning/thoughts/problem solving".

GENERATION - THOUGHTS/REASONING:

1. It may take one or more regens for "thinking" to "activate" (depending on the prompt).
2. The model can generate a LOT of "thoughts". Sometimes the most interesting ones are 3, 4, 5 or more levels deep.
3. Many times the "thoughts" are unique and very different from one another.
4. Temp/rep pen settings can affect reasoning/thoughts too.
5. Change up or add directives/instructions, or increase the detail level(s) in your prompt, to improve reasoning/thinking.
6. Adding to your prompt: "think outside the box", "brainstorm X number of ideas", "focus on the most uncommon approaches" can drastically improve your results.

GENERAL SUGGESTIONS:

1. I have found that opening a "new chat" per prompt works best for "thinking/reasoning activation", with temp .6, rep pen 1.05... THEN "regen" as required.
2. Sometimes the model will get completely unhinged and you need to manually stop it.
3. Depending on your AI app, "thoughts" may appear with "< THINK >" and "</ THINK >" tags, AND/OR the AI will generate "thoughts" directly in the main output or later output(s).
4. Although quant Q4_K_M was used for testing/examples, higher quants will provide better generation / more sound "reasoning/thinking".

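If your front end passes the tags through raw, the thinking block can be split from the answer after the fact. A minimal sketch; the loose regex also tolerates the spaced "< THINK >" variant mentioned above:

```python
import re

# Match <think>...</think> loosely: some apps vary casing and spacing
# inside the tags, so allow whitespace and ignore case.
THINK_RE = re.compile(r"<\s*think\s*>(.*?)<\s*/\s*think\s*>",
                      re.IGNORECASE | re.DOTALL)

def split_think(completion: str):
    """Split a raw completion into (list of thought blocks, final answer)."""
    thoughts = [m.strip() for m in THINK_RE.findall(completion)]
    answer = THINK_RE.sub("", completion).strip()
    return thoughts, answer

thoughts, answer = split_think(
    "<think>Plan the scene, then darken it.</think>The old house breathed."
)
```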
ADDITIONAL SUPPORT:

For additional generation support, general questions, detailed parameter info and a lot more, see also:

NOTE: This is a CLASS 1 model.

https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters

---

<B>Recommended Settings (all) - For usage with "Think" / "Reasoning":</B>

temp: 1.5, 2, 2+, rep pen: 1.02 (range: 1.02 to 1.12), rep pen range: 64, top_k: 80, top_p: .95, min_p: .05

Temps of 1+, 2+, 3+ will result in much deeper, richer and "more interesting" thoughts and reasoning AND FAR BETTER OUTPUT.

Model behaviour may change with other parameter(s) and/or sampler(s) activated - especially the "thinking/reasoning" process.

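Collected as code, the settings above look like this. The key names mirror common llama.cpp-style parameter names, which is an assumption on my part; check your own app's naming before passing them through:

```python
# Recommended sampler settings from this card, as a plain dict you can feed
# to whichever backend you use (llama.cpp, KoboldCpp, LMStudio API, etc.).
RECOMMENDED_SETTINGS = {
    "temperature": 1.5,      # try 1.5, 2, 2+ for richer reasoning
    "repeat_penalty": 1.02,  # recommended range: 1.02 to 1.12
    "repeat_last_n": 64,     # "rep pen range"
    "top_k": 80,
    "top_p": 0.95,
    "min_p": 0.05,
}

def check_settings(s: dict) -> bool:
    """Sanity-check a settings dict against the ranges on this card."""
    return (
        s["temperature"] > 0
        and 1.02 <= s["repeat_penalty"] <= 1.12
        and 0 < s["top_p"] <= 1
        and 0 <= s["min_p"] <= 1
    )
```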
---

<B>System Role / System Prompt - Augment The Model's Power:</b>

---

If you set / have a system prompt, this will affect both "generation" and "thinking/reasoning".

SIMPLE:

This is the generic system prompt used for generation and testing [no reasoning]:

<PRE>
You are a helpful, smart, kind, and efficient AI assistant. You always fulfill the user's requests to the best of your ability.
</PRE>

This System Role/Prompt will give you "basic thinking/reasoning" [basic reasoning]:

<PRE>
You are a deep thinking AI, you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside &lt;think&gt; &lt;/think&gt; tags, and then provide your solution or response to the problem.
</PRE>

EVIL SIMPLE [reasoning on]:

```
You are a deep thinking evil AI, with a dark sense of humor and you talk like the Joker from Batman. You may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your evil thoughts and internal humorous monologue inside <think> </think> tags, and then provide your solution or response to the problem.
```

CREATIVE SIMPLE [reasoning on]:

<PRE>
You are an AI assistant developed by a worldwide community of AI experts.

Your primary directive is to provide highly creative, well-reasoned, structured, and extensively detailed responses.

Formatting Requirements:

1. Always structure your replies using: &lt;think&gt;{reasoning}&lt;/think&gt;{answer}
2. The &lt;think&gt;&lt;/think&gt; block should contain at least six reasoning steps when applicable.
3. If the answer requires minimal thought, the &lt;think&gt;&lt;/think&gt; block may be left empty.
4. The user does not see the &lt;think&gt;&lt;/think&gt; section. Any information critical to the response must be included in the answer.
5. If you notice that you have engaged in circular reasoning or repetition, immediately terminate {reasoning} with a &lt;/think&gt; and proceed to the {answer}

Response Guidelines:

1. Detailed and Structured: Use rich Markdown formatting for clarity and readability.
2. Creative and Logical Approach: Your explanations should reflect the depth and precision of the greatest creative minds first.
3. Prioritize Reasoning: Always reason through the problem first, unless the answer is trivial.
4. Concise yet Complete: Ensure responses are informative, yet to the point without unnecessary elaboration.
5. Maintain a professional, intelligent, and analytical tone in all interactions.
</PRE>

CREATIVE ADVANCED [reasoning on]:

NOTE: To turn reasoning off, remove line #2.

This system prompt can often generate multiple outputs and/or thinking blocks.

```
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

You may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your thoughts and internal monologue inside <think> </think> tags, and then provide your solution or response to the problem.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional a journey as possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
```

CREATIVE FULL, with FULL ON "EVIL" thinking/reasoning [reasoning on]:

NOTE: You can edit this so the AI is other than "Joker" from "Batman" - just adjust the wording carefully.

NOTE2: To turn reasoning off, remove line #2.

This system prompt can often generate multiple outputs and/or thinking blocks.

```
Below is an instruction that describes a task. Ponder each user instruction carefully, and use your skillsets and critical instructions to complete the task to the best of your abilities.

As a deep thinking AI, with a dark sense of humor that talks like "The Joker" from BATMAN you may use extremely long chains of thought to deeply consider the problem and deliberate with yourself via systematic reasoning processes to help come to a correct solution prior to answering. You should enclose your evil thoughts and internal humorous monologue inside <think> </think> tags, and then provide your solution or response to the problem using your skillsets and critical instructions.

Here are your skillsets:
[MASTERSTORY]:NarrStrct(StryPlnng,Strbd,ScnSttng,Exps,Dlg,Pc)-CharDvlp(ChrctrCrt,ChrctrArcs,Mtvtn,Bckstry,Rltnshps,Dlg*)-PltDvlp(StryArcs,PltTwsts,Sspns,Fshdwng,Climx,Rsltn)-ConfResl(Antg,Obstcls,Rsltns,Cnsqncs,Thms,Symblsm)-EmotImpct(Empt,Tn,Md,Atmsphr,Imgry,Symblsm)-Delvry(Prfrmnc,VcActng,PblcSpkng,StgPrsnc,AudncEngmnt,Imprv)

[*DialogWrt]:(1a-CharDvlp-1a.1-Backgrnd-1a.2-Personality-1a.3-GoalMotiv)>2(2a-StoryStruc-2a.1-PlotPnt-2a.2-Conflict-2a.3-Resolution)>3(3a-DialogTech-3a.1-ShowDontTell-3a.2-Subtext-3a.3-VoiceTone-3a.4-Pacing-3a.5-VisualDescrip)>4(4a-DialogEdit-4a.1-ReadAloud-4a.2-Feedback-4a.3-Revision)

Here are your critical instructions:
Ponder each word choice carefully to present as vivid and emotional a journey as possible. Choose verbs and nouns that are both emotional and full of imagery. Load the story with the 5 senses. Aim for 50% dialog, 25% narration, 15% body language and 10% thoughts. Your goal is to put the reader in the story.
```

---

<B>Additional Support / Documents for this model to assist with generation / performance:</b>

Document #1:

Details how to use reasoning/thinking models and get maximum performance from them, and includes links to all reasoning/thinking models - GGUF and source - as well as adapters to turn any "regular" model into a "reasoning/thinking" model.

[ https://huggingface.co/DavidAU/How-To-Use-Reasoning-Thinking-Models-and-Create-Them ]

Document #2:

Document detailing all parameters, settings, samplers and advanced samplers to use not only my models to their maximum potential - but all models (and quants) online (regardless of the repo) to their maximum potential. Includes a quick start, detailed notes, AI / LLM apps and other critical information and references too. A must read if you are using any AI/LLM right now.

[ https://huggingface.co/DavidAU/Maximizing-Model-Performance-All-Quants-Types-And-Full-Precision-by-Samplers_Parameters ]

Software:

SOFTWARE patch (by me) for SillyTavern (a front end that connects to multiple AI apps / APIs - like KoboldCpp, LMStudio, Text Generation Web UI and others) to control and improve the output generation of ANY AI model. Also designed to control/wrangle some of my more "creative" models and make them perform perfectly with little to no parameter/sampler adjustments.

[ https://huggingface.co/DavidAU/AI_Autocorrect__Auto-Creative-Enhancement__Auto-Low-Quant-Optimization__gguf-exl2-hqq-SOFTWARE ]

---

<H2>EXAMPLES:</H2>

Examples are created using quant Q8_0, "temp=2.2" (unless otherwise stated), minimal parameters and the "LLAMA3" template.

The model has been tested with "temp" from ".1" to "5".

IMPORTANT:

Higher quants / imatrix quants will have much stronger generation - words, sentences, ideas, dialog and general quality.

---