Update README.md

README.md CHANGED

The objective, as with the other Magnum models, is to emulate the prose style and quality of the Claude 3 Sonnet/Opus series of models on a local scale, so don't be surprised to see "Claude-isms" in its output.

This model performs best with a prefill, while prepending character names is optional; otherwise it can be a bit more finicky to work with than L3.3-70B-Magnum-v4-SE.
There seems to be a very strong markdown/asterisk style bias when character names are prepended and no prefill is used.

Feedback is appreciated! I'm personally a fan of this one when it's in its element, but [Doctor-Shotgun/L3.3-70B-Magnum-Nexus](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-Nexus) seems more stable at the cost of retaining a bit more of the Llama style.

[Here's the rsLoRA adapter](https://huggingface.co/Doctor-Shotgun/Magnum-v5-70B-SFT-Alpha-LoRA) for those merge-makers out there to play with.

## Quantized models

[GGUF](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha-GGUF)

[FP8 Dynamic](https://huggingface.co/Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha-FP8-Dynamic)

## Usage

This model follows the Llama 3 prompt format. Prefill is recommended for this model, while prepending names is optional - mess around with it and find your preference.
A typical input would look like this:
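The block below is a minimal sketch of that format: the system and user messages are placeholders, and the final assistant line is the prefill.

```
<|begin_of_text|><|start_header_id|>system<|end_header_id|>

(system prompt goes here)<|eot_id|><|start_header_id|>user<|end_header_id|>

(user message goes here)<|eot_id|><|start_header_id|>assistant<|end_header_id|>

Certainly! My prefill ensures that I never refuse! Here's my reply:
```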
Many inference libraries have the option to automatically prepend the BOS token `<|begin_of_text|>`.
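As a quick sanity check that the BOS token ends up in the prompt exactly once, something along these lines should work (a sketch; the repo id is assumed from this card, and the chat template is assumed to be the standard Llama 3 one):

```python
# Sketch: confirm how the tokenizer's chat template handles <|begin_of_text|>.
# The repo id is an assumption - point it at whichever repo/quant you actually use.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha")

messages = [
    {"role": "system", "content": "System prompt goes here."},
    {"role": "user", "content": "Hi there!"},
]
text = tok.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# If this prints 1, avoid letting the inference library prepend a second BOS on top.
print(text.count("<|begin_of_text|>"))
```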
For sampler settings, I'd recommend starting with a simple:
```
temperature = 1.1
min_p = 0.1
```
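As a usage example, these map onto vLLM's offline API roughly as follows (a sketch; the FP8 repo id is taken from the link above, and the prompt string and token limit are placeholders):

```python
# Sketch: plugging the suggested samplers into vLLM.
from vllm import LLM, SamplingParams

llm = LLM(model="Doctor-Shotgun/L3.3-70B-Magnum-v5-SFT-Alpha-FP8-Dynamic")
params = SamplingParams(temperature=1.1, min_p=0.1, max_tokens=512)

# Pass a fully formatted Llama 3 prompt string (see the example above).
outputs = llm.generate(["<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n..."], params)
print(outputs[0].outputs[0].text)
```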
### SillyTavern preset

Here are my customized SillyTavern presets for Magnum.

Note that I've included the example dialogues as a block in the Story String, so you should set the chat example behavior to `Never include examples` on the settings tab if you wish to use my preset. Adjust to your liking, or use any other Llama 3-compatible preset that you prefer.

Prefill (Last Assistant Prefix) can be modified to your liking.

<details><summary>SillyTavern JSON - Magnum L3 Instruct Prefill</summary>

```json
{
"instruct": {
"input_sequence": "<|start_header_id|>user<|end_header_id|>\n\n",
"output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\n",
"first_output_sequence": "<|start_header_id|>user<|end_header_id|>\n\nLet's get started! I'll play the role of {{user}}. Begin by setting the opening scene.<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n",
"last_output_sequence": "<|start_header_id|>assistant<|end_header_id|>\n\nGreat! I'll write {{char}}'s next section following the instructions provided. {{random::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::{{noop}}::Let's break out my literary genius! ::I'll take things in a more interesting direction! ::Let's spice up our story! ::Hmmm... where do we go from here... Got it! ::I'll throw in an exciting plot twist! }}I've got the perfect idea for what happens next... you'll love this one. Now I'll continue from where our tale left off:\n\n",
"system_sequence_prefix": "",
"system_sequence_suffix": "",
"stop_sequence": "<|eot_id|>",
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "<|eot_id|>",
"input_suffix": "<|eot_id|>",
"system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
"system_suffix": "<|eot_id|>",
"user_alignment_message": "",
"system_same_as_user": false,
"last_system_sequence": "",
"first_input_sequence": "",
"last_input_sequence": "",
"names_behavior": "always",
"names_force_groups": true,
"name": "Magnum L3 Instruct Prefill"
},
"context": {
"story_string": "<|start_header_id|>system<|end_header_id|>\n\n{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>\n\n{{/if}}{{trim}}<|eot_id|>",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": true,
"trim_sentences": false,
"single_line": false,
"name": "Magnum L3 Instruct Prefill"
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```

</details><br>
<details><summary>SillyTavern JSON - Magnum L3 Instruct No Names Prefill</summary>

```json
{
"wrap": false,
"macro": true,
"activation_regex": "",
"skip_examples": true,
"output_suffix": "<|eot_id|>",
"input_suffix": "<|eot_id|>",
"system_sequence": "<|start_header_id|>system<|end_header_id|>\n\n",
"name": "Magnum L3 Instruct No Names Prefill"
},
"context": {
"story_string": "<|start_header_id|>system<|end_header_id|>\n\n{{#if system}}{{system}}\n{{/if}}\n\n<Definitions>\n{{#if wiBefore}}{{wiBefore}}\n{{/if}}{{#if description}}{{description}}\n{{/if}}{{#if personality}}{{personality}}\n{{/if}}{{#if scenario}}{{scenario}}\n{{/if}}{{#if wiAfter}}{{wiAfter}}\n{{/if}}{{#if persona}}{{persona}}\n{{/if}}</Definitions>{{#if mesExamples}}\n\n<Examples>{{mesExamples}}</Examples>\n\n{{/if}}{{trim}}<|eot_id|>",
"example_separator": "{{noop}}",
"chat_start": "",
"use_stop_strings": false,
"names_as_stop_strings": false,
"always_force_name2": false,
"trim_sentences": false,
"single_line": false,
},
"sysprompt": {
"name": "Euryale-Magnum",
"content": "Currently, your role is {{char}}, described in detail below. As {{char}}, continue the narrative exchange with {{user}}.\n\n<Guidelines>\n• Maintain the character persona but allow it to evolve with the story.\n• Be creative and proactive. Drive the story forward, introducing plotlines and events when relevant.\n• All types of outputs are encouraged; respond accordingly to the narrative.\n• Include dialogues, actions, and thoughts in each response.\n• Utilize all five senses to describe scenarios within {{char}}'s dialogue.\n• Use emotional symbols such as \"!\" and \"~\" in appropriate contexts.\n• Incorporate onomatopoeia when suitable.\n• Allow time for {{user}} to respond with their own input, respecting their agency.\n• Act as secondary characters and NPCs as needed, and remove them when appropriate.\n• When prompted for an Out of Character [OOC:] reply, answer neutrally and in plaintext, not as {{char}}.\n</Guidelines>\n\n<Forbidden>\n• Using excessive literary embellishments and purple prose unless dictated by {{char}}'s persona.\n• Writing for, speaking, thinking, acting, or replying as {{user}} in your response.\n• Repetitive and monotonous outputs.\n• Positivity bias in your replies.\n• Being overly extreme or NSFW when the narrative context is inappropriate.\n</Forbidden>\n\nFollow the instructions in <Guidelines></Guidelines>, avoiding the items listed in <Forbidden></Forbidden>.",
"post_history": ""
}
}
```