# Herrsimian

**Herrsimian** is a tiny (121 samples), long-context (up to about 52k tokens) NSFW conversational dataset containing mainly data from a certain _expert roleplayer_ (let's call him _Simian_) who used to actively participate on a few different forums until the end of 2022. It was used to train [Llama-3.1-Herrsimian-8B](https://huggingface.co/lemonilia/Llama-3.1-Herrsimian-8B), although results weren't great there, since the learning rate was probably too high and conversations alone don't make for a good RP model.

The roleplays are mostly in book/novel style, with narration in past tense and third person perspective, as well as quote mark-delimited dialogue lines. Markdown-style roleplay has not been included due to lack of data.

☢️ **Warning**: the dataset is almost entirely composed of highly questionable content.

## Updates

- 2025-03-10 - *Significant revision*
  - Removed all non-Simian RP (celebrity interviews and roleplays where Simian didn't participate) and modified the dataset so that all `assistant` responses are from Simian, which should make it more effective to train models on that writing style with masking. I will separately upload the previously included samples at a later time.
  - Added a few more Simian-related RPs that I had previously processed for ShoriRP but forgot to add to Herrsimian.
  - Changed a few "conversation generation" samples into regular conversations.
  - Total sample number decreased to **121** samples + 2 discarded samples used as eval.
- 2024-09-06 - Added sample title/personal notes in the dataset. Should help with filtering and sorting.
- 2024-09-05 - Last samples and evals added, 131 samples in total.
- 2024-09-04 - Added a few more samples, 100 in total now.

## General overview of the dataset

### Compatibility

Note that **the dataset is _not_ in a standard ShareGPT format**. It has a separate `name` field for character names, and user/assistant turns do not alternate like you would normally expect. Further processing with Python code will be needed to adapt the dataset for your purposes.
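As a rough illustration of what that processing starts from, here is a minimal loading sketch; the file name and the `conversation` key are assumptions, so check the actual dataset files for the real layout:

```python
import json

# Hypothetical file name and message-list key; adjust to the actual files.
with open("herrsimian.json", encoding="utf-8") as f:
    samples = json.load(f)

sample = samples[0]
for msg in sample["conversation"]:
    # Unlike plain ShareGPT, each message may carry a "name" field, and the
    # same role can appear several times in a row.
    print(msg["role"], msg.get("name"), msg["content"][:60])
```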
### Composition

All samples are composed of an initial backtranslated instruction defining the scenario, backstory (if applicable), characters and task, followed by a manually curated, fully segmented conversation with `user` and `assistant` talking in turns for their own characters, narration or OOC. Usernames have been removed; only character names remain.

### Design quirks

An intentional design quirk of this dataset is that **the conversations are multicharacter**. Either the user or the model may play the role of more than one character, and **user/model turns may not necessarily alternate**, unlike what normally happens in most other datasets and as required in many cases for proper training. This can make the dataset incompatible with certain pipelines (e.g. where user/assistant turns must alternate for masking to work properly); additional processing will be needed to correct this.
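One possible way to restore strict alternation (a sketch, not the author's pipeline) is to fold character names into the message text and merge consecutive messages that share the same role:

```python
def merge_turns(messages):
    """Collapse consecutive same-role messages into single turns, keeping
    character names as inline prefixes (e.g. "Lena: ...")."""
    merged = []
    for msg in messages:
        text = f'{msg["name"]}: {msg["content"]}' if msg.get("name") else msg["content"]
        if merged and merged[-1]["role"] == msg["role"] and msg["role"] != "system":
            merged[-1]["content"] += "\n\n" + text
        else:
            merged.append({"role": msg["role"], "content": text})
    return merged
```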
A second notable intentional design quirk is that **message length is highly variable**, ranging from a few to several hundred tokens in length, although on average messages will be in the 150-token range (estimated). The idea is that the model should be able to learn when to naturally use short or long messages, and not just focus on one specific length. Dataset samples never contain long sections with very short messages, in any case.

Additionally, **characters may occasionally change name**; this usually happens when their name gets revealed in the story. In this case, for one message the character name is transitionally rendered in the form of `Oldname (Newname)`, with subsequent messages continuing with `Newname`.
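If you prefer a single canonical name per character, this transitional form can be collapsed with a small cleanup step along these lines (a hypothetical helper, not part of the dataset):

```python
import re

# Matches the transitional "Oldname (Newname)" form used for one message.
TRANSITION = re.compile(r"^(?P<old>.+?) \((?P<new>.+?)\)$")

def canonical_name(name):
    """Return "Newname" for names written as "Oldname (Newname)", else unchanged."""
    match = TRANSITION.match(name or "")
    return match.group("new") if match else name
```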
The initial backtranslated instruction intentionally doesn't follow a fixed format, but it usually includes at least a description of the characters, the title, and the scenario (a summary of the events that will happen in the roleplay).

## Dataset fields

### Metadata

| Field      | Description |
|:-----------|:------------|
| label      | A short name that may help sorting/processing the various roleplaying thread segments. |
| title      | Either the name of the roleplaying thread, or a name that I gave them. |
| simian     | True or False; indicates whether _Simian_ participates in the thread. In the current data version it's always True. |
| quality    | A subjective general thread/writing quality indicator. Can be "low", "mid" or "good". |
| date-start | The date of the opening post in the thread segment/conversation. Simian's writing quality improved over time, and wasn't too good before 2015. |
| notes      | Miscellaneous notes that I might have added for various reasons. |
| changes    | In some roleplays I changed names to mitigate memorization of very repetitive data. When present, it's a dictionary of key:value pairs in the form "original name": "new name". |

### Conversation

| Field   | Description |
|:--------|:------------|
| role    | Simian's messages are always the `assistant`; `user` is for the other participant. The first role in all roleplays is `system`. |
| name    | The name of the character acting or speaking was also included. In its absence, it can be assumed that it's either the assistant or the user talking to each other. A limited amount of effort was put into randomizing names when they were used too frequently, although more work needs to be done in this regard.<br><br>OOC messages have been given either the `user` or `assistant` role depending on the context, but never a name. |
| content | The message or utterance. For roleplay, _generally_ it's in typical book/forum style, with narration in third person and past tense and dialogue lines delimited by ASCII quote marks. |
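
Since all `assistant` turns are Simian's, a masking-based training pipeline only needs the `role` field to decide what to train on. A minimal sketch (the surrounding chat template is a placeholder, not a recommended format):

```python
def render_for_training(messages):
    """Flatten a conversation into (text, trainable) chunks: assistant
    (i.e. Simian) messages are trainable, everything else is context only."""
    chunks = []
    for msg in messages:
        prefix = f'{msg["name"]}: ' if msg.get("name") else ""
        text = f'<|{msg["role"]}|>\n{prefix}{msg["content"]}\n'
        chunks.append((text, msg["role"] == "assistant"))
    return chunks
```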
## Finetuning suggestions

Given the tiny number of samples, ordinary finetuning strategies intended for large amounts of data won't work well. **The dataset was primarily intended to give _one voice_ to the model via sensible overfitting**. With [Llama-3.1-Herrsimian-8B](https://huggingface.co/lemonilia/Llama-3.1-Herrsimian-8B) I used 5 training epochs via LoRA finetuning.

Nowadays I would recommend using a very low learning rate and no fewer than 10-15 epochs, so that overfitting occurs without causing significant forgetting of the model's capabilities.

To limit the horniness of the trained model, it might be beneficial to **clip** the conversations to whatever fits the training context size and not reuse the rest, since _most of the time_ NSFW scenes do not begin right away in the various roleplays.
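Clipping can be as simple as accumulating messages from the start until a token budget is reached and discarding the remainder; a sketch assuming a Hugging Face tokenizer (the tokenizer choice and budget are arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")

def clip_conversation(messages, max_tokens=16384):
    """Keep messages until the token budget is exhausted; the remainder is
    discarded rather than reused as a new sample."""
    kept, used = [], 0
    for msg in messages:
        n = len(tokenizer.encode(msg["content"], add_special_tokens=False))
        if used + n > max_tokens:
            break
        kept.append(msg)
        used += n
    return kept
```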
Most samples come from a few very long roleplays, and these could be limited in number in order to avoid adding too much of the same content (which might promote hallucinations).
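One way to do that is to cap how many segments are taken from any single thread, for example by grouping on the `title` metadata field (the cap value here is arbitrary):

```python
from collections import defaultdict

def cap_per_thread(samples, max_per_thread=3):
    """Keep at most N segments per roleplaying thread to avoid
    over-representing a few very long roleplays."""
    counts, kept = defaultdict(int), []
    for sample in samples:
        key = sample.get("title", "")
        if counts[key] < max_per_thread:
            kept.append(sample)
            counts[key] += 1
    return kept
```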
## Dataset statistics

### Summary

- 123 examples (121 train + 2 eval)
- Total message number: 17,996
- Total message bytes: 9,993,298 (message content only)

### Message length distribution



(Old statistics in Llama-3 tokens; to be recomputed at some point, but they should still be representative of the dataset.)
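They can be recomputed with any tokenizer; a minimal sketch, reusing the same hypothetical file layout as in the loading example above:

```python
import json
from transformers import AutoTokenizer

with open("herrsimian.json", encoding="utf-8") as f:  # hypothetical path
    samples = json.load(f)

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-3.1-8B-Instruct")
lengths = [
    len(tokenizer.encode(msg["content"], add_special_tokens=False))
    for sample in samples
    for msg in sample["conversation"]
]
print(f"messages: {len(lengths)}, mean length: {sum(lengths) / len(lengths):.1f} tokens")
```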