mookiezi committed
Commit 629e3e1 · Parent: 0c02ba8

Removed blank messages

Files changed (2):
  1. README.md +75 -77
  2. train.parquet +2 -2
README.md CHANGED
@@ -27,7 +27,7 @@ size_categories:
 
 > **Discord-Dialogues** is a large-scale dataset of anonymized Discord conversations from late spring to early fall 2025 for training and evaluating realistic conversational AI models in a ChatML-friendly format.
 
-This dataset contains 7.5 million exchanges spread out over 17 million turns, with more than 145 million words.
+This dataset contains 7.5 million exchanges spread out over 17 million turns, with more than 145 million words.
 
 ---
 
@@ -63,9 +63,10 @@ size_categories:
 - Training relevance/reward models
 - Dialogue generation research
 
-Use case examples:
-- [mookiezi/Discord-Micae-8B-Preview](https://huggingface.co/mookiezi/Discord-Micae-8B-Preview) — experimental larger model
-- [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B) — stable smaller model
+Use case examples:
+
+- [mookiezi/Discord-Micae-8B-Preview](https://huggingface.co/mookiezi/Discord-Micae-8B-Preview) — experimental larger model
+- [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B) — stable smaller model
 
 ---
 
@@ -74,22 +75,22 @@ Use case examples:
 This dataset was constructed with a custom multi-stage filtering toolkit:
 
 1. **SQL filters** (`filter.sql`)
-   Postgres regex/text filters for PII, bot/command patterns, links, embeds, and automation noise.
+   Postgres regex/text filters for PII, bot/command patterns, links, embeds, and automation noise.
 
 2. **Smart cleaner** (`smartclean.py`)
-   Multi-stage process: normalize text, slang replacement, resample by length, and enforce structural validation.
-   Filters out structural noise such as code blocks, trading posts, and LFG.
+   Multi-stage process: normalize text, slang replacement, resample by length, and enforce structural validation.
+   Filters out structural noise such as code blocks, trading posts, and LFG.
 
 3. **Dedupe** (`dedupe.py`)
-   Deduplicates conversations by hashing message chains.
-   Keeps only unique rows, preferring the longest final assistant message when duplicates occur.
+   Deduplicates conversations by hashing message chains.
+   Keeps only unique rows, preferring the longest final assistant message when duplicates occur.
 
 4. **Fix End** (`fixend.py`)
-   Strips any prefix of spaces, commas, or non-emoticon colons before `<|im_end|>` to the plain token.
+   Strips any prefix of spaces, commas, or non-emoticon colons before `<|im_end|>` to the plain token.
 
 5. **ToS risk filter** (`tos.py`)
-   Drops or redacts unsafe categories (sexual violence, CSA, slurs, harassment, doxxing, self-harm, extremism) and PII.
-   Uses fuzzy/leet/diacritic-aware regex.
+   Drops or redacts unsafe categories (sexual violence, CSA, slurs, harassment, doxxing, self-harm, extremism) and PII.
+   Uses fuzzy/leet/diacritic-aware regex.
 
 The full filtering scripts are open source at the [filters GitHub repository](https://github.com/mookiezi/filters).
 
@@ -112,81 +113,78 @@ The full end-to-end pipeline is documented in the [dataset-pipeline GitHub repos
 <div style="display:flex; gap:20px; align-items:flex-start;">
 
 <div>
-
-| Metric | Value |
-| ------------------------ | --------------: |
-| Samples (count) | 7,546,294 |
-| Min length (tokens) | 7 |
-| Max length (tokens) | 5,979 |
-| Mean length (tokens) | 33.02 |
-| Median length (tokens) | 29 |
-| Std dev (tokens) | 17.39 |
-| Skew | 26.46 |
-| Kurtosis | 7,487.55 |
-| Total tokens | 249,193,745 |
-| Total characters | 1,291,480,299 |
-| Total words | 145,887,976 |
-| Avg chars per sample | 171.14 |
-| Avg words per sample | 19.33 |
-| Avg chars per word | 8.85 |
-| Tokens per char | 0.19 |
-| Total assistant blocks | 9,341,891 |
+| Metric | Value |
+| ---------------------- | ------------: |
+| Samples (count) | 7,545,871 |
+| Min length (tokens) | 7 |
+| Max length (tokens) | 5,979 |
+| Mean length (tokens) | 33.02 |
+| Median length (tokens) | 29 |
+| Std dev (tokens) | 17.39 |
+| Skew | 26.46 |
+| Kurtosis | 7,488.16 |
+| Total tokens | 249,177,149 |
+| Total characters | 1,291,390,677 |
+| Total words | 145,878,876 |
+| Avg chars per sample | 171.14 |
+| Avg words per sample | 19.33 |
+| Avg chars per word | 8.85 |
+| Tokens per char | 0.19 |
+| Total assistant blocks | 9,340,986 |
 
 </div>
 
 <div>
 
-
-| Tokens | Count |
-| --------- | --------: |
-| 0–8 | 1 |
-| 8–16 | 110,310 |
-| 16–32 | 4,382,094 |
-| 32–64 | 2,674,780 |
-| 64–128 | 360,401 |
-| 128–256 | 18,083 |
-| 256–384 | 417 |
-| 384–512 | 75 |
-| 512–768 | 78 |
-| 768–1024 | 30 |
-| 1024–2048 | 18 |
-| 2048–4096 | 3 |
+| Tokens | Count |
+| --------- | --------: |
+| 0–8 | 1 |
+| 8–16 | 110,310 |
+| 16–32 | 4,381,916 |
+| 32–64 | 2,674,572 |
+| 64–128 | 360,365 |
+| 128–256 | 18,082 |
+| 256–384 | 417 |
+| 384–512 | 75 |
+| 512–768 | 78 |
+| 768–1024 | 30 |
+| 1024–2048 | 18 |
+| 2048–4096 | 3 |
 
 </div>
 
 <div>
 
-
-| Turns | Count |
-| ----- | --------: |
-| 2 | 5,969,540 |
-| 3 | 1,080,526 |
-| 4 | 319,794 |
-| 5 | 102,553 |
-| 6 | 41,246 |
-| 7 | 16,904 |
-| 8 | 7,715 |
-| 9 | 3,691 |
-| 10 | 1,867 |
-| 11 | 1,007 |
-| 12 | 575 |
-| 13 | 334 |
-| 14 | 189 |
-| 15 | 129 |
-| 16 | 67 |
-| 17 | 62 |
-| 18 | 32 |
-| 19 | 21 |
-| 20 | 8 |
-| 21 | 11 |
-| 22 | 11 |
-| 23 | 2 |
-| 24 | 1 |
-| 25 | 3 |
-| 27 | 2 |
-| 29 | 1 |
-| 32 | 1 |
-| 33 | 2 |
+| Turns | Count |
+| ----- | --------: |
+| 2 | 5,969,540 |
+| 3 | 1,080,218 |
+| 4 | 319,720 |
+| 5 | 102,535 |
+| 6 | 41,236 |
+| 7 | 16,896 |
+| 8 | 7,713 |
+| 9 | 3,691 |
+| 10 | 1,866 |
+| 11 | 1,005 |
+| 12 | 575 |
+| 13 | 334 |
+| 14 | 189 |
+| 15 | 129 |
+| 16 | 67 |
+| 17 | 62 |
+| 18 | 32 |
+| 19 | 21 |
+| 20 | 8 |
+| 21 | 11 |
+| 22 | 11 |
+| 23 | 2 |
+| 24 | 1 |
+| 25 | 3 |
+| 27 | 2 |
+| 29 | 1 |
+| 32 | 1 |
+| 33 | 2 |
 
 </div>
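Steps 3 and 4 of the filtering toolkit described in the README diff above (`dedupe.py` and `fixend.py`) can be sketched roughly as follows. This is a hypothetical reconstruction, not the actual scripts: the function names, the choice to key the hash on all turns except the final assistant message, and the emoticon-guard lookbehind are all assumptions.

```python
import hashlib
import re


def chain_hash(messages):
    # Hash the message chain so identical conversations collide; a NUL
    # separator avoids boundary collisions ("ab","c" vs "a","bc").
    h = hashlib.sha256()
    for m in messages:
        h.update(m.encode("utf-8"))
        h.update(b"\x00")
    return h.hexdigest()


def dedupe(conversations):
    # Keep one row per chain, preferring the longest final assistant
    # message when duplicates occur. Keying on every turn except the
    # last is an assumption about how "message chains" are hashed.
    best = {}
    for convo in conversations:
        key = chain_hash(convo[:-1])
        kept = best.get(key)
        if kept is None or len(convo[-1]) > len(kept[-1]):
            best[key] = convo
    return list(best.values())


# Strip runs of spaces, commas, and bare colons immediately before
# <|im_end|>; the lookbehind spares emoticon colons such as "D:".
_TRAIL = re.compile(r"(?<![DOPop])[ ,:]+(?=<\|im_end\|>)")


def fix_end(text):
    return _TRAIL.sub("", text)
```

Hashing the chain rather than comparing full strings keeps the pass O(n) in the number of rows, which matters at 7.5 million samples.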
train.parquet CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:fe52210778f5d71724664d9c365a71599913f93c30cc60df8e674dd3c45c08ca
-size 362018517
+oid sha256:766cba829774f9e93076645a6e27a3cd08248816ad76fb63427396254725af47
+size 361536164
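The commit message ("Removed blank messages") and the slightly smaller `train.parquet` point to a pass that drops samples containing empty turns. A minimal sketch of such a pass, assuming ChatML-formatted transcripts where a blank message is a role header followed only by whitespace and `<|im_end|>` (the actual cleanup script is not part of this commit):

```python
import re

# A turn with no content: role header followed (up to whitespace)
# directly by the end-of-turn token.
BLANK_TURN = re.compile(r"<\|im_start\|>(?:system|user|assistant)\s*<\|im_end\|>")


def drop_blank_messages(samples):
    # Keep only samples whose transcript contains no blank turns.
    return [s for s in samples if not BLANK_TURN.search(s)]
```

Dropping the whole sample rather than just the blank turn keeps every remaining conversation a valid alternating exchange.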