---
tags:
    - discord
    - chatml
    - conversation
    - dialogue
    - multi-turn
    - single-turn
    - fine-tuning
    - reward-model
    - llm-training
    - chat-dataset
    - open-source
    - anonymized-data
    - casual-dialogue
license: apache-2.0
language:
    - en
pretty_name: Discord-Dialogues
size_categories:
    - 1M<n<10M
---

<p align="center">
  <img src="assets/Discord-Dialogues.png" alt="Discord-Dialogues">
</p>

> **Discord-Dialogues** is a large-scale dataset of anonymized Discord conversations from late spring to early fall 2025 for training and evaluating realistic conversational AI models in a ChatML-friendly format.

This dataset contains 7.3 million exchanges across 16.9 million turns, with more than 139 million words.

---

<p align="center">
  <a href="https://atlas.nomic.ai/data/mookiezi/discord-alpha/map">
    <img src="assets/discord-alpha.png" alt="discord-alpha">
  </a>
</p>

<p align="center">
  <a href="https://atlas.nomic.ai/data/mookiezi/discord-alpha/map"><strong>Nomic Atlas Map</strong></a>
</p>

---

## Features

-   Mixed single and multi-turn exchanges
-   Human-only dialogues (no bots)
-   Filtered for ToS and harmful content
-   Links, embeds, and commands removed
-   Trading posts, code blocks, and LFG removed
-   Two-author chains only
-   Self-replies from the same author merged into a single message
-   Cleaned and deduplicated for relevance
-   Primarily English, with some other languages present
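
The self-reply merging mentioned above can be sketched as follows; this is an illustrative snippet, not the toolkit's actual code, and the message structure (`author`/`text` dicts) is a hypothetical stand-in for the dataset's schema:

```python
from itertools import groupby


def merge_self_replies(messages):
    """Collapse consecutive messages from the same author into one message.

    `messages` is a list of {"author": ..., "text": ...} dicts; newline-joining
    is one reasonable merge policy, assumed here for illustration.
    """
    merged = []
    for author, run in groupby(messages, key=lambda m: m["author"]):
        merged.append({"author": author, "text": "\n".join(m["text"] for m in run)})
    return merged


msgs = [
    {"author": "a", "text": "hey"},
    {"author": "a", "text": "you around?"},
    {"author": "b", "text": "yeah"},
]
merged = merge_self_replies(msgs)  # a's two lines become one message
```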

---

## Use

-   Fine-tuning conversational models
-   Training relevance/reward models
-   Dialogue generation research

Use case examples:

-   [mookiezi/Discord-Micae-8B-Preview](https://huggingface.co/mookiezi/Discord-Micae-8B-Preview) — experimental larger model
-   [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B) — stable smaller model
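
Since the dataset is ChatML-friendly, one way to prepare an exchange for fine-tuning is to wrap alternating turns in ChatML tags. A minimal sketch, assuming user-first alternation; the `to_chatml` helper is illustrative and not part of the dataset or its tooling:

```python
def to_chatml(turns):
    """Render alternating user/assistant turns as a ChatML string.

    ChatML frames each message as <|im_start|>{role}\n{text}<|im_end|>.
    User-first alternation is an assumption for this sketch.
    """
    roles = ["user", "assistant"]
    parts = []
    for i, text in enumerate(turns):
        parts.append(f"<|im_start|>{roles[i % 2]}\n{text}<|im_end|>")
    return "\n".join(parts)


sample = to_chatml(["hey, you up?", "yeah, what's going on"])
```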

---

## Filtering Pipeline

This dataset was constructed with a custom multi-stage filtering toolkit:

1. **SQL filters** (`filter.sql`)  
   Postgres regex/text filters for PII, bot/command patterns, links, embeds, and automation noise.

2. **Smart cleaner** (`smartclean.py`)  
   Multi-stage process: normalizes text, replaces slang, resamples by length, and enforces structural validation.
   Filters out structural noise such as code blocks, trading posts, and LFG.

3. **Dedupe** (`dedupe.py`)  
   Deduplicates conversations by hashing message chains.
   Keeps only unique rows, preferring the longest final assistant message when duplicates occur.

4. **ToS risk filter** (`tos.py`)  
   Drops or redacts unsafe categories (sexual violence, CSA, slurs, harassment, doxxing, self-harm, extremism) and PII.
   Uses fuzzy, leetspeak-, and diacritic-aware regexes.
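
The dedupe step can be sketched roughly as below. This assumes each exchange is a list of turn strings and that duplicates are detected by hashing the chain up to the final assistant message; the exact keying used by `dedupe.py` is an assumption here:

```python
import hashlib


def dedupe(exchanges):
    """Keep one exchange per chain hash, preferring the variant with the
    longest final message. Sketch only; mirrors the step described above.
    """
    best = {}
    for turns in exchanges:
        # Hash everything except the final message so alternative endings
        # of the same chain collide on the same key.
        key = hashlib.sha256("\x1f".join(turns[:-1]).encode()).hexdigest()
        if key not in best or len(turns[-1]) > len(best[key][-1]):
            best[key] = turns
    return list(best.values())
```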

The full filtering scripts are open source at the [dataset cleaning toolkit GitHub repository](https://github.com/mookiezi/dataset-cleaning-toolkit).

---

## Dataset Pipeline

The full end-to-end pipeline is documented in the [dataset-pipeline GitHub repository](https://github.com/mookiezi/dataset-pipeline).

---

## Uncensored Edition

A larger uncensored archive (10,011,633 dialogues, 215M words) is mirrored on [Internet Archive](https://archive.org/details/discord-dialogues-uncensored).

---

## Collection Policy

-   All data was collected in accordance with Discord's [Terms of Service](https://discord.com/terms) and [Community Guidelines](https://discord.com/guidelines).

---

## Dataset Statistics <span style="font-weight:normal;">(using the [NousResearch/Hermes-3-Llama-3.1-8B tokenizer](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B))</span>

<div style="display:flex; gap:20px; align-items:flex-start;">

<div>

| Metric                 |         Value |
| ---------------------- | ------------: |
| Samples (count)        |     7,303,464 |
| Total turns            |    16,881,010 |
| Total assistant turns  |     9,016,287 |
| Min length (tokens)    |            10 |
| Max length (tokens)    |         2,542 |
| Mean length (tokens)   |         32.79 |
| Median length (tokens) |            28 |
| Std dev (tokens)       |         16.56 |
| Skew                   |          6.04 |
| Kurtosis               |        326.54 |
| Total tokens           |   239,458,213 |
| Total characters       | 1,242,238,794 |
| Total words            |   139,922,950 |
| Avg chars per sample   |        170.09 |
| Avg words per sample   |         19.16 |
| Avg chars per word     |          8.88 |
| Tokens per char        |          0.19 |

</div>

<div>

| Tokens    |     Count |
| --------- | --------: |
| 8–16      |   107,264 |
| 16–32     | 4,278,713 |
| 32–64     | 2,566,176 |
| 64–128    |   334,829 |
| 128–256   |    15,920 |
| 256–384   |       363 |
| 384–512   |        71 |
| 512–768   |        78 |
| 768–1024  |        30 |
| 1024–2048 |        17 |
| 2048–4096 |         3 |

</div>

<div>

| Turns |     Count |
| ----- | --------: |
| 2     | 5,795,019 |
| 3     | 1,038,500 |
| 4     |   304,442 |
| 5     |    96,758 |
| 6     |    38,620 |
| 7     |    15,714 |
| 8     |     7,108 |
| 9     |     3,391 |
| 10    |     1,709 |
| 11    |       909 |
| 12    |       526 |
| 13    |       291 |
| 14    |       163 |
| 15    |       113 |
| 16    |        58 |
| 17    |        57 |
| 18    |        28 |
| 19    |        20 |
| 20    |         7 |
| 21    |        10 |
| 22    |        10 |
| 23    |         2 |
| 24    |         1 |
| 25    |         2 |
| 27    |         2 |
| 29    |         1 |
| 32    |         1 |
| 33    |         2 |

</div>

</div>

---

## Disclaimer

Although filtering removed the vast majority of the raw exchanges (roughly 7.5% of the full data dump remains), this dataset is still intended as a large-scale dump. For best training results, further curation targeting high-signal data relevant to your goals is recommended.

In addition to the raw text, the dataset includes supporting columns such as characters, words, tokens, and turns. These provide length statistics and turn counts per exchange, which can be used for sampling, weighting, or filtering strategies.
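
As a hedged sketch of using those columns for length- and turn-based filtering (the column names `tokens` and `turns` come from the card; the thresholds are arbitrary examples):

```python
def keep(row):
    """Keep mid-length, genuinely multi-turn exchanges.

    Thresholds (16-256 tokens, 3+ turns) are illustrative, not recommended
    defaults; tune them against the distribution tables above.
    """
    return 16 <= row["tokens"] <= 256 and row["turns"] >= 3


rows = [
    {"tokens": 12, "turns": 2},
    {"tokens": 40, "turns": 3},
    {"tokens": 300, "turns": 5},
]
kept = [r for r in rows if keep(r)]
```

With Hugging Face `datasets`, the same predicate could be passed to `Dataset.filter`.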

---

## License

This project is licensed under the Apache License 2.0.

---

## How to cite

```bibtex
@misc{discord-dialogues-2025,
  title = {Discord-Dialogues},
  author = {mookiezi},
  year = {2025},
  url={https://huggingface.co/datasets/mookiezi/Discord-Dialogues}
}
```

---

## Related

-   [https://archive.org/details/discord-dialogues-uncensored](https://archive.org/details/discord-dialogues-uncensored)
-   [mookiezii/Discord-Hermes-3-8B](https://huggingface.co/mookiezii/Discord-Hermes-3-8B)
-   [mookiezi/Discord-Micae-Hermes-3-3B](https://huggingface.co/mookiezi/Discord-Micae-Hermes-3-3B)
-   [mookiezi/Discord-OpenMicae](https://huggingface.co/datasets/mookiezi/Discord-OpenMicae)
-   [NousResearch/Hermes-3-Llama-3.1-8B](https://huggingface.co/NousResearch/Hermes-3-Llama-3.1-8B)
