Update README.md
README.md CHANGED
@@ -6,9 +6,10 @@ tags:
---

## Why should you use this and not the tiktoken included in the original model?
- 1.
- 2. Original tokenizer
- 3.
+ 1. This tokenizer is validated against https://huggingface.co/datasets/xn (all languages) to be encode/decode compatible with the dbrx-base tiktoken.
+ 2. The original tokenizer pads the vocabulary to the correct size with `<extra_N>` tokens, but the encoder never uses them.
+ 3. The original tokenizer uses eos as the pad token, which may lead trainers to mask out the eos token so the model never outputs eos.
+ 4. [NOT FIXED: INVESTIGATING] The embedding size in config.json ("vocab_size": 100352) does not match the tokenizer's 100277.

modified from original code @ https://huggingface.co/Xenova/dbrx-instruct-tokenizer
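For reference, a minimal sketch of the checks behind items 1-4 in the diff above. The repo id `<this-tokenizer-repo>` is a placeholder (not from this README), and comparing against tiktoken's `cl100k_base` encoding is an assumption about what the original dbrx tiktoken derives from; adapt both to your setup.

```python
# Illustrative sketch only. Assumptions: "<this-tokenizer-repo>" is a placeholder
# for this repo's id, and cl100k_base stands in for the original dbrx tiktoken.
import tiktoken
from transformers import AutoTokenizer

hf_tok = AutoTokenizer.from_pretrained("<this-tokenizer-repo>")  # placeholder repo id
tt_enc = tiktoken.get_encoding("cl100k_base")                     # assumed reference encoding

sample = "DBRX tokenizer round-trip: hello, 你好, مرحبا!"

# Item 1: encode/decode round-trip, and whether the ids line up with tiktoken.
hf_ids = hf_tok.encode(sample, add_special_tokens=False)
tt_ids = tt_enc.encode(sample)
print("round-trip ok:", hf_tok.decode(hf_ids) == sample)
print("ids match tiktoken:", hf_ids == tt_ids)

# Item 2: check whether <extra_N> filler tokens appear in this vocabulary.
print("has <extra_N> tokens:", any(t.startswith("<extra_") for t in hf_tok.get_vocab()))

# Item 3: pad should differ from eos so trainers do not mask eos out of the loss.
print("eos:", hf_tok.eos_token, "| pad:", hf_tok.pad_token)

# Item 4: tokenizer size to compare against config.json "vocab_size" (100352 vs 100277).
print("tokenizer size:", len(hf_tok))
```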