---
language: ja
license: cc-by-sa-4.0
library_name: transformers
tags:
- gpt2
datasets:
- wikipedia
- cc100
- oscar
widget:
- text: "<s>昨日私は京都で"
---

# Model Card for Japanese character-level GPT-2 Large

## Model description

This is a Japanese character-level GPT-2 Large (717M parameters) language model pre-trained on Japanese Wikipedia, the Japanese portion of CC-100, and the Japanese portion of OSCAR.

## How to use

You can use this model directly with a pipeline for text generation.

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='ku-nlp/gpt2-large-japanese-char')
>>> set_seed(5)
>>> generator("<s>昨日私は京都で", max_length=30, do_sample=True, num_return_sequences=5)

[{'generated_text': '<s>昨日私は京都で仕事だったのですが、帰りは車を信号で止めて、'},
 {'generated_text': '<s>昨日私は京都で開かれた大阪市都市戦略会議に出席しました。そ'},
 {'generated_text': '<s>昨日私は京都で行われました関西の教育者・学校事例が集まるイ'},
 {'generated_text': '<s>昨日私は京都では初雪を見ました。朝は少しパッとしない天気で'},
 {'generated_text': '<s>昨日私は京都でこみっくトレジャーさんの撮影を見学させていた'}]
```

You can also use this model to get the features of a given text.
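
If you want contextual features (hidden states) rather than generated text, a minimal sketch is shown below. It assumes this checkpoint loads with the generic `AutoTokenizer`/`AutoModel` classes, as other GPT-2 checkpoints on the Hub do:

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/gpt2-large-japanese-char')
model = AutoModel.from_pretrained('ku-nlp/gpt2-large-japanese-char')

inputs = tokenizer("<s>昨日私は京都で", return_tensors='pt')
with torch.no_grad():
    outputs = model(**inputs)

# One feature vector per (character-level) token from the last hidden layer.
features = outputs.last_hidden_state  # shape: (1, sequence_length, hidden_size)
```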

## Vocabulary

A character-level vocabulary of size 6K is used. To be precise, rare characters may be split into bytes because byte-level byte-pair encoding (BPE) is used. The BPE tokenizer was trained on a small subset of the training data. Since the data were converted into a one-character-per-line format, merge operations never cross character boundaries.

Note that the tokenizer maps U+0020 to `[UNK]` because preprocessing removed whitespace characters (U+0020) from the training data. Use U+3000 (Ideographic Space) instead.
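
A quick way to check this behavior, assuming the tokenizer loads via `AutoTokenizer` (the exact token IDs are not guaranteed, only the `[UNK]` behavior described above):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('ku-nlp/gpt2-large-japanese-char')

# Text is split (roughly) one character per token, so the number of input IDs
# is close to the number of characters in the prompt.
ids = tokenizer("<s>昨日私は京都で")['input_ids']
print(len(ids))

# An ASCII space (U+0020) has no vocabulary entry and should map to [UNK];
# an ideographic space (U+3000) is an ordinary token.
with_ascii_space = tokenizer("昨日 今日")['input_ids']
with_ideographic_space = tokenizer("昨日\u3000今日")['input_ids']
print(tokenizer.unk_token_id in with_ascii_space)        # expected: True
print(tokenizer.unk_token_id in with_ideographic_space)  # expected: False
```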

## Training data

We used the following corpora for pre-training:

- Japanese Wikipedia (as of 20221020, 3.2GB, 27M sentences, 1.3M documents)
- Japanese portion of CC-100 (85GB, 619M sentences, 66M documents)
- Japanese portion of OSCAR (54GB, 326M sentences, 25M documents)

Note that we filtered out documents annotated with "header", "footer", or "noisy" tags in OSCAR.
Also note that Japanese Wikipedia was duplicated 10 times to make the total size of the corpus comparable to that of CC-100 and OSCAR. As a result, the total size of the training data is 171GB (10 × 3.2GB + 85GB + 54GB).

## Training procedure

Training took about 8 months (with 7 interruptions) on a single NVIDIA A100 80GB GPU.

The following hyperparameters were used during pre-training (a rough `TrainingArguments` sketch is given after the list):

- learning_rate: 2e-4
- per_device_train_batch_size: 6
- gradient_accumulation_steps: 98
- optimizer: AdamW with betas=(0.9, 0.999) and epsilon=1e-06
- weight_decay: 0.01
- lr_scheduler_type: linear
- max_grad_norm: 1.0
- max_steps: 500,000 (but training was stopped at 186,000 steps ~= 2.0 epochs)
- warmup_steps: 10,000
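
As a rough illustration only (not the authors' actual training script), these settings map onto Hugging Face `TrainingArguments` roughly as follows; the output directory is a placeholder:

```python
from transformers import TrainingArguments

# Illustrative sketch of the reported hyperparameters; not the original training code.
training_args = TrainingArguments(
    output_dir='gpt2-large-japanese-char',  # placeholder path
    learning_rate=2e-4,
    per_device_train_batch_size=6,
    gradient_accumulation_steps=98,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-6,
    weight_decay=0.01,
    lr_scheduler_type='linear',
    max_grad_norm=1.0,
    max_steps=500_000,  # training was stopped at 186,000 steps (~2.0 epochs)
    warmup_steps=10_000,
)
```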

The eval loss was 1.309 and the eval accuracy was 0.6890. The evaluation set consists of 5,000 documents randomly sampled from each of the training corpora.