---
language: cs
license: cc-by-nc-sa-4.0
datasets:
  - csTenTen17
---

# CzeGPT-2

CzeGPT-2 is a Czech version of the GPT-2 language model by OpenAI with an LM head on top. The model has the same architectural dimensions as GPT-2 small (12 layers, 12 heads, a 1024-token context window, and 768-dimensional embedding vectors), resulting in 124 M trainable parameters. It was trained on a 5 GB slice of the cleaned csTenTen17 dataset.

The model is a good building block for any downstream task requiring autoregressive text generation.

## Tokenizer

Alongside the model, we also provide the tokenizer (vocab and merges) with a vocabulary size of 50,257 that was used during the pre-training phase. It is the byte-level BPE tokenizer used in the original GPT-2 paper, trained on the whole 5 GB training set.
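Because the tokenizer is byte-level, Czech text never produces out-of-vocabulary tokens: accented letters are simply multi-byte UTF-8 sequences that the learned merges can fold back into single tokens. A quick plain-Python illustration of the byte-level view the tokenizer starts from:

```python
# Byte-level BPE (as in GPT-2 and CzeGPT-2) operates on UTF-8 bytes,
# not Unicode characters, so every input is representable.
text = "čeština"  # 7 characters

raw_bytes = text.encode("utf-8")
# The diacritics č and š each occupy 2 bytes, so 7 characters become 9 bytes.
print(len(text), len(raw_bytes))  # 7 9
```

The learned merges then recombine frequent byte sequences (including the split diacritics) into larger subword tokens.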

## Training results

The model's perplexity on a 250 MB random slice of the csTenTen17 dataset is 42.12. This value is unfortunately not directly comparable to any other model, since there are no competing Czech autoregressive models yet (and comparison with models for other languages is meaningless because of different tokenization and test data).
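Perplexity is the exponential of the average negative log-likelihood per token, which is why it depends directly on the tokenization: different tokenizers split the same text into different numbers of tokens, making cross-language comparisons meaningless. A minimal sketch of the computation, with made-up per-token log-probabilities:

```python
import math

def perplexity(token_log_probs):
    """Perplexity = exp of the average negative log-likelihood per token."""
    nll = -sum(token_log_probs) / len(token_log_probs)
    return math.exp(nll)

# Toy example: the model assigns each of four tokens probability 0.25
# (natural-log probabilities). These numbers are illustrative only.
log_probs = [math.log(0.25)] * 4
print(perplexity(log_probs))  # 4.0 — equivalent to a uniform choice among 4 tokens
```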

## Running the predictions

The repository includes a simple Jupyter Notebook that can help with the first steps when using the model.
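For a quick start without the notebook, generation follows the standard Hugging Face `transformers` pattern for GPT-2-style models. This is a sketch only: the repository id below is a placeholder, not a real hub id — substitute the actual id of this model page.

```python
# Sketch of autoregressive generation with CzeGPT-2 via transformers.
# "czegpt-2-repo-id" is a placeholder; replace it with this model's hub id.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

model_id = "czegpt-2-repo-id"  # placeholder (assumption, not the real id)
tokenizer = GPT2TokenizerFast.from_pretrained(model_id)
model = GPT2LMHeadModel.from_pretrained(model_id)

inputs = tokenizer("Praha je", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Sampling parameters such as `top_k`, `top_p`, and `temperature` can be tuned to trade off fluency against diversity.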

## How to cite

```bibtex
@article{hajek_horak2024,
  author  = "Adam Hájek and Aleš Horák",
  title   = "CzeGPT-2 -- Training New Model for Czech Generative Text Processing Evaluated with the Summarization Task",
  journal = "IEEE Access",
  year    = "2024",
  volume  = "12",
  pages   = "34570--34581",
  doi     = "10.1109/ACCESS.2024.3371689",
}
```