---
language:
- en
library_name: pytorch
tags:
- language-model
- gpt2
- transformer
- wikitext-103
model-index:
- name: gpt2_wt103-40m_12-layer
results:
- task:
type: language-modeling
dataset:
type: wikitext
name: Wikitext-103
metrics:
- type: perplexity
value: 40.3
---
# Model description

Paper: [Characterizing Verbatim Short-Term Memory in Neural Language Models](https://doi.org/10.48550/arXiv.2210.13569)

This is a GPT-2-small-like decoder-only transformer language model trained on a 40M-token subset of the [WikiText-103 dataset](https://paperswithcode.com/dataset/wikitext-103).
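A minimal loading sketch is shown below. The repository id `Kristijan/gpt2_wt103-40m_12-layer` is an assumption based on the model name above, and the sketch assumes the checkpoint is compatible with the Hugging Face `transformers` GPT-2 classes; adjust both as needed for the actual repository.

```python
# Sketch: load the checkpoint and greedily generate a short continuation.
# NOTE: the repo id below is assumed from the model name in this card.
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

repo_id = "Kristijan/gpt2_wt103-40m_12-layer"  # assumed repo id
tokenizer = GPT2TokenizerFast.from_pretrained(repo_id)
model = GPT2LMHeadModel.from_pretrained(repo_id)
model.eval()

inputs = tokenizer("The city of", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```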
# Intended uses

This checkpoint is intended for research use, for example by researchers studying the behavior of transformer language models trained on smaller datasets.
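For reference, the perplexity reported in the metadata above is the exponentiated mean token-level cross-entropy on held-out text. The sketch below illustrates that computation under the same assumptions as above (assumed repo id, `transformers`-compatible checkpoint); the sample sentence is a placeholder, not the evaluation set.

```python
# Sketch: perplexity as exp(mean token-level cross-entropy).
# Assumes the checkpoint loads through transformers (see the sketch above).
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

repo_id = "Kristijan/gpt2_wt103-40m_12-layer"  # assumed repo id
tokenizer = GPT2TokenizerFast.from_pretrained(repo_id)
model = GPT2LMHeadModel.from_pretrained(repo_id)
model.eval()

text = "Valkyria Chronicles III is a tactical role-playing game."  # placeholder
ids = tokenizer(text, return_tensors="pt").input_ids
with torch.no_grad():
    # Passing labels makes the model return the mean cross-entropy
    # over next-token predictions (shifting is handled internally).
    loss = model(ids, labels=ids).loss
print(f"perplexity: {torch.exp(loss).item():.2f}")
```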