This is a checkpoint of the 1.3B GLA model used in the paper Gated Linear Attention. The model was trained on 100B tokens from the SlimPajama dataset, tokenized with the Llama-2 tokenizer.
The model definition and loading script are provided in this repo; a loading sketch is shown below.
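A minimal loading sketch, assuming the checkpoint is published under the hypothetical repo ID `fla-hub/gla-1.3B-100B` and that importing the `fla` package (from the flash-linear-attention repo) registers the GLA architecture with Hugging Face `transformers`; adjust the repo ID to match this repo's actual path.

```python
# Loading sketch: assumes `fla` registers the GLA model classes on import
# and that "fla-hub/gla-1.3B-100B" is the correct repo ID (an assumption).
import fla  # noqa: F401  (side-effect import: registers GLA with transformers)
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "fla-hub/gla-1.3B-100B"  # hypothetical repo ID
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16)
model = model.cuda().eval()

# Generate a short continuation to sanity-check the checkpoint.
inputs = tokenizer("Gated linear attention is", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```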