
Model Card

This model is a standard attention (Llama-architecture) model pretrained on 30B tokens of the Pile corpus.

Model Sources

The model implementation and the training code that produced this model are available at: https://github.com/HazyResearch/based
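A minimal sketch of loading the checkpoint, assuming it can be pulled from the Hugging Face Hub via `transformers` (the training repository above may instead require its own custom loading code, so this is an assumption, not the authors' documented method):

```python
# Hedged sketch: load hazyresearch/attn-360M-30B via transformers.
# Assumption: the checkpoint is compatible with AutoModelForCausalLM;
# the `based` repository may provide its own loading path instead.

MODEL_ID = "hazyresearch/attn-360M-30B"

def load_model(model_id: str = MODEL_ID):
    # Imported lazily so the sketch can be read without transformers installed.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)
    return tokenizer, model

if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("The Pile is", return_tensors="pt")
    print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0]))
```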

