
Model Card

hazyresearch/attn-360M-30B is a 360M-parameter attention model (Llama architecture) pretrained on 30B tokens of the Pile corpus.

Model Sources

The model implementation and the training code that produced this model are available at: https://github.com/HazyResearch/based
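For readers who want to try the checkpoint, a minimal loading sketch using the Hugging Face `transformers` API is shown below. This is an assumption on my part: it presumes the checkpoint can be loaded with `AutoModelForCausalLM`; the based repository linked above may ship its own loading utilities instead.

```python
def load_model(name: str = "hazyresearch/attn-360M-30B"):
    """Download the tokenizer and weights from the Hugging Face Hub.

    Requires the `transformers` package and network access; imported
    lazily so the function can be defined without the dependency.
    Loading via AutoModelForCausalLM is an assumption, not confirmed
    by the model card.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name)
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_model()
    inputs = tokenizer("The Pile is", return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```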


Dataset used to train hazyresearch/attn-360M-30B: the Pile