This is a tiny random Llama model derived from "meta-llama/Llama-2-7b-hf".

See make_tiny_model.py for how this was done.
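
The general recipe such a script follows can be sketched as below. This is a hedged illustration, not the actual make_tiny_model.py: all dimensions are made-up assumptions, with only the ~3k vocabulary size taken from the description above.

```python
# Hedged sketch: build a Llama config with tiny dimensions and instantiate
# randomly initialized weights. The dimensions here are illustrative only.
from transformers import LlamaConfig, LlamaForCausalLM

config = LlamaConfig(
    hidden_size=16,            # assumption: tiny hidden size
    intermediate_size=32,      # assumption
    num_hidden_layers=2,       # assumption
    num_attention_heads=2,     # assumption
    num_key_value_heads=2,     # assumption
    vocab_size=3000,           # matches the shrunk ~3k-item tokenizer
)

tiny_model = LlamaForCausalLM(config)   # random init, never trained
tiny_model.save_pretrained("tiny-random-llama-2")
```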

This model is useful for functional testing, not quality generation: its weights are random and its tokenizer has been shrunk to 3k items.
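
For example, a functional test can load and exercise it like any other causal LM (expect gibberish output, since the weights are random). This assumes `transformers` and `torch` are installed:

```python
# Minimal smoke test: load the tiny model and run a short generation.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stas/tiny-random-llama-2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

inputs = tokenizer("Hello, world", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```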

Model size: 104k params · Tensor type: BF16 (Safetensors)