# ahxt/llama2_xs_460M_experimental-GGUF
Quantized GGUF model files for llama2_xs_460M_experimental from ahxt
| Name | Quant method | Size |
| --- | --- | --- |
| llama2_xs_460m_experimental.q2_k.gguf | q2_k | 212.56 MB |
| llama2_xs_460m_experimental.q3_k_m.gguf | q3_k_m | 238.87 MB |
| llama2_xs_460m_experimental.q4_k_m.gguf | q4_k_m | 288.51 MB |
| llama2_xs_460m_experimental.q5_k_m.gguf | q5_k_m | 333.29 MB |
| llama2_xs_460m_experimental.q6_k.gguf | q6_k | 380.87 MB |
| llama2_xs_460m_experimental.q8_0.gguf | q8_0 | 492.67 MB |
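These files can be downloaded from the Hub and run with llama.cpp or its bindings. A minimal sketch using the `llama-cpp-python` bindings, assuming this repo's id is `afrideva/llama2_xs_460M_experimental-GGUF`:

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama  # pip install llama-cpp-python

# Fetch one of the quantized files from the table above; q4_k_m is chosen
# here as a common middle ground between file size and quality.
model_file = hf_hub_download(
    repo_id="afrideva/llama2_xs_460M_experimental-GGUF",  # assumed repo id
    filename="llama2_xs_460m_experimental.q4_k_m.gguf",
)

llm = Llama(model_path=model_file)
out = llm("Q: What is the largest bird?\nA:", max_tokens=32)
print(out["choices"][0]["text"])
```

Any of the other files in the table works the same way; only the `filename` argument changes.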
## Original Model Card
# LLaMa Lite: Reduced-Scale, Experimental Versions of LLaMA and LLaMA 2
In this series of repos, we present an open-source reproduction of Meta AI's LLaMA and LLaMA 2 large language models at significantly reduced scale: the experimental version of llama1_s has 1.8B parameters, and the experimental version of llama2_xs has 460M parameters ('s' stands for small, while 'xs' denotes extra small).
## Dataset and Tokenization
We train our models on part of the RedPajama dataset and use the GPT2Tokenizer to tokenize the text.
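For illustration, the GPT-2 byte-pair encoding can be inspected directly. A minimal sketch, assuming the stock `gpt2` vocabulary (the model repos bundle their own tokenizer files, which `AutoTokenizer` in the next section picks up automatically):

```python
from transformers import GPT2Tokenizer

# Stock GPT-2 BPE tokenizer; assumed to match the vocabulary used in training.
tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

text = "Q: What is the largest bird?\nA:"
ids = tokenizer.encode(text)
print(ids)                                   # token ids fed to the model
print(tokenizer.convert_ids_to_tokens(ids))  # the underlying BPE pieces
```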
## Using with HuggingFace Transformers
The experimental checkpoints can be loaded directly with the Transformers library. The following code snippet shows how to load our experimental model and generate text with it.
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# model_path = 'ahxt/llama2_xs_460M_experimental'
model_path = 'ahxt/llama1_s_1.8B_experimental'

model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

prompt = 'Q: What is the largest bird?\nA:'
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
tokens = model.generate(input_ids, max_length=20)
print(tokenizer.decode(tokens[0].tolist(), skip_special_tokens=True))
# Q: What is the largest bird?\nA: The largest bird is the bald eagle.
```
## Evaluation
We evaluate our models on the MMLU task:
| Models | #parameters | zero-shot | 5-shot |
| --- | --- | --- | --- |
| llama | 7B | 28.46 | 35.05 |
| openllama | 3B | 24.90 | 26.71 |
| TinyLlama-1.1B-step-50K-105b | 1.1B | 19.00 | 26.53 |
| llama2_xs_460M | 0.46B | 21.13 | 26.39 |
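For context, zero-shot multiple-choice accuracy on benchmarks like MMLU is typically computed by scoring each answer choice's log-likelihood under the model and picking the argmax. The sketch below illustrates that procedure with Transformers; the question is a made-up example rather than a real MMLU item, and this is a simplified stand-in for a full evaluation harness.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_path = 'ahxt/llama2_xs_460M_experimental'
model = AutoModelForCausalLM.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
model.eval()

# Hypothetical multiple-choice item, standing in for a real MMLU question.
question = "What is the largest planet in the solar system?"
choices = ["Earth", "Jupiter", "Mars", "Venus"]

def choice_logprob(question, choice):
    """Summed log-probability of the answer tokens given the prompt."""
    prompt = f"Question: {question}\nAnswer:"
    prompt_len = tokenizer(prompt, return_tensors="pt").input_ids.shape[1]
    full_ids = tokenizer(prompt + " " + choice, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(full_ids).logits
    # Logits at position i predict token i+1, so drop the last position.
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
    targets = full_ids[0, prompt_len:]  # answer tokens only
    positions = range(prompt_len - 1, full_ids.shape[1] - 1)
    return sum(log_probs[p, t].item() for p, t in zip(positions, targets))

scores = [choice_logprob(question, c) for c in choices]
print(choices[scores.index(max(scores))])  # highest-likelihood choice wins
```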
## Contact
These experimental versions are developed by Xiaotian Han from Texas A&M University, and are for research only.
## Open LLM Leaderboard Evaluation Results
Detailed results can be found here.
| Metric | Value |
| --- | --- |
| Avg. | 26.65 |
| ARC (25-shot) | 24.91 |
| HellaSwag (10-shot) | 38.47 |
| MMLU (5-shot) | 26.17 |
| TruthfulQA (0-shot) | 41.59 |
| Winogrande (5-shot) | 49.88 |
| GSM8K (5-shot) | 0.0 |
| DROP (3-shot) | 5.51 |
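The reported average is the unweighted mean of the seven benchmark scores, which is easy to check:

```python
scores = [24.91, 38.47, 26.17, 41.59, 49.88, 0.0, 5.51]
print(round(sum(scores) / len(scores), 2))  # 26.65
```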