# Zephyr-7B-Beta (Fine-tuned for Subnet 20 BitAgent)
This is a fine-tuned version of Mistral-7B adapted for function-calling and reasoning tasks in Bittensor Subnet 20 (BitAgent).
## Use Case
- Works as an agent LLM inside the BitAgent subnet.
- Supports reasoning and function-calling outputs.
- Optimized for task delegation and structured outputs.
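Function-calling agents generally work by passing the model a machine-readable description of the tools it may invoke. A minimal sketch of such a tool schema, using the common OpenAI-style convention (the exact schema BitAgent expects is an assumption here, and `get_weather` is a hypothetical tool):

```python
import json

# Hypothetical tool schema in the common OpenAI-style function-calling format;
# BitAgent's actual schema may differ.
get_weather = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Return the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}

# Embed the serialized schema in the system prompt so the model knows
# which tools it can call and how to call them.
system_prompt = (
    "You can call the following tools. Respond with a JSON function call "
    "when a tool is needed:\n" + json.dumps([get_weather], indent=2)
)
```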
## How to Use
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and its tokenizer from the Hub
model = AutoModelForCausalLM.from_pretrained("username/zephyr-7b-beta-bitagent")
tokenizer = AutoTokenizer.from_pretrained("username/zephyr-7b-beta-bitagent")

# Build a prompt, generate, and decode the response
prompt = "Summarize the latest research in AI safety in 3 bullet points."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
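The decoded text contains the prompt plus whatever the model generated, so extracting a structured function call means parsing JSON out of it. A minimal sketch, assuming the model emits a single JSON object with `name` and `arguments` keys (the real output format of this fine-tune is not documented here, and `extract_function_call` is a helper introduced for illustration):

```python
import json
import re

def extract_function_call(text: str):
    """Pull the first JSON object out of generated text and parse it.

    Assumes a {"name": ..., "arguments": {...}} convention; returns None
    when no parseable call is found.
    """
    match = re.search(r"\{.*\}", text, re.DOTALL)
    if match is None:
        return None
    try:
        call = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    # Only accept objects that look like a function call
    if "name" in call and "arguments" in call:
        return call
    return None

# Example: a generation that contains a structured call
generated = 'Calling the tool: {"name": "get_weather", "arguments": {"city": "Paris"}}'
call = extract_function_call(generated)
print(call["name"], call["arguments"])  # get_weather {'city': 'Paris'}
```

Validating before dispatch (rather than `eval`-ing model output) keeps malformed or unexpected generations from crashing the agent loop.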
## Model tree for lia21/bitagent
- Base model: mistralai/Mistral-7B-v0.1
## Evaluation results

All scores are self-reported.

| Benchmark | Setting | Split | Metric | Score |
|---|---|---|---|---|
| AI2 Reasoning Challenge | 25-shot | test | normalized accuracy | 62.03 |
| HellaSwag | 10-shot | validation | normalized accuracy | 84.36 |
| DROP | 3-shot | validation | F1 | 9.66 |
| TruthfulQA | 0-shot | validation | MC2 | 57.45 |
| GSM8K | 5-shot | test | accuracy | 12.74 |
| MMLU | 5-shot | test | accuracy | 61.07 |
| Winogrande | 5-shot | validation | accuracy | 77.74 |
| AlpacaEval | – | – | win rate | 0.906 |
| MT-Bench | – | – | score | 7.34 |