---
language: en
license: other
tags:
  - qwen
  - grpo
  - instruct
  - fine-tuned
  - reasoning
  - 3b
  - menda
  - chat
  - transformers
library_name: transformers
datasets:
  - gsm8k
model-index:
  - name: Menda-3B-500
    results:
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: arc-challenge
          name: ARC-Challenge
        metrics:
          - name: Accuracy
            type: accuracy
            value: 50.0
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: boolq
          name: BoolQ
        metrics:
          - name: Accuracy
            type: accuracy
            value: 90.0
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: hellaswag
          name: HellaSwag
        metrics:
          - name: Accuracy
            type: accuracy
            value: 40.0
      - task:
          type: text-generation
          name: Text Generation
        dataset:
          type: mmlu
          name: MMLU (Overall)
        metrics:
          - name: Accuracy
            type: accuracy
            value: 68.60
---

# Menda-3B-500: GRPO-Tuned Qwen2.5 Model

Menda-3B-500 is a fine-tuned version of Qwen2.5-3B-Instruct, trained with GRPO (Group Relative Policy Optimization) for 500 steps. This model shows improved performance on reasoning benchmarks compared to the base model.

## Model Details

- **Base Model**: Qwen/Qwen2.5-3B-Instruct
- **Training Method**: GRPO (Group Relative Policy Optimization)
- **Training Steps**: 500
- **Parameters**: 3 billion
- **Context Length**: 32K tokens
- **Training Data**: GSM8K (mathematical reasoning)
- **Chat Template**: Uses the Qwen2 chat template (a quick check follows this list)
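
Both the 32K context window and the chat template can be confirmed directly from the repository. The snippet below is a minimal sketch using only the standard Transformers config and tokenizer APIs:

```python
from transformers import AutoConfig, AutoTokenizer

model_name = "weathermanj/Menda-3B-500"

# The context length is recorded as max_position_embeddings in the config.
config = AutoConfig.from_pretrained(model_name)
print(config.max_position_embeddings)

# The tokenizer ships with the Qwen2 chat template used to format prompts.
tokenizer = AutoTokenizer.from_pretrained(model_name)
print(tokenizer.chat_template is not None)
```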

## Chat Format

This model uses the standard Qwen2 chat template. For best results when using the model directly, format your prompts as follows:

```
<|im_start|>system
You are a helpful AI assistant.<|im_end|>
<|im_start|>user
Your question here<|im_end|>
<|im_start|>assistant
```

When using the model through the Hugging Face Transformers library, the chat template is applied automatically via the tokenizer's `apply_chat_template` method:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3B-500"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Explain the concept of machine learning in simple terms."}
]

# add_generation_prompt=True appends the assistant turn header so the model
# generates a reply rather than continuing the user message
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

## Benchmark Results

Menda-3B-500 has been evaluated on several standard benchmarks (a reproduction sketch follows the tables):

| Benchmark | Task Type | Accuracy |
|-----------|-----------|----------|
| ARC-Challenge | Scientific Reasoning | 50.0% |
| BoolQ | Reading Comprehension | 90.0% |
| HellaSwag | Common Sense Reasoning | 40.0% |
| Lambada | Text Completion | 70.0% |
| PIQA | Physical Reasoning | 90.0% |
| Winogrande | Commonsense Reasoning | 90.0% |

### MMLU Performance

| MMLU Category | Score |
|---------------|-------|
| Overall | 68.60% |
| Humanities | 75.38% |
| Social Sciences | 75.83% |
| STEM | 60.00% |
| Other | 67.69% |
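
The card does not state which harness produced these numbers, but results like these are commonly gathered with EleutherAI's `lm-evaluation-harness`. A minimal reproduction sketch, assuming that harness; the task names and settings are assumptions, not a record of the actual run:

```bash
pip install lm-eval

lm_eval --model hf \
  --model_args pretrained=weathermanj/Menda-3B-500,dtype=auto \
  --tasks arc_challenge,boolq,hellaswag,lambada_openai,piqa,winogrande,mmlu \
  --batch_size 8
```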

## Key Strengths

- **Balanced Performance**: Maintains strong performance across diverse tasks with minimal trade-offs.
- **Improved BoolQ**: Achieves 90% on BoolQ, showing excellent reading comprehension capabilities.
- **Strong Reasoning**: Maintains 90% on both PIQA and Winogrande, demonstrating robust reasoning abilities.
- **Efficient Training**: Reaches these results with relatively little training (500 steps).
- **Stable Knowledge**: Maintains strong MMLU performance (68.60%) across diverse knowledge domains.

## Usage Examples

### Basic Usage with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3B-500"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

prompt = "Explain the concept of machine learning in simple terms."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_length=300)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```

### Chat Usage with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "weathermanj/Menda-3B-500"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

messages = [
    {"role": "system", "content": "You are a helpful AI assistant."},
    {"role": "user", "content": "Give me a short introduction to large language models."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512
)
generated_ids = [
    output_ids[len(input_ids):] for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```

### Using with Ollama

You can also use this model with Ollama by converting it to GGUF format:

```bash
# Download the model, then convert it to GGUF with llama.cpp's conversion
# script (assumes a local clone of https://github.com/ggerganov/llama.cpp)
huggingface-cli download weathermanj/Menda-3B-500 --local-dir Menda-3B-500
python llama.cpp/convert_hf_to_gguf.py Menda-3B-500 --outfile menda-3b-500.gguf

# Create an Ollama model; the template mirrors the Qwen2 chat format
cat > Modelfile << 'EOF'
FROM menda-3b-500.gguf
TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""
PARAMETER temperature 0.7
PARAMETER top_p 0.9
PARAMETER top_k 40
PARAMETER stop <|im_end|>
EOF

ollama create menda-3b-500 -f Modelfile
ollama run menda-3b-500
```

## Training Configuration

The model was trained using the GRPO methodology with the following configuration (an illustrative sketch follows the list):

- **LoRA Rank**: 128
- **Learning Rate**: 5e-6
- **Optimizer**: AdamW (8-bit)
- **Batch Size**: 8 per device
- **Gradient Accumulation Steps**: 4
- **Training Samples**: 100 examples from GSM8K
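
For illustration only, the sketch below shows how a comparable run could be wired up with TRL's `GRPOTrainer` and PEFT. The hyperparameters come from the list above; the reward function, dataset mapping, and remaining arguments are assumptions, not the actual training script:

```python
from datasets import load_dataset
from peft import LoraConfig
from trl import GRPOConfig, GRPOTrainer

# 100 GSM8K examples, as described above; GRPOTrainer expects a "prompt" column.
dataset = load_dataset("gsm8k", "main", split="train[:100]")
dataset = dataset.map(lambda ex: {"prompt": ex["question"]})

def reward_has_final_answer(completions, **kwargs):
    # Placeholder reward: a real run would score completions against the
    # GSM8K reference answers rather than just checking the answer format.
    return [1.0 if "####" in c else 0.0 for c in completions]

training_args = GRPOConfig(
    output_dir="menda-3b-500",
    learning_rate=5e-6,
    per_device_train_batch_size=8,
    gradient_accumulation_steps=4,
    max_steps=500,
    optim="adamw_bnb_8bit",  # 8-bit AdamW, matching the list above
)

trainer = GRPOTrainer(
    model="Qwen/Qwen2.5-3B-Instruct",
    args=training_args,
    train_dataset=dataset,
    reward_funcs=reward_has_final_answer,
    peft_config=LoraConfig(r=128, lora_alpha=128, task_type="CAUSAL_LM"),
)
trainer.train()
```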

## License

This model inherits the license of the base Qwen2.5-3B-Instruct model. Please refer to the [Qwen2.5-3B-Instruct license](https://huggingface.co/Qwen/Qwen2.5-3B-Instruct/blob/main/LICENSE) for details.