---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
language: en
datasets:
- Word2Li/MiddOptimized
tags:
- llama-factory
- full
pipeline_tag: text-generation
model-index:
- name: Mistral-7B-v0.3-Middo-WizardLM
  results:
    - task:
        type: text-generation
      dataset:
        name: MMLU
        type: MMLU
      metrics:
        - name: weighted accuracy
          type: weighted accuracy
          value: 43.26
          verified: true
    - task:
        type: text-generation
      dataset:
        name: IFEval
        type: IFEval
      metrics:
        - name: overall accuracy
          type: overall accuracy
          value: 49.80
          verified: true
    - task:
        type: text-generation
      dataset:
        name: GSM8K
        type: GSM8K
      metrics:
        - name: accuracy
          type: accuracy
          value: 41.09
          verified: true
    - task:
        type: text-generation
      dataset:
        name: MATH
        type: MATH
      metrics:
        - name: accuracy
          type: accuracy
          value: 10.02
          verified: true
    - task:
        type: text-generation
      dataset:
        name: HumanEval
        type: HumanEval
      metrics:
        - name: humaneval_pass@1
          type: humaneval_pass@1
          value: 41.46
          verified: true
    - task:
        type: text-generation
      dataset:
        name: MBPP
        type: MBPP
      metrics:
        - name: score
          type: score
          value: 34.60
          verified: true
    - task:
        type: text-generation
      dataset:
        name: Hellaswag
        type: Hellaswag
      metrics:
        - name: accuracy
          type: accuracy
          value: 66.02
          verified: true
    - task:
        type: text-generation
      dataset:
        name: GPQA
        type: GPQA
      metrics:
        - name: accuracy
          type: accuracy
          value: 22.22
          verified: true
metrics:
- accuracy
---

# Mistral-7B-v0.3-Middo-WizardLM

Paper: [Middo: Model-Informed Dynamic Data Optimization for Enhanced LLM Fine-Tuning via Closed-Loop Learning](https://arxiv.org/abs/2508.21589)

Code: https://github.com/Word2VecT/Middo

## Model description

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the [mistral_wizard](https://huggingface.co/datasets/Word2Li/MiddOptimized/viewer/default/mistral_wizard) subset of the [Word2Li/MiddOptimized](https://huggingface.co/datasets/Word2Li/MiddOptimized) dataset.
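
A minimal generation sketch with `transformers` follows. The repo id is assumed from this card's title (the card does not state it), and the plain-instruction prompt style mirrors the WizardLM-derived training data; adjust both as needed.

```python
# Minimal inference sketch; `model_id` is assumed from this card's title
# and may need to be adjusted to the actual repository id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Word2Li/Mistral-7B-v0.3-Middo-WizardLM"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # halves memory vs. fp32 on supported GPUs
    device_map="auto",
)

prompt = "Give three practical uses of binary search."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```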

## Training and evaluation data

### Training data

The training set was produced by applying Middo's closed-loop data optimization to [WizardLMTeam/WizardLM_evol_instruct_70k](https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k), with [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) as the model informing the optimization.

### Evaluation data

- General
  - MMLU
  - IFEval
- Math
  - GSM8K
  - MATH
- Code
  - HumanEval
  - MBPP
- Reasoning
  - Hellaswag
  - GPQA
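
The card does not state which harness produced the scores in the metadata above. As one hedged example, EleutherAI's `lm-evaluation-harness` covers several of these benchmarks; the task names and repo id below are assumptions, and its setups may differ from those behind the reported numbers.

```python
# Hedged evaluation sketch using EleutherAI's lm-evaluation-harness
# (pip install lm-eval). Task names follow the harness's registry and
# may not match the exact configurations used for this card's numbers.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=Word2Li/Mistral-7B-v0.3-Middo-WizardLM,dtype=bfloat16",  # assumed repo id
    tasks=["mmlu", "gsm8k", "hellaswag"],
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```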

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
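
For reference, these settings map onto Hugging Face `TrainingArguments` roughly as sketched below. The actual run was launched via LLaMA-Factory, so this is an illustration of the equivalent configuration, not the exact invocation; `output_dir` is a placeholder.

```python
# Approximate TrainingArguments equivalent of the hyperparameters above.
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="mistral-7b-v0.3-middo-wizardlm",  # placeholder path
    learning_rate=1e-5,
    per_device_train_batch_size=4,   # x 8 GPUs x 8 accumulation steps = 256 total
    per_device_eval_batch_size=8,    # x 8 GPUs = 64 total
    gradient_accumulation_steps=8,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,
    num_train_epochs=1.0,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    seed=42,
)
```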

### Training results

- epoch: 1.0
- total_flos: 4.871785990877872e+18
- train_loss: 0.6260631282554998
- train_runtime: 6928.3413
- train_samples_per_second: 12.871
- train_steps_per_second: 0.05

### Framework versions

- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1