---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-v0.3
language:
- en
datasets:
- Word2Li/MiddOptimized
tags:
- llama-factory
- full
pipeline_tag: text-generation
model-index:
- name: Mistral-7B-v0.3-Middo-WizardLM
  results:
  - task:
      type: text-generation
    dataset:
      name: MMLU
      type: MMLU
    metrics:
    - name: weighted accuracy
      type: weighted accuracy
      value: 43.26
      verified: true
  - task:
      type: text-generation
    dataset:
      name: IFEval
      type: IFEval
    metrics:
    - name: overall accuracy
      type: overall accuracy
      value: 49.8
      verified: true
  - task:
      type: text-generation
    dataset:
      name: GSM8K
      type: GSM8K
    metrics:
    - name: accuracy
      type: accuracy
      value: 41.09
      verified: true
  - task:
      type: text-generation
    dataset:
      name: MATH
      type: MATH
    metrics:
    - name: accuracy
      type: accuracy
      value: 10.02
      verified: true
  - task:
      type: text-generation
    dataset:
      name: HumanEval
      type: HumanEval
    metrics:
    - name: humaneval_pass@1
      type: humaneval_pass@1
      value: 41.46
      verified: true
  - task:
      type: text-generation
    dataset:
      name: MBPP
      type: MBPP
    metrics:
    - name: score
      type: score
      value: 34.6
      verified: true
  - task:
      type: text-generation
    dataset:
      name: Hellaswag
      type: Hellaswag
    metrics:
    - name: accuracy
      type: accuracy
      value: 66.02
      verified: true
  - task:
      type: text-generation
    dataset:
      name: GPQA
      type: GPQA
    metrics:
    - name: accuracy
      type: accuracy
      value: 22.22
      verified: true
metrics:
- accuracy
---
# Mistral-7B-v0.3-Middo-WizardLM

Code: https://github.com/Word2VecT/Middo

## Model description
This model is a fine-tuned version of [mistralai/Mistral-7B-v0.3](https://huggingface.co/mistralai/Mistral-7B-v0.3) on the `mistral_wizard` subset of [Word2Li/MiddOptimized](https://huggingface.co/datasets/Word2Li/MiddOptimized).
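As a minimal usage sketch (the repo id below is an assumption; substitute the actual id of this repository):

```python
# Minimal text-generation example with transformers.
# NOTE: "Word2Li/Mistral-7B-v0.3-Middo-WizardLM" is a hypothetical repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Word2Li/Mistral-7B-v0.3-Middo-WizardLM"  # hypothetical; replace as needed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

prompt = "Explain the difference between a list and a tuple in Python."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=False)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```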
## Training and evaluation data

### Training data
[WizardLMTeam/WizardLM_evol_instruct_70k](https://huggingface.co/datasets/WizardLMTeam/WizardLM_evol_instruct_70k), optimized by Middo using mistralai/Mistral-7B-v0.3 as the base model.
### Evaluation data
- General
  - MMLU
  - IFEval
- Math
  - GSM8K
  - MATH
- Code
  - HumanEval
  - MBPP
- Reasoning
  - HellaSwag
  - GPQA
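This card does not state which evaluation harness produced the scores above. As an illustrative sketch (not the authors' stated pipeline), several of these benchmarks can be run with EleutherAI's lm-evaluation-harness; the repo id is again hypothetical:

```python
# Illustrative only: the evaluation pipeline used for this card is not specified.
# Requires: pip install lm-eval
from lm_eval import simple_evaluate

results = simple_evaluate(
    model="hf",
    model_args="pretrained=Word2Li/Mistral-7B-v0.3-Middo-WizardLM,dtype=auto",  # hypothetical repo id
    tasks=["mmlu", "gsm8k", "hellaswag"],  # lm-eval task names for three of the benchmarks above
    batch_size=8,
)
for task, metrics in results["results"].items():
    print(task, metrics)
```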
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- total_eval_batch_size: 64
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.03
- num_epochs: 1.0
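For reference, the effective batch size above is the product of the per-device batch size, the number of devices, and the gradient accumulation steps: 4 × 8 × 8 = 256.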
### Training results
- epoch: 1.0
- total_flos: 4.871785990877872e+18
- train_loss: 0.6260631282554998
- train_runtime: 6928.3413
- train_samples_per_second: 12.871
- train_steps_per_second: 0.05
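These figures are mutually consistent: 12.871 samples/s × 6928.3 s ≈ 89.2k examples processed in the single epoch, which at the effective batch size of 256 corresponds to roughly 348 optimizer steps (matching 0.05 steps/s × 6928.3 s ≈ 346, within rounding).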
### Framework versions
- Transformers 4.55.0
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.21.1