---
license: apache-2.0
library_name: peft
tags:
- generated_from_trainer
base_model: mistralai/Mistral-7B-v0.1
model-index:
- name: org_modelorg_model
  results: []
---

# org_modelorg_model

This model is a fine-tuned version of [mistralai/Mistral-7B-v0.1](https://huggingface.co/mistralai/Mistral-7B-v0.1) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9110
- F1 Micro: 0.8214
- F1 Macro: 0.8156
- F1 Weighted: 0.8234

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 400

### Training results

| Training Loss | Epoch  | Step | Validation Loss | F1 Micro | F1 Macro | F1 Weighted |
|:-------------:|:------:|:----:|:---------------:|:--------:|:--------:|:-----------:|
| 1.979         | 0.0154 | 25   | 1.4043          | 0.7220   | 0.7197   | 0.7259      |
| 1.3006        | 0.0308 | 50   | 1.2184          | 0.7775   | 0.7754   | 0.7807      |
| 1.1099        | 0.0462 | 75   | 1.1320          | 0.8010   | 0.7970   | 0.8040      |
| 1.1383        | 0.0615 | 100  | 1.0762          | 0.8039   | 0.8007   | 0.8072      |
| 1.0121        | 0.0769 | 125  | 1.0230          | 0.8010   | 0.7967   | 0.8037      |
| 1.0296        | 0.0923 | 150  | 0.9966          | 0.8099   | 0.8056   | 0.8130      |
| 1.0485        | 0.1077 | 175  | 0.9745          | 0.8111   | 0.8063   | 0.8139      |
| 0.9996        | 0.1231 | 200  | 0.9647          | 0.8030   | 0.7984   | 0.8052      |
| 0.9815        | 0.1385 | 225  | 0.9490          | 0.8160   | 0.8099   | 0.8178      |
| 0.9456        | 0.1538 | 250  | 0.9378          | 0.8073   | 0.8033   | 0.8099      |
| 0.8896        | 0.1692 | 275  | 0.9298          | 0.8143   | 0.8091   | 0.8164      |
| 0.994         | 0.1846 | 300  | 0.9239          | 0.8064   | 0.8030   | 0.8094      |
| 0.8588        | 0.2    | 325  | 0.9142          | 0.8119   | 0.8079   | 0.8145      |
| 0.8971        | 0.2154 | 350  | 0.9139          | 0.8216   | 0.8158   | 0.8236      |
| 0.9647        | 0.2308 | 375  | 0.9133          | 0.8223   | 0.8163   | 0.8242      |
| 0.9352        | 0.2462 | 400  | 0.9110          | 0.8214   | 0.8156   | 0.8234      |

### Framework versions

- PEFT 0.10.0
- Transformers 4.40.2
- Pytorch 2.3.0+cu118
- Datasets 2.19.0
- Tokenizers 0.19.1
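The hyperparameters listed above map onto the `transformers` Trainer API roughly as follows. This is a sketch, not the original training script: it assumes `TrainingArguments` was used, and `output_dir` is a placeholder that does not come from the original run.

```python
from transformers import TrainingArguments

# Sketch of the configuration from "Training hyperparameters" above.
# "outputs" is a placeholder output_dir, not taken from the original run.
args = TrainingArguments(
    output_dir="outputs",
    learning_rate=1e-4,           # learning_rate: 0.0001
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,               # optimizer: Adam with betas=(0.9, 0.999)
    adam_beta2=0.999,
    adam_epsilon=1e-8,            # and epsilon=1e-08
    lr_scheduler_type="linear",
    max_steps=400,                # training_steps: 400
)
```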