---
library_name: transformers
license: apache-2.0
base_model: mistralai/Mistral-7B-Instruct-v0.3
tags:
- generated_from_trainer
model-index:
- name: mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify
  results: []
---

# mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify

This model is a fine-tuned version of [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3) on an unspecified dataset.
It achieves the following results on the evaluation set:
- F1 Micro: 0.0
- F1 Macro: 0.0
- Precision At 5: 0.2749
- Recall At 5: 0.0637
- Precision At 8: 0.2540
- Recall At 8: 0.0909
- Precision At 15: 0.1905
- Recall At 15: 0.1224
- Rare F1 Micro: 0.0
- Rare F1 Macro: 0.0
- Rare Precision: 0.0
- Rare Recall: 0.0
- Rare Precision At 5: 0.0037
- Rare Recall At 5: 0.0013
- Rare Precision At 8: 0.0043
- Rare Recall At 8: 0.0023
- Rare Precision At 15: 0.0049
- Rare Recall At 15: 0.0048
- Not Rare F1 Micro: 0.0
- Not Rare F1 Macro: 0.0
- Not Rare Precision: 0.0
- Not Rare Recall: 0.0
- Not Rare Precision At 5: 0.2742
- Not Rare Recall At 5: 0.1680
- Not Rare Precision At 8: 0.2540
- Not Rare Recall At 8: 0.2396
- Not Rare Precision At 15: 0.1906
- Not Rare Recall At 15: 0.3248
- Loss: 0.0209

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08 (no additional optimizer arguments)
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- mixed_precision_training: Native AMP

(A sketch of how these settings map to `TrainingArguments` appears at the end of this card.)

### Training results

| Training Loss | Epoch | Step | F1 Micro | F1 Macro | Precision At 5 | Recall At 5 | Precision At 8 | Recall At 8 | Precision At 15 | Recall At 15 | Rare F1 Micro | Rare F1 Macro | Rare Precision | Rare Recall | Rare Precision At 5 | Rare Recall At 5 | Rare Precision At 8 | Rare Recall At 8 | Rare Precision At 15 | Rare Recall At 15 | Not Rare F1 Micro | Not Rare F1 Macro | Not Rare Precision | Not Rare Recall | Not Rare Precision At 5 | Not Rare Recall At 5 | Not Rare Precision At 8 | Not Rare Recall At 8 | Not Rare Precision At 15 | Not Rare Recall At 15 | Validation Loss |
|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|:---:|
| 0.023 | 0.9981 | 262 | 0.0 | 0.0 | 0.2749 | 0.0637 | 0.2540 | 0.0909 | 0.1905 | 0.1224 | 0.0 | 0.0 | 0.0 | 0.0 | 0.0037 | 0.0013 | 0.0043 | 0.0023 | 0.0049 | 0.0048 | 0.0 | 0.0 | 0.0 | 0.0 | 0.2742 | 0.1680 | 0.2540 | 0.2396 | 0.1906 | 0.3248 | 0.0209 |

### Framework versions

- Transformers 4.49.0
- Pytorch 2.6.0
- Datasets 3.6.0
- Tokenizers 0.21.1
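
### Training configuration sketch

The training script itself is not included in this card. As a rough guide, the hyperparameters listed above could be expressed with `transformers.TrainingArguments` roughly as follows; the `output_dir` value and the use of `fp16` (rather than `bf16`) for "Native AMP" are assumptions.

```python
# Minimal sketch of the listed hyperparameters as TrainingArguments.
# This is not the original training script; output_dir and fp16 are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mistral-7b-instruct-v0.3-mimic4-adapt-multilabel-classify",  # assumed
    learning_rate=1e-4,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    gradient_accumulation_steps=4,   # 8 x 4 = 32 total train batch size
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=1,
    fp16=True,                       # "Native AMP" mixed precision; bf16 is also plausible
)
```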
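
### Metric definition sketch

The card does not specify how the Precision At k / Recall At k metrics were computed. The sketch below shows one common way these quantities are defined for multi-label classification; it is an illustration under that assumption, not the evaluation code actually used for this model.

```python
# Hypothetical helper illustrating a common definition of precision@k and recall@k
# for multi-label classification. Not the evaluation code used for this model.
import numpy as np

def precision_recall_at_k(y_true: np.ndarray, y_scores: np.ndarray, k: int):
    """y_true: (n_samples, n_labels) binary matrix; y_scores: model scores of the same shape."""
    topk = np.argsort(-y_scores, axis=1)[:, :k]                    # indices of the k highest-scoring labels
    hits = np.take_along_axis(y_true, topk, axis=1).sum(axis=1)    # true labels among the top k
    precision = hits / k
    recall = hits / np.maximum(y_true.sum(axis=1), 1)              # guard against empty label sets
    return precision.mean(), recall.mean()
```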