# gm-lora-bfloat16-idefics2-8b-xrayvqa-finetuned-mimic-short
This model is a fine-tuned version of [HuggingFaceM4/idefics2-8b](https://huggingface.co/HuggingFaceM4/idefics2-8b) on an unspecified dataset (the model name suggests a short-answer X-ray VQA set derived from MIMIC). It achieves the following results on the evaluation set:
- Loss: 1.0730
## Model description
More information needed
## Intended uses & limitations
More information needed
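
Since the card provides no usage details, here is a minimal, hypothetical sketch of loading this repo's LoRA adapter on top of the base model and running a VQA-style query. The adapter layout (a standard PEFT checkpoint), the image path, and the example question are assumptions, not part of the published card.

```python
# Hypothetical usage sketch: load the base model, apply this repo's LoRA
# adapter with peft, then run an X-ray VQA-style query.
import torch
from PIL import Image
from peft import PeftModel
from transformers import AutoProcessor, Idefics2ForConditionalGeneration

base_id = "HuggingFaceM4/idefics2-8b"
adapter_id = "gimarchetti/gm-lora-bfloat16-idefics2-8b-xrayvqa-finetuned-mimic-short"

processor = AutoProcessor.from_pretrained(base_id)
model = Idefics2ForConditionalGeneration.from_pretrained(
    base_id, torch_dtype=torch.bfloat16, device_map="auto"
)
# Assumes the repo contains a standard PEFT adapter checkpoint.
model = PeftModel.from_pretrained(model, adapter_id)

image = Image.open("chest_xray.png")  # hypothetical input image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Is there evidence of pleural effusion?"},
        ],
    }
]
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=prompt, images=[image], return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=64)
print(processor.batch_decode(output_ids, skip_special_tokens=True)[0])
```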
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a configuration sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 10
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
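
For reference, a hypothetical reconstruction of how these values map onto `transformers.TrainingArguments`; the actual training script is not published with this card, and the output path is an assumption.

```python
# Hypothetical reconstruction of the training configuration from the
# hyperparameters listed above; the real training script is not published.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="idefics2-8b-xrayvqa-lora",  # assumed output path
    learning_rate=5e-5,
    per_device_train_batch_size=8,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=10,  # effective train batch: 8 * 10 = 80
    lr_scheduler_type="linear",
    num_train_epochs=4,
    bf16=True,  # suggested by "bfloat16" in the model name, not stated in the card
)
# Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the Trainer's
# default optimizer settings, so no explicit optimizer override is needed.
```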
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.171 | 0.1119 | 50 | 1.2339 |
| 1.2051 | 0.2237 | 100 | 1.1744 |
| 1.1372 | 0.3356 | 150 | 1.1323 |
| 1.1046 | 0.4474 | 200 | 1.1095 |
| 1.0673 | 0.5593 | 250 | 1.0877 |
| 1.0713 | 0.6711 | 300 | 1.0761 |
| 1.0824 | 0.7830 | 350 | 1.0611 |
| 1.0358 | 0.8949 | 400 | 1.0511 |
| 1.0288 | 1.0067 | 450 | 1.0389 |
| 0.9158 | 1.1186 | 500 | 1.0443 |
| 0.9199 | 1.2304 | 550 | 1.0382 |
| 0.9032 | 1.3423 | 600 | 1.0339 |
| 0.8834 | 1.4541 | 650 | 1.0292 |
| 0.9048 | 1.5660 | 700 | 1.0271 |
| 0.9101 | 1.6779 | 750 | 1.0193 |
| 0.8928 | 1.7897 | 800 | 1.0164 |
| 0.9032 | 1.9016 | 850 | 1.0124 |
| 0.8649 | 2.0134 | 900 | 1.0234 |
| 0.7615 | 2.1253 | 950 | 1.0433 |
| 0.7588 | 2.2371 | 1000 | 1.0366 |
| 0.7759 | 2.3490 | 1050 | 1.0331 |
| 0.7696 | 2.4609 | 1100 | 1.0349 |
| 0.7587 | 2.5727 | 1150 | 1.0324 |
| 0.7532 | 2.6846 | 1200 | 1.0309 |
| 0.7702 | 2.7964 | 1250 | 1.0287 |
| 0.7648 | 2.9083 | 1300 | 1.0275 |
| 0.7452 | 3.0201 | 1350 | 1.0529 |
| 0.6471 | 3.1320 | 1400 | 1.0683 |
| 0.665 | 3.2438 | 1450 | 1.0727 |
| 0.6563 | 3.3557 | 1500 | 1.0713 |
| 0.6499 | 3.4676 | 1550 | 1.0721 |
| 0.6538 | 3.5794 | 1600 | 1.0741 |
| 0.6437 | 3.6913 | 1650 | 1.0740 |
| 0.6486 | 3.8031 | 1700 | 1.0734 |
| 0.66 | 3.9150 | 1750 | 1.0730 |
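
Note that validation loss bottoms out at 1.0124 around step 850 (epoch 1.9) and drifts upward through epochs 3 and 4 while training loss keeps falling, which suggests the model begins to overfit after roughly two epochs; an intermediate checkpoint may generalize slightly better than the final one.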
### Framework versions
- Transformers 4.41.0.dev0
- Pytorch 2.2.1+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
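
Note that Transformers 4.41.0.dev0 is a development build, so reproducing this environment requires installing transformers from source rather than from a PyPI release.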