# bert-lora-for-author-profiling
This model is a fine-tuned version of [google-bert/bert-base-cased](https://huggingface.co/google-bert/bert-base-cased) on an unspecified dataset (a hedged example of loading the adapter follows the metric list). It achieves the following results on the evaluation set:
- Loss: 0.7931
- Age Acc: 0.5827
- Age Precision: 0.5537
- Age Recall: 0.5827
- Age F1: 0.5258
- Age Precision Macro: 0.5152
- Age Recall Macro: 0.2723
- Age F1 Macro: 0.2861
- Gender Acc: 0.6949
- Gender Precision: 0.6949
- Gender Recall: 0.6949
- Gender F1: 0.6949
- Gender Precision Macro: 0.6948
- Gender Recall Macro: 0.6949
- Gender F1 Macro: 0.6948
- Joint Acc: 0.4110
- Avg Acc: 0.6388
- Avg Precision: 0.6243
- Avg Recall: 0.6388
- Avg F1: 0.6104
- Avg Precision Macro: 0.6050
- Avg Recall Macro: 0.4836
- Avg F1 Macro: 0.4905
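The adapter can be loaded on top of the base model with PEFT. A minimal sketch, assuming the adapter was saved with a plain sequence-classification head; the joint age/gender head layout and the label mapping are not documented on this card, so everything beyond the two repo ids is an assumption:

```python
# Hedged sketch of loading the adapter for inference. The joint age/gender
# head layout is not documented here, so this assumes a plain
# sequence-classification head (an assumption, not confirmed by the card).
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased"
)
model = PeftModel.from_pretrained(base, "KonradBRG/bert-lora-for-author-profiling")
model.eval()

tokenizer = AutoTokenizer.from_pretrained("google-bert/bert-base-cased")
inputs = tokenizer("Example text written by the author to profile.",
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits)  # mapping of logits to age/gender labels is undocumented
```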
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged sketch of the corresponding `Trainer` setup follows this list):
- learning_rate: 9.7145e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: AdamW (`adamw_torch`) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 3
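For reference, a sketch of a PEFT + `Trainer` setup matching these values. Only the hyperparameters listed above come from this card; the LoRA rank, alpha, and target modules, the classification head size, and the dataset objects are assumptions or placeholders:

```python
# Hedged sketch of a training setup matching the hyperparameters above.
# Only the listed values come from this card; the LoRA rank/alpha/targets,
# the classification head, and the datasets are assumptions or placeholders.
from transformers import (AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from peft import LoraConfig, get_peft_model

base_model = AutoModelForSequenceClassification.from_pretrained(
    "google-bert/bert-base-cased"  # head size for the joint task is undocumented
)

lora_config = LoraConfig(          # rank/alpha/targets: assumptions
    task_type="SEQ_CLS",
    r=8,
    lora_alpha=16,
    target_modules=["query", "value"],
)
model = get_peft_model(base_model, lora_config)

args = TrainingArguments(
    output_dir="bert-lora-for-author-profiling",
    learning_rate=9.7145e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    gradient_accumulation_steps=2,  # total train batch size: 64
    num_train_epochs=3,
    lr_scheduler_type="linear",
    seed=42,
    optim="adamw_torch",            # AdamW, betas=(0.9, 0.999), eps=1e-08
)

# train_ds / eval_ds are hypothetical placeholders; the card does not
# document the training and evaluation data.
trainer = Trainer(model=model, args=args,
                  train_dataset=train_ds, eval_dataset=eval_ds)
trainer.train()
```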
### Training results
| Training Loss | Epoch | Step | Validation Loss | Age Acc | Age Precision | Age Recall | Age F1 | Age Precision Macro | Age Recall Macro | Age F1 Macro | Gender Acc | Gender Precision | Gender Recall | Gender F1 | Gender Precision Macro | Gender Recall Macro | Gender F1 Macro | Joint Acc | Avg Acc | Avg Precision | Avg Recall | Avg F1 | Avg Precision Macro | Avg Recall Macro | Avg F1 Macro |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 0.8387 | 0.5155 | 5000 | 0.8252 | 0.5643 | 0.5195 | 0.5643 | 0.5035 | 0.4352 | 0.2432 | 0.2445 | 0.6811 | 0.6813 | 0.6811 | 0.6811 | 0.6812 | 0.6812 | 0.6811 | 0.3875 | 0.6227 | 0.6004 | 0.6227 | 0.5923 | 0.5582 | 0.4622 | 0.4628 |
| 0.816 | 1.0309 | 10000 | 0.8115 | 0.5727 | 0.5435 | 0.5727 | 0.5103 | 0.4579 | 0.2526 | 0.2578 | 0.6850 | 0.6857 | 0.6850 | 0.6849 | 0.6855 | 0.6853 | 0.6850 | 0.3961 | 0.6289 | 0.6146 | 0.6289 | 0.5976 | 0.5717 | 0.4690 | 0.4714 |
| 0.8083 | 1.5464 | 15000 | 0.8012 | 0.5792 | 0.5481 | 0.5792 | 0.5215 | 0.5131 | 0.2669 | 0.2773 | 0.6901 | 0.6904 | 0.6901 | 0.6901 | 0.6903 | 0.6903 | 0.6901 | 0.4052 | 0.6346 | 0.6193 | 0.6346 | 0.6058 | 0.6017 | 0.4786 | 0.4837 |
| 0.806 | 2.0619 | 20000 | 0.7962 | 0.5810 | 0.5539 | 0.5810 | 0.5224 | 0.5235 | 0.2680 | 0.2803 | 0.6930 | 0.6932 | 0.6930 | 0.6930 | 0.6930 | 0.6931 | 0.6930 | 0.4083 | 0.6370 | 0.6236 | 0.6370 | 0.6077 | 0.6083 | 0.4805 | 0.4866 |
| 0.7999 | 2.5773 | 25000 | 0.7931 | 0.5827 | 0.5537 | 0.5827 | 0.5258 | 0.5152 | 0.2723 | 0.2861 | 0.6949 | 0.6949 | 0.6949 | 0.6949 | 0.6948 | 0.6949 | 0.6948 | 0.4110 | 0.6388 | 0.6243 | 0.6388 | 0.6104 | 0.6050 | 0.4836 | 0.4905 |
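The large gap between the weighted and macro age metrics (recall 0.5827 vs. recall macro 0.2723 in the final row) points to strong class imbalance across the age groups: weighted averages track the majority classes (weighted recall always equals accuracy, as the table confirms), while macro averages weight every class equally. A small illustration with made-up labels:

```python
# Illustration of why weighted and macro recall can diverge under class
# imbalance; the labels here are made up, not taken from the model's data.
from sklearn.metrics import recall_score

y_true = [0] * 8 + [1] * 1 + [2] * 1  # imbalanced: class 0 dominates
y_pred = [0] * 10                     # always predict the majority class

print(recall_score(y_true, y_pred, average="weighted"))  # 0.8 (= accuracy)
print(recall_score(y_true, y_pred, average="macro"))     # ~0.33
```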
### Framework versions
- PEFT 0.17.1
- Transformers 4.56.1
- Pytorch 2.6.0+cu124
- Datasets 4.0.0
- Tokenizers 0.22.0