---
license: mit
library_name: peft
---

## Training procedure

Finally, it looks like overfitting has been circumvented: the train and test metrics below are nearly identical, so the model generalizes to held-out data about as well as it fits the training data.

Train metrics:

```python
{'eval_loss': 0.11367090046405792,
 'eval_accuracy': 0.961073623713503,
 'eval_precision': 0.3506606081587021,
 'eval_recall': 0.9097597679932995,
 'eval_f1': 0.5062071663690367,
 'eval_auc': 0.9359920115129883,
 'eval_mcc': 0.5513080553639849}
```

Test metrics:

```python
{'eval_loss': 0.11328430473804474,
 'eval_accuracy': 0.9604888971537066,
 'eval_precision': 0.34630886072474065,
 'eval_recall': 0.9135862937475725,
 'eval_f1': 0.5022370749476722,
 'eval_auc': 0.9375606817360377,
 'eval_mcc': 0.5489185177475369}
```
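As a quick consistency check (not part of the original training output), the reported F1 can be reproduced from the reported precision and recall, since F1 is their harmonic mean:

```python
# Sanity check: F1 is the harmonic mean of precision and recall.
# Values are copied from the test metrics above.
precision = 0.34630886072474065
recall = 0.9135862937475725

f1 = 2 * precision * recall / (precision + recall)
print(f1)  # ≈ 0.50224, matching the reported eval_f1
```

The low precision alongside high recall reflects the heavy class imbalance, which is why AUC and MCC are also reported.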

## Framework versions

- PEFT 0.5.0