---
license: mit
tags:
  - generated_from_trainer
model-index:
  - name: xlnet-base-cased-finetuned-WikiCorpus-PoS
    results: []
datasets:
  - Babelscape/wikineural
language:
  - en
metrics:
  - accuracy
  - f1
  - recall
  - precision
  - seqeval
pipeline_tag: token-classification
---

# xlnet-base-cased-finetuned-WikiNeural-PoS

This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the Babelscape/wikineural dataset.

It achieves the following results on the evaluation set:

- Loss: 0.0949
- Loc: {'precision': 0.9289891395154553, 'recall': 0.9336691855583543, 'f1': 0.931323283082077, 'number': 5955}
- Misc: {'precision': 0.8191960332920134, 'recall': 0.9140486069946651, 'f1': 0.8640268957788569, 'number': 5061}
- Org: {'precision': 0.9199886104783599, 'recall': 0.9367932734125833, 'f1': 0.9283148972848728, 'number': 3449}
- Per: {'precision': 0.9687377113645301, 'recall': 0.9456813819577735, 'f1': 0.9570707070707071, 'number': 5210}
- Overall Precision: 0.9068
- Overall Recall: 0.9324
- Overall F1: 0.9194
- Overall Accuracy: 0.9904
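
The per-entity dictionaries above follow the output format of the seqeval metric (loaded through the `evaluate` library). A minimal sketch of how numbers in that format are produced, using toy label sequences that are purely illustrative:

```python
import evaluate

# Load the seqeval metric; it returns per-entity precision/recall/f1/number
# plus overall scores, matching the dictionaries shown above.
seqeval = evaluate.load("seqeval")

# Toy IOB-tagged sequences (illustrative only, not actual model output).
predictions = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
references = [["B-PER", "I-PER", "O", "B-LOC", "O"]]

results = seqeval.compute(predictions=predictions, references=references)
print(results["PER"])                                   # per-entity dict
print(results["overall_f1"], results["overall_accuracy"])  # overall scores
```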

## Model description

For more information on how it was created, check out the following link: https://github.com/DunnBC22/NLP_Projects/blob/main/Token%20Classification/Monolingual/WikiNeural%20-%20Transformer%20Comparison/POS%20Project%20with%20Wikineural%20Dataset%20-%20XLNet%20Transformer.ipynb
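
A minimal sketch of loading the model for inference with the `transformers` pipeline; the repository id below is assumed from this card's title and may differ from the actual Hub id:

```python
from transformers import pipeline

# Repository id assumed from the model card title; adjust if it differs.
tagger = pipeline(
    "token-classification",
    model="DunnBC22/xlnet-base-cased-finetuned-WikiNeural-PoS",
    aggregation_strategy="simple",  # group word pieces into single spans
)

print(tagger("Marie Curie was born in Warsaw and worked in Paris."))
```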

## Intended uses & limitations

This model is intended to demonstrate my ability to solve a complex problem using technology.

## Training and evaluation data

Dataset Source: https://huggingface.co/datasets/Babelscape/wikineural
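
A minimal sketch of loading the dataset with the `datasets` library; split names vary by language, so the snippet prints them rather than assuming a specific one:

```python
from datasets import load_dataset

# Download the WikiNeural dataset from the Hugging Face Hub.
dataset = load_dataset("Babelscape/wikineural")

# Inspect the available per-language splits and peek at one example.
print(dataset)
first_split = next(iter(dataset))
print(dataset[first_split][0])
```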

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:

- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
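
A minimal sketch of how these hyperparameters map onto a `TrainingArguments` configuration; the output_dir name is illustrative, and the Adam betas/epsilon listed above are the library defaults. The full preprocessing and Trainer setup is in the linked notebook.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the hyperparameters listed above.
training_args = TrainingArguments(
    output_dir="xlnet-base-cased-finetuned-WikiNeural-PoS",  # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    num_train_epochs=2,
    seed=42,
    lr_scheduler_type="linear",
    evaluation_strategy="epoch",  # evaluate once per epoch, as in the results table
    # adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-8 are the defaults
)
```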

### Training results

| Training Loss | Epoch | Step | Validation Loss | Loc | Misc | Org | Per | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---:|:----:|:---:|:---:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1119 | 1.0 | 5795 | 0.1067 | {'precision': 0.9053637984119267, 'recall': 0.9382031905961377, 'f1': 0.9214910110506349, 'number': 5955} | {'precision': 0.7967393230551125, 'recall': 0.8883619837976684, 'f1': 0.8400597907324365, 'number': 5061} | {'precision': 0.911225658648339, 'recall': 0.9225862568860539, 'f1': 0.9168707679008787, 'number': 3449} | {'precision': 0.958470156461271, 'recall': 0.9523992322456813, 'f1': 0.9554250505439492, 'number': 5210} | 0.8899 | 0.9264 | 0.9078 | 0.9887 |
| 0.0724 | 2.0 | 11590 | 0.0949 | {'precision': 0.9289891395154553, 'recall': 0.9336691855583543, 'f1': 0.931323283082077, 'number': 5955} | {'precision': 0.8191960332920134, 'recall': 0.9140486069946651, 'f1': 0.8640268957788569, 'number': 5061} | {'precision': 0.9199886104783599, 'recall': 0.9367932734125833, 'f1': 0.9283148972848728, 'number': 3449} | {'precision': 0.9687377113645301, 'recall': 0.9456813819577735, 'f1': 0.9570707070707071, 'number': 5210} | 0.9068 | 0.9324 | 0.9194 | 0.9904 |

### Framework versions

- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3