arabert-sarcasm-detector

This model is a fine-tuned version of aubmindlab/bert-base-arabertv02-twitter, trained on the ArSarcasT dataset. It achieves the following results on the evaluation sets:

Eval Dataset    Accuracy  F1     Precision  Recall
ArSarcasT       0.839     0.730  0.743      0.761
iSarcasmEVAL    0.897     0.633  0.646      0.620
ArSarcasm-v2    0.769     0.553  0.587      0.523
IDAT            0.807     0.807  0.775      0.842

Model description

aubmindlab/bert-base-arabertv02-twitter fine-tuned on a dataset of sarcastic tweets (ArSarcasT) for sarcasm detection, cast as text classification.
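For inference, the checkpoint can be loaded with the Transformers pipeline API. A minimal sketch: the example tweet is illustrative, and the label names returned depend on the checkpoint's config, which this card does not document.

```python
from transformers import pipeline

# Load the fine-tuned sarcasm classifier from the Hub.
# The checkpoint was trained with TensorFlow, so TensorFlow should be installed
# (pipeline() handles framework detection automatically).
classifier = pipeline(
    "text-classification",
    model="MohamedGalal/arabert-sarcasm-detector",
)

# Classify an Arabic tweet; the returned label marks it as sarcastic or not
# (label names such as LABEL_0/LABEL_1 depend on the model config).
print(classifier("يا سلام على الخدمة السريعة، ساعتين على الخط!"))
```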

Intended uses & limitations

More information needed

Training and evaluation data

  • Training dataset: ArSarcasT development split.
  • Evaluation datasets (metric computation sketched after this list):
    • ArSarcasm-v2 test dataset.
    • iSarcasmEVAL test dataset.
    • ArSarcasT test dataset.
    • IDAT test dataset.
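The accuracy, F1, precision, and recall figures in the table above can be reproduced with scikit-learn once predictions have been collected for a test split. A minimal sketch, assuming binary labels with 1 marking the sarcastic (positive) class; the actual evaluation script is not part of this card.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

def report(y_true, y_pred):
    """Compute the four metrics shown in the results table above."""
    acc = accuracy_score(y_true, y_pred)
    # Treat the sarcastic class (label 1) as the positive class.
    precision, recall, f1, _ = precision_recall_fscore_support(
        y_true, y_pred, average="binary", pos_label=1
    )
    return {"accuracy": acc, "f1": f1, "precision": precision, "recall": recall}

# Example with dummy predictions:
print(report([1, 0, 1, 0], [1, 0, 0, 0]))
```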

Training procedure

The base model was fine-tuned for 3 epochs; a sketch of a comparable setup is given after the hyperparameters below.

Training hyperparameters

The following hyperparameters were used during training:

  • optimizer: None
  • training_precision: float32
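Since the optimizer and other settings are not recorded in this card, the following is only a sketch of a typical Keras fine-tuning setup consistent with the listed framework versions (TensorFlow, 3 epochs, float32). The learning rate, batch size, sequence length, and data loading are illustrative assumptions, not values taken from the card.

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

base = "aubmindlab/bert-base-arabertv02-twitter"
tokenizer = AutoTokenizer.from_pretrained(base)
model = TFAutoModelForSequenceClassification.from_pretrained(base, num_labels=2)

# Placeholders: the real texts/labels would come from the ArSarcasT development split.
train_texts = ["مثال تغريدة ساخرة", "مثال تغريدة عادية"]
train_labels = [1, 0]  # assumed mapping: 1 = sarcastic, 0 = not sarcastic

enc = tokenizer(train_texts, padding=True, truncation=True,
                max_length=128, return_tensors="tf")
ds = tf.data.Dataset.from_tensor_slices((dict(enc), train_labels)).batch(16)

# Learning rate and batch size below are illustrative, not documented in the card.
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=2e-5),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
)
model.fit(ds, epochs=3)

model.save_pretrained("arabert-sarcasm-detector")
tokenizer.save_pretrained("arabert-sarcasm-detector")
```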

Training results

Framework versions

  • Transformers 4.28.1
  • TensorFlow 2.12.0
  • Tokenizers 0.13.3

Paper Citation

If you use this fine-tuned model, which is based on the original AraBERT model, please cite the following paper:

Galal, M. A., Yousef, A. H., Zayed, H. H., & Medhat, W. (2024). Arabic sarcasm detection: An enhanced fine-tuned language model approach. Ain Shams Engineering Journal, 15(6), 102736. https://doi.org/10.1016/j.asej.2024.102736

Dataset Repo

https://github.com/Mabdelaziz/ArSarcasT
