mdeberta-v3-base-subjectivity-arabic

This model is a fine-tuned version of microsoft/mdeberta-v3-base for the CLEF 2025 CheckThat! Lab Task 1 (Subjectivity Detection). It achieves the following results on the evaluation set:

  • Loss: 0.7419
  • Macro F1: 0.5291
  • Macro P: 0.5526
  • Macro R: 0.5414
  • Subj F1: 0.3839
  • Subj P: 0.5082
  • Subj R: 0.3085
  • Accuracy: 0.5739

Model description

This model is part of AI Wizards' participation in the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles. It aims to classify sentences as subjective or objective, a key component in combating misinformation, improving fact-checking pipelines, and supporting journalists. The model enhances transformer-based classifiers by integrating sentiment scores, derived from an auxiliary model, with sentence representations. This sentiment-augmented architecture, applied here with mDeBERTaV3-base, has shown consistent performance gains, particularly in subjective F1 score.
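
As a rough illustration, the fusion can be thought of as concatenating the encoder's pooled sentence representation with the auxiliary sentiment probabilities before the classification head. The sketch below is an assumption for illustration only; the auxiliary sentiment model name, the simple [CLS] pooling, and the single linear head are not taken from the paper:

import torch
from torch import nn
from transformers import AutoModel, pipeline

class SentimentAugmentedClassifier(nn.Module):
    """Concatenate pooled mDeBERTa embeddings with auxiliary sentiment scores."""
    def __init__(self, encoder_name="microsoft/mdeberta-v3-base",
                 num_sentiment_scores=3, num_labels=2):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.classifier = nn.Linear(hidden + num_sentiment_scores, num_labels)

    def forward(self, input_ids, attention_mask, sentiment_scores):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]                   # [CLS] representation
        fused = torch.cat([pooled, sentiment_scores], dim=-1)  # sentence embedding + sentiment
        return self.classifier(fused)

# The sentiment scores could come from any multilingual sentiment classifier, e.g.
# (example choice, not necessarily the one used by the authors):
sentiment_pipe = pipeline("text-classification",
                          model="cardiffnlp/twitter-xlm-roberta-base-sentiment",
                          top_k=None)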

Intended uses & limitations

This model is intended for subjectivity detection in sentences from news articles, classifying them as either subjective (opinion-laden) or objective. This capability is valuable for applications such as combating misinformation, improving fact-checking pipelines, and supporting journalists. It has been evaluated across monolingual (Arabic, German, English, Italian, Bulgarian), multilingual, and zero-shot settings (Greek, Romanian, Polish, Ukrainian).

A key strategy is decision threshold calibration to address the class imbalance prevalent across languages. Note that the initial official multilingual Macro F1 score was lower due to a submission error (a skewed class distribution); it was later corrected offline to Macro F1 = 0.68, placing the team 9th overall in the challenge.
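
A minimal sketch of such a threshold calibration step, assuming the probability of the subjective class is thresholded to maximize macro F1 on the development split (the grid and the selection metric below are assumptions, not the team's exact recipe):

import numpy as np
from sklearn.metrics import f1_score

def calibrate_threshold(dev_probs, dev_labels, grid=np.linspace(0.1, 0.9, 81)):
    """Return the decision threshold on P(subjective) that maximizes macro F1."""
    best_t, best_f1 = 0.5, -1.0
    for t in grid:
        preds = (dev_probs >= t).astype(int)      # 1 = subjective, 0 = objective
        score = f1_score(dev_labels, preds, average="macro")
        if score > best_f1:
            best_t, best_f1 = t, score
    return best_t, best_f1

# dev_probs: model probabilities for the subjective class on the dev set (numpy array)
# dev_labels: gold labels on the dev set (numpy array of 0/1)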

Training and evaluation data

The model was trained and evaluated on datasets provided for the CLEF 2025 CheckThat! Lab Task 1: Subjectivity Detection in News Articles. Training and development datasets were available for Arabic, German, English, Italian, and Bulgarian. For the final evaluation, additional unseen languages (Greek, Romanian, Polish, and Ukrainian) were used to assess zero-shot generalization. Training incorporates sentiment scores from an auxiliary model and uses decision threshold calibration to mitigate class imbalance.

Training procedure

Training hyperparameters

The following hyperparameters were used during training; an equivalent TrainingArguments sketch follows the list:

  • learning_rate: 1e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 42
  • optimizer: adamw_torch with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: linear
  • num_epochs: 6
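
For reference, these settings map roughly onto the following Hugging Face TrainingArguments (an illustrative sketch; the output directory is a placeholder, and per-epoch evaluation is inferred from the results table below):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="mdeberta-v3-base-subjectivity-arabic",  # placeholder
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    num_train_epochs=6,
    eval_strategy="epoch",  # evaluation after each epoch, matching the results table
)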

Training results

| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Macro P | Macro R | Subj F1 | Subj P | Subj R | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:-------:|:-------:|:-------:|:------:|:------:|:--------:|
| No log        | 1.0   | 153  | 0.6892          | 0.5274   | 0.5500  | 0.5396  | 0.3827  | 0.5041 | 0.3085 | 0.5717   |
| No log        | 2.0   | 306  | 0.6969          | 0.5314   | 0.5508  | 0.5414  | 0.3939  | 0.5039 | 0.3234 | 0.5717   |
| No log        | 3.0   | 459  | 0.6918          | 0.5433   | 0.5618  | 0.5513  | 0.4132  | 0.5188 | 0.3433 | 0.5803   |
| 0.6716        | 4.0   | 612  | 0.7192          | 0.5360   | 0.5445  | 0.5400  | 0.4237  | 0.4902 | 0.3731 | 0.5632   |
| 0.6716        | 5.0   | 765  | 0.7238          | 0.5253   | 0.5633  | 0.5447  | 0.3607  | 0.5288 | 0.2736 | 0.5824   |
| 0.6716        | 6.0   | 918  | 0.7419          | 0.5291   | 0.5526  | 0.5414  | 0.3839  | 0.5082 | 0.3085 | 0.5739   |

Framework versions

  • Transformers 4.49.0
  • Pytorch 2.5.1+cu121
  • Datasets 3.3.1
  • Tokenizers 0.21.0

How to use

You can use the model directly with the transformers library for text classification:

from transformers import pipeline

# Load the text classification pipeline
classifier = pipeline(
    "text-classification",
    model="MatteoFasulo/mdeberta-v3-base-subjectivity-arabic",
    tokenizer="microsoft/mdeberta-v3-base",
)

text1 = "وهكذا بدأت النساء يعين أهمية دورهن في عدم الصمت أمام هذه الاقتحامات ورفضها بإعلاء صيحات الله أكبر."
# Translation: "And thus women began to realize the importance of their role in not staying
# silent in the face of these incursions and in rejecting them by raising cries of 'Allahu Akbar'."
result1 = classifier(text1)
print(f"Text: '{text1}' Classification: {result1}")

text2 = "ستشمل الشحنة الأولية نصف الجرعات، يليها النصف الثاني بعد ثلاثة أسابيع."
# Translation: "The initial shipment will include half of the doses, followed by the second half three weeks later."
result2 = classifier(text2)
print(f"Text: '{text2}' Classification: {result2}")

Code

The official code and materials for this project are available on GitHub: https://github.com/MatteoFasulo/clef2025-checkthat.

Citation

If you find our work helpful or inspiring, please feel free to cite it:

@misc{fasulo2025aiwizardscheckthat2025,
      title={AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles}, 
      author={Matteo Fasulo and Luca Babboni and Luca Tedeschini},
      year={2025},
      eprint={2507.11764},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2507.11764}, 
}