---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
- ar
- bg
- de
- el
- it
- pl
- ro
- uk
tags:
- subjectivity-detection
- news-articles
viewer: true
pretty_name: 'CLEF 2025 CheckThat! Lab - Task 1: Subjectivity in News Articles'
size_categories:
- 1K<n<10K
configs:
- config_name: arabic
data_files:
- split: train
path:
- data/arabic/train_ar.tsv
- split: dev
path:
- data/arabic/dev_ar.tsv
- split: dev_test
path:
- data/arabic/dev_test_ar.tsv
- split: test
path:
- data/arabic/test_ar_unlabeled.tsv
sep: "\t"
- config_name: bulgarian
data_files:
- split: train
path:
- data/bulgarian/train_bg.tsv
- split: dev
path:
- data/bulgarian/dev_bg.tsv
- split: dev_test
path:
- data/bulgarian/dev_test_bg.tsv
sep: "\t"
- config_name: english
data_files:
- split: train
path:
- data/english/train_en.tsv
- split: dev
path:
- data/english/dev_en.tsv
- split: dev_test
path:
- data/english/dev_test_en.tsv
- split: test
path:
- data/english/test_en_unlabeled.tsv
sep: "\t"
- config_name: german
data_files:
- split: train
path:
- data/german/train_de.tsv
- split: dev
path:
- data/german/dev_de.tsv
- split: dev_test
path:
- data/german/dev_test_de.tsv
- split: test
path:
- data/german/test_de_unlabeled.tsv
sep: "\t"
- config_name: greek
data_files:
- split: test
path:
- data/greek/test_gr_unlabeled.tsv
sep: "\t"
- config_name: italian
data_files:
- split: train
path:
- data/italian/train_it.tsv
- split: dev
path:
- data/italian/dev_it.tsv
- split: dev_test
path:
- data/italian/dev_test_it.tsv
- split: test
path:
- data/italian/test_it_unlabeled.tsv
sep: "\t"
- config_name: multilingual
data_files:
- split: dev_test
path:
- data/multilingual/dev_test_multilingual.tsv
- split: test
path:
- data/multilingual/test_multilingual_unlabeled.tsv
sep: "\t"
- config_name: polish
data_files:
- split: test
path:
- data/polish/test_pol_unlabeled.tsv
sep: "\t"
- config_name: romanian
data_files:
- split: test
path:
- data/romanian/test_ro_unlabeled.tsv
sep: "\t"
- config_name: ukrainian
data_files:
- split: test
path:
- data/ukrainian/test_ukr_unlabeled.tsv
sep: "\t"
---
# CLEF-2025 CheckThat! Lab Task 1: Subjectivity in News Articles
Systems are challenged to distinguish whether a sentence from a news article expresses the subjective view of its author or presents an objective view of the covered topic. This is a binary classification task in which systems must identify whether a text sequence (a sentence or a paragraph) is subjective (SUBJ) or objective (OBJ).
The task comprises three settings:
- Monolingual: train and test on data in a given language L
- Multilingual: train and test on data comprising several languages
- Zero-shot: train on several languages and test on unseen languages
## Dataset statistics
- English
  - train: 830 sentences, 532 OBJ, 298 SUBJ
  - dev: 462 sentences, 222 OBJ, 240 SUBJ
  - dev-test: 484 sentences, 362 OBJ, 122 SUBJ
- Italian
  - train: 1613 sentences, 1231 OBJ, 382 SUBJ
  - dev: 667 sentences, 490 OBJ, 177 SUBJ
  - dev-test: 513 sentences, 377 OBJ, 136 SUBJ
- German
  - train: 800 sentences, 492 OBJ, 308 SUBJ
  - dev: 491 sentences, 317 OBJ, 174 SUBJ
  - dev-test: 337 sentences, 226 OBJ, 111 SUBJ
- Bulgarian
  - train: 729 sentences, 406 OBJ, 323 SUBJ
  - dev: 467 sentences, 175 OBJ, 139 SUBJ
  - dev-test: 250 sentences, 143 OBJ, 107 SUBJ
  - test: TBA
- Arabic
  - train: 2446 sentences, 1391 OBJ, 1055 SUBJ
  - dev: 742 sentences, 266 OBJ, 201 SUBJ
  - dev-test: 748 sentences, 425 OBJ, 323 SUBJ
## Input Data Format
The data are provided as TSV files with three columns:

```
sentence_id	sentence	label
```
Where:
- sentence_id: sentence id for a given sentence in a news article
- sentence: sentence's text
- label: OBJ or SUBJ
Note: For English, the training and development (validation) sets will also include a fourth column, "solved_conflict", whose boolean value reflects whether the annotators had a strong disagreement.
Examples:

```
b9e1635a-72aa-467f-86d6-f56ef09f62c3	Gone are the days when they led the world in recession-busting	SUBJ
f99b5143-70d2-494a-a2f5-c68f10d09d0a	The trend is expected to reverse as soon as next month.	OBJ
```
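As a minimal sketch, the TSV splits can be read with Python's standard `csv` module (the file path in the usage comment is illustrative, following the repository layout above):

```python
import csv

def read_split(path):
    """Read a task TSV file into a list of dicts keyed by the header row
    (sentence_id, sentence, label; English train/dev also carry
    solved_conflict, which csv.DictReader picks up automatically)."""
    with open(path, encoding="utf-8", newline="") as f:
        return list(csv.DictReader(f, delimiter="\t"))

# Illustrative usage:
# rows = read_split("data/english/train_en.tsv")
# print(rows[0]["sentence"], rows[0]["label"])
```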
## Output Data Format

The output must be a TSV file with two columns: sentence_id and label.
## Evaluation Metrics

The task is evaluated as a binary classification task using the macro-averaged F1 score. Additional metrics include the Precision, Recall, and F1 of the SUBJ class, as well as the macro-averaged scores.
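As a sketch of these metrics (assuming scikit-learn; the official scorer script is the source of truth), macro-F1 and the SUBJ-class scores can be computed from gold and predicted labels like this:

```python
from sklearn.metrics import f1_score, precision_recall_fscore_support

# Toy gold and predicted labels for illustration only
gold = ["OBJ", "SUBJ", "OBJ", "SUBJ", "OBJ"]
pred = ["OBJ", "SUBJ", "SUBJ", "SUBJ", "OBJ"]

# Macro-F1: unweighted mean of the per-class F1 scores
macro_f1 = f1_score(gold, pred, average="macro")

# Precision, Recall, and F1 of the SUBJ class specifically
p, r, f, _ = precision_recall_fscore_support(
    gold, pred, labels=["SUBJ"], average=None
)
print(macro_f1, p[0], r[0], f[0])
```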
## Scorers

The scorer script is available in the original GitLab repository, clef2025-checkthat-lab-task1.
To evaluate your model's output, which must follow the output format above, run:

```shell
python evaluate.py -g dev_truth.tsv -p dev_predicted.tsv
```

where dev_predicted.tsv is your model's output on the dev set, and dev_truth.tsv is the gold-label file provided by the organizers.
The script can also be used to validate the format of a submission: simply pass the provided test file as the gold data.
## Baselines

The script to train the baseline model is provided in the original GitLab repository, clef2025-checkthat-lab-task1. It can be run as follows:

```shell
python baseline.py -trp train_data.tsv -ttp dev_data.tsv
```

where train_data.tsv is the file to be used for training and dev_data.tsv is the file on which to produce predictions.
The baseline is a logistic regression classifier trained on multilingual Sentence-BERT representations of the data.
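A minimal sketch of that pipeline, assuming scikit-learn: in the real baseline the feature vectors come from a multilingual Sentence-BERT model (e.g. via the sentence-transformers library), which is stubbed here with random vectors so the sketch stays self-contained:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# In the actual baseline the features are sentence embeddings, e.g.:
#   from sentence_transformers import SentenceTransformer
#   encoder = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")  # illustrative model name
#   X_train = encoder.encode(train_sentences)
# Stubbed with random 384-dim vectors to keep the sketch runnable offline:
rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 384))
y_train = rng.choice(["OBJ", "SUBJ"], size=100)

# Logistic regression on top of the (stubbed) embeddings
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
preds = clf.predict(X_train)  # in practice, predict on the dev embeddings
```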
## Leaderboard

The leaderboard is available in the original GitLab repository, clef2025-checkthat-lab-task1.
## Related Work

The dataset was used in AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles.
Information regarding the annotation guidelines can be found in the following papers:

- Federico Ruggeri, Francesco Antici, Andrea Galassi, Katerina Korre, Arianna Muti, Alberto Barrón-Cedeño. On the Definition of Prescriptive Annotation Guidelines for Language-Agnostic Subjectivity Detection. In: Proceedings of Text2Story: Sixth Workshop on Narrative Extraction From Texts, CEUR-WS.org, 2023, vol. 3370, pp. 103-111.
- Francesco Antici, Andrea Galassi, Federico Ruggeri, Katerina Korre, Arianna Muti, Alessandra Bardi, Alice Fedotova, Alberto Barrón-Cedeño. A Corpus for Sentence-Level Subjectivity Detection on English News Articles. In: Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024.
- Reem Suwaileh, Maram Hasanain, Fatema Hubail, Wajdi Zaghouani, Firoj Alam. ThatiAR: Subjectivity Detection in Arabic News Sentences. arXiv preprint arXiv:2406.05559 (2024).
## Credits

### ECIR 2025
Alam, F. et al. (2025). The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval. In: Hauff, C., et al. Advances in Information Retrieval. ECIR 2025. Lecture Notes in Computer Science, vol 15576. Springer, Cham. https://doi.org/10.1007/978-3-031-88720-8_68
```bibtex
@InProceedings{10.1007/978-3-031-88720-8_68,
author="Alam, Firoj
and Stru{\ss}, Julia Maria
and Chakraborty, Tanmoy
and Dietze, Stefan
and Hafid, Salim
and Korre, Katerina
and Muti, Arianna
and Nakov, Preslav
and Ruggeri, Federico
and Schellhammer, Sebastian
and Setty, Vinay
and Sundriyal, Megha
and Todorov, Konstantin
and V., Venktesh",
editor="Hauff, Claudia
and Macdonald, Craig
and Jannach, Dietmar
and Kazai, Gabriella
and Nardini, Franco Maria
and Pinelli, Fabio
and Silvestri, Fabrizio
and Tonellotto, Nicola",
title="The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval",
booktitle="Advances in Information Retrieval",
year="2025",
publisher="Springer Nature Switzerland",
address="Cham",
pages="467--478",
isbn="978-3-031-88720-8",
}
```
### CLEF 2025 LNCS
```bibtex
@InProceedings{clef-checkthat:2025-lncs,
author = {
Alam, Firoj
and Struß, Julia Maria
and Chakraborty, Tanmoy
and Dietze, Stefan
and Hafid, Salim
and Korre, Katerina
and Muti, Arianna
and Nakov, Preslav
and Ruggeri, Federico
and Schellhammer, Sebastian
and Setty, Vinay
and Sundriyal, Megha
and Todorov, Konstantin
and Venktesh, V
},
title = {Overview of the {CLEF}-2025 {CheckThat! Lab}: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval},
editor = {
Carrillo-de-Albornoz, Jorge and
Gonzalo, Julio and
Plaza, Laura and
García Seco de Herrera, Alba and
Mothe, Josiane and
Piroi, Florina and
Rosso, Paolo and
Spina, Damiano and
Faggioli, Guglielmo and
Ferro, Nicola
},
booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF 2025)},
year = {2025}
}
```
### CLEF 2025 CEUR papers
```bibtex
@proceedings{clef2025-workingnotes,
editor = "Faggioli, Guglielmo and
Ferro, Nicola and
Rosso, Paolo and
Spina, Damiano",
title = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
booktitle = "Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum",
series = "CLEF~2025",
address = "Madrid, Spain",
year = 2025
}
```
### Task 1 overview paper
```bibtex
@inproceedings{clef-checkthat:2025:task1,
title = {Overview of the {CLEF-2025 CheckThat!} Lab Task 1 on Subjectivity in News Articles},
author = {
Ruggeri, Federico and
Muti, Arianna and
Korre, Katerina and
Stru{\ss}, Julia Maria and
Siegel, Melanie and
Wiegand, Michael and
Alam, Firoj and
Biswas, Rafiul and
Zaghouani, Wajdi and
Nawrocka, Maria and
Ivasiuk, Bogdan and
Razvan, Gogu and
Mihail, Andreiana
},
crossref = {clef2025-workingnotes}
}
```