---
license: cc-by-nc-sa-4.0
task_categories:
- text-classification
language:
- en
- ar
- bg
- de
- el
- it
- pl
- ro
- uk
tags:
- subjectivity-detection
- news-articles
viewer: true
pretty_name: 'CLEF 2025 CheckThat! Lab - Task 1: Subjectivity in News Articles'
size_categories:
- 1K<n<10K
---

## Input Data Format

The data is provided as TSV files with three columns:

> sentence_id <TAB> sentence <TAB> label

Where:
* sentence_id: the unique identifier of the sentence within a news article
* sentence: the text of the sentence
* label: *OBJ* or *SUBJ*

**Note:** For English, the training and development (validation) sets will also include a fourth column, "solved_conflict", whose boolean value reflects whether the annotators had a strong disagreement.

**Examples:**

> b9e1635a-72aa-467f-86d6-f56ef09f62c3 Gone are the days when they led the world in recession-busting SUBJ
>
> f99b5143-70d2-494a-a2f5-c68f10d09d0a The trend is expected to reverse as soon as next month. OBJ

## Output Data Format

The output must be a TSV file with two columns: sentence_id and label.

## Evaluation Metrics

The task is evaluated as a classification task using the macro-averaged F1 score. Additional metrics include the precision, recall, and F1 of the SUBJ class, together with the macro-averaged scores.

## Scorers

The code base with the scorer script is available in the original GitLab repository - [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1). To evaluate the output of your model, which must follow the required output format, run:

> python evaluate.py -g dev_truth.tsv -p dev_predicted.tsv

where dev_predicted.tsv is the output of your model on the dev set and dev_truth.tsv is the gold-label file provided by the organizers. The script can also be used to validate the format of a submission: simply pass the provided test file as the gold data.

## Baselines

The code base with the script to train the baseline model is provided in the original GitLab repository - [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1). The script can be run as follows:

> python baseline.py -trp train_data.tsv -ttp dev_data.tsv

where train_data.tsv is the file used for training and dev_data.tsv is the file on which to produce predictions. The baseline is a logistic regressor trained on a multilingual Sentence-BERT representation of the data.
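As a quick sanity check of the input format, the TSV files can be loaded with pandas. This is a minimal sketch using an in-memory sample built from the examples above; in practice you would pass a path such as `train_data.tsv` instead:

```python
import io
import pandas as pd

# Illustrative in-memory sample in the task's three-column TSV layout.
sample_tsv = (
    "sentence_id\tsentence\tlabel\n"
    "b9e1635a-72aa-467f-86d6-f56ef09f62c3\tGone are the days when they led the world in recession-busting\tSUBJ\n"
    "f99b5143-70d2-494a-a2f5-c68f10d09d0a\tThe trend is expected to reverse as soon as next month.\tOBJ\n"
)

# quoting=3 (csv.QUOTE_NONE) keeps quote characters inside sentences literal.
df = pd.read_csv(io.StringIO(sample_tsv), sep="\t", quoting=3)

assert set(df.columns) == {"sentence_id", "sentence", "label"}
print(df["label"].value_counts().to_dict())
```

For the English train/dev files, the same call would simply yield the extra "solved_conflict" column.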
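The metrics described above can be reproduced with scikit-learn. The sketch below is not the official `evaluate.py` scorer, just an illustration of the scoring logic on made-up gold and predicted labels:

```python
from sklearn.metrics import f1_score, precision_recall_fscore_support

# Illustrative labels; in practice these come from dev_truth.tsv and
# dev_predicted.tsv, aligned by sentence_id.
gold = ["SUBJ", "OBJ", "OBJ", "SUBJ", "OBJ"]
pred = ["SUBJ", "OBJ", "SUBJ", "SUBJ", "OBJ"]

# Primary metric: F1 macro-averaged over the OBJ and SUBJ classes.
macro_f1 = f1_score(gold, pred, average="macro")

# Per-class precision, recall, and F1 for the SUBJ class.
p, r, f, _ = precision_recall_fscore_support(
    gold, pred, labels=["SUBJ"], average=None, zero_division=0
)
print(f"macro-F1={macro_f1:.4f} SUBJ: P={p[0]:.4f} R={r[0]:.4f} F1={f[0]:.4f}")
```

Submissions are ranked on the macro-averaged F1; the SUBJ-class scores help diagnose whether a model over- or under-predicts the subjective class.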
## Leaderboard

The leaderboard is available in the original GitLab repository - [clef2025-checkthat-lab-task1](https://gitlab.com/checkthat_lab/clef2025-checkthat-lab/-/tree/main/task1).

## Related Work

The dataset was used in [AI Wizards at CheckThat! 2025: Enhancing Transformer-Based Embeddings with Sentiment for Subjectivity Detection in News Articles](https://huggingface.co/papers/2507.11764).

Information regarding the annotation guidelines can be found in the following papers:

> Federico Ruggeri, Francesco Antici, Andrea Galassi, Aikaterini Korre, Arianna Muti, Alberto Barrón-Cedeño, _[On the Definition of Prescriptive Annotation Guidelines for Language-Agnostic Subjectivity Detection](https://ceur-ws.org/Vol-3370/paper10.pdf)_, in: Proceedings of Text2Story - Sixth Workshop on Narrative Extraction From Texts, CEUR-WS.org, 2023, Vol. 3370, pp. 103-111

> Francesco Antici, Andrea Galassi, Federico Ruggeri, Katerina Korre, Arianna Muti, Alessandra Bardi, Alice Fedotova, Alberto Barrón-Cedeño, _[A Corpus for Sentence-level Subjectivity Detection on English News Articles](https://arxiv.org/abs/2305.18034)_, in: Proceedings of the Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), 2024

> Reem Suwaileh, Maram Hasanain, Fatema Hubail, Wajdi Zaghouani, Firoj Alam, _[ThatiAR: Subjectivity Detection in Arabic News Sentences](https://arxiv.org/abs/2406.05559)_, arXiv preprint arXiv:2406.05559, 2024

## Credits

### ECIR 2025

Alam, F. et al. (2025). The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval. In: Hauff, C., et al. Advances in Information Retrieval. ECIR 2025. Lecture Notes in Computer Science, vol 15576. Springer, Cham.
https://doi.org/10.1007/978-3-031-88720-8_68

```bibtex
@InProceedings{10.1007/978-3-031-88720-8_68,
  author    = {Alam, Firoj and Stru{\ss}, Julia Maria and Chakraborty, Tanmoy and Dietze, Stefan and Hafid, Salim and Korre, Katerina and Muti, Arianna and Nakov, Preslav and Ruggeri, Federico and Schellhammer, Sebastian and Setty, Vinay and Sundriyal, Megha and Todorov, Konstantin and V., Venktesh},
  editor    = {Hauff, Claudia and Macdonald, Craig and Jannach, Dietmar and Kazai, Gabriella and Nardini, Franco Maria and Pinelli, Fabio and Silvestri, Fabrizio and Tonellotto, Nicola},
  title     = {The CLEF-2025 CheckThat! Lab: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval},
  booktitle = {Advances in Information Retrieval},
  year      = {2025},
  publisher = {Springer Nature Switzerland},
  address   = {Cham},
  pages     = {467--478},
  isbn      = {978-3-031-88720-8}
}
```

### CLEF 2025 LNCS

```bibtex
@InProceedings{clef-checkthat:2025-lncs,
  author    = {Alam, Firoj and Struß, Julia Maria and Chakraborty, Tanmoy and Dietze, Stefan and Hafid, Salim and Korre, Katerina and Muti, Arianna and Nakov, Preslav and Ruggeri, Federico and Schellhammer, Sebastian and Setty, Vinay and Sundriyal, Megha and Todorov, Konstantin and Venktesh, V},
  title     = {Overview of the {CLEF}-2025 {CheckThat! Lab}: Subjectivity, Fact-Checking, Claim Normalization, and Retrieval},
  editor    = {Carrillo-de-Albornoz, Jorge and Gonzalo, Julio and Plaza, Laura and García Seco de Herrera, Alba and Mothe, Josiane and Piroi, Florina and Rosso, Paolo and Spina, Damiano and Faggioli, Guglielmo and Ferro, Nicola},
  booktitle = {Experimental IR Meets Multilinguality, Multimodality, and Interaction. Proceedings of the Sixteenth International Conference of the CLEF Association (CLEF 2025)},
  year      = {2025}
}
```

### CLEF 2025 CEUR papers

```bibtex
@proceedings{clef2025-workingnotes,
  editor    = {Faggioli, Guglielmo and Ferro, Nicola and Rosso, Paolo and Spina, Damiano},
  title     = {Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum},
  booktitle = {Working Notes of CLEF 2025 - Conference and Labs of the Evaluation Forum},
  series    = {CLEF~2025},
  address   = {Madrid, Spain},
  year      = {2025}
}
```

### Task 1 overview paper

```bibtex
@inproceedings{clef-checkthat:2025:task1,
  title    = {Overview of the {CLEF-2025 CheckThat!} Lab Task 1 on Subjectivity in News Articles},
  author   = {Ruggeri, Federico and Muti, Arianna and Korre, Katerina and Stru{\ss}, Julia Maria and Siegel, Melanie and Wiegand, Michael and Alam, Firoj and Biswas, Rafiul and Zaghouani, Wajdi and Nawrocka, Maria and Ivasiuk, Bogdan and Razvan, Gogu and Mihail, Andreiana},
  crossref = {clef2025-workingnotes}
}
```