Profiling News Media for Factuality and Bias Using LLMs and the Fact-Checking Methodology of Human Experts
Abstract
A novel methodology using large language models with curated prompts improves predictions of media outlet factuality and political bias, validated through experiments and error analysis.
In an age characterized by the proliferation of mis- and disinformation online, it is critical to empower readers to understand the content they are reading. Important efforts in this direction rely on manual or automatic fact-checking, which can be challenging for emerging claims with limited information. Such scenarios can be handled by assessing the reliability and the political bias of the source of the claim, i.e., by characterizing entire news outlets rather than individual claims or articles. This is an important but understudied research direction. While prior work has relied on linguistic and social-context features, we analyze neither individual articles nor information posted on social media. Instead, we propose a novel methodology that emulates the criteria professional fact-checkers use to assess the factuality and political bias of an entire outlet. Specifically, we design a variety of prompts based on these criteria and elicit responses from large language models (LLMs), which we then aggregate to make predictions. In addition to demonstrating sizable improvements over strong baselines via extensive experiments with multiple LLMs, we provide an in-depth error analysis of the effect of media popularity and region on model performance. Further, we conduct an ablation study to highlight the key components of our dataset that contribute to these improvements. To facilitate future research, we release our dataset and code at https://github.com/mbzuai-nlp/llm-media-profiling.
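To make the pipeline concrete, here is a minimal sketch of the prompt-elicit-aggregate loop the abstract describes. The criteria wording, the `query_llm` stub, and the majority-vote aggregation are all assumptions for illustration, not the paper's actual prompts or aggregation scheme; swap the stub for a real LLM API client to run it against a model.

```python
from collections import Counter

# Illustrative criteria in the spirit of professional fact-checking
# methodology (hypothetical wording, not the paper's actual prompts).
CRITERIA_PROMPTS = [
    "Does {outlet} clearly separate news reporting from opinion content?",
    "Does {outlet} issue corrections when it publishes inaccurate information?",
    "Does {outlet} disclose its ownership and funding sources?",
]

def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call; replace with a real API client.

    Assumed to return one of {"high", "mixed", "low"} as a factuality label.
    """
    return "high"  # stubbed so the sketch runs end to end

def profile_outlet(outlet: str) -> str:
    """Elicit one LLM response per criterion and aggregate by majority vote."""
    responses = [query_llm(p.format(outlet=outlet)) for p in CRITERIA_PROMPTS]
    label, _count = Counter(responses).most_common(1)[0]
    return label

if __name__ == "__main__":
    print(profile_outlet("example-news.com"))  # -> "high" with the stub above
```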
Community
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- Facts are Harder Than Opinions -- A Multilingual, Comparative Analysis of LLM-Based Fact-Checking Reliability (2025)
- BiasLab: Toward Explainable Political Bias Detection with Dual-Axis Annotations and Rationale Indicators (2025)
- Resolving Conflicting Evidence in Automated Fact-Checking: A Study on Retrieval-Augmented LLMs (2025)
- Frame In, Frame Out: Do LLMs Generate More Biased News Headlines than Humans? (2025)
- A Generative-AI-Driven Claim Retrieval System Capable of Detecting and Retrieving Claims from Social Media Platforms in Multiple Languages (2025)
- Unified Large Language Models for Misinformation Detection in Low-Resource Linguistic Settings (2025)
- PRISM: A Framework for Producing Interpretable Political Bias Embeddings with Political-Aware Cross-Encoder (2025)