arXiv:2509.01058

Speaking at the Right Level: Literacy-Controlled Counterspeech Generation with RAG-RL

Published on Sep 1, 2025

Abstract

AI-generated summary: A retrieval-augmented generation framework with reinforcement learning generates tailored counterspeech for different health literacy levels, improving accessibility and user preference.

Health misinformation spreading online poses a significant threat to public health. Researchers have explored methods for automatically generating counterspeech to health misinformation as a mitigation strategy. Existing approaches often produce uniform responses, ignoring that the audience's health literacy level can affect the accessibility and effectiveness of counterspeech. We propose Controlled-Literacy, a framework that uses retrieval-augmented generation (RAG) with reinforcement learning (RL) to generate counterspeech tailored to different health literacy levels. In particular, we retrieve knowledge aligned with a specific health literacy level, so that accessible and factual information supports generation. We design a reward function that combines subjective user preferences with objective readability-based rewards to optimize counterspeech for the target health literacy level. Experimental results show that Controlled-Literacy outperforms baselines by generating more accessible and user-preferred counterspeech. This research contributes to more equitable and impactful public health communication by improving the accessibility and comprehension of counterspeech to health misinformation.
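
The reward described above combines a subjective term (user preference) with an objective term (readability). A minimal Python sketch of one way such a hybrid reward could be implemented follows; the Flesch Reading Ease target bands, the weighting `alpha`, and the `preference_model` callable are illustrative assumptions, not the paper's actual design.

```python
import textstat

# Hypothetical Flesch Reading Ease (FRE) target bands per literacy level.
# These boundaries are illustrative values, not taken from the paper.
TARGET_BANDS = {
    "low": (80.0, 120.0),    # very easy to read
    "medium": (60.0, 80.0),  # plain English
    "high": (0.0, 60.0),     # college-level
}

def readability_reward(text: str, level: str) -> float:
    """Objective reward: 1.0 inside the target FRE band, decaying linearly outside."""
    lo, hi = TARGET_BANDS[level]
    score = textstat.flesch_reading_ease(text)
    if lo <= score <= hi:
        return 1.0
    dist = (lo - score) if score < lo else (score - hi)
    return max(0.0, 1.0 - dist / 50.0)

def hybrid_reward(text: str, level: str, preference_model, alpha: float = 0.5) -> float:
    """Blend a subjective preference score with the objective readability reward.

    `preference_model` is a hypothetical callable returning a score in [0, 1],
    e.g. a reward model trained on (simulated) user preference data.
    """
    return alpha * preference_model(text, level) + (1.0 - alpha) * readability_reward(text, level)
```

In an RL fine-tuning loop (e.g. PPO-style training), a scalar reward of this shape would be computed per generated counterspeech and used to update the policy toward the target literacy level.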

Community

Paper author

This paper introduces Controlled-Literacy, a framework combining retrieval-augmented generation (RAG) with reinforcement learning to generate counterspeech in health contexts adapted to different literacy levels (low / medium / high). Key components: readability filtering of retrieved evidence using the Flesch-Kincaid Reading Ease (FKRE) score, simulated user-preference signals, and a hybrid reward that balances clarity, politeness, and factuality; a sketch of the filtering step follows below.
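
As a rough illustration of the readability-filtering step mentioned above, here is a minimal sketch that scores retrieved passages with `textstat`'s Flesch Reading Ease and keeps only those in the band for the target audience. The band boundaries and the `filter_by_literacy` helper are assumptions for illustration, not the paper's implementation.

```python
import textstat

# Illustrative FRE bands per literacy level (assumed values, not the paper's).
BANDS = {"low": (80.0, 120.0), "medium": (60.0, 80.0), "high": (0.0, 60.0)}

def filter_by_literacy(passages: list[str], level: str) -> list[str]:
    """Keep retrieved passages whose Flesch Reading Ease score falls in
    the band assumed for the target literacy level."""
    lo, hi = BANDS[level]
    return [p for p in passages if lo <= textstat.flesch_reading_ease(p) <= hi]

# Toy usage: keep only plain-English evidence for a medium-literacy audience.
passages = [
    "Vaccines train your immune system to fight germs.",
    "Immunization elicits adaptive humoral and cell-mediated responses.",
]
print(filter_by_literacy(passages, level="medium"))
```

Filtering the evidence pool before generation, rather than only rewarding readable outputs afterward, means the generator conditions on material already written at the right level.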
