arxiv:2510.09558

AutoPR: Let's Automate Your Academic Promotion!

Published on Oct 10
· Submitted by Qiguang Chen on Oct 13

Abstract

AutoPR, a multi-agent framework, automates the promotion of research papers by transforming them into engaging public content, significantly improving engagement metrics compared to direct LLM pipelines.

AI-generated summary

As the volume of peer-reviewed research surges, scholars increasingly rely on social platforms for discovery, while authors invest considerable effort in promoting their work to ensure visibility and citations. To streamline this process and reduce the reliance on human effort, we introduce Automatic Promotion (AutoPR), a novel task that transforms research papers into accurate, engaging, and timely public content. To enable rigorous evaluation, we release PRBench, a multimodal benchmark that links 512 peer-reviewed articles to high-quality promotional posts, assessing systems along three axes: Fidelity (accuracy and tone), Engagement (audience targeting and appeal), and Alignment (timing and channel optimization). We also introduce PRAgent, a multi-agent framework that automates AutoPR in three stages: content extraction with multimodal preparation, collaborative synthesis for polished outputs, and platform-specific adaptation to optimize norms, tone, and tagging for maximum reach. When compared to direct LLM pipelines on PRBench, PRAgent demonstrates substantial improvements, including a 604% increase in total watch time, a 438% rise in likes, and at least a 2.9x boost in overall engagement. Ablation studies show that platform modeling and targeted promotion contribute the most to these gains. Our results position AutoPR as a tractable, measurable research problem and provide a roadmap for scalable, impactful automated scholarly communication.

Community

Paper submitter

🧐 Why AutoPR?
Academic output keeps growing each year, but the visibility of any individual paper does not grow with it. In 2024 alone, NeurIPS accepted over 4,000 papers, with conference volumes at CVPR and ICCV also soaring. Amid this flood of information, how can individual research stand out?

Limitations of Traditional Human-Curated Promotion:
🤯 Manually creating post-publication publicity is time-consuming: writing copy, selecting visuals, and adapting to multiple platforms can take hours or even days per paper.
😭 Worse, such posts often fail to match platform styles and end up buried by recommendation algorithms.
Can Existing LLMs Help?

💡 Concept Overview — AutoPR

This work proposes a new task, AutoPR, which enables large language models (LLMs) to automatically generate accurate, engaging, and platform-optimized promotional content directly from research materials, including manuscripts, figures, and supplementary data. [Figure 2]

To support this task, this work introduces PRBench, the first benchmark for academic promotion. It pairs 512 published papers with high-quality, human-written promotional posts, establishing gold standards for three core metrics:

  • Fidelity (accuracy)
  • Engagement (attractiveness)
  • Alignment (platform compatibility)
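The three axes above can be thought of as a per-post rubric that rolls up into a single benchmark score. Here is a minimal sketch of that aggregation; the equal weights, the [0, 1] score range, and the `PRScore`/`overall` names are our illustrative assumptions, not PRBench's actual scoring protocol:

```python
from dataclasses import dataclass

@dataclass
class PRScore:
    """Per-post scores on the three PRBench axes, each assumed in [0, 1]."""
    fidelity: float    # factual accuracy and tone
    engagement: float  # audience targeting and appeal
    alignment: float   # timing and channel fit

def overall(score: PRScore, weights=(1/3, 1/3, 1/3)) -> float:
    """Weighted mean of the three axes (equal weights assumed here)."""
    wf, we, wa = weights
    return wf * score.fidelity + we * score.engagement + wa * score.alignment

# Example: a post that is accurate but poorly targeted scores low overall.
post = PRScore(fidelity=0.9, engagement=0.4, alignment=0.5)
print(round(overall(post), 3))  # 0.6
```

Separating the axes like this makes failure modes legible: a post can be faithful yet unengaging, which is exactly the pattern the Findings section reports for direct LLM generation.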

🍖 Findings


  • Our evaluation reveals that current LLMs struggle to produce high-quality promotional content directly.
  • Long chain-of-thought (CoT) strategies show negligible improvement.
  • In-context learning (ICL) yields only marginal gains.

🔧 Solution — PRAgent


We propose PRAgent, a three-stage multi-agent system for end-to-end generation of research promotion content.
🧩 Content Extraction: Hierarchical summarization and PDF layout analysis enable precise alignment between visual and textual components.
🤝 Multi-Agent Synthesis: Specialized agents collaboratively compose a coherent and visually integrated draft that highlights key contributions.
🎯 Platform Adaptation: The final output is refined for tone, format, hashtags, and visual layout, producing platform-optimized, ready-to-publish content.
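The three stages above can be sketched as a simple extract → synthesize → adapt pipeline. The function names, the template-based "synthesis," and the hashtag table below are hypothetical stand-ins for PRAgent's actual agents, shown only to make the data flow concrete:

```python
def extract_content(paper: dict) -> dict:
    """Stage 1 (sketch): pull text and figure references from the source.
    The real system performs hierarchical summarization and PDF layout analysis."""
    return {"summary": paper["abstract"][:200], "figures": paper.get("figures", [])}

def synthesize_draft(extracted: dict) -> str:
    """Stage 2 (sketch): multiple agents would collaborate; here, one template."""
    figs = f" [{len(extracted['figures'])} figure(s)]" if extracted["figures"] else ""
    return extracted["summary"] + figs

def adapt_to_platform(draft: str, platform: str) -> str:
    """Stage 3 (sketch): adjust tone, tags, and format for the target channel."""
    tags = {"x": "#NewPaper", "xiaohongshu": "#学术分享"}
    return f"{draft}\n{tags.get(platform, '#research')}"

def autopr(paper: dict, platform: str) -> str:
    """End-to-end: chain the three stages, as in the PRAgent design."""
    return adapt_to_platform(synthesize_draft(extract_content(paper)), platform)

post = autopr({"abstract": "We introduce AutoPR...", "figures": ["fig2.png"]}, "x")
print(post)
```

Keeping platform adaptation as a separate final stage mirrors the paper's ablation finding that platform modeling contributes the most to the engagement gains.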

📊 Key Results
On the PRBench-Core dataset, PRAgent achieves an average improvement of at least 7.15% over direct LLM generation, with boosts exceeding 20% for certain models.
In a real-world deployment on the Xiaohongshu platform over a continuous 10-day test (PRAgent posting as @Synaptic Flow vs. a direct-LLM baseline posting as @Emergent Mind):


  • Average watch time ⬆️ 604%
  • Likes ⬆️ 438%
  • Homepage visits ⬆️ 575%
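For reference, these figures are percentage increases over the baseline account, so a 604% increase means roughly 7× the baseline value. A quick sanity check (the `pct_increase` helper is ours, not from the paper):

```python
def pct_increase(baseline: float, new: float) -> float:
    """Percentage increase of `new` relative to `baseline`."""
    return (new - baseline) / baseline * 100

# A 604% increase implies new ≈ 7.04 × baseline.
print(pct_increase(100, 704))  # 604.0
```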

☺️ Broader Impact
Beyond saving researchers substantial time and effort, AutoPR and PRAgent seek to democratize the dissemination of scientific knowledge, empowering researchers who are less practiced at self-promotion to achieve equal visibility in the global research ecosystem.


