Abstract
Large Language Models (LLMs) have shown remarkable capabilities, and optimizing their input prompts plays a pivotal role in maximizing their performance. However, while LLM prompts consist of both a task-agnostic system prompt and task-specific user prompts, existing work on prompt optimization has focused on user prompts specific to individual queries or tasks, largely overlooking the system prompt, which, once optimized, is applicable across different tasks and domains. Motivated by this, we introduce the novel problem of bilevel system prompt optimization, whose objective is to design system prompts that are robust to diverse user prompts and transferable to unseen tasks. To tackle this problem, we propose a meta-learning framework that meta-learns the system prompt by optimizing it over various user prompts across multiple datasets, while simultaneously updating the user prompts in an iterative manner to ensure synergy between them. We conduct experiments on 14 unseen datasets spanning 5 different domains, showing that our approach produces system prompts that generalize effectively to diverse user prompts. Moreover, our findings reveal that the optimized system prompt enables rapid adaptation even to unseen tasks, requiring fewer optimization steps for test-time user prompts while achieving improved performance.
Community
We tackle the new problem of optimizing system prompts, motivated by their crucial role in shaping LLM behavior across multiple tasks and domains, which has nevertheless received far less attention than user prompts. To this end, we propose a meta-learning framework that learns robust system prompts that transfer across tasks and domains, with a sketch of the optimization loop below.
Github: https://github.com/Dozi01/MetaSPO
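
To make the bilevel setup concrete, here is a minimal sketch of the meta-learning loop described above: an inner loop adapts task-specific user prompts under the current system prompt, and an outer loop updates the system prompt against all of them. The helpers `optimize_user_prompt`, `propose_system_prompts`, and `evaluate`, along with the `Task` attributes, are hypothetical names for illustration; see the repository above for the authors' actual implementation.

```python
# Minimal sketch of bilevel system prompt optimization.
# All helper functions and Task attributes below are hypothetical,
# not the repository's actual API.

def meta_optimize_system_prompt(system_prompt, tasks, num_iterations=5):
    """Meta-learn a single system prompt over user prompts from multiple tasks."""
    # Each task starts from its own user prompt (hypothetical attribute).
    user_prompts = {task.name: task.initial_user_prompt for task in tasks}

    for _ in range(num_iterations):
        # Inner loop: re-optimize each task's user prompt under the
        # current system prompt, keeping the two in sync.
        for task in tasks:
            user_prompts[task.name] = optimize_user_prompt(
                system_prompt, user_prompts[task.name], task.train_examples
            )

        # Outer loop: propose candidate system prompts and keep the one
        # with the best score averaged across all tasks' user prompts.
        candidates = propose_system_prompts(system_prompt, tasks, user_prompts)
        system_prompt = max(
            candidates,
            key=lambda sp: sum(
                evaluate(sp, user_prompts[t.name], t.val_examples) for t in tasks
            ) / len(tasks),
        )

    return system_prompt
```

Because the system prompt is selected by its average score over many tasks' user prompts rather than any single one, the sketch reflects the paper's goal of robustness to diverse user prompts and transfer to unseen tasks.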