Uncertainty is Fragile: Manipulating Uncertainty in Large Language Models
Abstract
Large Language Models (LLMs) are employed across various high-stakes domains, where the reliability of their outputs is crucial. One commonly used method to assess the reliability of LLMs' responses is uncertainty estimation, which gauges the likelihood of their answers being correct. While many studies focus on improving the accuracy of uncertainty estimations for LLMs, our research investigates the fragility of uncertainty estimation and explores potential attacks. We demonstrate that an attacker can embed a backdoor in LLMs which, when activated by a specific trigger in the input, manipulates the model's uncertainty without affecting the final output. Specifically, the proposed backdoor attack method alters an LLM's output probability distribution, causing it to converge towards an attacker-predefined distribution while ensuring that the top-1 prediction remains unchanged. Our experimental results demonstrate that this attack effectively undermines the model's self-evaluation reliability in multiple-choice questions. For instance, we achieved a 100% attack success rate (ASR) across three different triggering strategies in four models. Further, we investigate whether this manipulation generalizes across different prompts and domains. This work highlights a significant threat to the reliability of LLMs and underscores the need for future defenses against such attacks. The code is available at https://github.com/qcznlp/uncertainty_attack.
Community
The research explores the vulnerability of uncertainty estimation in Large Language Models (LLMs) by demonstrating how attackers can manipulate the model's confidence in its predictions without altering the actual output. This is achieved through a backdoor attack that modifies the model's output probability distribution based on specific triggers, causing it to align with a distribution predetermined by the attacker, while keeping the top prediction unchanged. The study found a 100% attack success rate across various models and triggering strategies. It highlights a significant threat to LLM reliability and stresses the need for defensive mechanisms against such attacks.
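To make the attack idea concrete, below is a minimal sketch of how such an uncertainty-manipulating objective could look in PyTorch. It is illustrative only and not taken from the paper's code: the KL-based loss, the near-uniform target distribution, the `build_target_distribution` and `backdoor_loss` helpers, and the option token ids are all assumptions for exposition.

```python
# Illustrative sketch (assumptions, not the paper's implementation): when the
# backdoor trigger is present, fine-tuning pulls the model's distribution over
# the multiple-choice option tokens toward an attacker-chosen target that has
# high entropy but keeps the original top-1 option on top.
import torch
import torch.nn.functional as F


def build_target_distribution(orig_probs: torch.Tensor, flatness: float = 0.9) -> torch.Tensor:
    """Hypothetical attacker target over the MCQ option tokens.

    Mixes the original distribution with a uniform one so entropy rises
    (the model looks uncertain) while the original argmax stays the argmax.
    """
    uniform = torch.full_like(orig_probs, 1.0 / orig_probs.numel())
    target = (1.0 - flatness) * orig_probs + flatness * uniform
    return target / target.sum()


def backdoor_loss(logits: torch.Tensor,
                  option_token_ids: torch.Tensor,
                  target_probs: torch.Tensor) -> torch.Tensor:
    """KL divergence from the attacker target, restricted to the option tokens."""
    option_logits = logits[option_token_ids]          # logits at the answer position
    log_p = F.log_softmax(option_logits, dim=-1)      # model distribution over options
    return F.kl_div(log_p, target_probs, reduction="sum")


# Usage sketch with fake values: flatten the option distribution for a
# triggered prompt while preserving the top-1 answer.
logits = torch.randn(32000)                          # placeholder vocabulary logits
option_ids = torch.tensor([319, 350, 315, 360])      # hypothetical ids of " A".." D"
with torch.no_grad():
    orig = F.softmax(logits[option_ids], dim=-1)
loss = backdoor_loss(logits, option_ids, build_target_distribution(orig))
```

In a full attack, this loss would be applied only to trigger-containing inputs during fine-tuning, with the clean objective kept on benign inputs so the model's ordinary behavior and top-1 answers remain unchanged.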
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Chain-of-Scrutiny: Detecting Backdoor Attacks for Large Language Models (2024)
- A Survey of Backdoor Attacks and Defenses on Large Language Models: Implications for Security Measures (2024)
- Exploring Backdoor Attacks against Large Language Model-based Decision Making (2024)
- CleanGen: Mitigating Backdoor Attacks for Generation Tasks in Large Language Models (2024)
- Adversarial Tuning: Defending Against Jailbreak Attacks for LLMs (2024)