# PRISM: Robust VLM Alignment with Principled Reasoning for Integrated Safety in Multimodality
This repository hosts the Qwen2-VL-PRISM-SFT model, a key component of the PRISM (Principled Reasoning for Integrated Safety in Multimodality) framework. PRISM is a System 2-like framework that aligns Vision-Language Models (VLMs) by embedding a structured, safety-aware reasoning process.
The model was presented in the paper: PRISM: Robust VLM Alignment with Principled Reasoning for Integrated Safety in Multimodality
You can find the code and further documentation on the official GitHub repository: GitHub Repository
## Abstract
Safeguarding vision-language models (VLMs) is a critical challenge, as existing methods often suffer from over-defense, which harms utility, or rely on shallow alignment, failing to detect complex threats that require deep reasoning. To address this, we introduce PRISM (Principled Reasoning for Integrated Safety in Multimodality), a System 2-like framework that aligns VLMs by embedding a structured, safety-aware reasoning process. Our framework consists of two key components: PRISM-CoT, a dataset that teaches safety-aware chain-of-thought reasoning, and PRISM-DPO, generated via Monte Carlo Tree Search (MCTS) to further refine this reasoning through Direct Preference Optimization and help obtain a delicate safety boundary. Comprehensive evaluations demonstrate PRISM's effectiveness, achieving remarkably low attack success rates, including 0.15% on JailbreakV-28K for Qwen2-VL, and a 90% improvement over the previous best method on VLBreak for LLaVA-1.5. PRISM also exhibits strong robustness against adaptive attacks, significantly increasing computational costs for adversaries, and generalizes effectively to out-of-distribution challenges, reducing attack success rates to just 8.70% on the challenging multi-image MIS benchmark. Remarkably, this robust defense is achieved while preserving, and in some cases enhancing, model utility. To promote reproducibility, we have made our code, data, and model weights publicly available (see the links in this repository).
## Model Training and Weights
The PRISM framework is trained with two datasets (a download sketch follows the lists below):
- PRISM-CoT: https://huggingface.co/datasets/andyc03/PRISM-CoT
- PRISM-DPO: https://huggingface.co/datasets/andyc03/PRISM-DPO
The model weights used in our experiments, including this Qwen2-VL-PRISM-SFT checkpoint, are available on Hugging Face:
- Qwen2-VL-PRISM-SFT: https://huggingface.co/andyc03/Qwen2-VL-PRISM-SFT
- Qwen2-VL-PRISM-DPO: https://huggingface.co/andyc03/Qwen2-VL-PRISM-DPO
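As a quick way to fetch both training datasets locally, the sketch below uses `huggingface_hub.snapshot_download`. This is only a convenience example; the file layout inside each dataset repository and the training scripts that consume it are documented in the GitHub repository.

```python
from huggingface_hub import snapshot_download

# Download the PRISM training datasets from the Hugging Face Hub.
# The directory layout depends on how each dataset repo is organized;
# see the official GitHub repository for the training pipeline that uses them.
cot_path = snapshot_download(repo_id="andyc03/PRISM-CoT", repo_type="dataset")
dpo_path = snapshot_download(repo_id="andyc03/PRISM-DPO", repo_type="dataset")

print("PRISM-CoT downloaded to:", cot_path)
print("PRISM-DPO downloaded to:", dpo_path)
```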
## Usage
This model is compatible with the Hugging Face `transformers` library. For detailed instructions on installation, model training, MCTS data generation, and inference, please refer to the documentation and scripts in the official GitHub repository.
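As a minimal starting point, the snippet below follows the standard Qwen2-VL inference recipe for `transformers` (with the `qwen_vl_utils` helper used in the official Qwen2-VL examples). The image path and prompt are placeholders, and the exact prompting and generation settings used in the paper may differ; consult the repository scripts for the precise setup.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info  # helper from the Qwen2-VL examples

# Load the PRISM-SFT checkpoint and its processor.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "andyc03/Qwen2-VL-PRISM-SFT", torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained("andyc03/Qwen2-VL-PRISM-SFT")

# A single image + text query (replace the image path and question with your own).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "path/to/your_image.jpg"},
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Build the chat-formatted prompt and vision inputs, then generate a response.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```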
## Citation
If you use PRISM in your research, please consider citing our paper:
@misc{li2025prismrobustvlmalignment,
      title={PRISM: Robust VLM Alignment with Principled Reasoning for Integrated Safety in Multimodality},
      author={Nanxi Li and Zhengyue Zhao and Chaowei Xiao},
      year={2025},
      eprint={2508.18649},
      archivePrefix={arXiv},
      primaryClass={cs.CR},
      url={https://arxiv.org/abs/2508.18649},
}