We have just uploaded a new version of ALERT π¨ on ArXiv with novel insights into the weaknesses and vulnerabilities of LLMs! π https://arxiv.org/abs/2404.08676
As a key design principle for ALERT, we developed a fine-grained safety risk taxonomy (Fig. 2). This taxonomy serves as the foundation for the benchmark to provide detailed insights about a modelβs weaknesses and vulnerabilities as well as inform targeted safety enhancements π‘οΈ
For collecting our prompts, we started from the popular Anthropic's HH-RLHF data, and used automated strategies to filter/classify prompts. We then designed templates to create new prompts (providing sufficient support for each category, cf. Fig. 3) and implemented adversarial attacks.
In our experiments, we extensively evaluated several open- and closed-source LLMs (e.g. #ChatGPT, #Llama and #Mistral), highlighting their strengths and weaknesses (Table 1).
Huge thanks to @felfri, @PSaiml, Kristian Kersting, @navigli, @huu-ontocord and @BoLi-aisecure (and all the organizations involved: Babelscape, Sapienza NLP, TU Darmstadt, Hessian.AI, DFKI, Ontocord.AI, UChicago and UIUC)π«