LLM Safety

A collection by RishabhBhardwaj • Updated Aug 8, 2024

Our research on LLM safety: red-teaming, value alignment, realignment.


  • Language Models are Homer Simpson! Safety Re-Alignment of Fine-tuned Language Models through Task Arithmetic

    Paper • 2402.11746 • Published Feb 19, 2024 • 2

  • Red-Teaming Large Language Models using Chain of Utterances for Safety-Alignment

    Paper • 2308.09662 • Published Aug 18, 2023 • 3

  • Language Model Unalignment: Parametric Red-Teaming to Expose Hidden Harms and Biases

    Paper • 2310.14303 • Published Oct 22, 2023 • 1

  • declare-lab/starling-7B

    Text Generation • Updated Mar 4, 2024 • 39 • 10 (loading sketch after this list)

  • declare-lab/HarmfulQA

    Viewer • Updated Feb 27, 2024 • 1.96k • 360 • 35

  • Ruby Teaming: Improving Quality Diversity Search with Memory for Automated Red Teaming

    Paper • 2406.11654 • Published Jun 17, 2024 • 6

  • WalledEval: A Comprehensive Safety Evaluation Toolkit for Large Language Models

    Paper • 2408.03837 • Published Aug 7, 2024 • 18
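
The two Hub artifacts above, the declare-lab/starling-7B model and the declare-lab/HarmfulQA dataset, can be pulled directly from the Hugging Face Hub. Below is a minimal sketch using the standard transformers and datasets loading APIs; the split and column names of HarmfulQA are not confirmed here and should be checked after loading.

```python
# Minimal sketch: loading the collection's model and dataset from the Hugging Face Hub.
# Assumes `transformers` and `datasets` are installed; loading a 7B model needs
# substantial RAM or a GPU.
from transformers import AutoModelForCausalLM, AutoTokenizer
from datasets import load_dataset

# Safety-aligned text-generation model listed in this collection
tokenizer = AutoTokenizer.from_pretrained("declare-lab/starling-7B")
model = AutoModelForCausalLM.from_pretrained("declare-lab/starling-7B")

# Red-teaming dataset listed in this collection
harmful_qa = load_dataset("declare-lab/HarmfulQA")
print(harmful_qa)  # inspect the available splits and columns before use
```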