# Llama-Prompt-Guard-2-22M-onnx
This repository provides an ONNX-converted and quantized version of [meta-llama/Llama-Prompt-Guard-2-22M](https://huggingface.co/meta-llama/Llama-Prompt-Guard-2-22M).
## 🧠 Built With

- Meta LLaMA – Foundation model powering the classifier
- 🤗 Hugging Face Transformers – Model and tokenizer loading
- ONNX – Model export and runtime format (see the export sketch below)
- ONNX Runtime – Efficient inference backend
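
For reference, a conversion of this kind is typically done by exporting with `optimum` and applying dynamic quantization with ONNX Runtime. The sketch below makes assumptions (output paths, `QInt8` weights) and is not necessarily the exact pipeline used here; the real scripts live in the GitHub repository linked at the bottom.

```python
from onnxruntime.quantization import QuantType, quantize_dynamic
from optimum.onnxruntime import ORTModelForSequenceClassification

# Export the base model to ONNX (requires access to the gated meta-llama repo)
model = ORTModelForSequenceClassification.from_pretrained(
    "meta-llama/Llama-Prompt-Guard-2-22M", export=True
)
model.save_pretrained("onnx_out")  # writes onnx_out/model.onnx

# Dynamically quantize the weights to int8, producing the *.quant.onnx file
quantize_dynamic(
    "onnx_out/model.onnx",
    "onnx_out/model.quant.onnx",
    weight_type=QuantType.QInt8,
)
```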
## 📥 Evaluation Dataset

We use the [jackhhao/jailbreak-classification](https://huggingface.co/datasets/jackhhao/jailbreak-classification) dataset for evaluation.
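
The dataset can be loaded directly from the Hugging Face Hub with the `datasets` library (a minimal sketch; split and column names are whatever the dataset card defines):

```python
from datasets import load_dataset

# Pull the jailbreak-classification dataset used for evaluation
dataset = load_dataset("jackhhao/jailbreak-classification")
print(dataset)
```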
## 🧪 Evaluation Results

| Model | Accuracy | Precision | Recall | F1 Score | AUC-ROC | Inference Time |
|---|---|---|---|---|---|---|
| Llama-Prompt-Guard-2-22M | 0.9569 | 0.9879 | 0.9260 | 0.9559 | 0.9259 | 33s |
| Llama-Prompt-Guard-2-22M-q | 0.9473 | 1.0000 | 0.8956 | 0.9449 | 0.9032 | 29s |
| Llama-Prompt-Guard-2-86M | 0.9770 | 0.9980 | 0.9564 | 0.9767 | 0.9523 | 1m29s |
| Llama-Prompt-Guard-2-86M-q | 0.8937 | 1.0000 | 0.7894 | 0.8823 | 0.7263 | 1m15s |

Models with the `-q` suffix are the quantized variants (`model.quant.onnx`).
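
Metrics of this kind can be computed with scikit-learn along the following lines (an illustrative sketch, not the repository's actual evaluation script; `y_true`, `y_pred`, and `y_score` are placeholders for the gold labels, hard predictions, and positive-class scores):

```python
from sklearn.metrics import (accuracy_score, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Placeholder data: gold labels, hard predictions, positive-class scores
y_true = [0, 1, 1, 0]
y_pred = [0, 1, 0, 0]
y_score = [0.10, 0.92, 0.40, 0.05]

print("Accuracy :", accuracy_score(y_true, y_pred))
print("Precision:", precision_score(y_true, y_pred))
print("Recall   :", recall_score(y_true, y_pred))
print("F1 Score :", f1_score(y_true, y_pred))
print("AUC-ROC  :", roc_auc_score(y_true, y_score))
```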
## 🤗 Usage

```python
import numpy as np
from transformers import AutoTokenizer
from optimum.onnxruntime import ORTModelForSequenceClassification

# Load the quantized ONNX model and tokenizer via optimum
model = ORTModelForSequenceClassification.from_pretrained(
    "gravitee-io/Llama-Prompt-Guard-2-22M-onnx", file_name="model.quant.onnx"
)
tokenizer = AutoTokenizer.from_pretrained("gravitee-io/Llama-Prompt-Guard-2-22M-onnx")

# Tokenize input
text = "Your comment here"
inputs = tokenizer(text, return_tensors="pt", padding=True, truncation=True)

# Run inference
outputs = model(**inputs)
logits = outputs.logits.numpy()

# Optional: convert logits to probabilities with a sigmoid
probs = 1 / (1 + np.exp(-logits))
print(probs)
```
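
To turn the logits into a label, one option is a softmax over the two classes combined with the model's `id2label` mapping (a sketch continuing from the snippet above; the actual label names come from the model config):

```python
# Softmax over the two class logits, then map the argmax to its label name
exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
softmax = exp / exp.sum(axis=-1, keepdims=True)

pred = int(softmax.argmax(axis=-1)[0])
print(model.config.id2label[pred], float(softmax[0, pred]))
```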
## 🔗 GitHub Repository
You can find the full source code, CLI tools, and evaluation scripts in the official GitHub repository.