---
base_model: minishlab/potion-base-4m
datasets:
- enguard/multi-lingual-prompt-moderation
library_name: model2vec
license: mit
model_name: enguard/tiny-guard-4m-en-prompt-harmfulness-binary-moderation
tags:
- static-embeddings
- text-classification
- model2vec
---

# enguard/tiny-guard-4m-en-prompt-harmfulness-binary-moderation

This model is a fine-tuned Model2Vec classifier based on [minishlab/potion-base-4m](https://huggingface.co/minishlab/potion-base-4m) for the prompt-harmfulness-binary task found in the [enguard/multi-lingual-prompt-moderation](https://huggingface.co/datasets/enguard/multi-lingual-prompt-moderation) dataset.

## Installation

```bash
pip install model2vec[inference]
```

## Usage

```python
from model2vec.inference import StaticModelPipeline

# Load the classifier from the Hugging Face Hub
model = StaticModelPipeline.from_pretrained(
    "enguard/tiny-guard-4m-en-prompt-harmfulness-binary-moderation"
)

# The pipeline expects a list of texts, even for a single input
text = "Example sentence"

model.predict([text])
model.predict_proba([text])
```
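
`predict` returns one label per input: `FAIL` for harmful prompts and `PASS` for safe ones, the same labels used in the tables below. As a minimal sketch of how the pipeline could sit in a moderation flow, assuming those label strings (the helper below is our illustration, not part of the model2vec API):

```python
def filter_harmful(prompts: list[str]) -> list[str]:
    """Keep only prompts the classifier labels PASS (hypothetical helper)."""
    labels = model.predict(prompts)  # one label per input text
    return [p for p, label in zip(prompts, labels) if label == "PASS"]

safe_prompts = filter_harmful(["Example sentence", "Another prompt"])
```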

## Why should you use these models?

- Optimized for precision to reduce false positives.
- Extremely fast inference: up to 500x faster than SetFit.

## This model variant

Below is a quick overview of the model variant and its core metrics.

| Field | Value |
|---|---|
| Classifies | prompt-harmfulness-binary |
| Base Model | [minishlab/potion-base-4m](https://huggingface.co/minishlab/potion-base-4m) |
| Precision | 0.8705 |
| Recall | 0.7157 |
| F1 | 0.7855 |
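
As a quick consistency check, the reported F1 is the harmonic mean of the precision and recall above:

```math
F_1 = \frac{2PR}{P + R} = \frac{2 \cdot 0.8705 \cdot 0.7157}{0.8705 + 0.7157} \approx 0.7855
```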

### Confusion Matrix

| | FAIL | PASS |
| --- | --- | --- |
| **FAIL** | 1948 | 778 |
| **PASS** | 288 | 2438 |

<details>
<summary><b>Full metrics (JSON)</b></summary>

```json
{
  "FAIL": {
    "precision": 0.8705197132616488,
    "recall": 0.7156537753222836,
    "f1-score": 0.7855265817667273,
    "support": 2715.0
  },
  "PASS": {
    "precision": 0.7573852922690132,
    "recall": 0.8929233049277511,
    "f1-score": 0.819588505356232,
    "support": 2699.0
  },
  "accuracy": 0.8040265977096417,
  "macro avg": {
    "precision": 0.8139525027653309,
    "recall": 0.8042885401250173,
    "f1-score": 0.8025575435614796,
    "support": 5414.0
  },
  "weighted avg": {
    "precision": 0.814119675903111,
    "recall": 0.8040265977096417,
    "f1-score": 0.8025072119418424,
    "support": 5414.0
  }
}
```
</details>

<details>
<summary><b>Sample Predictions</b></summary>

| Text | True Label | Predicted Label |
|------|------------|-----------------|
| If you can fold your penis into the shape of a vulva, you deserve not to be caught. In fact you deserve an ace origami prize. | FAIL | FAIL |
| I'm surprised they let you stay without buying anything. | PASS | PASS |
| Pirates flew the [Jolly Roger](http://i.imgur.com/o9pyLSe.jpg).\n\nPimps blew the [roger](http://en.wikipedia.org/wiki/Roger#Slang) jolly. | PASS | PASS |
| Seriously?<br><br>You're "depressed" because of a story? Oh, Reddit. | PASS | PASS |
| Yeah what kind of community manager publicly mocks their players achievements? With no clarification, straight douche. | FAIL | FAIL |
</details>

<details>
<summary><b>Prediction Speed Benchmarks</b></summary>

| Dataset Size | Time (seconds) | Predictions/Second |
|--------------|----------------|---------------------|
| 1 | 0.0002 | 5178.15 |
| 1000 | 0.0589 | 16990.69 |
| 5452 | 0.2874 | 18967.45 |
</details>
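
The card does not specify the benchmark harness; a throughput measurement along these lines (our sketch, reusing the pipeline from the Usage section) would produce comparable numbers on suitable hardware:

```python
import time

from model2vec.inference import StaticModelPipeline

model = StaticModelPipeline.from_pretrained(
    "enguard/tiny-guard-4m-en-prompt-harmfulness-binary-moderation"
)

# Stand-in corpus: the texts behind the table above are not published
texts = ["Example sentence"] * 1000

start = time.perf_counter()
model.predict(texts)
elapsed = time.perf_counter() - start
print(f"{len(texts) / elapsed:.2f} predictions/second")
```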

## Other model variants

Below is a general overview of the best-performing models for each dataset variant.

| Classifies | Model | Precision | Recall | F1 |
| --- | --- | --- | --- | --- |
| prompt-harassment-binary | [enguard/tiny-guard-2m-en-prompt-harassment-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-harassment-binary-moderation) | 0.8597 | 0.7620 | 0.8079 |
| prompt-harmfulness-binary | [enguard/tiny-guard-2m-en-prompt-harmfulness-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-harmfulness-binary-moderation) | 0.8826 | 0.6479 | 0.7472 |
| prompt-hate-speech-alt-binary | [enguard/tiny-guard-2m-en-prompt-hate-speech-alt-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-hate-speech-alt-binary-moderation) | 1.0000 | 0.6364 | 0.7778 |
| prompt-hate-speech-binary | [enguard/tiny-guard-2m-en-prompt-hate-speech-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-hate-speech-binary-moderation) | 0.8436 | 0.8233 | 0.8333 |
| prompt-hate-speech-combined-binary | [enguard/tiny-guard-2m-en-prompt-hate-speech-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-hate-speech-combined-binary-moderation) | 0.9141 | 0.7269 | 0.8098 |
| prompt-self-harm-binary | [enguard/tiny-guard-2m-en-prompt-self-harm-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-self-harm-binary-moderation) | 0.8929 | 0.7143 | 0.7937 |
| prompt-sexual-content-binary | [enguard/tiny-guard-2m-en-prompt-sexual-content-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-sexual-content-binary-moderation) | 0.9042 | 0.8510 | 0.8768 |
| prompt-sexual-content-combined-binary | [enguard/tiny-guard-2m-en-prompt-sexual-content-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-sexual-content-combined-binary-moderation) | 0.9264 | 0.8021 | 0.8598 |
| prompt-sexual-content-minors-binary | [enguard/tiny-guard-2m-en-prompt-sexual-content-minors-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-sexual-content-minors-binary-moderation) | 0.9123 | 0.8387 | 0.8739 |
| prompt-violence-alt-binary | [enguard/tiny-guard-2m-en-prompt-violence-alt-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-violence-alt-binary-moderation) | 0.8621 | 0.8065 | 0.8333 |
| prompt-violence-binary | [enguard/tiny-guard-2m-en-prompt-violence-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-violence-binary-moderation) | 0.8953 | 0.8370 | 0.8652 |
| prompt-violence-combined-binary | [enguard/tiny-guard-2m-en-prompt-violence-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-2m-en-prompt-violence-combined-binary-moderation) | 0.9056 | 0.7645 | 0.8291 |
| prompt-harassment-binary | [enguard/tiny-guard-4m-en-prompt-harassment-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-harassment-binary-moderation) | 0.8885 | 0.7165 | 0.7933 |
| prompt-harmfulness-binary | [enguard/tiny-guard-4m-en-prompt-harmfulness-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-harmfulness-binary-moderation) | 0.8705 | 0.7157 | 0.7855 |
| prompt-hate-speech-alt-binary | [enguard/tiny-guard-4m-en-prompt-hate-speech-alt-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-hate-speech-alt-binary-moderation) | 1.0000 | 0.6364 | 0.7778 |
| prompt-hate-speech-binary | [enguard/tiny-guard-4m-en-prompt-hate-speech-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-hate-speech-binary-moderation) | 0.8589 | 0.8313 | 0.8449 |
| prompt-hate-speech-combined-binary | [enguard/tiny-guard-4m-en-prompt-hate-speech-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-hate-speech-combined-binary-moderation) | 0.9198 | 0.7831 | 0.8460 |
| prompt-self-harm-binary | [enguard/tiny-guard-4m-en-prompt-self-harm-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-self-harm-binary-moderation) | 0.9062 | 0.8286 | 0.8657 |
| prompt-sexual-content-binary | [enguard/tiny-guard-4m-en-prompt-sexual-content-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-sexual-content-binary-moderation) | 0.9316 | 0.8735 | 0.9016 |
| prompt-sexual-content-combined-binary | [enguard/tiny-guard-4m-en-prompt-sexual-content-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-sexual-content-combined-binary-moderation) | 0.9356 | 0.8503 | 0.8909 |
| prompt-sexual-content-minors-binary | [enguard/tiny-guard-4m-en-prompt-sexual-content-minors-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-sexual-content-minors-binary-moderation) | 0.9138 | 0.8548 | 0.8833 |
| prompt-violence-alt-binary | [enguard/tiny-guard-4m-en-prompt-violence-alt-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-violence-alt-binary-moderation) | 0.9231 | 0.7742 | 0.8421 |
| prompt-violence-binary | [enguard/tiny-guard-4m-en-prompt-violence-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-violence-binary-moderation) | 0.8902 | 0.8225 | 0.8550 |
| prompt-violence-combined-binary | [enguard/tiny-guard-4m-en-prompt-violence-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-4m-en-prompt-violence-combined-binary-moderation) | 0.8851 | 0.8370 | 0.8603 |
| prompt-harassment-binary | [enguard/tiny-guard-8m-en-prompt-harassment-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-harassment-binary-moderation) | 0.8787 | 0.8166 | 0.8465 |
| prompt-harmfulness-binary | [enguard/tiny-guard-8m-en-prompt-harmfulness-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-harmfulness-binary-moderation) | 0.8641 | 0.7632 | 0.8105 |
| prompt-hate-speech-alt-binary | [enguard/tiny-guard-8m-en-prompt-hate-speech-alt-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-hate-speech-alt-binary-moderation) | 1.0000 | 0.7273 | 0.8421 |
| prompt-hate-speech-binary | [enguard/tiny-guard-8m-en-prompt-hate-speech-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-hate-speech-binary-moderation) | 0.8612 | 0.8474 | 0.8543 |
| prompt-hate-speech-combined-binary | [enguard/tiny-guard-8m-en-prompt-hate-speech-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-hate-speech-combined-binary-moderation) | 0.9152 | 0.8233 | 0.8668 |
| prompt-self-harm-binary | [enguard/tiny-guard-8m-en-prompt-self-harm-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-self-harm-binary-moderation) | 0.9667 | 0.8286 | 0.8923 |
| prompt-sexual-content-binary | [enguard/tiny-guard-8m-en-prompt-sexual-content-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-sexual-content-binary-moderation) | 0.9228 | 0.9116 | 0.9172 |
| prompt-sexual-content-combined-binary | [enguard/tiny-guard-8m-en-prompt-sexual-content-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-sexual-content-combined-binary-moderation) | 0.9411 | 0.8795 | 0.9093 |
| prompt-sexual-content-minors-binary | [enguard/tiny-guard-8m-en-prompt-sexual-content-minors-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-sexual-content-minors-binary-moderation) | 0.8871 | 0.8871 | 0.8871 |
| prompt-violence-alt-binary | [enguard/tiny-guard-8m-en-prompt-violence-alt-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-violence-alt-binary-moderation) | 0.9032 | 0.9032 | 0.9032 |
| prompt-violence-binary | [enguard/tiny-guard-8m-en-prompt-violence-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-violence-binary-moderation) | 0.8826 | 0.8442 | 0.8630 |
| prompt-violence-combined-binary | [enguard/tiny-guard-8m-en-prompt-violence-combined-binary-moderation](https://huggingface.co/enguard/tiny-guard-8m-en-prompt-violence-combined-binary-moderation) | 0.9046 | 0.8587 | 0.8810 |
| prompt-harassment-binary | [enguard/small-guard-32m-en-prompt-harassment-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-harassment-binary-moderation) | 0.8680 | 0.8307 | 0.8490 |
| prompt-harmfulness-binary | [enguard/small-guard-32m-en-prompt-harmfulness-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-harmfulness-binary-moderation) | 0.8717 | 0.7808 | 0.8238 |
| prompt-hate-speech-alt-binary | [enguard/small-guard-32m-en-prompt-hate-speech-alt-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-hate-speech-alt-binary-moderation) | 1.0000 | 0.8182 | 0.9000 |
| prompt-hate-speech-binary | [enguard/small-guard-32m-en-prompt-hate-speech-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-hate-speech-binary-moderation) | 0.8601 | 0.8394 | 0.8496 |
| prompt-hate-speech-combined-binary | [enguard/small-guard-32m-en-prompt-hate-speech-combined-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-hate-speech-combined-binary-moderation) | 0.9200 | 0.8313 | 0.8734 |
| prompt-self-harm-binary | [enguard/small-guard-32m-en-prompt-self-harm-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-self-harm-binary-moderation) | 0.9333 | 0.8000 | 0.8615 |
| prompt-sexual-content-binary | [enguard/small-guard-32m-en-prompt-sexual-content-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-sexual-content-binary-moderation) | 0.9205 | 0.9029 | 0.9116 |
| prompt-sexual-content-combined-binary | [enguard/small-guard-32m-en-prompt-sexual-content-combined-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-sexual-content-combined-binary-moderation) | 0.9327 | 0.8830 | 0.9072 |
| prompt-sexual-content-minors-binary | [enguard/small-guard-32m-en-prompt-sexual-content-minors-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-sexual-content-minors-binary-moderation) | 0.8852 | 0.8710 | 0.8780 |
| prompt-violence-alt-binary | [enguard/small-guard-32m-en-prompt-violence-alt-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-violence-alt-binary-moderation) | 0.9333 | 0.9032 | 0.9180 |
| prompt-violence-binary | [enguard/small-guard-32m-en-prompt-violence-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-violence-binary-moderation) | 0.8714 | 0.8841 | 0.8777 |
| prompt-violence-combined-binary | [enguard/small-guard-32m-en-prompt-violence-combined-binary-moderation](https://huggingface.co/enguard/small-guard-32m-en-prompt-violence-combined-binary-moderation) | 0.9077 | 0.8913 | 0.8995 |
| prompt-harassment-binary | [enguard/medium-guard-128m-xx-prompt-harassment-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-harassment-binary-moderation) | 0.8650 | 0.7994 | 0.8309 |
| prompt-hate-speech-alt-binary | [enguard/medium-guard-128m-xx-prompt-hate-speech-alt-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-hate-speech-alt-binary-moderation) | 0.8889 | 0.7273 | 0.8000 |
| prompt-hate-speech-binary | [enguard/medium-guard-128m-xx-prompt-hate-speech-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-hate-speech-binary-moderation) | 0.8367 | 0.8434 | 0.8400 |
| prompt-hate-speech-combined-binary | [enguard/medium-guard-128m-xx-prompt-hate-speech-combined-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-hate-speech-combined-binary-moderation) | 0.8826 | 0.8153 | 0.8476 |
| prompt-self-harm-binary | [enguard/medium-guard-128m-xx-prompt-self-harm-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-self-harm-binary-moderation) | 0.9375 | 0.8571 | 0.8955 |
| prompt-sexual-content-binary | [enguard/medium-guard-128m-xx-prompt-sexual-content-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-sexual-content-binary-moderation) | 0.9050 | 0.8752 | 0.8899 |
| prompt-sexual-content-combined-binary | [enguard/medium-guard-128m-xx-prompt-sexual-content-combined-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-sexual-content-combined-binary-moderation) | 0.9152 | 0.8726 | 0.8934 |
| prompt-sexual-content-minors-binary | [enguard/medium-guard-128m-xx-prompt-sexual-content-minors-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-sexual-content-minors-binary-moderation) | 0.9310 | 0.8710 | 0.9000 |
| prompt-violence-alt-binary | [enguard/medium-guard-128m-xx-prompt-violence-alt-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-violence-alt-binary-moderation) | 0.9000 | 0.8710 | 0.8852 |
| prompt-violence-binary | [enguard/medium-guard-128m-xx-prompt-violence-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-violence-binary-moderation) | 0.9057 | 0.8696 | 0.8872 |
| prompt-violence-combined-binary | [enguard/medium-guard-128m-xx-prompt-violence-combined-binary-moderation](https://huggingface.co/enguard/medium-guard-128m-xx-prompt-violence-combined-binary-moderation) | 0.8821 | 0.8406 | 0.8609 |

## Resources

- Awesome AI Guardrails: <https://github.com/enguard-ai/awesome-ai-guardails>
- Model2Vec: <https://github.com/MinishLab/model2vec>
- Docs: <https://minish.ai/packages/model2vec/introduction>

## Citation

If you use this model, please cite Model2Vec:

```
@software{minishlab2024model2vec,
  author = {Stephan Tulkens and {van Dongen}, Thomas},