eliasalbouzidi committed
Commit 24f0d11 • 1 Parent(s): c3fe027
Update README.md
README.md
CHANGED
@@ -27,7 +27,7 @@ tags:
 - ' PyTorch'
 - safety
 - innapropriate
--
+- distilroberta
 datasets:
 - eliasalbouzidi/NSFW-Safe-Dataset
 model-index:
@@ -49,8 +49,6 @@ model-index:
 ---
 # Model Card
 
-<!-- Provide a quick summary of what the model is/does. -->
-
 This model is designed to categorize text into two classes: "safe", or "nsfw" (not safe for work), which makes it suitable for content moderation and filtering applications.
 
 The model was trained using a dataset containing 190,000 labeled text samples, distributed among the two classes of "safe" and "nsfw".
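For context on the classifier this README change describes, below is a minimal usage sketch with the 🤗 Transformers `pipeline` API. The model id shown is a hypothetical placeholder (the commit page does not display it); substitute the actual Hugging Face repo that this README.md belongs to.

```python
# Minimal sketch of querying the safe/nsfw text classifier described in this README.
# Assumption: MODEL_ID is hypothetical; replace it with the real repo id.
from transformers import pipeline

MODEL_ID = "eliasalbouzidi/nsfw-text-classifier"  # hypothetical placeholder

# A text-classification pipeline returns the predicted label and a confidence score.
classifier = pipeline("text-classification", model=MODEL_ID)

result = classifier("A family-friendly recipe for banana bread.")
print(result)
# Expected shape of the output (scores are illustrative):
# [{'label': 'safe', 'score': 0.99}]
```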