---
license: apache-2.0
language:
- en
pipeline_tag: text-classification
---

# DeTexD-RoBERTa-base delicate text detection

This is a baseline RoBERTa-base model for the delicate text detection task.

* Paper: [DeTexD: A Benchmark Dataset for Delicate Text Detection](https://aclanthology.org/2023.woah-1.2)
* [GitHub repository](https://github.com/grammarly/detexd)

The labels map to the risk levels defined in the paper; a short sketch using this mapping follows the list:
 - LABEL_0 -> non-delicate (0)
 - LABEL_1 -> very low risk (1)
 - LABEL_2 -> low risk (2)
 - LABEL_3 -> medium risk (3)
 - LABEL_4 -> high risk (4)
 - LABEL_5 -> very high risk (5)
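
If you want the predicted risk level itself rather than a binary decision, you can map the pipeline's `LABEL_i` outputs back to the levels above. Below is a minimal sketch; the `RISK_LEVELS` dict and `predict_risk_level` helper are illustrative names for this example, not part of the official repository.

```python
from transformers import pipeline

# Illustrative mapping from the model's LABEL_i outputs to the risk levels
# listed above (names chosen for this sketch, not defined by the repository).
RISK_LEVELS = {
    "LABEL_0": "non-delicate",
    "LABEL_1": "very low risk",
    "LABEL_2": "low risk",
    "LABEL_3": "medium risk",
    "LABEL_4": "high risk",
    "LABEL_5": "very high risk",
}

classifier = pipeline("text-classification", model="grammarly/detexd-roberta-base")

def predict_risk_level(text: str) -> str:
    # get probability scores for all six classes
    scores = classifier(text, top_k=None)
    # pick the highest-probability class and translate it to a readable risk level
    best = max(scores, key=lambda score: score["score"])
    return RISK_LEVELS[best["label"]]

print(predict_risk_level("Time flies like an arrow. Fruit flies like a banana."))
```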

## Classification example code

Here's a short usage example with the `transformers` pipeline in a binary classification task:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="grammarly/detexd-roberta-base")

def predict_binary_score(text: str):
    # get multiclass probability scores
    scores = classifier(text, top_k=None)

    # convert to a single score by summing the probability scores
    # for the higher-index classes
    return sum(score['score']
               for score in scores
               if score['label'] in ('LABEL_3', 'LABEL_4', 'LABEL_5'))

def predict_delicate(text: str, threshold=0.72496545):
    return predict_binary_score(text) > threshold

print(predict_delicate("Time flies like an arrow. Fruit flies like a banana."))
```

Expected output:

```
False
```
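
To screen several documents, the same helpers can be applied in a loop. A small sketch reusing `predict_binary_score` and `predict_delicate` from the example above; the sample sentences are placeholders:

```python
# Assumes predict_binary_score and predict_delicate from the example above
# are already defined in scope.
texts = [
    "Time flies like an arrow. Fruit flies like a banana.",
    "Another document you want to screen.",
]

for text in texts:
    score = predict_binary_score(text)
    print(f"{text!r}: score={score:.3f}, delicate={predict_delicate(text)}")
```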

## Citation Information

```
@inproceedings{chernodub-etal-2023-detexd,
    title = "{D}e{T}ex{D}: A Benchmark Dataset for Delicate Text Detection",
    author = "Yavnyi, Serhii and Sliusarenko, Oleksii and Razzaghi, Jade and Mo, Yichen and Hovakimyan, Knar and Chernodub, Artem",
    booktitle = "The 7th Workshop on Online Abuse and Harms (WOAH)",
    month = jul,
    year = "2023",
    address = "Toronto, Canada",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2023.woah-1.2",
    pages = "14--28",
    abstract = "Over the past few years, much research has been conducted to identify and regulate toxic language. However, few studies have addressed a broader range of sensitive texts that are not necessarily overtly toxic. In this paper, we introduce and define a new category of sensitive text called {``}delicate text.{''} We provide the taxonomy of delicate text and present a detailed annotation scheme. We annotate DeTexD, the first benchmark dataset for delicate text detection. The significance of the difference in the definitions is highlighted by the relative performance deltas between models trained each definitions and corpora and evaluated on the other. We make publicly available the DeTexD Benchmark dataset, annotation guidelines, and baseline model for delicate text detection.",
}
```