Commit 2bd007c
Parent(s): 6d7205e
Delete job_scams/README.md

job_scams/README.md +0 -18
job_scams/README.md DELETED
@@ -1,18 +0,0 @@
-# Job Scams
-
-We post-process and split the Job Scams dataset to ensure uniformity with Political Statements 2.0 and Twitter Rumours, as together they form GDDS-2.0.
-
-## Cleaning
-
-Each dataset has been cleaned using Cleanlab. Non-English entries, erroneous (parser-error) entries, empty entries, duplicate entries, and entries shorter than 2 characters or longer than 1,000,000 characters were removed.
-
-## Preprocessing
-
-Whitespace, quotes, bullet points, and Unicode are normalized.
-
-## Data
-
-The dataset consists of "text" (string) and "is_deceptive" (1 or 0): 1 means the text is deceptive, 0 indicates otherwise.
-
-There are 14295 samples in the dataset, contained in `job_scams.jsonl`. For reproducibility, the data is also split into training, test, and validation sets in an 80/10/10 ratio, named `train.jsonl`, `test.jsonl`, and `valid.jsonl`. The sampling was stratified. The training set contains 11436 samples; the validation and test sets have 1429 and 1430 samples, respectively.
-
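The cleaning rules in the deleted README are concrete enough to approximate. Below is a minimal sketch, assuming `langdetect` for the non-English check; the Cleanlab step and the original parser-error handling are not shown, since the README does not describe them further.

```python
# Minimal sketch of the rule-based cleaning filters (assumed implementation;
# the actual pipeline behind this README is not part of this commit).
from langdetect import detect  # assumed language detector for "non-English"

def passes_cleaning(text, seen):
    """Return True if an entry survives the filters described in the README."""
    if text is None:
        return False                 # erroneous / parser-error entry
    stripped = text.strip()
    if not stripped:
        return False                 # empty entry
    if len(stripped) < 2 or len(stripped) > 1_000_000:
        return False                 # length bounds from the README
    if stripped in seen:
        return False                 # duplicate entry
    try:
        if detect(stripped) != "en":
            return False             # non-English entry
    except Exception:
        return False                 # undetectable text treated as erroneous
    seen.add(stripped)
    return True
```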
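The preprocessing step is not specified beyond one sentence, so the following is only a plausible sketch of what "normalized" could mean here: NFKC Unicode normalization, ASCII replacements for curly quotes and bullet characters, and collapsed whitespace.

```python
import re
import unicodedata

# Assumed mappings for "quotes" and "bullet points"; the README does not list them.
QUOTES = {"\u2018": "'", "\u2019": "'", "\u201c": '"', "\u201d": '"'}
BULLETS = {"\u2022": "-", "\u25aa": "-", "\u2023": "-"}

def normalize(text):
    text = unicodedata.normalize("NFKC", text)       # Unicode normalization
    for src, dst in {**QUOTES, **BULLETS}.items():   # quotes and bullet points
        text = text.replace(src, dst)
    return re.sub(r"\s+", " ", text).strip()         # collapse whitespace
```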
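Each line of `job_scams.jsonl` is a JSON object with the two fields named in the README; a minimal loading snippet with the standard library:

```python
import json

# job_scams.jsonl holds one JSON object per line with "text" and "is_deceptive".
with open("job_scams.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

deceptive = sum(r["is_deceptive"] for r in records)
print(f"{len(records)} samples, {deceptive} labeled deceptive")
```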
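The stratified 80/10/10 split can be reproduced along these lines with scikit-learn. The tool and the random seed are assumptions, so split membership (though not the counts) may differ from the published `train.jsonl`, `valid.jsonl`, and `test.jsonl`.

```python
import json
from sklearn.model_selection import train_test_split

with open("job_scams.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]
labels = [r["is_deceptive"] for r in records]

# Hold out 20% first, then split it evenly into validation and test,
# stratifying on the label both times. random_state=42 is an assumption.
train, rest, _, rest_labels = train_test_split(
    records, labels, test_size=0.20, stratify=labels, random_state=42
)
valid, test = train_test_split(
    rest, test_size=0.50, stratify=rest_labels, random_state=42
)
print(len(train), len(valid), len(test))  # expected: 11436, 1429, 1430
```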