language:
- en
size_categories:
- 10K<n<100K
---
## LIAR2
The [LIAR](https://doi.org/10.18653/v1/P17-2067) dataset has been widely used by fake news detection researchers since its release, and alongside this body of research the community has offered a variety of feedback for improving the dataset. We adopted this feedback and released LIAR2, a new benchmark dataset of ~23k statements manually labeled by professional fact-checkers for fake news detection tasks. We used an 8:1:1 split ratio for the training, test, and validation sets; details are provided in the paper "[An Enhanced Fake News Detection System With Fuzzy Deep Learning](https://doi.org/10.1109/ACCESS.2024.3418340)". The LIAR2 dataset can be accessed on [Hugging Face](https://huggingface.co/datasets/chengxuphd/liar2) and [GitHub](https://github.com/chengxuphd/LIAR2).
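For reference, an 8:1:1 train/validation/test split of this kind can be reproduced on any list of examples with a simple shuffled index split. This is only a minimal sketch of the general technique, not the authors' exact splitting procedure:

```python
import random


def split_8_1_1(examples, seed=42):
    """Shuffle a sequence and split it into 80% train, 10% validation, 10% test."""
    items = list(examples)
    random.Random(seed).shuffle(items)
    n = len(items)
    n_train = int(n * 0.8)
    n_val = int(n * 0.1)
    train = items[:n_train]
    val = items[n_train:n_train + n_val]
    test = items[n_train + n_val:]
    return train, val, test


# On ~23k examples the split sizes come out to 18400 / 2300 / 2300
train, val, test = split_8_1_1(range(23000))
print(len(train), len(val), len(test))
```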
## Example Usage
You can load each of the subsets as follows:
```python
import datasets

# Load LIAR2 from the Hugging Face Hub
dataset = datasets.load_dataset("chengxuphd/liar2")

# Extract the statements and labels for each split
statement_train, y_train = dataset["train"]["statement"], dataset["train"]["label"]
statement_val, y_val = dataset["val"]["statement"], dataset["val"]["label"]
statement_test, y_test = dataset["test"]["statement"], dataset["test"]["label"]
```
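The statement/label pairs above plug directly into any standard text classification pipeline. As one illustration, here is a TF-IDF + logistic regression baseline with scikit-learn; the training texts and labels below are placeholder stand-ins for the `statement_train` / `y_train` fields loaded in the snippet above:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Placeholder stand-ins for statement_train / y_train from the loading snippet
statement_train = ["the sky is green", "water is wet", "taxes were cut", "crime rose 50%"]
y_train = [0, 1, 1, 0]

# TF-IDF features feeding a logistic regression classifier
clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(statement_train, y_train)

pred = clf.predict(["water is wet"])
print(pred)
```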