Update README.md

…it might also be biased toward certain names.

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
To get better overall results, I decided to truncate titles during training. Although this improved results for both longer and shorter texts, one should give no fewer than 6 and no more than 12 words for prediction, excluding stopwords. For the preprocessing operations, see below.
One can translate news from other languages into English, though this may not give the expected results.

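The exact preprocessing code is not part of this excerpt; as a rough sketch, the 6-12 word guideline above could be checked like this (NLTK's English stopword list and every name below are illustrative assumptions, not taken from the original):

```python
from nltk.corpus import stopwords  # requires a one-time nltk.download("stopwords")

# Bounds from the guideline above: 6-12 words, stopwords excluded.
MIN_WORDS, MAX_WORDS = 6, 12
STOPWORDS = set(stopwords.words("english"))

def content_word_count(title: str) -> int:
    """Count the words in a title, ignoring English stopwords."""
    return sum(1 for word in title.lower().split() if word not in STOPWORDS)

def is_recommended_length(title: str) -> bool:
    """True if the title falls in the recommended 6-12 content-word range."""
    return MIN_WORDS <= content_word_count(title) <= MAX_WORDS
```
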
## How to Get Started with the Model
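The original quick-start snippet is outside this excerpt; a minimal sketch assuming a standard transformers text-classification pipeline is shown below. The model id is a placeholder, not the real repository name:

```python
from transformers import pipeline

# Placeholder model id; substitute the actual Hub repository for this model.
classifier = pipeline("text-classification", model="username/fake-news-classifier")

# Per the note above, titles of roughly 6-12 words (excluding stopwords) work best.
title = "Government announces sweeping tax reform plan for small businesses nationwide"
print(classifier(title))  # e.g. [{'label': ..., 'score': ...}]
```
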
…https://arxiv.org/pdf/1806.00749v1, the dataset download link: https://drive.goo…

Accuracy
### Results

For testing on the GonzaloA/fake_news test split:

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.93      | 0.94   | 0.94     | 3782    |
| 1            | 0.95      | 0.94   | 0.95     | 4335    |
| accuracy     |           |        | 0.94     | 8117    |
| macro avg    | 0.94      | 0.94   | 0.94     | 8117    |
| weighted avg | 0.94      | 0.94   | 0.94     | 8117    |

For testing on the dataset from https://github.com/GeorgeMcIntire/fake_real_news_dataset:

|              | precision | recall | f1-score | support |
|--------------|-----------|--------|----------|---------|
| 0            | 0.81      | 0.87   | 0.84     | 2297    |
| 1            | 0.86      | 0.80   | 0.83     | 2297    |
| accuracy     |           |        | 0.83     | 4594    |
| macro avg    | 0.83      | 0.83   | 0.83     | 4594    |
| weighted avg | 0.83      | 0.83   | 0.83     | 4594    |

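The tables above follow the layout of scikit-learn's classification_report; for reference, a report in this format can be produced as follows (the arrays are illustrative stand-ins, and whether label 0 means fake or real is not specified in this excerpt):

```python
from sklearn.metrics import classification_report

# Illustrative stand-ins; in practice these would come from running the
# classifier over a test split such as GonzaloA/fake_news.
y_true = [0, 0, 1, 1, 1, 1]
y_pred = [0, 1, 1, 1, 1, 0]

# Prints per-class precision/recall/f1-score/support plus accuracy and
# macro/weighted averages, in the same layout as the tables above.
print(classification_report(y_true, y_pred))
```
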
#### Summary