Update README.md
      example_title: "Negative"
---

# distilbert-base-future

## Table of Contents
- [Training and evaluation data](#training_and_evaluation_data)
- [Training procedure](#training_procedure)

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [future-statements dataset](https://huggingface.co/datasets/fidsinn/future-statements).
It achieves the following results on the evaluation set:
- Train Loss: 0.1142
- Train Sparse Categorical Accuracy: 0.9613
## Model description

- The model was created by graduate students [D. Baradari](https://huggingface.co/Dunya), [F. Bartels](https://huggingface.co/fidsinn), A. Dewald, and [J. Peters](https://huggingface.co/jpeters92) as part of a data science module at the University of Leipzig.
- The model was created on 11/08/22.
- This is version 1.0.
- The model is a text classification model, a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased).
- Questions and comments can be sent via the [community tab](https://huggingface.co/fidsinn/distilbert-base-future/discussions).
## Intended uses & limitations

- The primary intended use is the classification of an input sentence or statement as future or non-future.
- The model is primarily intended to be used by researchers to filter or label large numbers of sentences according to the grammatical tense of the input.
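A minimal usage sketch for this intended use, via the standard `transformers` text-classification pipeline (the example sentence and the exact label names the model emits are assumptions, not documented in this card):

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hugging Face Hub.
classifier = pipeline("text-classification", model="fidsinn/distilbert-base-future")

# Classify a statement as future or non-future. The label strings returned
# depend on the model's config (assumed here, e.g. "LABEL_0"/"LABEL_1").
result = classifier("We will release the new dataset next month.")
print(result)
```

The pipeline returns a list with one `{"label": ..., "score": ...}` dict per input, so it can also be applied to a batch of sentences for filtering or labeling.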
## Training and evaluation data

- [distilbert-base-future](https://huggingface.co/fidsinn/distilbert-base-future) was trained and evaluated on the [future-statements dataset](https://huggingface.co/datasets/fidsinn/future-statements).
- [future-statements](https://huggingface.co/datasets/fidsinn/future-statements) is a dataset collected manually and automatically by graduate students [D. Baradari](https://huggingface.co/Dunya), [F. Bartels](https://huggingface.co/fidsinn), A. Dewald, and [J. Peters](https://huggingface.co/jpeters92) of the University of Leipzig.
- We collected 2,500 statements, 50% of which relate to future events and 50% to non-future events.
- The sole purpose of the dataset was the fine-tuning of this model.
- Additional information on the dataset can be found on Hugging Face: [future-statements dataset](https://huggingface.co/datasets/fidsinn/future-statements).
## Training procedure