Commit 8939d80 · Parent(s): 0cab8aa
Update README.md
README.md CHANGED
@@ -7,7 +7,7 @@ datasets:
 - squad_v2
 thumbnail: https://thumb.tildacdn.com/tild3433-3637-4830-a533-353833613061/-/resize/720x/-/format/webp/germanquad.jpg
 model-index:
-- name:
+- name: Shobhank-iiitdwd/DistBERT-squad2-QA-768d
   results:
   - task:
       type: question-answering
@@ -31,7 +31,7 @@ model-index:
 ---
 
 ## Overview
-**Language model:**
+**Language model:** Shobhank-iiitdwd/DistBERT-squad2-QA
 **Language:** English
 **Training data:** SQuAD 2.0 training set x 20 augmented + SQuAD 2.0 training set without augmentation
 **Eval data:** SQuAD 2.0 dev set
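The Overview hunk above lists the training data as the SQuAD 2.0 training set augmented 20x. The actual augmentation, per the TinyBERT recipe the card cites, rewrites contexts with masked-LM and embedding-space word substitutions; the toy sketch below is NOT deepset's tooling and only illustrates the bookkeeping of a multiplication factor of 20 (file paths and the function name are made up):

```python
# Toy illustration of "SQuAD 2.0 training set x 20 augmented" (NOT deepset's
# actual augmentation script). Real TinyBERT-style augmentation perturbs each
# context with word substitutions; this sketch only multiplies the examples.
import copy
import json

def multiply_squad(squad: dict, multiplication_factor: int = 20) -> dict:
    """Return a SQuAD-format dict with each paragraph repeated
    multiplication_factor times (a placeholder for real word-level edits)."""
    out = {"version": squad.get("version", "v2.0"), "data": []}
    for article in squad["data"]:
        paragraphs = []
        for paragraph in article["paragraphs"]:
            for _ in range(multiplication_factor):
                # A real augmenter would rewrite paragraph["context"] here.
                paragraphs.append(copy.deepcopy(paragraph))
        out["data"].append({"title": article.get("title", ""),
                            "paragraphs": paragraphs})
    return out

with open("train-v2.0.json") as f:  # illustrative path
    augmented = multiply_squad(json.load(f))
with open("train-v2.0-augmented.json", "w") as f:
    json.dump(augmented, f)
```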
@@ -39,7 +39,7 @@ model-index:
 **Published**: Dec 8th, 2021
 
 ## Details
-- haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)).
+- haystack's intermediate layer and prediction layer distillation features were used for training (based on [TinyBERT](https://arxiv.org/pdf/1909.10351.pdf)). bert-base-uncased-squad2 was used as the teacher model and TinyBERT_General_6L_768D was used as the student model.
 
 ## Hyperparameters
 ### Intermediate layer distillation
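The new Details line names a teacher (bert-base-uncased-squad2) and a student (TinyBERT_General_6L_768D). A minimal sketch of the two-stage run, assuming Haystack v1's FARMReader distillation API and the Hub IDs deepset/bert-base-uncased-squad2 and huawei-noah/TinyBERT_General_6L_768D; the file paths are illustrative and the card's Hyperparameters section is the authoritative record of the actual run:

```python
# Minimal sketch of two-stage TinyBERT-style distillation, assuming
# Haystack v1's FARMReader distillation API (paths are illustrative).
from haystack.nodes import FARMReader

# Teacher: full-size BERT already fine-tuned on SQuAD 2.0
# (assumed Hub ID for the "bert-base-uncased-squad2" named in the card).
teacher = FARMReader(model_name_or_path="deepset/bert-base-uncased-squad2")

# Student: 6-layer, 768-dim TinyBERT general-distillation checkpoint.
student = FARMReader(model_name_or_path="huawei-noah/TinyBERT_General_6L_768D")

# Stage 1: intermediate layer distillation on the augmented training set.
student.distil_intermediate_layers_from(
    teacher,
    data_dir="data/squad20",
    train_filename="train-v2.0-augmented.json",
)

# Stage 2: prediction layer distillation on the original training set,
# with the loss weight named in the hunk header below.
student.distil_prediction_layer_from(
    teacher,
    data_dir="data/squad20",
    train_filename="train-v2.0.json",
    temperature=1.0,
    distillation_loss_weight=1.0,
)

student.save(directory="distbert-squad2-qa-768d")
```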
@@ -68,24 +68,3 @@ distillation_loss_weight = 1.0
 "exact": 71.87736882001179
 "f1": 76.36111895973675
 ```
-
-## Authors
-- Timo Möller: `timo.moeller [at] deepset.ai`
-- Julian Risch: `julian.risch [at] deepset.ai`
-- Malte Pietsch: `malte.pietsch [at] deepset.ai`
-- Michel Bartels: `michel.bartels [at] deepset.ai`
-## About us
-
-We bring NLP to the industry via open source!
-Our focus: Industry specific language models & large scale QA systems.
-
-Some of our work:
-- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
-- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
-- [FARM](https://github.com/deepset-ai/FARM)
-- [Haystack](https://github.com/deepset-ai/haystack/)
-
-Get in touch:
-[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
-
-By the way: [we're hiring!](http://www.deepset.ai/jobs)
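After this commit the card identifies the model as Shobhank-iiitdwd/DistBERT-squad2-QA-768d, so the distilled reader can be pulled straight from the Hub. A minimal usage sketch with the transformers question-answering pipeline (question and context are made up):

```python
# Minimal usage sketch: load the distilled reader through the transformers
# question-answering pipeline.
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="Shobhank-iiitdwd/DistBERT-squad2-QA-768d",
)

result = qa(
    question="Which dataset was the model trained on?",
    context="The distilled reader was trained on the SQuAD 2.0 training set.",
)
print(result["answer"], result["score"])
```

Because the model is trained on SQuAD 2.0, passing handle_impossible_answer=True to the pipeline call lets it return an empty answer for unanswerable questions.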