haritzpuerto committed on
Commit
8c38343
1 Parent(s): d736f42

Update README.md

Files changed (1)
  1. README.md +6 -2
README.md CHANGED
@@ -10,14 +10,18 @@ library_name: adapter-transformers
  pipeline_tag: question-answering
  ---

- This is the MADE Adapter for the SearchQA partition of the MRQA 2019 Shared Task Dataset. The adapter was created by Friedman et al. (2021) and should be used with this encoder: https://huggingface.co/UKP-SQuARE/MADE_Encoder
+ # Description

+ This is the MADE Adapter for the SearchQA partition of the MRQA 2019 Shared Task Dataset. The adapter was created by Friedman et al. (2021) and should be used with this encoder: https://huggingface.co/UKP-SQuARE/MADE_Encoder


  The UKP-SQuARE team created this model repository to simplify the deployment of this model on the UKP-SQuARE platform. The GitHub repository of the original authors is https://github.com/princeton-nlp/MADE

+
+ # Usage
  This model contains the same weights as https://huggingface.co/princeton-nlp/MADE/resolve/main/made_tuned_adapters/SearchQA/model.pt. The only difference is that our repository follows the standard format of AdapterHub. Therefore, you can load this model as follows:

+
  ```
  from transformers import RobertaForQuestionAnswering, RobertaTokenizerFast

@@ -36,5 +40,5 @@ Note you need the adapter-transformers library https://adapterhub.ml

  Please refer to the original publication for more information.

- Citation:
+ # Citation:
  Single-dataset Experts for Multi-dataset Question Answering (Friedman et al., EMNLP 2021)
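
The hunks above truncate the README's example after the import line. For completeness, here is a minimal sketch of loading this adapter with the adapter-transformers library (https://adapterhub.ml), which the second hunk's context line notes is required. The adapter repository id `UKP-SQuARE/SearchQA_Adapter` and the question/context strings are assumptions for illustration, not taken from this diff.

```python
# Minimal sketch, assuming the adapter-transformers library is installed.
# The adapter repo id below is hypothetical; check the model card for the exact id.
from transformers import RobertaForQuestionAnswering, RobertaTokenizerFast

# The encoder this adapter should be used with, per the README.
tokenizer = RobertaTokenizerFast.from_pretrained("UKP-SQuARE/MADE_Encoder")
model = RobertaForQuestionAnswering.from_pretrained("UKP-SQuARE/MADE_Encoder")

# Fetch the adapter weights from the Hugging Face Hub and activate them.
adapter_name = model.load_adapter("UKP-SQuARE/SearchQA_Adapter", source="hf")
model.set_active_adapters(adapter_name)

# Standard extractive QA: pick the highest-scoring start and end positions.
question = "Who created MADE?"
context = "MADE was introduced by Friedman et al. at EMNLP 2021."
inputs = tokenizer(question, context, return_tensors="pt")
outputs = model(**inputs)
start = outputs.start_logits.argmax()
end = outputs.end_logits.argmax()
print(tokenizer.decode(inputs["input_ids"][0][start : end + 1]))
```

`load_adapter(..., source="hf")` and `set_active_adapters` are the standard adapter-transformers calls for Hub-hosted adapters; `load_adapter` returns the name the adapter was saved under, which is why that return value is reused when activating it.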