metrics:
- matthews_correlation
- pearsonr
library_name: fairseq
---

# MCL base model (cased)

Pretrained model on the English language using a Multi-perspective Course Learning (MCL) objective. It was introduced in
[this paper](https://arxiv.org/abs/). This model is cased: it makes a difference between english and English.

## Model description

MCL-base is an ELECTRA-style Transformer model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labeling them in any way (which is why it can use lots of
publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with three self-supervision courses and two self-correction courses under an encoder-decoder framework:

- Self-supervision courses: Replaced Token Detection (RTD), Swapped Token Detection (STD) and Inserted Token Detection (ITD). For RTD, taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the encoder and has to predict the masked words. This differs from traditional
BERT models, which pre-train only the encoder: it allows the decoder to further discriminate the output sentence produced by the encoder (a minimal sketch of this input/label construction is shown after this list).
- Self-correction courses: The self-supervision courses above set up a competition between $G$ and $D$: facing the same piece of data, $G$ tries to reform the sequence in many ways, while $D$ has to detect
every change $G$ made. However, the shared embedding layer of the two encoders is their only channel of communication, which is insufficient. To strengthen the link between the two components, and to provide more supervisory information during pre-training, we examine the relationship between $G$ and $D$ more closely.
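
To make the RTD course concrete, the sketch below shows one simple way such replaced-token-detection inputs and labels can be built. It is an illustration only, not the actual MCL pipeline: `toy_generator`, the word-level tokenization, and the fixed replacement word are simplifying assumptions.

```python
import random

MASK = "[MASK]"

def toy_generator(masked_tokens):
    # Stand-in for the auxiliary generator G: fill every [MASK] with *some* word.
    # The real model uses a small Transformer LM; here we simply guess "the".
    return [("the" if tok == MASK else tok) for tok in masked_tokens]

def build_rtd_example(tokens, mask_prob=0.15, seed=0):
    """Corrupt a sentence with G and label each position for D (0 = original, 1 = replaced)."""
    rng = random.Random(seed)
    masked = [MASK if rng.random() < mask_prob else tok for tok in tokens]
    filled = toy_generator(masked)
    # D reads the filled-in sequence and must flag every token that G changed.
    labels = [int(f != o) for f, o in zip(filled, tokens)]
    return filled, labels

sentence = "The quick brown fox jumps over the lazy dog".split()
corrupted, labels = build_rtd_example(sentence)
print(corrupted)  # masked positions now hold the generator's guesses
print(labels)     # 1 wherever the guess differs from the original token
```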

This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard
classifier using the features produced by the MCL model as inputs.
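
As a concrete illustration of that feature-extraction workflow, here is a rough sketch using fairseq's RoBERTa-style hub interface (the card lists `fairseq` as the library). Whether the released MCL checkpoint is actually compatible with this interface is an assumption, and the paths `./mcl-base`, `model.pt` and `data-bin` are placeholders, not documented values.

```python
import torch
from fairseq.models.roberta import RobertaModel

# Sketch only: assumes the discriminator checkpoint loads through fairseq's
# RoBERTa-style hub interface; all paths below are placeholders.
mcl = RobertaModel.from_pretrained(
    "./mcl-base",                  # placeholder: directory containing the checkpoint
    checkpoint_file="model.pt",    # placeholder: checkpoint file name
    data_name_or_path="data-bin",  # placeholder: dictionary / preprocessed data dir
)
mcl.eval()

with torch.no_grad():
    tokens = mcl.encode("Hello world!")      # tokenize and add special symbols
    features = mcl.extract_features(tokens)  # (1, seq_len, 768) hidden states
    sentence_vector = features[:, 0, :]      # first-token vector as a sentence feature

print(sentence_vector.shape)  # feed this into a standard downstream classifier
```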

### Pretraining

We run the experiments in two settings: *base* and *tiny*.
*Base* is the standard training configuration of BERT-Base.
The model is pre-trained on English Wikipedia and BookCorpus, containing 16 GB of text with 256 million samples.
We set the maximum length of the input sequence to 512 and the learning rate to 5e-4.
Training lasts 125K steps with a batch size of 2048.
We use the same corpus as CoCo-LM and a 64K cased SentencePiece vocabulary.
*Tiny* runs the ablation experiments on the same corpora with the same configuration as the *base* setting, except that the batch size is 512.
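
For quick reference, the *base*-setting hyperparameters listed above can be collected in a plain Python dictionary; the key names are illustrative only, not fairseq flags or the authors' actual configuration.

```python
# Summary of the *base* pretraining setup described above (key names are illustrative).
MCL_BASE_PRETRAINING = {
    "corpus": ["English Wikipedia", "BookCorpus"],  # ~16 GB of text, 256M samples (same corpus as CoCo-LM)
    "vocabulary": "64K cased SentencePiece",
    "max_seq_length": 512,
    "learning_rate": 5e-4,
    "train_steps": 125_000,
    "batch_size": 2048,                             # the *tiny* ablation setting uses 512 instead
}
```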

### Model Architecture

Our model architecture is the same as CoCo-LM's in both the *base* and *tiny* settings. $D$ is a 12-layer Transformer with 768 hidden size and T5 relative position encoding. $G$ is a shallow 4-layer Transformer with the same hidden size and position encoding.
After pre-training, we discard $G$ and use $D$ in the same way as BERT, with a classification layer on top for downstream tasks.
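
To give a rough feel for the shape of $D$, here is a toy PyTorch sketch. The number of attention heads and the feed-forward size are the usual BERT-Base values and are assumptions not stated in this card, and the T5 relative position encoding used by the real model is omitted.

```python
import torch
import torch.nn as nn

class ToyDiscriminator(nn.Module):
    """Toy stand-in for D: 12 Transformer layers, 768 hidden size.

    Assumptions not stated in the card: 12 attention heads and a 3072-dim
    feed-forward network (standard BERT-Base values). The T5-style relative
    position bias of the real model is not reproduced here.
    """

    def __init__(self, vocab_size=64_000, hidden=768, layers=12, heads=12, ffn=3072, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=heads,
                                           dim_feedforward=ffn, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=layers)
        # After pre-training, a task-specific classification layer sits on top of D.
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, token_ids):
        hidden_states = self.encoder(self.embed(token_ids))  # (batch, seq_len, hidden)
        return self.classifier(hidden_states[:, 0, :])       # classify from the first token

logits = ToyDiscriminator()(torch.randint(0, 64_000, (1, 16)))
print(logits.shape)  # torch.Size([1, 2])
```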

## Evaluation results

When fine-tuned on downstream tasks, this model achieves the following results:

GLUE test results:

| Task  | MNLI-(m/mm) | QQP  | QNLI | SST-2 | CoLA | STS-B | MRPC | RTE  | Average |
|:-----:|:-----------:|:----:|:----:|:-----:|:----:|:-----:|:----:|:----:|:-------:|
| Score | 88.5/88.5   | 92.2 | 93.4 | 94.1  | 70.8 | 91.3  | 91.6 | 84.0 | 88.3    |

### BibTeX entry and citation info