Update README.md
README.md CHANGED
@@ -12,7 +12,7 @@ widget:
 
 ## Model description
 
-This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://
+This is the set of 24 Chinese RoBERTa models pre-trained by [UER-py](https://arxiv.org/abs/1909.05658).
 
 [Turc et al.](https://arxiv.org/abs/1908.08962) have shown that the standard BERT recipe is effective on a wide range of model sizes. Following their paper, we released the 24 Chinese RoBERTa models. In order to facilitate users to reproduce the results, we used the publicly available corpus and provided all training details.
 
@@ -103,7 +103,7 @@ output = model(encoded_input)
 
 ## Training procedure
 
-Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512.
+Models are pre-trained by [UER-py](https://github.com/dbiir/UER-py/) on [Tencent Cloud TI-ONE](https://cloud.tencent.com/product/tione/). We pre-train 1,000,000 steps with a sequence length of 128 and then pre-train 250,000 additional steps with a sequence length of 512. We use the same hyper-parameters on different model sizes.
 
 Taking the case of RoBERTa-Medium
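The second hunk's context header (`output = model(encoded_input)`) points at the usage snippet elsewhere in this README. A minimal, self-contained sketch of that flow is shown below; the checkpoint name `uer/chinese_roberta_L-8_H-512` is an assumption here, and any of the 24 released sizes is loaded the same way.

```python
# Hedged sketch (not the exact README snippet): load one of the 24 Chinese
# RoBERTa checkpoints with Hugging Face transformers and run a forward pass.
from transformers import BertTokenizer, BertModel

model_name = "uer/chinese_roberta_L-8_H-512"  # assumed checkpoint name; pick any of the 24 sizes

tokenizer = BertTokenizer.from_pretrained(model_name)
model = BertModel.from_pretrained(model_name)

text = "用你喜欢的任何文本替换我。"  # "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")

# encoded_input is a dict of tensors (input_ids, attention_mask, ...),
# so it is unpacked with ** when calling the model.
output = model(**encoded_input)
print(output.last_hidden_state.shape)  # e.g. torch.Size([1, seq_len, 512]) for the H-512 models
```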