Update README.md
README.md
@@ -122,7 +122,7 @@ The model was trained for a total of 36.000 updates. Weights were saved every 10…

 ### Variables and metrics

-We use the BLEU score for evaluation on the [Flores-101](https://github.com/facebookresearch/flores) and [NTREX](https://github.com/MicrosoftTranslator/NTREX) evaluation datasets.
+We use the BLEU score for evaluation on the [Flores-101](https://github.com/facebookresearch/flores), NTEU (unpublished) and [NTREX](https://github.com/MicrosoftTranslator/NTREX) evaluation datasets.

 ### Evaluation results

@@ -134,7 +134,7 @@ Below are the evaluation results on the machine translation from Catalan to Italian

 | Flores 101 devtest | 25,3 | **29**   | 28,1      |
 | NTEU               | 41,8 | 44,8     | **53,2**  |
 | NTREX              | 28   | **31,5** | 30,1      |
-| Average            | 30   | 33,5     | **34,85** |
+| **Average**        | 30   | 33,5     | **34,85** |

 ## Additional information
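For readers unfamiliar with the BLEU metric reported in the table above, the sketch below illustrates the core idea: a geometric mean of modified n-gram precisions times a brevity penalty. This is a simplified, stdlib-only illustration for a single sentence pair with one reference, not the project's evaluation pipeline (real scores would be computed with standard tooling such as sacreBLEU, and the Catalan sentences are invented examples).

```python
# Illustrative sketch of corpus BLEU's core computation, reduced to one
# hypothesis/reference pair. Assumptions: single reference, whitespace
# tokenization, crude smoothing for zero n-gram overlaps.
import math
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(hypothesis, reference, max_n=4):
    hyp, ref = hypothesis.split(), reference.split()
    precisions = []
    for n in range(1, max_n + 1):
        hyp_grams, ref_grams = ngrams(hyp, n), ngrams(ref, n)
        overlap = sum((hyp_grams & ref_grams).values())   # clipped matches
        total = max(sum(hyp_grams.values()), 1)
        precisions.append(overlap / total if overlap else 1e-9)  # smoothing
    geo_mean = math.exp(sum(math.log(p) for p in precisions) / max_n)
    # Brevity penalty: punish hypotheses shorter than the reference.
    bp = 1.0 if len(hyp) > len(ref) else math.exp(1 - len(ref) / max(len(hyp), 1))
    return 100 * bp * geo_mean

print(round(bleu("el gat dorm al sofà", "el gat dorm al sofà"), 1))  # → 100.0
```

A perfect match scores 100; partial overlaps fall off quickly because all four n-gram orders must match, which is why the corpus-level scores in the table sit in the 25–55 range.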