Update README.md

README.md CHANGED

@@ -1,4 +1,64 @@
---
language:
- en
- multilingual
- ar
- bg
- ca
- cs
- da
- de
- el
- es
- et
- fa
- fi
- fr
- gl
- gu
- he
- hi
- hr
- hu
- hy
- id
- it
- ja
- ka
- ko
- ku
- lt
- lv
- mk
- mn
- mr
- ms
- my
- nb
- nl
- pl
- pt
- ro
- ru
- sk
- sl
- sq
- sr
- sv
- th
- tr
- uk
- ur
- vi
- zh
size_categories:
- 1M<n<10M
task_categories:
- feature-extraction
- sentence-similarity
pretty_name: Talks
tags:
- sentence-transformers
dataset_info:
- config_name: en-ar
  features:

@@ -1094,3 +1154,49 @@ configs:
  - split: dev
    path: en-zh-tw/dev-*
---

# Dataset Card for Parallel Sentences - Talks

This dataset contains parallel sentences (i.e., an English sentence paired with the same sentence in another language) for numerous languages. Most of the sentences originate from the [OPUS website](https://opus.nlpl.eu/).
In particular, this dataset contains the [Talks](https://huggingface.co/datasets/sentence-transformers/parallel-sentences) dataset.

## Related Datasets

The following datasets are also a part of the Parallel Sentences collection:
* [parallel-sentences-europarl](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-europarl)
* [parallel-sentences-global-voices](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-global-voices)
* [parallel-sentences-muse](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-muse)
* [parallel-sentences-jw300](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-jw300)
* [parallel-sentences-news-commentary](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-news-commentary)
* [parallel-sentences-opensubtitles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-opensubtitles)
* [parallel-sentences-talks](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-talks)
* [parallel-sentences-tatoeba](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-tatoeba)
* [parallel-sentences-wikimatrix](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikimatrix)
* [parallel-sentences-wikititles](https://huggingface.co/datasets/sentence-transformers/parallel-sentences-wikititles)

These datasets can be used to train multilingual sentence embedding models. For more information, see [sbert.net - Multilingual Models](https://www.sbert.net/examples/training/multilingual/README.html).
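
As a rough sketch of how these parallel pairs can be turned into training input, the snippet below loads one language pair with the `datasets` library and wraps each row in a sentence-transformers `InputExample`. It assumes the `en-ar` config (listed in the metadata above) has a `train` split; adjust the config and split names as needed.

```python
# Sketch: build training examples for multilingual model training
# (see sbert.net - Multilingual Models). Assumes the "en-ar" config has a
# "train" split; this is not verified here.
from datasets import load_dataset
from sentence_transformers import InputExample

pairs = load_dataset("sentence-transformers/parallel-sentences-talks", "en-ar", split="train")

train_examples = [
    InputExample(texts=[row["english"], row["non_english"]])
    for row in pairs.select(range(1_000))  # small slice for illustration
]
print(train_examples[0].texts)
```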

## Dataset Subsets

### `all` subset

* Columns: "english", "non_english"
* Column types: `str`, `str`
* Examples (see the loading sketch after this list):
```python

```
* Collection strategy: Combining all other subsets from this dataset.
* Deduplicated: No
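
For reference, a minimal loading sketch for this subset. The config name `all` and the `train` split are assumptions based on the subset name above, not verified values.

```python
# Sketch: load the combined "all" subset and inspect its columns and one row.
# Assumption: the subset is exposed as a config named "all" with a "train" split.
from datasets import load_dataset

all_pairs = load_dataset("sentence-transformers/parallel-sentences-talks", "all", split="train")
print(all_pairs.column_names)  # expected: ['english', 'non_english']
print(all_pairs[0])
```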

### `en-...` subsets

* Columns: "english", "non_english"
* Column types: `str`, `str`
* Examples (see the loading sketch after this list):
```python

```
* Collection strategy: Processing the raw data from [parallel-sentences](https://huggingface.co/datasets/sentence-transformers/parallel-sentences) and formatting it in Parquet, followed by deduplication.
* Deduplicated: Yes
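
An individual language pair is loaded the same way by passing its config name. The sketch below uses `en-zh-tw` and its `dev` split, which appear in the `configs` metadata above.

```python
# Sketch: load a single language pair. The "en-zh-tw" config and its "dev"
# split are taken from the configs section of the YAML metadata above.
from datasets import load_dataset

dev_pairs = load_dataset("sentence-transformers/parallel-sentences-talks", "en-zh-tw", split="dev")
for row in dev_pairs.select(range(3)):  # print a few English / Chinese pairs
    print(row["english"], "|", row["non_english"])
```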