---
language:
- fra
- eng
- deu
- spa
- por
- ita
- rus
- jpn
- nld
- pol
- hun
- heb
multilinguality:
- multilingual
viewer: false
---

> [!NOTE]
> Dataset origin: https://www.kaggle.com/datasets/dhruvildave/wikibooks-dataset

This is the complete dataset of the contents of all Wikibooks in 12 languages: English, French, German, Spanish, Portuguese, Italian, Russian, Japanese, Dutch, Polish, Hungarian, and Hebrew, each in its own directory. Wikibooks are divided into chapters, and each chapter has its own webpage. This dataset can be used for tasks such as machine translation, text generation, text parsing, and semantic understanding of natural language. Body contents are provided both as newline-delimited text, as it would appear on the page, and as HTML for better semantic parsing.

Refer to the starter notebook: [Starter: Wikibooks dataset](https://www.kaggle.com/code/dhruvildave/starter-wikibooks-dataset)

Data as of October 22, 2021.
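Since each chapter's body ships both as newline-delimited text and as HTML, one way to work with the HTML form is to flatten it back into page-style text yourself. Below is a minimal sketch using only the Python standard library; the sample HTML string is purely illustrative (real rows would come from the dataset's per-language files, whose exact column layout is not specified here).

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collects text nodes, inserting newlines at block-level boundaries."""

    BLOCK_TAGS = {"p", "div", "li", "h1", "h2", "h3", "h4", "br"}

    def __init__(self):
        super().__init__()
        self.parts = []

    def handle_starttag(self, tag, attrs):
        # Start a new line for block-level elements, mimicking page layout.
        if tag in self.BLOCK_TAGS:
            self.parts.append("\n")

    def handle_data(self, data):
        self.parts.append(data)

    def get_text(self):
        return "".join(self.parts).strip()


# Illustrative chapter body, not taken from the actual dataset.
sample_html = "<h2>Chapter 1</h2><p>First paragraph.</p><p>Second paragraph.</p>"
parser = TextExtractor()
parser.feed(sample_html)
print(parser.get_text())
```

For serious semantic parsing of the HTML bodies, a dedicated parser such as BeautifulSoup or lxml would be more robust; the snippet above only shows the text/HTML correspondence the dataset description mentions.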