|
This dataset provides genealogical and typological information for the 104 languages used to pre-train the multilingual BERT language model (Devlin et al., 2019).
|
The genealogical information covers the language family and genus of each language.
|
For the typological description of the pre-training languages, 36 features from the World Atlas of Language Structures (WALS; Dryer & Haspelmath, 2013) were used.
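
As an illustration, the table of linguistic features can be loaded and inspected with a few lines of Python. This is only a minimal sketch: the file name mbert_languages_wals.csv and the column names language, family, and genus are assumptions for illustration, not the actual names used in this dataset.

import pandas as pd

# Load the table of 104 pre-training languages with their genealogical
# and typological (WALS) information; file and column names are assumed.
df = pd.read_csv("mbert_languages_wals.csv")

# One row per language: family, genus, and the 36 WALS feature columns.
print(df.shape)
print(df[["language", "family", "genus"]].head())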
|
|
|
The information provided here can be used, among other things, to investigate the genealogical and typological structure of the pre-training corpus and to examine to what extent, if any, this structure is related to the performance of the language model.
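
For example, one simple starting point for such an analysis is to count the pre-training languages per family and to check how densely the WALS features are filled. The sketch below again assumes the hypothetical file and column names introduced above.

import pandas as pd

# Reload the feature table (hypothetical file and column names as above).
df = pd.read_csv("mbert_languages_wals.csv")

# Number of pre-training languages per language family.
print(df["family"].value_counts())

# Proportion of non-missing values per WALS feature column,
# treating every column other than the genealogical ones as a feature.
feature_cols = [c for c in df.columns if c not in ("language", "family", "genus")]
print(df[feature_cols].notna().mean().sort_values())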
|
|
|
In addition to the table of linguistic features, a PDF file has been uploaded that lists all grammars and other descriptive materials used to compile the linguistic information.