XLM-RoBERTa large model whole word masking finetuned on SQuAD
Pretrained with a masked language modeling (MLM) objective and fine-tuned on English and Russian QA datasets.
QA datasets used
SQuAD + SberQuAD
The original SberQuAD paper is available here and is recommended reading.
Evaluation results
The following results were obtained on SberQuAD:
f1 = 84.3
exact_match = 65.3
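Usage
A minimal sketch showing how the model can be queried for extractive QA with the transformers question-answering pipeline. The model ID below is an assumption inferred from this card; replace it with this repository's actual name on the Hugging Face Hub if it differs.

```python
from transformers import pipeline

# Model ID is an assumption; substitute the actual Hub repository name.
qa = pipeline(
    "question-answering",
    model="AlexKay/xlm-roberta-large-qa-multilingual-finedtuned-ru",
)

# English question (SQuAD-style)
print(qa(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))

# Russian question (SberQuAD-style)
print(qa(
    question="Где находится Эйфелева башня?",
    context="Эйфелева башня находится в Париже, во Франции.",
))
```

Each call returns a dict with the extracted answer span, its character offsets in the context, and a confidence score.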