---
language:
- en
tags:
- question-answering
license: apache-2.0
datasets:
- adversarial_qa
- mbartolo/synQA
- squad
metrics:
- exact_match
- f1
model-index:
- name: mbartolo/electra-large-synqa
results:
- task:
type: question-answering
name: Question Answering
dataset:
name: squad
type: squad
config: plain_text
split: validation
metrics:
- name: Exact Match
type: exact_match
value: 89.4158
verified: true
- name: F1
type: f1
value: 94.7851
verified: true
---
# Model Overview
This is an ELECTRA-Large QA model trained from https://huggingface.co/google/electra-large-discriminator in two stages: it is first trained on synthetic adversarial data generated with a BART-Large question generator, and then fine-tuned on SQuAD and AdversarialQA (https://arxiv.org/abs/2002.00293) in a second stage.
# Data
- Training data: SQuAD + AdversarialQA
- Evaluation data: SQuAD + AdversarialQA
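The Exact Match and F1 values reported in the metadata above are the standard SQuAD evaluation metrics. As an illustration, here is a minimal sketch of how they are computed, simplified from the official SQuAD evaluation script (it compares against a single gold answer rather than taking the maximum over multiple references):

```python
import re
import string
from collections import Counter

def normalize_answer(s: str) -> str:
    """SQuAD-style normalization: lowercase, drop punctuation and
    articles (a/an/the), and collapse whitespace."""
    s = s.lower()
    s = "".join(ch for ch in s if ch not in set(string.punctuation))
    s = re.sub(r"\b(a|an|the)\b", " ", s)
    return " ".join(s.split())

def exact_match(prediction: str, gold: str) -> float:
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(gold))

def f1_score(prediction: str, gold: str) -> float:
    """Token-level F1 between the normalized prediction and gold answer."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(gold).split()
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

print(exact_match("The Eiffel Tower", "eiffel tower"))  # 1.0 after normalization
print(f1_score("the tall Eiffel Tower", "Eiffel Tower"))  # 0.8
```

In the full evaluation script, each prediction is scored against every human-written reference answer and the best score is kept; the numbers are then averaged over the dataset.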
# Training Process
Approximately one training epoch on the synthetic data, followed by two training epochs on the manually curated data.
# Additional Information
Please refer to https://arxiv.org/abs/2104.08678 for full details. You can interact with the model on Dynabench here: https://dynabench.org/models/109