---
language:
  - en
tags:
  - question-answering
license: apache-2.0
datasets:
  - adversarial_qa
  - mbartolo/synQA
  - squad
metrics:
  - exact_match
  - f1
model-index:
  - name: mbartolo/roberta-large-synqa
    results:
      - task:
          type: question-answering
          name: Question Answering
        dataset:
          name: squad
          type: squad
          config: plain_text
          split: validation
        metrics:
          - name: Exact Match
            type: exact_match
            value: 89.6529
            verified: true
          - name: F1
            type: f1
            value: 94.8172
            verified: true
      - task:
          type: question-answering
          name: Question Answering
        dataset:
          name: adversarial_qa
          type: adversarial_qa
          config: adversarialQA
          split: validation
        metrics:
          - name: Exact Match
            type: exact_match
            value: 55.3333
            verified: true
          - name: F1
            type: f1
            value: 66.7464
            verified: true
---

## Model Overview

This is a RoBERTa-Large QA model trained from [roberta-large](https://huggingface.co/roberta-large) in two stages. It is first trained on synthetic adversarial data generated using a BART-Large question generator on Wikipedia passages from SQuAD, and then fine-tuned on SQuAD and [AdversarialQA](https://arxiv.org/abs/2002.00293) in a second stage.
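The model can be used through the standard `transformers` question-answering pipeline. Below is a minimal usage sketch, assuming `transformers` is installed; the example question and context are illustrative only.

```python
from transformers import pipeline

# Load the extractive QA pipeline with this checkpoint.
qa = pipeline("question-answering", model="mbartolo/roberta-large-synqa")

result = qa(
    question="What was the model trained on?",
    context="The model was trained on SQuAD and AdversarialQA after a first "
            "stage of training on synthetic adversarial data.",
)
print(result["answer"], result["score"])
```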

## Data

Training data: SQuAD + AdversarialQA

Evaluation data: SQuAD + AdversarialQA
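The datasets above can be loaded with the `datasets` library. This is a minimal loading sketch, assuming the dataset identifiers and the `adversarialQA` config listed in the metadata resolve on the Hugging Face Hub:

```python
from datasets import load_dataset

# Manually-curated training/evaluation data.
squad = load_dataset("squad")
adversarial = load_dataset("adversarial_qa", "adversarialQA")

# Synthetic adversarial data used in the first training stage.
synqa = load_dataset("mbartolo/synQA")

print(squad)
print(adversarial)
```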

## Training Process

The model was trained for approximately one epoch on the synthetic data, followed by two epochs on the manually-curated data.
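The two-stage schedule can be sketched with the Hugging Face `Trainer`. This is an illustrative reconstruction, not the authors' training script: hyperparameters other than the epoch counts (batch size, learning rate, sequence length) are placeholder values, and the `mbartolo/synQA` split name and SQuAD-style schema are assumptions.

```python
from datasets import concatenate_datasets, load_dataset
from transformers import (AutoModelForQuestionAnswering, AutoTokenizer,
                          Trainer, TrainingArguments, default_data_collator)

tokenizer = AutoTokenizer.from_pretrained("roberta-large")
model = AutoModelForQuestionAnswering.from_pretrained("roberta-large")

def preprocess(batch):
    # Tokenize question/context pairs and map answer character spans to tokens.
    enc = tokenizer(batch["question"], batch["context"],
                    truncation="only_second", max_length=384,
                    padding="max_length")
    starts, ends = [], []
    for i, answers in enumerate(batch["answers"]):
        start_char = answers["answer_start"][0]
        end_char = start_char + len(answers["text"][0]) - 1
        start_tok = enc.char_to_token(i, start_char, sequence_index=1)
        end_tok = enc.char_to_token(i, end_char, sequence_index=1)
        if start_tok is None or end_tok is None:
            start_tok = end_tok = 0  # answer truncated away -> point at <s>
        starts.append(start_tok)
        ends.append(end_tok)
    enc["start_positions"] = starts
    enc["end_positions"] = ends
    return enc

def tokenize(ds):
    return ds.map(preprocess, batched=True, remove_columns=ds.column_names)

# Stage 1: ~1 epoch on the synthetic adversarial data (split name assumed).
synthetic = tokenize(load_dataset("mbartolo/synQA", split="train"))

# Stage 2: ~2 epochs on the manually-curated SQuAD + AdversarialQA data.
curated = concatenate_datasets([
    tokenize(load_dataset("squad", split="train")),
    tokenize(load_dataset("adversarial_qa", "adversarialQA", split="train")),
])

for stage, (dataset, epochs) in enumerate([(synthetic, 1), (curated, 2)], start=1):
    args = TrainingArguments(output_dir=f"stage{stage}", num_train_epochs=epochs,
                             per_device_train_batch_size=8, learning_rate=3e-5)
    Trainer(model=model, args=args, train_dataset=dataset,
            data_collator=default_data_collator).train()
```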

## Additional Information

For full details on the data generation and training setup, please refer to [Improving Question Answering Model Robustness with Synthetic Adversarial Data Generation](https://arxiv.org/abs/2104.08678).