---
annotations_creators:
  - expert-generated
language_creators:
  - found
language:
  - en
license:
  - cc-by-nc-sa-4.0
multilinguality:
  - monolingual
  - other-language-learner
size_categories:
  - 1K<n<10K
source_datasets:
  - extended|other-GUG-grammaticality-judgements
task_categories:
  - text2text-generation
task_ids: []
paperswithcode_id: jfleg
pretty_name: JHU FLuency-Extended GUG corpus
tags:
  - grammatical-error-correction
dataset_info:
  features:
    - name: sentence
      dtype: string
    - name: corrections
      sequence: string
  splits:
    - name: validation
      num_bytes: 379979
      num_examples: 755
    - name: test
      num_bytes: 379699
      num_examples: 748
  download_size: 289093
  dataset_size: 759678
configs:
  - config_name: default
    data_files:
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
---

Dataset Card for JFLEG

Dataset Description

Dataset Summary

JFLEG (JHU FLuency-Extended GUG) is an English grammatical error correction (GEC) corpus. It is a gold standard benchmark for developing and evaluating GEC systems with respect to fluency (the extent to which a text sounds native) as well as grammaticality. For each source sentence, there are four human-written corrections.
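
A minimal loading sketch, assuming the dataset is hosted on the Hugging Face Hub under the id jfleg and that the Hugging Face Datasets library is installed:

from datasets import load_dataset

# Load the JFLEG corpus; it only ships "validation" (dev) and "test" splits,
# there is no train split.
jfleg = load_dataset("jfleg")

example = jfleg["validation"][0]
print(example["sentence"])     # source sentence written by a learner
print(example["corrections"])  # list of four human-written corrections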

Supported Tasks and Leaderboards

Grammatical error correction.

Languages

English (native as well as L2 writers)

Dataset Structure

Data Instances

Each instance contains a source sentence and four corrections. For example:

{
  'sentence': "They are moved by solar energy .",
  'corrections': [
    "They are moving by solar energy .",
    "They are moved by solar energy .",
    "They are moved by solar energy .",
    "They are propelled by solar energy ."
  ]
}

Data Fields

  • sentence: the original sentence written by an English learner
  • corrections: corrected versions written by human annotators. The order of the annotations is consistent (e.g., the first correction is always written by annotator "ref0"); see the sketch after this list.
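
Because the annotator order is fixed, a single annotator's reference set can be pulled out by list index. A sketch under the same assumptions as above (Hub id jfleg, Hugging Face Datasets installed):

from datasets import load_dataset

# "ref0" is always corrections[0], "ref1" is corrections[1], and so on,
# so collecting one annotator's references is just an index lookup.
validation = load_dataset("jfleg", split="validation")
ref0 = [example["corrections"][0] for example in validation]
print(len(ref0), ref0[0])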

Data Splits

  • This dataset comprises a dev and a test split.
  • There are 754 and 747 source sentences for dev and test, respectively (1,501 in total).
  • Each source sentence has 4 corresponding corrected versions; a quick sanity check is sketched below.
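
As a quick sanity check, the split sizes and the number of corrections per sentence can be inspected directly (same assumptions as the loading sketch above):

from datasets import load_dataset

# Print the number of examples in each split and confirm that every
# source sentence comes with four corrections.
jfleg = load_dataset("jfleg")
for split in ("validation", "test"):
    ds = jfleg[split]
    print(split, len(ds), len(ds[0]["corrections"]))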

Dataset Creation

Curation Rationale

[More Information Needed]

Source Data

Initial Data Collection and Normalization

[More Information Needed]

Who are the source language producers?

[More Information Needed]

Annotations

Annotation process

[More Information Needed]

Who are the annotators?

[More Information Needed]

Personal and Sensitive Information

[More Information Needed]

Considerations for Using the Data

Social Impact of Dataset

[More Information Needed]

Discussion of Biases

[More Information Needed]

Other Known Limitations

[More Information Needed]

Additional Information

Dataset Curators

[More Information Needed]

Licensing Information

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Citation Information

This benchmark was proposed by Napoles et al., 2017.

@InProceedings{napoles-sakaguchi-tetreault:2017:EACLshort,
  author    = {Napoles, Courtney  and  Sakaguchi, Keisuke  and  Tetreault, Joel},
  title     = {JFLEG: A Fluency Corpus and Benchmark for Grammatical Error Correction},
  booktitle = {Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers},
  month     = {April},
  year      = {2017},
  address   = {Valencia, Spain},
  publisher = {Association for Computational Linguistics},
  pages     = {229--234},
  url       = {http://www.aclweb.org/anthology/E17-2037}
}

@InProceedings{heilman-EtAl:2014:P14-2,
  author    = {Heilman, Michael  and  Cahill, Aoife  and  Madnani, Nitin  and  Lopez, Melissa  and  Mulholland, Matthew  and  Tetreault, Joel},
  title     = {Predicting Grammaticality on an Ordinal Scale},
  booktitle = {Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)},
  month     = {June},
  year      = {2014},
  address   = {Baltimore, Maryland},
  publisher = {Association for Computational Linguistics},
  pages     = {174--180},
  url       = {http://www.aclweb.org/anthology/P14-2029}
}

Contributions

Thanks to @j-chim for adding this dataset.