---
dataset_info:
  - config_name: humaneval-jl
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 166847
        num_examples: 159
    download_size: 66024
    dataset_size: 166847
  - config_name: humaneval-lua
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 183781
        num_examples: 161
    download_size: 66506
    dataset_size: 183781
  - config_name: humaneval-ml
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 169678
        num_examples: 155
    download_size: 65419
    dataset_size: 169678
  - config_name: humaneval-r
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 198952
        num_examples: 161
    download_size: 68467
    dataset_size: 198952
  - config_name: humaneval-rkt
    features:
      - name: name
        dtype: string
      - name: language
        dtype: string
      - name: prompt
        dtype: string
      - name: doctests
        dtype: string
      - name: original
        dtype: string
      - name: prompt_terminology
        dtype: string
      - name: tests
        dtype: string
      - name: stop_tokens
        dtype: string
    splits:
      - name: test
        num_bytes: 195422
        num_examples: 161
    download_size: 66881
    dataset_size: 195422
configs:
  - config_name: humaneval-jl
    data_files:
      - split: test
        path: humaneval-jl/test-*
  - config_name: humaneval-lua
    data_files:
      - split: test
        path: humaneval-lua/test-*
  - config_name: humaneval-ml
    data_files:
      - split: test
        path: humaneval-ml/test-*
  - config_name: humaneval-r
    data_files:
      - split: test
        path: humaneval-r/test-*
  - config_name: humaneval-rkt
    data_files:
      - split: test
        path: humaneval-rkt/test-*
---

# Dataset Card for MultiPL-E-fixed (OCaml, Lua, R, Racket, Julia)

This dataset provides corrections for the OCaml, Lua, R, Racket, and Julia portions of the [nuprl/MultiPL-E](https://huggingface.co/datasets/nuprl/MultiPL-E) benchmark.

## Dataset Summary

MultiPL-E is a large-scale dataset for evaluating code generation models across 22 programming languages.

However, analysis of the dataset revealed several logical errors, inconsistencies, and language-specific issues in the generated prompts and test cases. These issues can lead to inaccurate evaluation scores by unfairly penalizing models for correctly identifying flaws in the prompts.

This repository provides a corrected version of the dataset specifically for OCaml, Lua, R, Racket, and Julia. The goal of this version is to provide a more reliable and accurate benchmark for evaluating Large Language Models on these languages.

## Summary of Corrections

The following modifications were made to address issues in the original dataset.

### 1. Logical Problems in Prompts and Test Cases

The following problems in the HumanEval portion of the dataset were corrected:

- `HumanEval_75_is_multiply_prime`: Resolved a mismatch between the problem instructions and the test cases.
- `HumanEval_92_any_int`: Fixed an incorrect test case that did not align with the problem's requirements.
- `HumanEval_116_sort_array`: Corrected a discrepancy between the sorting criteria in the instructions and the test cases.
- `HumanEval_128_prod_signs`: Amended an incorrect example in the prompt's docstring.
- `HumanEval_140_fix_spaces`: Corrected a faulty test case.
- `HumanEval_142_sum_squares`: Repaired corrupted or syntactically incorrect examples.
- `HumanEval_145_order_by_points`: Clarified vague and ambiguous logic in the problem statement.
- `HumanEval_148_bf`: Fixed a contradiction between the provided examples and the main instructions.
- `HumanEval_151_double_the_difference`: Replaced an incorrect test case that produced an invalid result.
- `HumanEval_162_string_to_md5`: Addressed incorrect handling of the language-specific None/null data types required by the test cases (see the sketch after this list).
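
For instance, `string_to_md5` hinges on returning a missing value for the empty string. The Python sketch below shows only the canonical HumanEval reference behavior (an illustration, not the dataset's patch itself); each target language must express the `None` return with its own type, e.g. OCaml's `option`, R's `NULL`, or Lua's `nil`:

```python
# Illustration of HumanEval_162's reference semantics (not the dataset's fix):
# an empty input must yield the language's "missing value", not a string.
import hashlib
from typing import Optional

def string_to_md5(text: str) -> Optional[str]:
    if not text:
        return None
    return hashlib.md5(text.encode("utf-8")).hexdigest()

assert string_to_md5("Hello world") == "3e25960a79dbc69b674cd4ec67a72c62"
assert string_to_md5("") is None
```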

### 2. General Prompt Ambiguities

- **0-Based Indexing**: Added clarifications to prompts where array/list index interpretation was ambiguous, explicitly enforcing a 0-based convention to ensure consistent behavior (illustrated below).
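
This matters because three of the target languages (Lua, R, Julia) index arrays from 1. A minimal Python sketch of the convention the clarified prompts enforce (the function name is illustrative, not taken from the dataset):

```python
# Under the enforced 0-based convention, "elements at even indices" means
# positions 0, 2, 4, ... of the sequence -- even when the target language
# (Lua, R, Julia) numbers its arrays starting at 1.
def elements_at_even_indices(xs):
    return [x for i, x in enumerate(xs) if i % 2 == 0]

assert elements_at_even_indices([10, 20, 30, 40]) == [10, 30]
```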

### 3. Language-Specific Fixes

- **R**: Corrected issues related to the handling of empty vectors, a common edge case.
- **OCaml**: Fixed incorrect usage of unary operators to align with OCaml's syntax.
- **Julia**: Resolved parsing issues caused by literal triple-quote (`"""`) delimiters appearing inside docstrings (see the sketch after this list).
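
Julia docstrings are themselves delimited by `"""`, so a literal `"""` in the docstring body closes the string early. A hedged sketch of the kind of sanitization involved, with a hypothetical helper name:

```python
# Hypothetical helper: escape literal triple quotes in a Julia docstring body
# so they cannot terminate the surrounding \"\"\"...\"\"\" delimiters early.
def escape_julia_docstring(body: str) -> str:
    return body.replace('"""', r'\"\"\"')

assert escape_julia_docstring('uses """ inside') == r'uses \"\"\" inside'
```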

## Using This Dataset

This corrected dataset is designed to be a drop-in replacement for the official MultiPL-E data for OCaml, Lua, R, Racket, and Julia.

To use it, replace the original `humaneval-[lang]` configurations with the corrected versions provided in this repository; the schema is unchanged, so standard evaluation frameworks continue to work.
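
For example, with the Hugging Face `datasets` library (assuming this repository's id is `jsbyun121/MultiPL-E-fixed`):

```python
from datasets import load_dataset

# Load the corrected OCaml configuration in place of nuprl/MultiPL-E's
# "humaneval-ml". The repository id is an assumption based on this card.
fixed = load_dataset("jsbyun121/MultiPL-E-fixed", "humaneval-ml", split="test")
print(fixed[0]["name"], fixed[0]["language"])
```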

## Citation and Attribution

If you use this corrected version of the dataset in your work, please cite the original MultiPL-E paper and acknowledge this repository for the corrections.

Original paper:

```bibtex
@article{cassano2023multipl,
  title={MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation},
  author={Cassano, Federico and Gouwar, John and Nguyen, Daniel and Nguyen, Sydney and Phipps-Costin, Luna and Pinckney, Donald and Yee, Ming-Ho and Zi, Yangtian and Anderson, Carolyn Jane and Feldman, Molly Q and Guha, Arjun and Greenberg, Michael and Jangda, Abhinav},
  journal={IEEE Transactions on Software Engineering},
  volume={49},
  number={7},
  pages={3675--3691},
  year={2023},
  publisher={IEEE}
}
```