---
dataset_info:
- config_name: humaneval-jl
  features:
  - name: name
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: doctests
    dtype: string
  - name: original
    dtype: string
  - name: prompt_terminology
    dtype: string
  - name: tests
    dtype: string
  - name: stop_tokens
    dtype: string
  splits:
  - name: test
    num_bytes: 167490
    num_examples: 159
  download_size: 66247
  dataset_size: 167490
- config_name: humaneval-lua
  features:
  - name: name
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: doctests
    dtype: string
  - name: original
    dtype: string
  - name: prompt_terminology
    dtype: string
  - name: tests
    dtype: string
  - name: stop_tokens
    dtype: string
  splits:
  - name: test
    num_bytes: 184572
    num_examples: 161
  download_size: 66774
  dataset_size: 184572
- config_name: humaneval-ml
  features:
  - name: name
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: doctests
    dtype: string
  - name: original
    dtype: string
  - name: prompt_terminology
    dtype: string
  - name: tests
    dtype: string
  - name: stop_tokens
    dtype: string
  splits:
  - name: test
    num_bytes: 170283
    num_examples: 155
  download_size: 65815
  dataset_size: 170283
- config_name: humaneval-r
  features:
  - name: name
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: doctests
    dtype: string
  - name: original
    dtype: string
  - name: prompt_terminology
    dtype: string
  - name: tests
    dtype: string
  - name: stop_tokens
    dtype: string
  splits:
  - name: test
    num_bytes: 199744
    num_examples: 161
  download_size: 68771
  dataset_size: 199744
- config_name: humaneval-rkt
  features:
  - name: name
    dtype: string
  - name: language
    dtype: string
  - name: prompt
    dtype: string
  - name: doctests
    dtype: string
  - name: original
    dtype: string
  - name: prompt_terminology
    dtype: string
  - name: tests
    dtype: string
  - name: stop_tokens
    dtype: string
  splits:
  - name: test
    num_bytes: 196214
    num_examples: 161
  download_size: 67226
  dataset_size: 196214
configs:
- config_name: humaneval-jl
  data_files:
  - split: test
    path: humaneval-jl/test-*
- config_name: humaneval-lua
  data_files:
  - split: test
    path: humaneval-lua/test-*
- config_name: humaneval-ml
  data_files:
  - split: test
    path: humaneval-ml/test-*
- config_name: humaneval-r
  data_files:
  - split: test
    path: humaneval-r/test-*
- config_name: humaneval-rkt
  data_files:
  - split: test
    path: humaneval-rkt/test-*
---

# Dataset Card for MultiPL-E-fixed (OCaml, Lua, R, Racket, Julia)

This dataset provides corrections for the **OCaml, Lua, R, Racket, and Julia** portions of the [nuprl/MultiPL-E](https://github.com/nuprl/MultiPL-E) benchmark.

### Original Dataset Information
- **Repository:** https://github.com/nuprl/MultiPL-E
- **Paper:** https://ieeexplore.ieee.org/abstract/document/10103177
- **Original Point of Contact:** [email protected], [email protected], [email protected]

### This Version
- **Repository:** https://github.com/jsbyun121/MultiPL-E-fixed

---

## Dataset Summary

MultiPL-E is a large-scale dataset for evaluating code generation models across 22 programming languages. 

However, analysis of the dataset revealed several logical errors, inconsistencies, and language-specific issues in the generated prompts and test cases. These issues can distort evaluation scores: a model may be penalized even when its output is correct for the intended problem, simply because the prompt is flawed or a test case is wrong.

This repository provides a **corrected version** of the dataset specifically for **OCaml, Lua, R, Racket, and Julia**. The goal of this version is to provide a more reliable and accurate benchmark for evaluating Large Language Models on these languages.

## Summary of Corrections

A detailed table of all corrections (logical problems, prompt ambiguities, and language-specific fixes) is available here:

🔗 [Google Sheet of Corrections](https://docs.google.com/spreadsheets/d/1lnDubSv39__ZuSFmnnXoXCUuPS85jcFScS9hlzI9ohI/edit?usp=sharing)


## Using This Dataset

This corrected dataset is designed to be a **drop-in replacement** for the official MultiPL-E data for OCaml, Lua, R, Racket, and Julia.

To use it, simply replace the original `humaneval-[lang]` files with the corrected versions provided in this repository. The data structure remains compatible with standard evaluation frameworks.
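For example, the corrected configurations can be loaded with the Hugging Face `datasets` library. The sketch below is illustrative: the Hub id `jsbyun121/MultiPL-E-fixed` is assumed from the repository name above, while the config names (`humaneval-jl`, `humaneval-lua`, `humaneval-ml`, `humaneval-r`, `humaneval-rkt`) and the `test` split come from the YAML header of this card.

```python
from datasets import load_dataset

# Assumed Hub id, taken from the repository name above; adjust if the
# corrected files are hosted under a different path.
ds = load_dataset("jsbyun121/MultiPL-E-fixed", "humaneval-lua", split="test")

# Records keep the original MultiPL-E schema (see the YAML header):
# name, language, prompt, doctests, original, prompt_terminology,
# tests, stop_tokens.
problem = ds[0]
print(problem["name"])
print(problem["prompt"])       # code to complete: signature plus docstring
print(problem["tests"])        # tests appended to the model's completion
print(problem["stop_tokens"])  # stop sequences used to truncate generation
```

Because the schema is unchanged, any harness that consumes the original `humaneval-[lang]` configurations should work on these files without modification.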

## Citation and Attribution

If you use this corrected version of the dataset in your work, please cite the original MultiPL-E paper and acknowledge this repository for the corrections.

**Original Paper:**
```bibtex
@article{cassano2023multipl,
  title={MultiPL-E: A Scalable and Polyglot Approach to Benchmarking Neural Code Generation},
  author={Cassano, Federico and Gouwar, John and Nguyen, Daniel and Nguyen, Sydney and Phipps-Costin, Luna and Pinckney, Donald and Yee, Ming-Ho and Zi, Yangtian and Anderson, Carolyn Jane and Feldman, Molly Q and Guha, Arjun and Greenberg, Michael and Jangda, Abhinav},
  journal={IEEE Transactions on Software Engineering},
  volume={49},
  number={7},
  pages={3675--3691},
  year={2023},
  publisher={IEEE}
}
```