MxEval
Multilingual Execution Evaluation
Dataset Summary
This repository contains data and code to perform execution-based multi-lingual evaluation of code generation capabilities. The data comprises a multi-lingual benchmark, MBXP, as well as multi-lingual MathQA and multi-lingual HumanEval.
Results and findings can be found in the paper "Multi-lingual Evaluation of Code Generation Models".
Supported Tasks and Leaderboards
Languages
The programming problems are written in multiple programming languages and contain English natural text in comments and docstrings.
Dataset Structure
To look up the currently supported dataset configurations:
from datasets import get_dataset_config_names
get_dataset_config_names("AmazonScience/mxeval")
['mathqa-x', 'mbxp', 'multi-humaneval']
To load a specific dataset and language:
from datasets import load_dataset
load_dataset("AmazonScience/mxeval", "mbxp", split="python")
Dataset({
features: ['task_id', 'language', 'prompt', 'test', 'entry_point', 'description', 'canonical_solution'],
num_rows: 974
})
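Once loaded, each row behaves like a plain Python dict, so individual problems can be inspected directly. A minimal sketch (note that newer versions of datasets may require passing trust_remote_code=True, since this repo ships a loading script):

```python
from datasets import load_dataset

# Load the Python split of MBXP; each row is a dict of the features above.
mbxp_python = load_dataset("AmazonScience/mxeval", "mbxp", split="python")
problem = mbxp_python[0]
print(problem["task_id"])   # 'MBPP/1', per the execution example below
print(problem["prompt"])    # function header plus docstring
```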
Data Instances
An example of a dataset instance:
{
"task_id": "MBSCP/6",
"language": "scala",
"prompt": "object Main extends App {\n /**\n * You are an expert Scala programmer, and here is your task.\n * * Write a Scala function to check whether the two numbers differ at one bit position only or not.\n *\n * >>> differAtOneBitPos(13, 9)\n * true\n * >>> differAtOneBitPos(15, 8)\n * false\n * >>> differAtOneBitPos(2, 4)\n * false\n */\n def differAtOneBitPos(a : Int, b : Int) : Boolean = {\n",
"test": "\n\n var arg00 : Int = 13\n var arg01 : Int = 9\n var x0 : Boolean = differAtOneBitPos(arg00, arg01)\n var v0 : Boolean = true\n assert(x0 == v0, \"Exception -- test case 0 did not pass. x0 = \" + x0)\n\n var arg10 : Int = 15\n var arg11 : Int = 8\n var x1 : Boolean = differAtOneBitPos(arg10, arg11)\n var v1 : Boolean = false\n assert(x1 == v1, \"Exception -- test case 1 did not pass. x1 = \" + x1)\n\n var arg20 : Int = 2\n var arg21 : Int = 4\n var x2 : Boolean = differAtOneBitPos(arg20, arg21)\n var v2 : Boolean = false\n assert(x2 == v2, \"Exception -- test case 2 did not pass. x2 = \" + x2)\n\n\n}\n",
"entry_point": "differAtOneBitPos",
"description": "Write a Scala function to check whether the two numbers differ at one bit position only or not."
}
Data Fields
- task_id: identifier for the data sample
- prompt: input for the model, containing the function header and docstring
- canonical_solution: solution for the problem in the prompt
- description: task description
- test: contains the function to test generated code for correctness
- entry_point: entry point for the test
- language: programming language identifier used to select the appropriate subprocess call for program execution
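These fields compose into an executable program: the evaluation harness concatenates the prompt, a model completion (or the canonical_solution), and the test block into one source file whose asserts exercise the entry_point. A minimal sketch of that assembly (writing to a local file here is illustrative, not part of the mxeval API):

```python
from datasets import load_dataset

mbxp_scala = load_dataset("AmazonScience/mxeval", "mbxp", split="scala")
problem = mbxp_scala[0]

# prompt (header + docstring) + solution body + test harness together
# form a complete program whose asserts verify the entry point.
program = problem["prompt"] + problem["canonical_solution"] + problem["test"]

with open("check_program.scala", "w") as f:  # illustrative file name
    f.write(program)
```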
Data Splits
- Multi-HumanEval
- Python
- Java
- JavaScript
- Csharp
- CPP
- Go
- Kotlin
- PHP
- Perl
- Ruby
- Swift
- Scala
- MBXP
- Python
- Java
- JavaScript
- TypeScript
- Csharp
- CPP
- Go
- Kotlin
- PHP
- Perl
- Ruby
- Swift
- Scala
- MathQA
- Python
- Java
- JavaScript
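Each bullet above corresponds to a configuration (one of the names returned by get_dataset_config_names) plus a language split. Based on the split="python" example earlier, split names appear to be the lowercased language names; a sketch under that assumption:

```python
from datasets import load_dataset

# Assumption: split names are the lowercased language names listed above.
humaneval_java = load_dataset("AmazonScience/mxeval", "multi-humaneval", split="java")
mathqa_js = load_dataset("AmazonScience/mxeval", "mathqa-x", split="javascript")
print(len(humaneval_java), len(mathqa_js))
```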
Dataset Creation
Curation Rationale
Since code generation models are often trained on dumps of GitHub, a dataset not included in those dumps was necessary to evaluate the models properly. However, since this dataset has itself been published on GitHub, it is likely to be included in future dumps.
Personal and Sensitive Information
None.
Social Impact of Dataset
With this dataset, code-generating models can be evaluated more rigorously, which should lead to fewer issues being introduced when such models are used.
Dataset Curators
AWS AI Labs
Execution
Execution Example
Install the mxeval repository (https://github.com/amazon-science/mxeval) to execute generations or canonical solutions for the prompts from this dataset.
>>> from datasets import load_dataset
>>> from mxeval.execution import check_correctness
>>> mbxp_python = load_dataset("AmazonScience/mxeval", "mbxp", split="python")
>>> example_problem = mbxp_python[0]
>>> check_correctness(example_problem, example_problem["canonical_solution"], timeout=20.0)
{'task_id': 'MBPP/1', 'passed': True, 'result': 'passed', 'completion_id': None, 'time_elapsed': 10.582208633422852}
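The same call can score model generations instead of canonical solutions by looping over problems and sampled completions. A minimal sketch, where generate_completions is a hypothetical stand-in for your model's sampling function:

```python
from datasets import load_dataset
from mxeval.execution import check_correctness

def generate_completions(prompt):
    # Hypothetical stand-in: return candidate function bodies for `prompt`.
    raise NotImplementedError

mbxp_python = load_dataset("AmazonScience/mxeval", "mbxp", split="python")

n_passed = n_total = 0
for problem in mbxp_python:
    for completion in generate_completions(problem["prompt"]):
        result = check_correctness(problem, completion, timeout=20.0)
        n_total += 1
        if result["passed"]:
            n_passed += 1
print(f"pass rate: {n_passed / n_total:.3f}")
```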
Considerations for Using the Data
Make sure to sandbox the execution environment since generated code samples can be harmful.
Licensing Information
Citation Information
@article{mbxp_athiwaratkun2022,
title = {Multi-lingual Evaluation of Code Generation Models},
author = {Athiwaratkun, Ben and
Gouda, Sanjay Krishna and
Wang, Zijian and
Li, Xiaopeng and
Tian, Yuchen and
Tan, Ming and
Ahmad, Wasi Uddin and
Wang, Shiqi and
Sun, Qing and
Shang, Mingyue and
Gonugondla, Sujan Kumar and
Ding, Hantian and
Kumar, Varun and
Fulton, Nathan and
Farahani, Arash and
Jain, Siddhartha and
Giaquinto, Robert and
Qian, Haifeng and
Ramanathan, Murali Krishna and
Nallapati, Ramesh and
Ray, Baishakhi and
Bhatia, Parminder and
Sengupta, Sudipta and
Roth, Dan and
Xiang, Bing},
doi = {10.48550/ARXIV.2210.14868},
url = {https://arxiv.org/abs/2210.14868},
keywords = {Machine Learning (cs.LG), Computation and Language (cs.CL), FOS: Computer and information sciences},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
Contributions