---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: prompt
    dtype: string
  - name: canonical_solution
    dtype: string
  - name: test
    dtype: string
  - name: entry_point
    dtype: string
  - name: difficulty_scale
    dtype: string
  splits:
  - name: test
    num_bytes: 156939
    num_examples: 151
  download_size: 62547
  dataset_size: 156939
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66376b02dd21d441c54d57c6/iPEnMteHcq2XWUP-3gp4N.png)

# qiskit_humaneval_hard

`qiskit_humaneval_hard` is a dataset for evaluating LLMs at writing Qiskit code. It contains the same problems as [Qiskit/qiskit_humaneval](https://huggingface.co/datasets/Qiskit/qiskit_humaneval) but is not formatted as a completion exercise. Compared with [Qiskit/qiskit_humaneval](https://huggingface.co/datasets/Qiskit/qiskit_humaneval), this benchmark is more challenging because the LLM must decide which Python modules to use to solve each problem.
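
As a rough sketch of how the benchmark might be loaded and inspected (the repository id `Qiskit/qiskit_humaneval_hard` and the field names below are assumed from this card's metadata rather than official documentation), each record exposes a `task_id`, `prompt`, `canonical_solution`, `test`, `entry_point`, and `difficulty_scale`:

```python
# Minimal sketch: load the benchmark and inspect one problem.
# Assumes the `datasets` library is installed and that the repository id
# matches this card (Qiskit/qiskit_humaneval_hard) -- adjust if it differs.
from datasets import load_dataset

ds = load_dataset("Qiskit/qiskit_humaneval_hard", split="test")
print(f"{len(ds)} problems")  # the card metadata reports 151 examples in the test split

example = ds[0]
print(example["task_id"])           # problem identifier
print(example["difficulty_scale"])  # difficulty label
print(example["prompt"])            # task description given to the model
print(example["entry_point"])       # function name the generated code must define
# example["canonical_solution"] and example["test"] hold the reference
# solution and the check code that can be used to score a generated answer.
```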

## Terms of use

* Terms of use: [https://quantum.ibm.com/terms](https://quantum.ibm.com/terms)
* Privacy policy: [https://quantum.ibm.com/terms/privacy](https://quantum.ibm.com/terms/privacy)

## License

[Apache License 2.0](LICENSE)