---
dataset_info:
  features:
  - name: task_id
    dtype: string
  - name: prompt
    dtype: string
  - name: canonical_solution
    dtype: string
  - name: test
    dtype: string
  - name: entry_point
    dtype: string
  - name: difficulty_scale
    dtype: string
  splits:
  - name: test
    num_bytes: 156939
    num_examples: 151
  download_size: 62547
  dataset_size: 156939
configs:
- config_name: default
  data_files:
  - split: test
    path: data/test-*
---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66376b02dd21d441c54d57c6/iPEnMteHcq2XWUP-3gp4N.png)

# qiskit_humaneval_hard

`qiskit_humaneval_hard` is a dataset for evaluating LLMs at writing Qiskit code. It contains the same problems as [Qiskit/qiskit_humaneval](https://huggingface.co/datasets/Qiskit/qiskit_humaneval) but is not formatted as a completion exercise. This makes the benchmark more challenging: the LLM must decide which Python modules to use to solve each problem.

## Terms of use

* Terms of use: [https://quantum.ibm.com/terms](https://quantum.ibm.com/terms)
* Privacy policy: [https://quantum.ibm.com/terms/privacy](https://quantum.ibm.com/terms/privacy)

## License

[Apache License 2.0](LICENSE)
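## Example evaluation

The dataset's fields (`prompt`, `test`, `entry_point`) support a HumanEval-style pass/fail check: execute the model's code, execute the problem's test code, then call the test's `check` function on the entry-point function. Below is a minimal sketch of such a harness; the example problem shown is hypothetical and not taken from the dataset, and a real run would also need sandboxing and the Qiskit packages installed.

```python
def run_example(example: dict, candidate_code: str) -> bool:
    """Return True if candidate_code passes the example's test.

    Assumes the HumanEval convention that example["test"] defines a
    function check(candidate) raising on failure.
    """
    namespace: dict = {}
    try:
        exec(candidate_code, namespace)   # define the candidate solution
        exec(example["test"], namespace)  # define check(...)
        namespace["check"](namespace[example["entry_point"]])
        return True
    except Exception:
        return False

# Hypothetical example mirroring the dataset's schema
example = {
    "entry_point": "add",
    "test": "def check(candidate):\n    assert candidate(1, 2) == 3",
}
print(run_example(example, "def add(a, b):\n    return a + b"))  # True
```

In practice the examples would come from the `test` split loaded with the `datasets` library, and each candidate program should be run in an isolated process with a timeout rather than via `exec` in the evaluator's own interpreter.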