khulnasoft/duo-chat-evaluation
The dataset viewer fails with the following error.

Error code: `DatasetGenerationCastError`

Message: An error occurred while generating the dataset. All the data files must have the same columns, but at some point there are 3 new columns ({'context_type', 'context', 'context_id'}) and 2 missing columns ({'query', 'query_id'}). This happened while the json dataset builder was generating data using hf://datasets/khulnasoft/duo-chat-evaluation/data/duo_chat/v1/jsonl/context/b35050edc0f74974b90a5bd12813523d.jsonl (at revision 0d914ece246b878bd25c08874ce76931781ff908). Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).

Traceback:

    Traceback (most recent call last):
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2011, in _prepare_split_single
        writer.write_table(table)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/arrow_writer.py", line 585, in write_table
        pa_table = table_cast(pa_table, self._schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
        return cast_table_to_schema(table, schema)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
        raise CastError(
    datasets.table.CastError: Couldn't cast
    context_id: string
    context: string
    context_type: string
    to {'query_id': Value(dtype='string', id=None), 'query': Value(dtype='string', id=None)}
    because column names don't match

    During handling of the above exception, another exception occurred:

    Traceback (most recent call last):
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1577, in compute_config_parquet_and_info_response
        parquet_operations = convert_to_parquet(builder)
      File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1191, in convert_to_parquet
        builder.download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1027, in download_and_prepare
        self._download_and_prepare(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1122, in _download_and_prepare
        self._prepare_split(split_generator, **prepare_split_kwargs)
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
        for job_id, done, content in self._prepare_split_single(
      File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2013, in _prepare_split_single
        raise DatasetGenerationCastError.from_cast_error(
    datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
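One way to resolve this is the "multiple configurations" route the error suggests: declare each schema as its own configuration in the dataset card's YAML front matter, so the viewer builds one schema per config instead of casting all JSONL files to a single set of columns. A minimal sketch, assuming the files live under separate `data/duo_chat/v1/jsonl/` subdirectories (the `context/` path is taken from the error above; the `queries/` path is a guess):

```yaml
# README.md front matter -- hypothetical paths, adjust to the actual layout
configs:
  - config_name: context
    data_files: "data/duo_chat/v1/jsonl/context/*.jsonl"
  - config_name: queries
    data_files: "data/duo_chat/v1/jsonl/queries/*.jsonl"
```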
query_id (string) | query (string) |
---|---|
9fac71b7575041cca8519da583e6aee7 | Can you write Python code that sums up two numbers? |
7ed54c92d0b5490eb547bcb957c548df | write me python code to query the rest api server at example.com/api |
18d84a973bf54d0f8b3695748c4a2ced | Write a PHP script that finds 4 hours in a run of sunny weather |
dc992a57ecef4902bee4daf61baea4cb | can you write the regex that will match any value with "pg" in it, "postgres" or "patroni" |
794e34170ecf4dadae439433ef7883c3 | can you make a mermaid diagram where there is a central box or square with a specific title, two arrows pointing at the box as an input and one arrow getting out the box as an output |
19ab19f137b64cc689599376692285f9 | How do I create a new Next JS application named nextjs-support-portal with tailwindcss? |
9f01fb5f9c7c4c4ba4455209cbdb07f9 | Create a login page using Next.js, typescript, and tailwindcss |
3e627b091d9c49f69685313007b71fad | Share a gitlab-ci.yml file to cache node files |
da9a496741c243e0812be5b018be3bbf | Write a .gitlab-ci.yml file to test, build and and deploy a javascript application to AWS EC2 |
This repo contains datasets used to evaluate the quality of AI features we are developing at GitLab. We follow an approach similar to Hugging Face's and store the datasets as Git LFS objects under the `data` folder.
Context files (e.g. under `data/duo_chat/v1/jsonl/context/`) have the following schema:

Field name | Type | Additional notes |
---|---|---|
context_id | STRING | Valid UUIDs, generated as `uuid4().hex` in Python |
context | STRING | JSON string describing one of the GitLab resources |
context_type | STRING | GitLab resource type, either `issue` or `epic` |
referer_url | STRING, NULLABLE | Not used in this iteration, added for Chat API compatibility |
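As a concrete illustration, here is a minimal Python sketch that writes one context record in this schema to a JSONL file; the example issue payload and output file name are hypothetical:

```python
import json
from uuid import uuid4

# Hypothetical example: serialize one GitLab issue as a context record.
record = {
    "context_id": uuid4().hex,           # valid UUID, generated as uuid4().hex
    "context": json.dumps({"title": "Fix login bug", "state": "opened"}),
    "context_type": "issue",             # either "issue" or "epic"
    "referer_url": None,                 # nullable, kept for Chat API compatibility
}

# JSONL: one JSON object per line.
with open("context_records.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```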
Queries paired with a context have the following schema:

Field name | Type | Additional notes |
---|---|---|
query_id | STRING | Valid UUIDs, generated as `uuid4().hex` in Python |
query | STRING | User query submitted to Duo Chat |
context | STRING | JSON string describing one of the GitLab resources |
resource_id | INT, NULLABLE | GitLab global resource identifier |
resource_type | STRING | GitLab resource type, either `issue` or `epic` |
referer_url | STRING, NULLABLE | Not used in this iteration, added for Chat API compatibility |
Standalone query records (as shown in the preview above) have the following schema:

Field name | Type | Additional notes |
---|---|---|
query_id | STRING | Valid UUIDs, generated as `uuid4().hex` in Python |
query | STRING | User query submitted to Duo Chat |
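Once the files are split into per-schema configurations (as suggested by the viewer error above), they can be loaded with the `datasets` library. A sketch, assuming the hypothetical config names from the YAML example earlier:

```python
from datasets import load_dataset

# "queries" is a hypothetical config name -- it must match a `configs`
# entry in the dataset card's front matter; the split name is also assumed.
queries = load_dataset("khulnasoft/duo-chat-evaluation", "queries", split="train")
print(queries.column_names)  # expected: ['query_id', 'query']
```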
Refer to a specific commit SHA if you need a concrete revision of the dataset:
# Clone without downloading the LFS objects themselves
GIT_LFS_SKIP_SMUDGE=1 git clone $URL
# Move the working tree to the desired revision
git reset --hard $SHA1
# Download the LFS objects referenced by that revision
git lfs pull
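For example, to pin the revision referenced by the viewer error above (the clone URL is assumed from the `hf://datasets/...` path in that error):

```sh
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/khulnasoft/duo-chat-evaluation
cd duo-chat-evaluation
git reset --hard 0d914ece246b878bd25c08874ce76931781ff908
git lfs pull
```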