Before ARC Dataset
This dataset contains .parquet files organized in nested subfolders under before-arc, split into two main categories: generalization and compositionality. Each category contains data for different experiment settings and experiments, with Parquet files for the training, validation, and test splits. The nested structure is intentional for clarity.
Dataset Structure
before_arc/: Root folder
- generalization/: Data for generalization experiments
  - experiment_settings[1-5]/: Five settings (e.g., different conditions or parameters)
    - experiment[1-4]/: Four experiments per setting
      - train.parquet: Training data
      - train_val.parquet: Training validation data
      - test_val.parquet: Test validation data
      - test.parquet: Test data
- compositionality/: Data for compositionality experiments
  - experiment_settings[1-5]/: Five settings (e.g., different combinations of transformations)
    - experiment[N]/: N experiments per setting (N changes per experiment setting)
      - train.parquet: Training data
      - train_val.parquet: Training validation data
      - test_val.parquet: Test validation data
      - test.parquet: Test data
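For quick inspection, a single file can also be read directly with pandas. The sketch below assumes the layout shown above; the path mirrors the usage example further down (which uses exp_setting_3), so adjust it to the setting and experiment you need.

```python
# Minimal sketch: download one Parquet file from the Hub and read it with pandas.
# The file path is illustrative, based on the structure described above.
import pandas as pd
from huggingface_hub import hf_hub_download

local_path = hf_hub_download(
    repo_id="taratataw/before-arc",
    repo_type="dataset",
    filename="generalization/exp_setting_3/experiment_2/test.parquet",
)
df = pd.read_parquet(local_path)
print(df.columns)  # expected: input, output, transformation_suite, task_key
```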
We provide instructions on how to read these files in the open_data.ipynb notebook, as well as in the original repository that was used to create this dataset.
Content
Each .parquet file contains records with the following keys: 'input', 'output', 'transformation_suite', 'task_key'. The input is the input grid, while the output is the output grid after applying the transformation_suite. The task_key is simply an identifier for the task instance. NOTE: In the compositionality study, we provide additional demo_input and demo_output fields holding demonstration examples of the task, in case the user would like to pass a demonstration example (in-context-learning style) rather than the transformation_suite to specify which transformation the model should apply.
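As an illustration of how these fields might be used, here is a small sketch; the prompt template is made up for this example and is not part of the dataset.

```python
# Illustrative only: turn one record into a text prompt.
# Assumes a dict with the keys described above; the template itself is hypothetical.
def build_prompt(record: dict, use_demo: bool = True) -> str:
    if use_demo and "demo_input" in record:
        # In-context style (compositionality study): show a solved demonstration
        # instead of naming the transformation explicitly.
        return (
            f"Example input: {record['demo_input']}\n"
            f"Example output: {record['demo_output']}\n"
            f"Input: {record['input']}\nOutput:"
        )
    # Otherwise specify the transformation suite explicitly.
    return (
        f"Transformation: {record['transformation_suite']}\n"
        f"Input: {record['input']}\nOutput:"
    )
```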
Usage
Load the dataset using the datasets library:
```python
from datasets import load_dataset

gen_exps3_exp2_test = load_dataset(
    "taratataw/before-arc",
    data_files={"data": "generalization/exp_setting_3/experiment_2/test.parquet"},
)
# Print the keys of the first sample from the chosen file.
# Should output: dict_keys(['input', 'output', 'transformation_suite', 'task_key'])
print(gen_exps3_exp2_test["data"][0].keys())
```
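To load all four files of one experiment as named splits in a single call, a data_files mapping can be used; the paths below are illustrative and follow the structure described above.

```python
from datasets import load_dataset

# Illustrative split mapping for a single experiment; adjust the paths as needed.
splits = load_dataset(
    "taratataw/before-arc",
    data_files={
        "train": "generalization/exp_setting_3/experiment_2/train.parquet",
        "train_val": "generalization/exp_setting_3/experiment_2/train_val.parquet",
        "test_val": "generalization/exp_setting_3/experiment_2/test_val.parquet",
        "test": "generalization/exp_setting_3/experiment_2/test.parquet",
    },
)
print(splits)  # DatasetDict with train / train_val / test_val / test
```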