
What is it?

Swallow-code-v0.1 consists of four staged dataset subsets filtered from bigcode/the-stack-v2-train-smol-ids.

What is being released?

The dataset is released in four versions:

  • Swallow Code v0.1 stage 1: 36B tokens, 41M documents containing Python scripts.
  • Swallow Code v0.1 stage 2: 31B tokens, 37M documents containing Python scripts that are free of syntax errors.
  • Swallow Code v0.1 stage 3: 20B tokens, 24M documents containing Python scripts filtered by pylint score.
  • Swallow Code v0.1 stage 4: 16B tokens, 21M documents containing Python scripts filtered by code-comment and literal language detection (English and Japanese).
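The stage 2 and stage 3 filters above can be approximated with standard tools. The following is a minimal, hypothetical sketch (the card does not publish the actual pipeline or the pylint cutoff used; the threshold below is an assumption), using Python's built-in `ast` module for the syntax check:

```python
import ast


def passes_stage2(source: str) -> bool:
    """Stage 2 sketch: keep only scripts that parse without syntax errors."""
    try:
        ast.parse(source)
        return True
    except SyntaxError:
        return False


def passes_stage3(pylint_score: float, threshold: float = 7.0) -> bool:
    """Stage 3 sketch: keep scripts at or above a pylint score threshold.

    The threshold value is an assumption; the card does not state the
    cutoff used for swallow-code-v0.1.
    """
    return pylint_score >= threshold


if __name__ == "__main__":
    good = "def add(a, b):\n    return a + b\n"
    bad = "def add(a, b:\n    return a + b\n"  # missing closing paren
    print(passes_stage2(good))  # True
    print(passes_stage2(bad))   # False
```

Using `ast.parse` rather than `compile` keeps the check purely syntactic; no filtered code is ever executed.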

Results and Performance

Llama-3.1-8B performance after continual pretraining on 50B tokens of Japanese, English, and code (swallow-code-v0.1) datasets.

Dataset Schema

{
  "blob_id": string,
  "path": string,
  "content_id": string,
  "language": string,
  "length_bytes": int64,
  "detected_licenses": list,
  "license_type": string,
  "src_encoding": string,
  "is_vendor": bool,
  "is_generated": bool,
  "alphanum_fraction": float64,
  "alpha_fraction": float64,
  "num_lines": int64,
  "avg_line_length": float64,
  "max_line_length": int64,
  "text": string,
  "analysis_results": list,
  "has_issues": bool,
  "language_type_issue": list,
  "language_type": string,
  "pylint_score": int64,
  "pylint_output": string
}

Licensing information

Swallow-code-v0.1 follows the license of The Stack v2. The following is the license statement of The Stack v2.

The Stack v2 is a collection of source code from repositories with various licenses. Any use of all or part of the code gathered in The Stack v2 must abide by the terms of the original licenses, including attribution clauses when relevant. We facilitate this by providing provenance information for each data point.

Citation information

@misc{fujii2024swallowcode,
    author       = {Kazuki Fujii and Rio Yokota},
    title        = {Swallow-Code-v0.1},
    year         = {2024},
    url          = {https://huggingface.co/datasets/tokyotech-llm/swallow-code-v0.1},
    publisher    = {Swallow Project}
}