---
dataset_info:
- config_name: Raw_Java
  features:
  - name: file_name
    dtype: string
  - name: file_path
    dtype: string
  - name: content
    dtype: string
  - name: file_size
    dtype: int64
  - name: language
    dtype: string
  - name: extension
    dtype: string
  - name: repo_name
    dtype: string
  - name: repo_stars
    dtype: int64
  - name: repo_forks
    dtype: int64
  - name: repo_open_issues
    dtype: int64
  - name: repo_created_at
    dtype: string
  - name: repo_pushed_at
    dtype: string
  splits:
  - name: train
    num_bytes: 59165182788
    num_examples: 7798053
  download_size: 15597123595
  dataset_size: 59165182788
- config_name: Stackless_Java_V2
  features:
  - name: file_name
    dtype: string
  - name: file_path
    dtype: string
  - name: content
    dtype: string
  - name: file_size
    dtype: int64
  - name: language
    dtype: string
  - name: extension
    dtype: string
  - name: repo_name
    dtype: string
  - name: repo_stars
    dtype: int64
  - name: repo_forks
    dtype: int64
  - name: repo_open_issues
    dtype: int64
  - name: repo_created_at
    dtype: string
  - name: repo_pushed_at
    dtype: string
  - name: sha
    dtype: string
  - name: near_dups_stkv2_idx
    sequence: int64
  splits:
  - name: test
    num_bytes: 4482283353
    num_examples: 236735
  - name: train
    num_bytes: 36781802102
    num_examples: 1893880
  download_size: 41917815131
  dataset_size: 41264085455
configs:
- config_name: Raw_Java
  data_files:
  - split: train
    path: data/Raw_Java/train-*
- config_name: Stackless_Java_V2
  data_files:
  - split: train
    path: Stackless_Java_V2/train-*
  - split: test
    path: Stackless_Java_V2/test-*
---

# Dataset Summary

We create a new Java dataset by scraping public repositories on GitHub. Our dataset is file-level: it contains individual Java files rather than entire projects or code snippets such as functions or class definitions. Our approach combines techniques used in creating the [Stack](https://huggingface.co/bigcode) dataset family, which served as the training foundation for the [StarCoder](https://huggingface.co/bigcode) models. We specifically build on [Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2), as it is the most recent and publicly available release. Our dataset creation pipeline involves three key stages: collection, cleaning, and deduplication.

# Collection

We start the collection process by scraping **10,500** public repositories using the [GitHub API](https://docs.github.com/en/rest/search/search?apiVersion=2022-11-28). The extra **500** repositories act as a buffer, ensuring that we collect at least **10,000** repositories even if some are deleted or made private between the time we fetch the repository list and the time we actually download their content. We specifically look for repositories released under a strong copyleft license such as **GPL-2.0**, **GPL-3.0**, or **AGPL-3.0**. We use copyleft licenses to ensure our dataset is not contaminated with training data from Stack v2. This issue has occurred with other publicly available file-level code datasets, including Stack v1, which claimed to contain only permissively licensed code but was [contaminated with copyleft-licensed code](https://dl.acm.org/doi/10.1145/3650105.3652298). Stack v2 also [claims to exclude copyleft-licensed code](https://arxiv.org/abs/2402.19173), citing the community's uncertain stance on such code and its low volume. Nevertheless, we still deduplicated our dataset against Stack v2 to ensure there was no overlap and that our data was safe for training. We extract repositories **created** up to **April 2024** in **decreasing order** of their **star counts**.
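As an illustration of this stage, the minimal sketch below queries the GitHub search endpoint for one license, star range, and creation window. It is not the pipeline's actual code: the helper name `search_repos`, the token placeholder, and the back-off and timeout values are our own assumptions.

```python
import time
import requests

API = "https://api.github.com/search/repositories"
HEADERS = {
    "Accept": "application/vnd.github+json",
    "Authorization": "Bearer <YOUR_GITHUB_TOKEN>",  # placeholder, not a real token
}

def search_repos(license_id, stars, created, max_pages=10):
    """Fetch up to 1,000 repositories (10 pages x 100 results) for one
    license / star-range / creation-window combination."""
    query = f"language:java license:{license_id} stars:{stars} created:{created}"
    repos = []
    for page in range(1, max_pages + 1):
        while True:
            resp = requests.get(
                API,
                headers=HEADERS,
                timeout=30,  # timeout so a stuck request cannot hang the crawl
                params={"q": query, "sort": "stars", "order": "desc",
                        "per_page": 100, "page": page},
            )
            if resp.status_code == 403:  # rate-limited: back off, then retry
                time.sleep(60)
                continue
            break
        items = resp.json().get("items", [])
        repos.extend(items)
        if len(items) < 100:  # a short page means we reached the last one
            break
    return repos

# e.g., GPL-3.0 repositories with 850-900 stars created in one two-month window
batch = search_repos("gpl-3.0", "850..900", "2020-01-01..2020-02-29")
```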
To avoid **GitHub rate limits**, we use **timeouts** and **pagination** when fetching repositories. The search is based on the **repository license type**, **star count**, and **creation date**. The features we extract for each repository are illustrated in the example below.

```json
{
    "id": 126178683,
    "full_name": "halo-dev/halo",
    "html_url": "https://github.com/halo-dev/halo",
    "stargazers_count": 29115,
    "forks_count": 8985,
    "watchers_count": 29115,
    "open_issues_count": 278,
    "language": "Java",
    "created_at": "2018-03-21T12:56:52Z",
    "pushed_at": "2023-10-28T16:29:39Z",
    "license": {
        "key": "gpl-3.0",
        "name": "GNU General Public License v3.0",
        "spdx_id": "GPL-3.0",
        "url": "https://api.github.com/licenses/gpl-3.0",
        "node_id": "MDc6TGljZW5zZTk="
    },
    "retrieval_date": "10/30/2023, 3:24:57 PM (Europe/Amsterdam)"
}
```

### Repository Fields

- **id**: unique ID of the repo
- **full_name**: complete name of the repo
- **html_url**: URL to the repo
- **stargazers_count**: number of stars of the repo
- **forks_count**: number of forks of the repo
- **watchers_count**: number of watchers of the repo
- **open_issues_count**: number of open issues of the repo at extraction time
- **language**: main language of the repo
- **created_at**: creation date of the repo
- **pushed_at**: date of the most recent push to the repo before the extraction date
- **license**: license type of the repo
- **retrieval_date**: date when the repo was scraped from GitHub

We start by retrieving repositories with more than **900** stars using **two-month tumbling windows**. If we hit the **1,000**-repository limit per search (for a personal GitHub account), we shorten the search space to a **one-month window** and restart the iteration; otherwise, the window advances by two months. Once the timeframe up to April 2024 is covered, we reduce the star search space: between **900** and **100** stars we decrease the interval by **50** (e.g., searching [850, 900]), between **100** and **10** stars we decrease it by **10**, and for the last **10** stars we decrease it by **1**. Figure 1 shows the distribution of repositories with up to **500** stars. Since most repositories fall within the **0-100 star range**, combining the **creation date** and **star count** filters helps us stay under the API limits and scrape more data by narrowing the search space. Although the creation-date window could be narrowed further (to week or day level), a one-month window was enough for our needs. A minimal sketch of this search loop is shown below.
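The sketch reuses the hypothetical `search_repos` helper from the earlier snippet; the 2008 start date and the `dateutil` dependency are assumptions rather than details from the actual pipeline.

```python
from datetime import date
from dateutil.relativedelta import relativedelta  # third-party month arithmetic

END = date(2024, 4, 30)  # creation-date cut-off

def star_ranges():
    """Star intervals in decreasing order: >900, then width-50 steps down to
    100 stars, width-10 steps down to 10 stars, and single stars below that."""
    yield ">900"
    for hi in range(900, 100, -50):   # [850,900], [800,850], ..., [100,150]
        yield f"{hi - 50}..{hi}"
    for hi in range(100, 10, -10):    # [90,100], ..., [10,20]
        yield f"{hi - 10}..{hi}"
    for s in range(9, -1, -1):        # 9, 8, ..., 0 stars
        yield str(s)

def collect(start=date(2008, 1, 1)):
    """Sweep each star range with tumbling creation-date windows, shrinking
    the window from two months to one and restarting the sweep whenever a
    window saturates the 1,000-result search limit."""
    repos = []
    for stars in star_ranges():
        months, cur = 2, start
        while cur <= END:
            window_end = min(cur + relativedelta(months=months), END)
            batch = search_repos("gpl-3.0", stars, f"{cur}..{window_end}")
            if len(batch) >= 1000 and months == 2:
                months, cur = 1, start  # shrink the window and restart
                continue
            repos.extend(batch)
            cur = window_end + relativedelta(days=1)
    return repos
```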
After retrieving the repositories, we extract all files with a *.java* extension. The final dataset structure is shown in the example below.

```json
{
    "file_name": "Font.java",
    "file_path": ".../lateralgm/resources/Font.java",
    "content": "*/ package org.lateralgm.resources; import java.util.EnumMap; import org.lateralgm.main.Prefs; ...",
    "file_size": 1985,
    "language": "Java",
    "extension": ".java",
    "repo_name": "lwizchz/GameMaker-HTML5-Player",
    "repo_stars": 22,
    "repo_forks": 9,
    "repo_open_issues": 0,
    "repo_created_at": "2011-09-10T16:05:20Z",
    "repo_pushed_at": "2013-05-06T23:00:17Z",
    "sha": "00046809b218b2c058f4be7...",
    "near_dups_stkv2_idx": [21192944, 106219595]
}
```

### Dataset Fields

- **file_name**: name of the file extracted from its repo
- **file_path**: path to the file in its repo
- **content**: content of the file
- **file_size**: size of the file
- **language**: language of the file
- **extension**: language extension of the file
- **repo_name**: complete name of the file's repo
- **repo_stars**: number of stars of the file's repo
- **repo_forks**: number of forks of the file's repo
- **repo_open_issues**: number of open issues of the file's repo at the extraction date
- **repo_created_at**: creation date of the file's repo
- **repo_pushed_at**: date of the most recent push to the file's repo before the extraction date
- **sha**: SHA value of the file's content
- **near_dups_stkv2_idx**: IDs of files from Java-Stack v2 that are near-duplicates of the current file
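As a usage illustration, the `Stackless_Java_V2` config can be loaded with the `datasets` library and, for example, filtered down to files that have no near-duplicates in Java-Stack v2. The `"<org>/<dataset>"` path is a placeholder for this card's repository id.

```python
from datasets import load_dataset

# "<org>/<dataset>" is a placeholder for this dataset's repository id
ds = load_dataset("<org>/<dataset>", "Stackless_Java_V2", split="train")

# keep only files with no near-duplicates in Java-Stack v2
unique = ds.filter(lambda ex: len(ex["near_dups_stkv2_idx"]) == 0)
print(len(ds), len(unique))
```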
Figure 1: Distribution of scraped repositories with at most 500 stars.