---
annotations_creators:
- crowdsourced
license: other
language_creators:
- crowdsourced
language:
- code
task_categories:
- text-generation
tags:
- code
- kotlin
- native Android development
- curated
size_categories:
- 100K<n<1M
source_datasets: []
pretty_name: iva-kotlin-codeint-clean
task_ids:
- language-modeling
---
# IVA Kotlin GitHub Code Dataset

## Dataset Description

This is the curated IVA Kotlin dataset extracted from GitHub.
It contains curated Kotlin files gathered for the purpose of training a code generation model.

The dataset consists of 383380 Kotlin code files from GitHub, totaling ~542MB of data.
The [uncurated](https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint) dataset was created from the public GitHub dataset on Google BigQuery.
### How to use it

To download the full dataset:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')
```
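Downloading everything up front is not always necessary; a minimal sketch of streaming the dataset instead, using the standard `streaming` option of `load_dataset`:

```python
from datasets import load_dataset

# Stream samples lazily instead of downloading the full dataset first.
dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train', streaming=True)

for sample in dataset:
    print(sample['repo_name'], sample['path'])
    break
```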
To inspect an individual sample and its fields:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')
print(dataset[723])

# OUTPUT:
{
  "repo_name":"oboenikui/UnivCoopFeliCaReader",
  "path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
  "copies":"1",
  "size":"5635",
  "content":"....public override fun onPause() {\n if (this.isFinishing) {\n adapter.disableForegroundDispatch(this)\n }\n super.onPause()\n }\n\n override ...}\n",
  "license":"apache-2.0",
  "hash":"e88cfd99346cbef640fc540aac3bf20b",
  "line_mean":37.8620689655,
  "line_max":199,
  "alpha_frac":0.5724933452,
  "ratio":5.0222816399,
  "autogenerated":false,
  "config_or_test":false,
  "has_no_keywords":false,
  "has_few_assignments":false
}
```
## Data Structure

### Data Fields

|Field|Type|Description|
|---|---|---|
|repo_name|string|Name of the GitHub repository.|
|path|string|Path of the file in the GitHub repository.|
|copies|string|Number of occurrences of the file in the dataset.|
|content|string|Content of the source file.|
|size|string|Size of the source file in bytes.|
|license|string|License of the GitHub repository.|
|hash|string|Hash of the content field.|
|line_mean|number|Mean line length of the content.|
|line_max|number|Maximum line length of the content.|
|alpha_frac|number|Fraction of alphanumeric characters in the content.|
|ratio|number|Ratio between the number of characters and the number of tokens after tokenization.|
|autogenerated|boolean|True if the content is autogenerated, detected via keywords in the first few lines of the file.|
|config_or_test|boolean|True if the content is a configuration file or a unit test.|
|has_no_keywords|boolean|True if the file contains none of the keywords of the Kotlin programming language.|
|has_few_assignments|boolean|True if the file uses the symbol `=` fewer than `minimum` times.|
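The boolean quality flags can be used to narrow the data further; a hedged sketch (field names as documented in the table above) that keeps only files that are neither autogenerated nor configuration/test code:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

# Keep only hand-written, non-test files that still contain Kotlin keywords.
filtered = dataset.filter(
    lambda sample: not sample['autogenerated']
    and not sample['config_or_test']
    and not sample['has_no_keywords']
)
print(len(filtered))
```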
### Instance

```json
{
  "repo_name":"oboenikui/UnivCoopFeliCaReader",
  "path":"app/src/main/java/com/oboenikui/campusfelica/ScannerActivity.kt",
  "copies":"1",
  "size":"5635",
  "content":"....",
  "license":"apache-2.0",
  "hash":"e88cfd99346cbef640fc540aac3bf20b",
  "line_mean":37.8620689655,
  "line_max":199,
  "alpha_frac":0.5724933452,
  "ratio":5.0222816399,
  "autogenerated":false,
  "config_or_test":false,
  "has_no_keywords":false,
  "has_few_assignments":false
}
```
## Languages

The dataset contains only Kotlin files.

```json
{
  "Kotlin": [".kt"]
}
```
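A quick sanity check of this claim, using the `path` field documented above (illustrative only):

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

# Count entries whose path does not end with the expected Kotlin extension.
non_kotlin = dataset.filter(lambda sample: not sample['path'].endswith('.kt'))
print(f"Files without a .kt extension: {len(non_kotlin)}")
```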
## Licenses

Each entry in the dataset contains the associated license. The following is a list of the licenses involved and their number of occurrences.

```json
{
  "agpl-3.0":4052,
  "apache-2.0":114641,
  "artistic-2.0":159,
  "bsd-2-clause":474,
  "bsd-3-clause":4571,
  "cc0-1.0":198,
  "epl-1.0":991,
  "gpl-2.0":5625,
  "gpl-3.0":25102,
  "isc":436,
  "lgpl-2.1":146,
  "lgpl-3.0":3406,
  "mit":39399,
  "mpl-2.0":1819,
  "unlicense":824
}
```
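These counts can be reproduced from the `license` field; a minimal sketch using `collections.Counter`:

```python
from collections import Counter

from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

# Tally how many files come from repositories under each license.
license_counts = Counter(dataset['license'])
for license_name, count in sorted(license_counts.items()):
    print(f"{license_name}: {count}")
```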
## Dataset Statistics

```json
{
  "Total size": "~261 MB",
  "Number of files": 201843,
  "Number of files under 500 bytes": 3697,
  "Average file size in bytes": 5205
}
```
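These figures can be recomputed from the `size` field, which is stored as a string per the field table above; a hedged sketch:

```python
from datasets import load_dataset

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

# The `size` field is a string, so convert it before aggregating.
sizes = [int(size) for size in dataset['size']]

print(f"Total size: ~{sum(sizes) / (1024 * 1024):.0f} MB")
print(f"Number of files: {len(sizes)}")
print(f"Number of files under 500 bytes: {sum(1 for s in sizes if s < 500)}")
print(f"Average file size in bytes: {sum(sizes) // len(sizes)}")
```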
## Curation Process

* Removal of duplicate files based on file hash.
* Removal of file templates, i.e. files containing any of the following placeholders: `${PACKAGE_NAME}`, `${NAME}`, `${VIEWHOLDER_CLASS}`, `${ITEM_CLASS}`.
* Removal of files containing any of the following words in the first 10 lines: `generated`, `auto-generated`, `autogenerated`, `automatically generated`.
* Removal of files containing any of the following words in the first 10 lines, with a probability of 0.7: `test`, `unit test`, `config`, `XCTest`, `JUnit`.
* Removal of files in which the fraction of alphanumeric characters is below 0.3.
* Removal of near-duplicates based on MinHash and Jaccard similarity.
* Removal of files with a mean line length above 100.
* Removal of files without any of the following keywords, with a probability of 0.7: `fun `, `val `, `var `, `if `, `else `, `while `, `for `, `return `, `class `, `data `, `struct `, `interface `, `when `, `catch `.
* Removal of files that use the assignment operator `=` fewer than 3 times.
* Removal of files whose ratio between the number of characters and the number of tokens after tokenization is lower than 1.5.

The curation process is derived from the one used in the CodeParrot project (https://huggingface.co/codeparrot); a sketch of a few of these heuristics is shown below.
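A minimal, hedged re-implementation of a subset of the filters above (thresholds and keyword list follow the bullets; this is illustrative, not the original pipeline code):

```python
KOTLIN_KEYWORDS = [
    "fun ", "val ", "var ", "if ", "else ", "while ", "for ",
    "return ", "class ", "data ", "struct ", "interface ", "when ", "catch ",
]

def passes_heuristics(content: str) -> bool:
    """Return True if a file's content passes a subset of the curation filters."""
    lines = content.splitlines()
    if not lines:
        return False

    # Mean line length must not exceed 100 characters.
    if sum(len(line) for line in lines) / len(lines) > 100:
        return False

    # At least 30% of the characters must be alphanumeric.
    if sum(ch.isalnum() for ch in content) / max(len(content), 1) < 0.3:
        return False

    # The file must mention at least one of the keywords.
    if not any(keyword in content for keyword in KOTLIN_KEYWORDS):
        return False

    # The assignment operator must appear at least 3 times.
    if content.count("=") < 3:
        return False

    return True
```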
## Data Splits

The dataset contains only a train split, which is further separated into train and valid sets published as separate datasets:

* Clean Version Train: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-train
* Clean Version Valid: https://huggingface.co/datasets/mvasiliniuc/iva-kotlin-codeint-clean-valid
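A minimal sketch for loading the pre-split versions, assuming each repository exposes a single `train` split:

```python
from datasets import load_dataset

# Load the pre-split train and validation sets from their dedicated repositories.
train_dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean-train', split='train')
valid_dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean-valid', split='train')

print(len(train_dataset), len(valid_dataset))
```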
# Considerations for Using the Data

The dataset comprises source code from various repositories, potentially containing harmful or biased code,
along with sensitive information such as passwords or usernames.
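Before using file contents downstream, a lightweight screening pass may help surface such material; a hedged sketch using simple regular expressions (the patterns are illustrative only and will not catch all sensitive data):

```python
import re

from datasets import load_dataset

# Illustrative patterns for credential-like strings; not exhaustive.
SECRET_PATTERNS = [
    re.compile(r'password\s*[:=]\s*["\'][^"\']+["\']', re.IGNORECASE),
    re.compile(r'api[_-]?key\s*[:=]\s*["\'][^"\']+["\']', re.IGNORECASE),
]

dataset = load_dataset('mvasiliniuc/iva-kotlin-codeint-clean', split='train')

# Flag samples whose content matches any of the credential-like patterns.
flagged = dataset.filter(
    lambda sample: any(p.search(sample['content']) for p in SECRET_PATTERNS)
)
print(f"Samples with potential credentials: {len(flagged)}")
```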