TheGreatRambler committed
Commit 1896793
Parent(s): cc61e9d
Create README

Files changed:
- README.md +128 -0
- example.py +4 -0
README.md
ADDED
@@ -0,0 +1,128 @@
---
language:
- multilingual
license:
- cc-by-nc-sa-4.0
multilinguality:
- multilingual
size_categories:
- 10M<n<100M
source_datasets:
- original
task_categories:
- text-generation
- structure-prediction
- object-detection
- text-mining
- information-retrieval
- other
task_ids:
- other
pretty_name: Mario Maker 2 level played
---

# Mario Maker 2 level played

Part of the [Mario Maker 2 Dataset Collection](https://tgrcode.com/posts/mario_maker_2_datasets)

## Dataset Description

The Mario Maker 2 level played dataset consists of 564 million level plays from Nintendo's online service, totaling around 38.5 GB of data. The dataset was created using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api) over the course of one month in February 2022.

### How to use it

The Mario Maker 2 level played dataset is very large, so for most use cases it is recommended to use the streaming API of `datasets`. You can load and iterate through the dataset with the following code:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")
print(next(iter(ds)))

#OUTPUT:
{
    'data_id': 3000004,
    'pid': '6382913755133534321',
    'cleared': 1,
    'liked': 0
}
```

Each row is a unique play of the level denoted by `data_id`, made by the player denoted by `pid`. `pid` is a 64-bit integer stored as a string due to database limitations. `cleared` and `liked` indicate whether the player cleared and/or liked the level during their play. Every level has only one unique play per player.
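
Because the stream yields plain Python dicts, simple statistics can be computed on the fly. The following is a minimal sketch (not part of the original card; the sample size is arbitrary) that tallies plays, clears and likes per level over a slice of the stream:

```python
from itertools import islice
from collections import defaultdict

from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train")

plays, clears, likes = defaultdict(int), defaultdict(int), defaultdict(int)
for row in islice(ds, 100_000):  # sample only; the full split has ~564M rows
    level = row["data_id"]
    plays[level] += 1
    clears[level] += row["cleared"]  # cleared/liked are stored as 0/1
    likes[level] += row["liked"]

# Clear and like rates of the most-played level in the sample
top = max(plays, key=plays.get)
print(top, clears[top] / plays[top], likes[top] / plays[top])
```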

You can also download the full dataset. Note that this will download ~38.5 GB:

```python
ds = load_dataset("TheGreatRambler/mm2_level_played", split="train")
```
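
Once the full split is downloaded, non-streaming `datasets` methods become available. A small sketch (not from the card; the `data_id` is simply the one from the example row above) that extracts every play of a single level with `Dataset.filter`:

```python
from datasets import load_dataset

ds = load_dataset("TheGreatRambler/mm2_level_played", split="train")

# Keep only the plays of one level (data_id taken from the example row above)
level_plays = ds.filter(lambda row: row["data_id"] == 3000004)
print(len(level_plays), "plays of this level")
```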

## Data Structure

### Data Instances

```python
{
    'data_id': 3000004,
    'pid': '6382913755133534321',
    'cleared': 1,
    'liked': 0
}
```

### Data Fields

|Field|Type|Description|
|---|---|---|
|data_id|int|The data ID of the level this play occurred in|
|pid|string|Player ID of the player|
|cleared|bool|Whether the player cleared the level during their play|
|liked|bool|Whether the player liked the level during their play|
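
Although the table above describes `cleared` and `liked` as booleans, the rows themselves store them as 0/1 integers, and `pid` is a decimal string. A short sketch (not part of the card) for normalizing a row:

```python
row = {'data_id': 3000004, 'pid': '6382913755133534321', 'cleared': 1, 'liked': 0}

player_id = int(row["pid"])     # the 64-bit player ID, stored as a string in the dataset
cleared = bool(row["cleared"])  # 0/1 -> False/True
liked = bool(row["liked"])
print(player_id, cleared, liked)
```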

### Data Splits

The dataset only contains a train split.

<!-- TODO create detailed statistics -->

## Dataset Creation

The dataset was created over a little more than a month in February 2022 using the self-hosted [Mario Maker 2 api](https://tgrcode.com/posts/mario_maker_2_api). Because requests made to Nintendo's servers require authentication, the process had to be done with the utmost care, limiting download speed so as not to overload the API and risk a ban. There are no plans to create an updated release of this dataset.

## Considerations for Using the Data

The dataset contains no harmful language or depictions.
example.py
ADDED
@@ -0,0 +1,4 @@
from datasets import load_dataset

# Stream the dataset and print the first play.
# use_auth_token=True uses the access token saved by `huggingface-cli login`.
ds = load_dataset("TheGreatRambler/mm2_level_played", streaming=True, split="train", use_auth_token=True)
print(next(iter(ds)))