starsofchance committed
Commit 4930c82 · verified · 1 Parent(s): c3730ea

Add dataset card

Files changed (1)
1. README.md +198 -0

README.md ADDED
---
# YAML Metadata Block
language:
- en
tags:
- vulnerability-detection
- cve
- code-changes
- software-security
- stratified-split
license: mit
dataset_info:
  features: # Features in the *final split files*
  - name: idx
    dtype: int64
  - name: func_before
    dtype: string
  - name: Vulnerability Classification
    dtype: string
  - name: vul
    dtype: int64
  - name: func_after
    dtype: string
  - name: patch
    dtype: string
  - name: CWE ID
    dtype: string
  - name: lines_before
    dtype: string
  - name: lines_after
    dtype: string
  splits:
  - name: train
    num_examples: 150909
  - name: validation
    num_examples: 18864
  - name: test
    num_examples: 18863

dataset_original_file_size: 10GB uncompressed
---

# MSR Data Cleaned - C/C++ Code Vulnerability Dataset

[![Dataset License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)

## 📌 Dataset Description
A curated collection of C/C++ code vulnerabilities paired with:
- CVE details (scores, classifications, exploit status)
- Code changes (commit messages, added/deleted lines)
- File-level and function-level diffs

## 🔍 Sample Data Structure (from the original file)
```text
+---------------+-----------------+----------------------+---------------------------+
| CVE ID        | Attack Origin   | Publish Date         | Summary                   |
+===============+=================+======================+===========================+
| CVE-2015-8467 | Remote          | 2015-12-29           | "The samldb_check_user..."|
+---------------+-----------------+----------------------+---------------------------+
| CVE-2016-1234 | Local           | 2016-01-15           | "Buffer overflow in..."   |
+---------------+-----------------+----------------------+---------------------------+
```
Note: This is a simplified preview; the full dataset includes additional fields such as commit_id, func_before, etc.

### 1. Accessing in Colab
```python
!pip install huggingface_hub -q
from huggingface_hub import snapshot_download

repo_id = "starsofchance/MSR_data_cleaned"
dataset_path = snapshot_download(repo_id=repo_id, repo_type="dataset")
```

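As a quick sanity check, you can list what was downloaded before extracting anything. This is a minimal sketch that only assumes `dataset_path` from the cell above:

```python
# Sketch: inspect the downloaded snapshot (dataset_path comes from snapshot_download above).
import os

print(dataset_path)              # local cache directory of the snapshot
print(os.listdir(dataset_path))  # the compressed archive (e.g., MSR_data_cleaned.zip) should be listed here
```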

### 2. Extracting the Dataset
```python
!apt-get install unzip -qq
!unzip "/root/.cache/huggingface/.../MSR_data_cleaned.zip" -d "/content/extracted_data"
```
**Note:** Extracted size is 10GB (1.5GB compressed). Ensure sufficient disk space.

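If you prefer to stay in Python rather than shell out to `unzip`, the following minimal sketch extracts the archive with the standard-library `zipfile` module. It assumes the zip sits at the root of the snapshot downloaded in step 1 under the name `MSR_data_cleaned.zip`; adjust `zip_path` if your snapshot layout differs.

```python
# Sketch: extract the archive without the unzip CLI.
# Assumption: dataset_path comes from snapshot_download in step 1 and the
# archive is named MSR_data_cleaned.zip at the snapshot root.
import os
import zipfile

zip_path = os.path.join(dataset_path, "MSR_data_cleaned.zip")
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("/content/extracted_data")
```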

### 3. Creating Splits (Colab Pro Recommended)
We used the following memory-efficient, stream-based approach (cleaned up here into a runnable form):
```python
import csv, random
from datasets import load_dataset

# Stream the CSV so the full 10GB file never has to fit in memory
dataset = load_dataset("csv", data_files="MSR_data_cleaned.csv", streaming=True)["train"]
# Randomly distribute rows into train/validation/test (80-10-10)
files = {name: open(f"{name}.csv", "w", newline="") for name in ("train", "validation", "test")}
writers = {}
for row in dataset:
    if not writers:  # build one CSV writer per split from the first row's columns
        writers = {name: csv.DictWriter(f, fieldnames=row.keys()) for name, f in files.items()}
        for w in writers.values():
            w.writeheader()
    rand = random.random()
    split = "train" if rand < 0.8 else "validation" if rand < 0.9 else "test"
    writers[split].writerow(row)
for f in files.values():
    f.close()
```

**Hardware Requirements:**
- Minimum 25GB RAM
- Strong CPU; a Colab Pro runtime (e.g., with a T4 GPU) is recommended

## 📊 Dataset Statistics

- Number of Rows: 188,636
- Vulnerability Distribution (see the verification sketch after this list):
  - Vulnerable (1): 18,863 (~10%)
  - Non-Vulnerable (0): 169,773 (~90%)
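
These counts can be re-derived from the split files themselves. Below is a minimal verification sketch with pandas, assuming `train.csv`, `validation.csv`, and `test.csv` are in the working directory:

```python
# Sketch: recompute the label distribution from the generated split files.
import pandas as pd

total = None
for split in ("train", "validation", "test"):
    counts = pd.read_csv(f"{split}.csv", usecols=["vul"])["vul"].value_counts()
    print(split, counts.to_dict())
    total = counts if total is None else total.add(counts, fill_value=0)

print("overall:", total.to_dict())                             # expect ~169,773 zeros and ~18,863 ones
print("fractions:", (total / total.sum()).round(3).to_dict())  # roughly 0.90 / 0.10
```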

## 📋 Data Fields Description
The original MSR_data_cleaned.csv exposes the following fields (a column-selection sketch follows this list):
- CVE_ID: Unique identifier for the vulnerability (Common Vulnerabilities and Exposures).
- CWE_ID: Weakness category identifier (Common Weakness Enumeration).
- Score: CVSS score indicating severity (float, 0-10).
- Summary: Brief description of the vulnerability.
- commit_id: Git commit hash linked to the code change.
- codeLink: URL to the code repository or commit.
- file_name: Name of the file containing the vulnerability.
- func_after: Function code after the change.
- lines_after: Code lines after the change.
- Access_Gained: Type of access gained by exploiting the vulnerability.
- Attack_Origin: Source of the attack (e.g., Remote, Local).
- Authentication_Required: Whether authentication is needed to exploit.
- Availability: Impact on system availability.
- CVE_Page: URL to the CVE details page.
- Complexity: Complexity of exploiting the vulnerability.
- Confidentiality: Impact on data confidentiality.
- Integrity: Impact on data integrity.
- Known_Exploits: Details of known exploits, if any.
- Publish_Date: Date the vulnerability was published.
- Update_Date: Date of the last update to the vulnerability data.
- Vulnerability_Classification: Type or category of the vulnerability.
- add_lines: Lines added in the commit.
- del_lines: Lines deleted in the commit.
- commit_message: Description of the commit.
- files_changed: List of files modified in the commit.
- func_before: Function code before the change.
- lang: Programming language (e.g., C, C++).
- lines_before: Code lines before the change.

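For exploration you rarely need every column at once. The sketch below pulls a handful of fields out of the large original CSV in chunks; the column names follow the list above, but some exports use spaces rather than underscores (e.g. `CVE ID`), so adjust `wanted` to match your file's actual header:

```python
# Sketch: read only a few columns from the ~10GB original CSV, chunk by chunk.
import pandas as pd

wanted = ["CVE_ID", "CWE_ID", "Score", "Attack_Origin"]  # hypothetical header names; adjust to your CSV
chunks = pd.read_csv("MSR_data_cleaned.csv", usecols=wanted, chunksize=50_000)

subset = pd.concat(chunks, ignore_index=True)
print(subset["CWE_ID"].value_counts().head(10))  # most frequent weakness categories
```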

## Split Files for the UltiVul Project

## 🔍 Sample Data Structure (from train.csv)
```python
{
    'idx': 0,                               # Unique ID within the train split
    'func_before': '...',                   # String containing function code before change
    'Vulnerability Classification': '...',  # Original vulnerability type classification
    'vul': 0,                               # Integer: 0 for non-vulnerable, 1 for vulnerable (target label)
    'func_after': '...',                    # String containing function code after change
    'patch': '...',                         # String containing diff patch
    'CWE ID': '...',                        # String CWE ID, e.g., "CWE-119"
    'lines_before': '...',                  # String lines before change context
    'lines_after': '...'                    # String lines after change context
}
```
**Note:** This shows the structure of the final split files (train.csv, validation.csv, test.csv). The original MSR_data_cleaned.csv contains many more metadata fields.

## 📦 Dataset New Files
The dataset is available as three CSV files (created specifically for the UltiVul project) hosted on Hugging Face and uploaded via huggingface_hub (a loading sketch follows this list):

- train.csv
  - Size: 667 MB
  - Description: Training split with 150,909 samples, approximately 80% of the data.
- validation.csv
  - Size: 86 MB
  - Description: Validation split with 18,864 samples, approximately 10% of the data.
- test.csv
  - Size: 84.8 MB
  - Description: Test split with 18,863 samples, approximately 10% of the data.

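A minimal loading sketch with the 🤗 Datasets library, assuming the three CSVs sit at the root of this repository and map one-to-one onto the split names:

```python
# Sketch: load the published split files directly from the Hub.
from datasets import load_dataset

data_files = {"train": "train.csv", "validation": "validation.csv", "test": "test.csv"}
dataset = load_dataset("starsofchance/MSR_data_cleaned", data_files=data_files)

print(dataset)                     # DatasetDict with train / validation / test splits
print(dataset["train"][0]["vul"])  # target label of the first training example
```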

## 🙏 Acknowledgements
Original dataset provided by Fan et al., 2020.
Thanks to the Hugging Face team for dataset hosting tools.

## 📜 Citation
```bibtex
@inproceedings{fan2020ccode,
  title={A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries},
  author={Fan, Jiahao and Li, Yi and Wang, Shaohua and Nguyen, Tien N.},
  booktitle={MSR '20: 17th International Conference on Mining Software Repositories},
  pages={1--5},
  year={2020},
  doi={10.1145/3379597.3387501}
}
```

## 🌟 Dataset Creation
- **Source**: Original data from [MSR 2020 Paper](https://doi.org/10.1145/3379597.3387501)
- **Processing**:
  - Cleaned and standardized CSV format
  - Stream-based splitting to handle the large file size
  - Preserved all original metadata