---
# YAML Metadata Block
language: 
  - en
tags:
  - vulnerability-detection
  - cve
  - code-changes
  - software-security
  - stratified-split
license: mit
dataset_info:
  features: # Features in the *final split files*
    - name: idx
      dtype: int64
    - name: func_before
      dtype: string
    - name: Vulnerability Classification
      dtype: string
    - name: vul
      dtype: int64
    - name: func_after
      dtype: string
    - name: patch
      dtype: string
    - name: CWE ID
      dtype: string
    - name: lines_before
      dtype: string
    - name: lines_after
      dtype: string
  splits:
    - name: train
      num_examples: 150909
    - name: validation
      num_examples: 18864
    - name: test
      num_examples: 18863
  
 
  dataset_original_file_size: 10 GB uncompressed
---

# MSR Data Cleaned - C/C++ Code Vulnerability Dataset

[![Dataset License](https://img.shields.io/badge/license-MIT-blue.svg)](LICENSE)



## 📌 Dataset Description
A curated collection of C/C++ code vulnerabilities paired with:
- CVE details (scores, classifications, exploit status)
- Code changes (commit messages, added/deleted lines)
- File-level and function-level diffs

## 🔍 Sample Data Structure (from the original file)
```text
+---------------+-----------------+----------------------+---------------------------+
| CVE ID        | Attack Origin   | Publish Date         | Summary                   |
+===============+=================+======================+===========================+
| CVE-2015-8467 | Remote          | 2015-12-29           | "The samldb_check_user..."|
+---------------+-----------------+----------------------+---------------------------+
| CVE-2016-1234 | Local           | 2016-01-15           | "Buffer overflow in..."   |
+---------------+-----------------+----------------------+---------------------------+

```
**Note:** This is a simplified preview; the full dataset includes additional fields like commit_id, func_before, etc.
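To inspect the original file directly (after downloading and extracting it as described in the steps below), here is a minimal sketch with pandas, assuming `MSR_data_cleaned.csv` is in the working directory:
```python
import pandas as pd

# Peek at the first few rows without loading the whole ~10 GB file
preview = pd.read_csv("MSR_data_cleaned.csv", nrows=5)
print(preview.columns.tolist())      # full list of metadata columns
print(preview[preview.columns[:6]])  # first few columns of the preview rows
```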


### 1. Accessing in Colab
```python
!pip install huggingface_hub -q
from huggingface_hub import snapshot_download

repo_id = "starsofchance/MSR_data_cleaned"
dataset_path = snapshot_download(repo_id=repo_id, repo_type="dataset")
```
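`snapshot_download` returns the local directory the repository files were cached to; listing it shows what was fetched (file names below are illustrative, not guaranteed):
```python
import os

print(dataset_path)              # e.g. /root/.cache/huggingface/hub/...
print(os.listdir(dataset_path))  # e.g. ['MSR_data_cleaned.zip', 'train.csv', ...]
```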

### 2. Extracting the Dataset
```python
!apt-get install unzip -qq
!unzip "/root/.cache/huggingface/.../MSR_data_cleaned.zip" -d "/content/extracted_data"
```
**Note:** Extracted size is ~10 GB (1.5 GB compressed). Ensure sufficient disk space.
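Alternatively, to avoid hard-coding the cache path, the archive can be located via `dataset_path` from step 1 and extracted with the standard library (a sketch that assumes the archive is named `MSR_data_cleaned.zip` at the repository root):
```python
import os
import zipfile

zip_path = os.path.join(dataset_path, "MSR_data_cleaned.zip")  # assumed archive name
with zipfile.ZipFile(zip_path) as zf:
    zf.extractall("/content/extracted_data")
```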

### 3. Creating Splits (Colab Pro Recommended)
We used a memory-efficient, stream-based approach along the following lines (reconstructed sketch; `write_to` is a small helper that appends one row to the named CSV):
```python
import csv, random
from datasets import load_dataset

# Stream the original CSV so the full ~10 GB file never has to fit in memory
dataset = load_dataset("csv", data_files="MSR_data_cleaned.csv", split="train", streaming=True)

def write_to(path, row):  # append one row to a split file (header handling omitted)
    with open(path, "a", newline="") as f:
        csv.DictWriter(f, fieldnames=row.keys()).writerow(row)

# Randomly assign each row to train / validation / test (80-10-10)
for row in dataset:
    if (rand := random.random()) < 0.8:
        write_to("train.csv", row)
    elif rand < 0.9:
        write_to("validation.csv", row)
    else:
        write_to("test.csv", row)
```
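Once the three files exist, they can be loaded back as a single `DatasetDict` (sketch, assuming the CSVs are in the working directory):
```python
from datasets import load_dataset

splits = load_dataset("csv", data_files={
    "train": "train.csv",
    "validation": "validation.csv",
    "test": "test.csv",
})
print(splits)  # shows the three splits with their columns and row counts
```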


**Hardware Requirements:**
- Minimum 25GB RAM
- A strong CPU; a Colab Pro runtime (which typically provides a T4 GPU) is recommended

## 📊 Dataset Statistics

- Number of Rows: 188,636
- Vulnerability Distribution:
	- Vulnerable (1): 18,863 (~10%)
	- Non-Vulnerable (0): 169,773 (~90%)
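The label distribution of a released split can be double-checked quickly (sketch, assuming a local copy of `train.csv`):
```python
import pandas as pd

labels = pd.read_csv("train.csv", usecols=["vul"])["vul"]
print(labels.value_counts())                # absolute counts per class
print(labels.value_counts(normalize=True))  # expect roughly 0.90 / 0.10
```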
## 📋 Data Fields Description
- CVE_ID: Unique identifier for the vulnerability (Common Vulnerabilities and Exposures).
- CWE_ID: Weakness category identifier (Common Weakness Enumeration).
- Score: CVSS score indicating severity (float, 0-10).
- Summary: Brief description of the vulnerability.
- commit_id: Git commit hash linked to the code change.
- codeLink: URL to the code repository or commit.
- file_name: Name of the file containing the vulnerability.
- func_after: Function code after the change.
- lines_after: Code lines after the change.
- Access_Gained: Type of access gained by exploiting the vulnerability.
- Attack_Origin: Source of the attack (e.g., Remote, Local).
- Authentication_Required: Whether authentication is needed to exploit.
- Availability: Impact on system availability.
- CVE_Page: URL to the CVE details page.
- Complexity: Complexity of exploiting the vulnerability.
- Confidentiality: Impact on data confidentiality.
- Integrity: Impact on data integrity.
- Known_Exploits: Details of known exploits, if any.
- Publish_Date: Date the vulnerability was published.
- Update_Date: Date of the last update to the vulnerability data.
- Vulnerability_Classification: Type or category of the vulnerability.
- add_lines: Lines added in the commit.
- del_lines: Lines deleted in the commit.
- commit_message: Description of the commit.
- files_changed: List of files modified in the commit.
- func_before: Function code before the change.
- lang: Programming language (e.g., C, C++).
- lines_before: Code lines before the change.


## Split Files for the UltiVul Project

## 🔍 Sample Data Structure (from train.csv)
```python
{
 'idx': 0, # Unique ID within the train split
 'func_before': '...', # String containing function code before change
 'Vulnerability Classification': '...', # Original vulnerability type classification
 'vul': 0, # Integer: 0 for non-vulnerable, 1 for vulnerable (target label)
 'func_after': '...', # String containing function code after change
 'patch': '...', # String containing diff patch
 'CWE ID': '...', # String CWE ID, e.g., "CWE-119"
 'lines_before': '...', # String lines before change context
 'lines_after': '...' # String lines after change context
}
```
**Note:** This shows the structure of the final split files (train.csv, validation.csv, test.csv). The original MSR_data_cleaned.csv contains many more metadata fields.
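The `patch` field stores a precomputed diff; a comparable unified diff can also be derived from `func_before` and `func_after` with the standard library (illustrative sketch only; the exact format of the stored `patch` may differ):
```python
import difflib

def unified_patch(func_before: str, func_after: str) -> str:
    """Unified diff of the two function versions, similar in spirit to the `patch` field."""
    return "\n".join(difflib.unified_diff(
        func_before.splitlines(), func_after.splitlines(),
        fromfile="func_before", tofile="func_after", lineterm="",
    ))
```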


## 📦 New Dataset Files
The dataset is available as three CSV files (specially created for the UltiVul project) hosted on Hugging Face, uploaded via huggingface_hub:

- **train.csv** (667 MB): training split with 150,909 samples, approximately 80% of the data.
- **validation.csv** (86 MB): validation split with 18,864 samples, approximately 10% of the data.
- **test.csv** (84.8 MB): test split with 18,863 samples, approximately 10% of the data.
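A single split can also be fetched without downloading the whole snapshot (sketch, assuming the CSVs sit at the repository root):
```python
from huggingface_hub import hf_hub_download

train_path = hf_hub_download(
    repo_id="starsofchance/MSR_data_cleaned",
    filename="train.csv",   # assumed location within the repo
    repo_type="dataset",
)
print(train_path)
```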



πŸ™ Acknowledgements
Original dataset provided by Fan et al., 2020
Thanks to the Hugging Face team for dataset hosting tools.

## 📜 Citation
```bibtex
@inproceedings{fan2020ccode,
  title={A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries},
  author={Fan, Jiahao and Li, Yi and Wang, Shaohua and Nguyen, Tien N},
  booktitle={MSR '20: 17th International Conference on Mining Software Repositories},
  pages={1--5},
  year={2020},
  doi={10.1145/3379597.3387501}
}
```

## 🌟 Dataset Creation
- **Source**: Original data from [MSR 2020 Paper](https://doi.org/10.1145/3379597.3387501)
- **Processing**:
  - Cleaned and standardized CSV format
  - Stream-based splitting to handle the large file size
  - Preserved the full original metadata in `MSR_data_cleaned.csv` (the split files expose the reduced feature set shown above)