---
language:
- en
tags:
- vulnerability-detection
- cve
- code-changes
- software-security
- stratified-split
license: mit
dataset_info:
  features:
  - name: idx
    dtype: int64
  - name: func_before
    dtype: string
  - name: Vulnerability Classification
    dtype: string
  - name: vul
    dtype: int64
  - name: func_after
    dtype: string
  - name: patch
    dtype: string
  - name: CWE ID
    dtype: string
  - name: lines_before
    dtype: string
  - name: lines_after
    dtype: string
  splits:
  - name: train
    num_examples: 150909
  - name: validation
    num_examples: 18864
  - name: test
    num_examples: 18863
dataset_original_file_size: 10 GB uncompressed
---
# MSR Data Cleaned - C/C++ Code Vulnerability Dataset
## Dataset Description
A curated collection of C/C++ code vulnerabilities paired with:
- CVE details (scores, classifications, exploit status)
- Code changes (commit messages, added/deleted lines)
- File-level and function-level diffs
## Sample Data Structure (from the original file)
| CVE ID | Attack Origin | Publish Date | Summary |
|---------------|---------------|--------------|----------------------------|
| CVE-2015-8467 | Remote | 2015-12-29 | "The samldb_check_user..." |
| CVE-2016-1234 | Local | 2016-01-15 | "Buffer overflow in..." |
**Note:** This is a simplified preview; the full dataset includes additional fields such as `commit_id` and `func_before`.
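To see the complete field list beyond this preview, a quick pandas peek works without loading the whole file (a minimal sketch, assuming `MSR_data_cleaned.csv` has already been extracted into the working directory):

```python
import pandas as pd

# Read only the first rows; the full file is ~10 GB
preview = pd.read_csv("MSR_data_cleaned.csv", nrows=5)
print(preview.columns.tolist())  # every field, incl. commit metadata and code columns
print(preview.head())
```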
## 1. Accessing in Colab
```python
!pip install huggingface_hub -q

from huggingface_hub import snapshot_download

repo_id = "starsofchance/MSR_data_cleaned"
dataset_path = snapshot_download(repo_id=repo_id, repo_type="dataset")
```
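`snapshot_download` returns the local cache directory; since the exact cache layout can vary, a short sketch to locate the downloaded archive (using the `dataset_path` variable from the snippet above):

```python
import os

# Walk the snapshot directory to find the exact path of the zip file
for root, _, files in os.walk(dataset_path):
    for name in files:
        if name.endswith(".zip"):
            print(os.path.join(root, name))
```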
## 2. Extracting the Dataset
```bash
!apt-get install unzip -qq
!unzip "/root/.cache/huggingface/.../MSR_data_cleaned.zip" -d "/content/extracted_data"
```
**Note:** The extracted size is 10 GB (1.5 GB compressed); ensure you have sufficient disk space.
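If `unzip` is unavailable, the extraction can be done with Python's standard library instead (a sketch; `zip_path` stands for the archive path printed by the directory walk above):

```python
import zipfile

# Extract everything to the same target directory as the unzip command
with zipfile.ZipFile(zip_path) as archive:
    archive.extractall("/content/extracted_data")
```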
## 3. Creating Splits (Colab Pro Recommended)
We used the memory-efficient streaming approach below; the resulting files can be sanity-checked with the sketch at the end of this section:
```python
import csv, random
from datasets import load_dataset

# Stream the CSV so the full 10 GB file never has to fit in memory
rows = load_dataset("csv", data_files="MSR_data_cleaned.csv", streaming=True)["train"]
files = {s: open(f"{s}.csv", "w", newline="") for s in ("train", "validation", "test")}
writers = {}
for row in rows:  # randomly distribute rows (80-10-10)
    rand = random.random()
    split = "train" if rand < 0.8 else "validation" if rand < 0.9 else "test"
    if split not in writers:  # write each header exactly once
        writers[split] = csv.DictWriter(files[split], fieldnames=list(row.keys()))
        writers[split].writeheader()
    writers[split].writerow(row)
for f in files.values():
    f.close()
```
**Hardware requirements:**
- At least 25 GB of RAM
- A strong CPU (a Colab Pro runtime with a T4 GPU is recommended)
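After the split finishes, the row counts can be checked against the sizes listed in this card (a small sketch; the counts should land near 150,909 / 18,864 / 18,863):

```python
# Count data rows in each split file (subtracting the header line)
for name in ("train.csv", "validation.csv", "test.csv"):
    with open(name, encoding="utf-8") as f:
        print(name, sum(1 for _ in f) - 1)
```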
## Dataset Statistics
- Number of rows: 188,636
- Vulnerability distribution (recomputable with the sketch below):
  - Vulnerable (1): 18,863 (~10%)
  - Non-vulnerable (0): 169,773 (~90%)
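The distribution can be recomputed from the original CSV in chunks so memory stays bounded (a sketch; assumes the `vul` label column from the split schema is also present in the original file):

```python
import pandas as pd

# Accumulate label counts chunk by chunk instead of loading 10 GB at once
counts = None
for chunk in pd.read_csv("MSR_data_cleaned.csv", usecols=["vul"], chunksize=100_000):
    vc = chunk["vul"].value_counts()
    counts = vc if counts is None else counts.add(vc, fill_value=0)
print(counts)  # expected: ~169,773 zeros and ~18,863 ones
```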
## Data Fields Description
- CVE_ID: Unique identifier for the vulnerability (Common Vulnerabilities and Exposures).
- CWE_ID: Weakness category identifier (Common Weakness Enumeration).
- Score: CVSS score indicating severity (float, 0-10).
- Summary: Brief description of the vulnerability.
- commit_id: Git commit hash linked to the code change.
- codeLink: URL to the code repository or commit.
- file_name: Name of the file containing the vulnerability.
- func_after: Function code after the change.
- lines_after: Code lines after the change.
- Access_Gained: Type of access gained by exploiting the vulnerability.
- Attack_Origin: Source of the attack (e.g., Remote, Local).
- Authentication_Required: Whether authentication is needed to exploit.
- Availability: Impact on system availability.
- CVE_Page: URL to the CVE details page.
- Complexity: Complexity of exploiting the vulnerability.
- Confidentiality: Impact on data confidentiality.
- Integrity: Impact on data integrity.
- Known_Exploits: Details of known exploits, if any.
- Publish_Date: Date the vulnerability was published.
- Update_Date: Date of the last update to the vulnerability data.
- Vulnerability_Classification: Type or category of the vulnerability.
- add_lines: Lines added in the commit.
- del_lines: Lines deleted in the commit.
- commit_message: Description of the commit.
- files_changed: List of files modified in the commit.
- func_before: Function code before the change.
- lang: Programming language (e.g., C, C++).
- lines_before: Code lines before the change.
## Split Files for the UltiVul Project

### Sample Data Structure (from train.csv)
```python
{
    'idx': 0,                               # unique ID within the train split
    'func_before': '...',                   # function code before the change
    'Vulnerability Classification': '...',  # original vulnerability type classification
    'vul': 0,                               # target label: 0 = non-vulnerable, 1 = vulnerable
    'func_after': '...',                    # function code after the change
    'patch': '...',                         # diff patch
    'CWE ID': '...',                        # CWE ID, e.g. "CWE-119"
    'lines_before': '...',                  # changed lines before the change
    'lines_after': '...'                    # changed lines after the change
}
```
**Note:** This shows the structure of the final split files (train.csv, validation.csv, test.csv); the original MSR_data_cleaned.csv contains many more metadata fields.
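A record with this structure can be inspected once train.csv is available locally (a minimal sketch using the `datasets` library):

```python
from datasets import load_dataset

# Load the local split and print one example to verify the schema
splits = load_dataset("csv", data_files={"train": "train.csv"})
example = splits["train"][0]
print(example["vul"], example["CWE ID"])
print(example["func_before"][:200])  # first characters of the pre-change function
```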
## Dataset Files
The dataset is available as three CSV files (created specifically for the UltiVul project) hosted on Hugging Face and uploaded via `huggingface_hub`; they can also be loaded directly from the Hub, as sketched after the list below:
- **train.csv** (667 MB): training split with 150,909 samples, approximately 80% of the data.
- **validation.csv** (86 MB): validation split with 18,864 samples, approximately 10% of the data.
- **test.csv** (84.8 MB): test split with 18,863 samples, approximately 10% of the data.
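All three splits can be loaded straight from the Hub without a manual download (a sketch, assuming the CSVs sit at the root of the `starsofchance/MSR_data_cleaned` repo):

```python
from datasets import load_dataset

# Load the split CSVs directly from the Hugging Face Hub
data_files = {"train": "train.csv", "validation": "validation.csv", "test": "test.csv"}
ds = load_dataset("starsofchance/MSR_data_cleaned", data_files=data_files)
print(ds)  # DatasetDict with train / validation / test splits
```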
## Acknowledgements
Original dataset provided by Fan et al., 2020. Thanks to the Hugging Face team for their dataset hosting tools.
## Citation
```bibtex
@inproceedings{fan2020ccode,
  title={A C/C++ Code Vulnerability Dataset with Code Changes and CVE Summaries},
  author={Fan, Jiahao and Li, Yi and Wang, Shaohua and Nguyen, Tien N.},
  booktitle={MSR '20: 17th International Conference on Mining Software Repositories},
  pages={1--5},
  year={2020},
  doi={10.1145/3379597.3387501}
}
```
## Dataset Creation
- Source: original data from the MSR 2020 paper
- Processing:
  - Cleaned and standardized the CSV format
  - Stream-based splitting to handle the large file size
  - Preserved all original metadata