
W3SA - Stellar Codebase Benchmark

Overview

This repository contains the data and code for the W3SA Benchmark for Stellar.

Repo Structure

The repository contains two top-level folders: benchmark and bm_src. The benchmark folder holds all the projects used for evaluation, along with their audit findings in ground_truth. The bm_src folder contains the scripts used to generate the evaluation outputs, allowing for reproducibility and further analysis.

├── README.md
├── benchmark
│   ├── config/
│   ├── ground_truth/
│   └── repositories/
└── bm_src
    ├── dataset_transformation.py
    ├── eval.py
    ├── experiments.py
    ├── models.py
    ├── prompts.py
    └── metrics.py
 

Project Statistics

Number of ground-truth vulnerabilities in each project, by severity level:

| Project Name   | Critical/High | Medium | Low/Informational |
|----------------|---------------|--------|-------------------|
| reflector      | 1             | 1      | 4                 |
| slender        | 12            | 9      | 7                 |
| soroswap       | 1             | 5      | 4                 |
| comet          | 2             | 4      | 9                 |
| allbridge-core | 0             | 1      | 8                 |
| Total          | 16            | 20     | 32                |

Detection Rate

| Project        | Claude 3.5 | 4o    | o3-mini | o1-mini | o1    | ALMX  |
|----------------|------------|-------|---------|---------|-------|-------|
| reflector      | 0.33       | 0.33  | 0.33    | 0.16    | 0.5   | 0.5   |
| slender        | 0.28       | 0.07  | 0.14    | 0.07    | 0.14  | 0.25  |
| soroswap       | 0.1        | 0.1   | 0.1     | 0.2     | 0.3   | 0.3   |
| comet          | 0.06       | 0.06  | 0.13    | 0.06    | 0.13  | 0.33  |
| allbridge-core | 0.22       | 0.22  | 0.22    | 0.22    | 0.11  | 0.33  |
| Total          | 0.198      | 0.156 | 0.184   | 0.142   | 0.236 | 0.342 |
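
The Total row for each model matches the unweighted mean of that model's per-project rates (a macro-average across projects, rather than a micro-average over all 68 findings). A minimal sketch of that aggregation, using the Claude 3.5 column copied from the table above:

```python
# Minimal sketch (assumption: the "Total" row is the unweighted mean of the
# per-project detection rates; this matches every column in the table above).
per_project_rates = {
    "reflector": 0.33,
    "slender": 0.28,
    "soroswap": 0.10,
    "comet": 0.06,
    "allbridge-core": 0.22,
}  # Claude 3.5 column

overall = sum(per_project_rates.values()) / len(per_project_rates)
print(round(overall, 3))  # 0.198 -> the "Total" cell for Claude 3.5
```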

Setup

  • Install the uv package manager if it is not already available
  • Run uv sync to install the project dependencies (see the example commands below)
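
For example, in a Unix-like shell (uv's standalone install script is shown here; installing uv via pip also works):
curl -LsSf https://astral.sh/uv/install.sh | sh
uv sync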

Run an experiment

  • Set your OPENAI_API_KEY as an environment variable (see the example below)
  • Launch your experiment by running:
uv run experiment.py --model o3-mini
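
A minimal end-to-end example in a Unix-like shell (note that the repository tree lists the script as bm_src/experiments.py, so the exact script name or path may differ):
export OPENAI_API_KEY="sk-..."
uv run experiment.py --model o3-mini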

Contact Us

For questions, suggestions, or to learn more about Almanax.ai, reach out to us at https://www.almanax.ai/contact
