---
language: en
datasets:
- codexglue
---

# CodeBERT fine-tuned for Insecure Code Detection 💾⛔

[codebert-base](https://huggingface.co/microsoft/codebert-base) fine-tuned on the [CodeXGLUE -- Defect Detection](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) dataset for the **Insecure Code Detection** downstream task.

## Details of [CodeBERT](https://arxiv.org/abs/2002.08155)

We present CodeBERT, a bimodal pre-trained model for programming language (PL) and natural language (NL). CodeBERT learns general-purpose representations that support downstream NL-PL applications such as natural language code search, code documentation generation, etc. We develop CodeBERT with a Transformer-based neural architecture, and train it with a hybrid objective function that incorporates the pre-training task of replaced token detection, which is to detect plausible alternatives sampled from generators. This enables us to utilize both bimodal data of NL-PL pairs and unimodal data, where the former provides input tokens for model training while the latter helps to learn better generators. We evaluate CodeBERT on two NL-PL applications by fine-tuning model parameters. Results show that CodeBERT achieves state-of-the-art performance on both natural language code search and code documentation generation tasks. Furthermore, to investigate what type of knowledge is learned in CodeBERT, we construct a dataset for NL-PL probing, and evaluate in a zero-shot setting where parameters of pre-trained models are fixed. Results show that CodeBERT performs better than previous pre-trained models on NL-PL probing.

## Details of the downstream task (code classification) - Dataset 📚

Given a piece of source code, the task is to identify whether it is insecure code that may attack software systems (e.g., resource leaks, use-after-free vulnerabilities, and DoS attacks). We treat the task as binary classification (0/1), where 1 stands for insecure code and 0 for secure code.

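If you want downstream tools to report these classes by name rather than as the generic `LABEL_0`/`LABEL_1`, you can expose the 0/1 convention on the model config. The snippet below is a minimal sketch, not part of the original card, and it assumes the checkpoint ships with the default generic label names.

```python
from transformers import AutoConfig

# Sketch (assumption): map the 0 = secure / 1 = insecure convention onto the config,
# so downstream tools (e.g. pipelines) can report human-readable class names.
config = AutoConfig.from_pretrained("mrm8488/codebert-base-finetuned-detect-insecure-code")
config.id2label = {0: "secure code", 1: "insecure code"}
config.label2id = {v: k for k, v in config.id2label.items()}
print(config.id2label)
```
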
The [dataset](https://github.com/microsoft/CodeXGLUE/tree/main/Code-Code/Defect-detection) used comes from the paper [*Devign*: Effective Vulnerability Identification by Learning Comprehensive Program Semantics via Graph Neural Networks](http://papers.nips.cc/paper/9209-devign-effective-vulnerability-identification-by-learning-comprehensive-program-semantics-via-graph-neural-networks.pdf). All projects are combined and split 80%/10%/10% into training/dev/test sets.

Data statistics of the dataset are shown in the table below:

|       | #Examples |
| ----- | :-------: |
| Train |  21,854   |
| Dev   |   2,732   |
| Test  |   2,732   |

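To inspect these splits locally you can pull the data through the 🤗 `datasets` library. The snippet below is a minimal sketch and assumes the CodeXGLUE mirror published on the Hub as `code_x_glue_cc_defect_detection` (with `func` holding the source code and `target` the label), which is not referenced by the original card.

```python
from datasets import load_dataset

# Sketch (assumptions: dataset id and column names): load the Devign / Defect Detection splits.
ds = load_dataset("code_x_glue_cc_defect_detection")

print({split: len(ds[split]) for split in ds})  # expect roughly 21,854 / 2,732 / 2,732 examples
sample = ds["train"][0]
print(sample["func"][:200])  # the C function's source code
print(sample["target"])      # True/1 = insecure, False/0 = secure
```
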
## Test set metrics 🧾

| Method   | Accuracy (%) |
| -------- | :----------: |
| BiLSTM   |    59.37     |
| TextCNN  |    60.69     |
| [RoBERTa](https://arxiv.org/pdf/1907.11692.pdf) | 61.05 |
| [CodeBERT](https://arxiv.org/pdf/2002.08155.pdf) | 62.08 |
| [Ours](https://huggingface.co/mrm8488/codebert-base-finetuned-detect-insecure-code) | **65.30** |

## Model in Action 🚀

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
import numpy as np

tokenizer = AutoTokenizer.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code')
model = AutoModelForSequenceClassification.from_pretrained('mrm8488/codebert-base-finetuned-detect-insecure-code')

# Tokenize the source code you want to classify (replace the placeholder string with real code)
inputs = tokenizer("your code here", return_tensors="pt", truncation=True, padding='max_length')
labels = torch.tensor([1]).unsqueeze(0)  # Batch size 1; dummy label, only needed to get a loss

outputs = model(**inputs, labels=labels)
loss = outputs.loss
logits = outputs.logits

# Predicted class: 0 = secure code, 1 = insecure code
print(np.argmax(logits.detach().numpy()))
```

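For quick experiments you can also wrap the checkpoint in a standard `transformers` text-classification pipeline. This is a usage sketch rather than part of the original card; with the checkpoint's default config the classes are reported as `LABEL_0` (secure) and `LABEL_1` (insecure), per the 0/1 convention above.

```python
from transformers import pipeline

# Sketch: standard text-classification pipeline around the fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="mrm8488/codebert-base-finetuned-detect-insecure-code",
)

code_snippet = "char buf[8]; strcpy(buf, user_input);"  # hypothetical C snippet for illustration
print(classifier(code_snippet))
# e.g. [{'label': 'LABEL_1', 'score': ...}]  -> LABEL_1 = insecure code
```
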
> Created by [Manuel Romero/@mrm8488](https://twitter.com/mrm8488) | [LinkedIn](https://www.linkedin.com/in/manuel-romero-cs/)

> Made with <span style="color: #e25555;">♥</span> in Spain