Arko007 committed (verified)
Commit 3f6f18e · Parent(s): 899248d

Update README.md

Files changed (1): README.md (+48 −19)

README.md CHANGED
@@ -7,15 +7,15 @@ Funded by: Self-funded

Shared by: Arko007

- Model type: Autoregressive language model for code (code assistant)

- Language(s) (NLP): English, with support for various programming languages including Python, JavaScript, Java, and C++.

- License: MIT License

- Finetuned from model: bigcode/starcoder

- Model Sources
Repository: https://huggingface.co/Arko007/my-awesome-code-assistant-v4 (A placeholder URL, as the repository is not public)

Paper [optional]: N/A
@@ -34,6 +34,8 @@ Code Refactoring: Suggesting improvements or alternative ways to write code.

Code Documentation: Generating docstrings and comments.

Downstream Use [optional]
This model can be used as a backend for integrated development environments (IDEs), developer tools, and educational platforms that require code assistance capabilities.

@@ -51,30 +53,39 @@ Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. All generated code must be treated as a starting point and thoroughly reviewed, tested, and audited for correctness and security.

How to Get Started with the Model
- Use the code below to get started with the model using the transformers library.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

- model_id = "Arko007/my-awesome-code-assistant-v4"
- tokenizer = AutoTokenizer.from_pretrained(model_id)
- model = AutoModelForCausalLM.from_pretrained(model_id)

- prompt = "# Write a Python function to calculate the factorial of a number"
- inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
- outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Training Details
Training Data
- This model was finetuned on a private dataset of curated open-source code snippets and documentation. The specific sources are not publicly disclosed, but it primarily consists of code from GitHub repositories with permissive licenses.

Training Procedure
- Preprocessing: The training data was tokenized using the StarCoder tokenizer. Code comments were preserved to aid in documentation and explanation tasks.

Training Hyperparameters:

- Training regime: Finetuning with a LoRA (Low-Rank Adaptation) approach.

Learning Rate: 2 × 10
@@ -90,7 +101,8 @@ Optimizer: AdamW
Speeds, Sizes, Times [optional]
Finetuning Time: Approximately 12 hours

- Model Size: 15.5 GB (full model), ~120 MB (LoRA adapter)

Evaluation
Testing Data, Factors & Metrics
@@ -112,10 +124,12 @@ Pass@1 (Python): 55.1%
Readability: The generated code was generally readable and well-commented.

Summary
- Model Examination
The model demonstrates strong performance in common code generation tasks, particularly for Python. It can produce functional and readable code snippets.

Environmental Impact
Hardware Type: 1 x NVIDIA A100 GPU

Hours used: 12 hours
@@ -124,11 +138,16 @@ Cloud Provider: Google Cloud

Compute Region: us-central1

- Carbon Emitted: 1.05 kg CO2eq (estimated using the Machine Learning Impact calculator)

Technical Specifications [optional]
Model Architecture and Objective
- The model is a decoder-only transformer architecture. Its objective is to predict the next token in a sequence, conditioned on the preceding tokens. The finetuning process adapted the base model to excel at generating code.

Citation [optional]
BibTeX
@@ -140,9 +159,19 @@ BibTeX
url = {https://huggingface.co/Arko007/my-awesome-code-assistant-v4}
}

APA
Arko007. (2024). my-awesome-code-assistant-v4. Hugging Face. Retrieved from https://huggingface.co/Arko007/my-awesome-code-assistant-v4

Model Card Authors [optional]
Arko007

Shared by: Arko007

+ Model type: Autoregressive language model for code (code assistant), representing the fourth finetuning iteration based on CodeLlama-7b-hf.

+ Language(s) (NLP): English, with support for various programming languages including Python, C++, Java, and JavaScript.

+ License: Llama 2 Community License

+ Finetuned from model: codellama/CodeLlama-7b-hf

+ Model Sources [optional]
Repository: https://huggingface.co/Arko007/my-awesome-code-assistant-v4 (A placeholder URL, as the repository is not public)

Paper [optional]: N/A
 
Code Documentation: Generating docstrings and comments.

+ Text Generation: The model is tagged with text-generation, so it can also be used for general text-based tasks.
+
Downstream Use [optional]
This model can be used as a backend for integrated development environments (IDEs), developer tools, and educational platforms that require code assistance capabilities.
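As a rough illustration of such an integration, the sketch below wraps the adapter behind a small completion helper that a tool backend could expose. The helper name, defaults, and output handling are illustrative assumptions, and the adapter repository is the placeholder listed above.

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Illustrative sketch only: expose the (placeholder) adapter through a simple completion helper.
_base_id = "codellama/CodeLlama-7b-hf"
_adapter_id = "Arko007/my-awesome-code-assistant-v4"  # placeholder repository
_tokenizer = AutoTokenizer.from_pretrained(_base_id)
_model = PeftModel.from_pretrained(AutoModelForCausalLM.from_pretrained(_base_id), _adapter_id)

def complete_code(prefix: str, max_new_tokens: int = 64) -> str:
    """Return a completion for a code prefix; a hypothetical entry point for an IDE plugin."""
    inputs = _tokenizer(prefix, return_tensors="pt")
    outputs = _model.generate(**inputs, max_new_tokens=max_new_tokens)
    return _tokenizer.decode(outputs[0], skip_special_tokens=True)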
 
 
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. All generated code must be treated as a starting point and thoroughly reviewed, tested, and audited for correctness and security.

How to Get Started with the Model
+ Use the code below to get started with the model using the transformers and peft libraries.

from transformers import AutoModelForCausalLM, AutoTokenizer
+ from peft import PeftModel
import torch

+ model_name = "codellama/CodeLlama-7b-hf"
+ adapter_name = "Arko007/my-awesome-code-assistant-v4"
+
+ # Load the base model and tokenizer
+ tokenizer = AutoTokenizer.from_pretrained(model_name)
+ base_model = AutoModelForCausalLM.from_pretrained(model_name)
+
+ # Load the PEFT adapter
+ model = PeftModel.from_pretrained(base_model, adapter_name)
+
+ prompt = "def factorial(n):"
+ inputs = tokenizer(prompt, return_tensors="pt")
+ outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
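For deployment, the LoRA adapter loaded above can optionally be merged into the base weights so the result behaves like a plain transformers model and no longer needs the peft wrapper at inference time. A minimal sketch, assuming the model and tokenizer objects from the snippet above (the output path is illustrative):

# Optional: fold the LoRA weights into the base model for standalone serving.
merged = model.merge_and_unload()                      # returns a regular causal LM without the peft wrapper
merged.save_pretrained("./code-assistant-v4-merged")   # illustrative output path
tokenizer.save_pretrained("./code-assistant-v4-merged")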
 
+
Training Details
Training Data
+ The base model, CodeLlama-7b-hf, was trained on a large, near-deduplicated dataset of publicly available code with an 8% mix of natural language data. The finetuning for my-awesome-code-assistant-v4 was done on a private dataset of curated open-source code snippets and documentation.

Training Procedure
+ Preprocessing: The training data was tokenized using the CodeLlama tokenizer.

Training Hyperparameters:

+ Training regime: Finetuning with a LoRA (Low-Rank Adaptation) approach, using the peft library.

Learning Rate: 2 × 10
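For reference, a minimal sketch of how a LoRA finetuning setup like the regime described above is configured with the peft library. The rank, alpha, dropout, and target modules below are illustrative assumptions, not the recorded v4 training configuration.

from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

base = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

# Illustrative LoRA hyperparameters (assumed, not the published v4 settings).
lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],
)

peft_model = get_peft_model(base, lora_config)
peft_model.print_trainable_parameters()  # only the low-rank adapter weights are trainable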
 
Speeds, Sizes, Times [optional]
Finetuning Time: Approximately 12 hours

+ Model Size: 15.5 GB (full base model), approx. 120 MB (LoRA adapter)

Evaluation
Testing Data, Factors & Metrics
 
Readability: The generated code was generally readable and well-commented.

Summary
+ Model Examination [optional]
The model demonstrates strong performance in common code generation tasks, particularly for Python. It can produce functional and readable code snippets.

Environmental Impact
+ Carbon emissions can be estimated using the Machine Learning Impact calculator presented in Lacoste et al. (2019).
+
Hardware Type: 1 x NVIDIA A100 GPU

Hours used: 12 hours
 

Compute Region: us-central1

+ Carbon Emitted: 1.05 kg CO2eq (estimated)

Technical Specifications [optional]
Model Architecture and Objective
+ The base model is a decoder-only transformer architecture. Its objective is to predict the next token in a sequence, conditioned on the preceding tokens. The finetuning process using peft adapted this architecture to excel at generating code without modifying all the parameters.
+
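To make the next-token objective concrete, the sketch below shows how transformers computes the causal language-modeling cross-entropy when labels are supplied; the base model is used directly and the example string is arbitrary.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
lm = AutoModelForCausalLM.from_pretrained("codellama/CodeLlama-7b-hf")

batch = tok("def add(a, b):\n    return a + b", return_tensors="pt")
with torch.no_grad():
    out = lm(**batch, labels=batch["input_ids"])  # labels are shifted internally: predict token t+1 from tokens <= t
print(float(out.loss))  # mean cross-entropy of the next-token predictions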
+ Compute Infrastructure
+ Hardware: 1 x NVIDIA A100 GPU
+
+ Software: PyTorch, Transformers, PEFT

Citation [optional]
BibTeX
 
url = {https://huggingface.co/Arko007/my-awesome-code-assistant-v4}
}

+ @article{roziere2023codellama,
+ title = {Code Llama: Open Foundation Models for Code},
+ author = {Rozi{\`e}re, Baptiste and Gehring, Jonas and others},
+ journal = {arXiv preprint arXiv:2308.12950},
+ year = {2023}
+ }
+
APA
Arko007. (2024). my-awesome-code-assistant-v4. Hugging Face. Retrieved from https://huggingface.co/Arko007/my-awesome-code-assistant-v4

+ Rozière, B., Gehring, J., Gloeckle, F., ... & Synnaeve, G. (2023). Code Llama: Open foundation models for code. arXiv preprint arXiv:2308.12950.
+
Model Card Authors [optional]
Arko007