Update README.md
---
license: apache-2.0
language:
- en
library_name: peft
tags:
- text-generation
- code-generation
- instruction-following
- finetuned
- llama
- codellama
- magicoder
base_model: Arko007/my-awesome-code-assistant-v5
---

# Model Card for My Awesome Code Assistant (v6 FINAL BOSS)

<!-- Provide a quick summary of what the model is/does. -->

my-awesome-code-assistant-v6 is the final, most powerful version in a series of fine-tuned Code Llama 7B models. It is designed to be a high-performance AI coding partner, capable of writing, explaining, and discussing complex code across multiple programming languages.

## Model Details

### Model Description

my-awesome-code-assistant-v6 has undergone multiple stages of training on high-quality, instruction-based coding datasets, culminating in a final training run on the ise-uiuc/Magicoder-OSS-Instruct-75K dataset. It excels at following instructions and providing detailed, helpful responses.

- **Developed by:** Arko007
- **Model type:** Causal Language Model (Decoder-only Transformer)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Arko007/my-awesome-code-assistant-v5

### Model Sources

- **Repository:** https://huggingface.co/Arko007/my-awesome-code-assistant-v6
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

### Direct Use

This model is intended for direct use as a coding assistant via a chat or instruction-based interface. It can be used for:

- Generating code snippets or complete programs from natural language descriptions.
- Explaining complex code and identifying potential issues (e.g., memory leaks, thread safety).
- Tutoring and learning new programming concepts.
- Debugging code.

### Downstream Use

The LoRA adapters for this model can be used for further fine-tuning on more specialized, domain-specific coding tasks (e.g., web development, data science, game development).
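A minimal sketch of reloading the published adapters for continued training; the `peft` calls shown are standard, while the trainer and dataset setup are omitted and would depend on the downstream task:

```python
# Minimal sketch: attach the published LoRA adapters to the base model and
# keep them trainable for further task-specific fine-tuning.
# Trainer and dataset setup are omitted.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    device_map="auto",
)
model = PeftModel.from_pretrained(
    base,
    "Arko007/my-awesome-code-assistant-v6",
    is_trainable=True,  # leave adapter weights unfrozen
)
model.print_trainable_parameters()  # only the LoRA weights are trainable
```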

### Out-of-Scope Use

This model is not intended for generating non-coding content. While it retains some general knowledge from its base model, its expertise is in programming, and using it outside this domain may produce unreliable or nonsensical results.

The model should not be used for any malicious purpose, including generating malware or exploiting security vulnerabilities.

## Bias, Risks, and Limitations

- **Hallucinations:** As a 7B-parameter model, it can still generate incorrect or nonsensical code ("hallucinate"). All outputs should be carefully reviewed and tested by a human developer.
- **Knowledge cutoff:** The model's knowledge is limited to its training data. It will not know about libraries, frameworks, or language features released after that data was collected.
- **Bias in data:** The training data is sourced from open-source code repositories, which may carry biases in coding style and in the prevalence of certain programming languages.

## How to Get Started with the Model

Use the code below to run the model. It is crucial to use the exact `### Instruction:` / `### Response:` prompt format, since this is what the model was trained on.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "Arko007/my-awesome-code-assistant-v6"

# Use 4-bit quantization for efficient inference
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

# With peft installed, transformers resolves the LoRA adapter repo and its
# base model automatically.
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)
# The tokenizer comes from the base model repository.
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# --- Create the Prompt ---
prompt_text = "Write a Python function to find the factorial of a number using recursion."

formatted_prompt = f"""### Instruction:
{prompt_text}

### Response:"""

inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    repetition_penalty=1.1,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    eos_token_id=tokenizer.eos_token_id,
)

response_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response_text.split("### Response:")[1].strip())
```
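The low temperature (0.2) combined with nucleus sampling (top_p=0.9) keeps completions focused and nearly deterministic, which tends to suit code generation; raise the temperature for more varied suggestions.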

## Training Details

### Training Data

The model was trained in multiple stages on a combination of high-quality instruction datasets:

- **Initial stages (v1-v3):** trained on a combination of sahil2801/CodeAlpaca-20k and theblackcat102/evol-codealpaca-v1.
- **Intermediate stage (v4-v5):** further trained on the ise-uiuc/Magicoder-OSS-Instruct-75K dataset.
- **Final stage (v6):** a final run on a large, high-quality subset of the ise-uiuc/Magicoder-OSS-Instruct-75K dataset.
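For reference, a minimal sketch of loading the final-stage dataset with the `datasets` library; the exact subset selection used for v6 is not documented in this card, so no filtering is shown:

```python
# Minimal sketch: inspect the final-stage instruction dataset.
# The v6 subset selection is not specified, so the full split is loaded.
from datasets import load_dataset

ds = load_dataset("ise-uiuc/Magicoder-OSS-Instruct-75K", split="train")
print(len(ds), ds.column_names)
```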

### Training Procedure

This model was created using Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA). The base model (codellama/CodeLlama-7b-hf) was loaded in 4-bit precision, and LoRA adapters were trained and progressively updated through multiple stages.

#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
- **learning_rate:** 1.5e-4 with a cosine scheduler and 100 warmup steps
- **per_device_train_batch_size:** 72 (on H200)
- **max_steps:** 3000
- **lora_r:** 24
- **lora_alpha:** 48
- **target_modules:** q_proj, v_proj, k_proj, o_proj
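For reference, these settings map onto a `peft` configuration roughly as follows; `lora_dropout`, `bias`, and `task_type` are assumptions, since the card does not state them:

```python
# Sketch of a LoraConfig matching the hyperparameters listed above.
from peft import LoraConfig

lora_config = LoraConfig(
    r=24,
    lora_alpha=48,
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    lora_dropout=0.05,   # assumption: not stated in the card
    bias="none",         # assumption
    task_type="CAUSAL_LM",
)
```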

#### Speeds, Sizes, Times

The final training run completed in 2.04 hours on a single NVIDIA H200 GPU.

## Evaluation

[More Information Needed]

## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).

- **Hardware Type:** NVIDIA H200 (141 GB) for the final stage; NVIDIA L40S for earlier stages.
- **Hours used:** Approximately 3 hours for the final training run.
- **Cloud Provider:** Lightning AI
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]

## Model Card Contact

Arko007

### Framework versions

- PEFT 0.17.1
- Transformers 4.41.0
- PyTorch 2.3.0
- Datasets 2.19.0