---
license: apache-2.0
language:
- en
library_name: peft
tags:
- text-generation
- code-generation
- instruction-following
- finetuned
- llama
- codellama
- magicoder
base_model: Arko007/my-awesome-code-assistant-v5
---
# Model Card for My Awesome Code Assistant (v6 FINAL BOSS)
`my-awesome-code-assistant-v6` is the final, most powerful version of a series of fine-tuned Code Llama 7B models. It is designed to be a high-performance AI coding partner, capable of writing, explaining, and discussing complex code across multiple programming languages.
## Model Details

### Model Description

This model has undergone multiple stages of training on high-quality, instruction-based coding datasets, culminating in a final training run on the ise-uiuc/Magicoder-OSS-Instruct-75K dataset. It excels at following instructions and providing detailed, helpful responses.
- **Developed by:** Arko007
- **Model type:** Causal Language Model (decoder-only Transformer)
- **Language(s) (NLP):** English
- **License:** Apache 2.0
- **Finetuned from model:** Arko007/my-awesome-code-assistant-v5
### Model Sources

- **Repository:** https://huggingface.co/Arko007/my-awesome-code-assistant-v6
- **Paper:** [More Information Needed]
- **Demo:** [More Information Needed]
## Uses

### Direct Use

This model is intended for direct use as a coding assistant via a chat or instruction-based interface. It can be used for:

- Generating code snippets or complete programs from natural language descriptions.
- Explaining complex code and identifying potential issues (e.g., memory leaks, thread safety).
- Tutoring and learning new programming concepts.
- Debugging code.
### Downstream Use

The LoRA adapters for this model can be used for further fine-tuning on more specialized, domain-specific coding tasks (e.g., web development, data science, game development), as sketched below.
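One way to continue training these adapters on your own data is to attach them to the 4-bit base model with the `peft` library. This is a minimal sketch, not the original training setup; the dataset and trainer wiring are left out and up to you.

```python
# A minimal sketch: load the base model in 4-bit and attach the published
# adapters for continued LoRA training on a domain-specific dataset.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    quantization_config=BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    ),
    device_map="auto",
)

# is_trainable=True keeps the LoRA weights unfrozen so they can be updated.
model = PeftModel.from_pretrained(
    base,
    "Arko007/my-awesome-code-assistant-v6",
    is_trainable=True,
)
model.print_trainable_parameters()
```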
### Out-of-Scope Use

This model is not intended for generating non-coding content. While it retains some general knowledge from its base model, its expertise is programming, and using it for tasks outside this domain may produce unreliable or nonsensical results.

The model should not be used for any malicious purposes, including generating malware or exploiting security vulnerabilities.
## Bias, Risks, and Limitations

- **Hallucinations:** As a 7B-parameter model, it can still generate incorrect or nonsensical code ("hallucinate"). All outputs should be carefully reviewed and tested by a human developer.
- **Knowledge cutoff:** The model's knowledge is limited to the data it was trained on. It will not know about libraries, frameworks, or programming language features released after its training data was collected.
- **Bias in data:** The training data is sourced from open-source code repositories, which may carry biases in coding style and the prevalence of certain programming languages.
## How to Get Started with the Model

Use the code below to run the model. It is crucial to use the exact `### Instruction:` / `### Response:` prompt format, as this is what the model was trained on.
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

MODEL_ID = "Arko007/my-awesome-code-assistant-v6"

# Use 4-bit quantization for efficient inference
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    quantization_config=quantization_config,
    device_map="auto",
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")

# --- Create the prompt ---
prompt_text = "Write a Python function to find the factorial of a number using recursion."
formatted_prompt = f"""### Instruction:
{prompt_text}

### Response:"""

inputs = tokenizer(formatted_prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=256,
    repetition_penalty=1.1,
    do_sample=True,
    top_p=0.9,
    temperature=0.2,
    eos_token_id=tokenizer.eos_token_id,
)

# Print only the model's answer, after the "### Response:" marker
response_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response_text.split("### Response:")[1].strip())
```
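Because this repository ships LoRA adapters rather than full weights, you can optionally merge them into the base model for adapter-free deployment. A minimal sketch, assuming a non-quantized (fp16) load so the LoRA deltas can be folded in cleanly; the output path is hypothetical:

```python
# A minimal sketch: merge the LoRA adapters into the base weights so the
# result can be served without the peft library.
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "codellama/CodeLlama-7b-hf",
    torch_dtype=torch.float16,
    device_map="auto",
)
merged = PeftModel.from_pretrained(base, "Arko007/my-awesome-code-assistant-v6")
merged = merged.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("code-assistant-v6-merged")  # hypothetical output path
```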
## Training Details

### Training Data

The model was trained in multiple stages on a combination of high-quality instruction datasets:

- **Initial stages (v1-v3):** trained on a combination of sahil2801/CodeAlpaca-20k and theblackcat102/evol-codealpaca-v1.
- **Intermediate stage (v4-v5):** further trained on the ise-uiuc/Magicoder-OSS-Instruct-75K dataset.
- **Final stage (v6):** a final run on a large, high-quality subset of the ise-uiuc/Magicoder-OSS-Instruct-75K dataset.
### Training Procedure

This model was created using Parameter-Efficient Fine-Tuning (PEFT) with Low-Rank Adaptation (LoRA). The base model (codellama/CodeLlama-7b-hf) was loaded in 4-bit precision, and LoRA adapters were trained and progressively updated through multiple stages. A configuration sketch follows the hyperparameter list below.
#### Training Hyperparameters

- **Training regime:** bf16 mixed precision
- **learning_rate:** 1.5e-4 with a cosine scheduler and 100 warmup steps
- **per_device_train_batch_size:** 72 (on an H200)
- **max_steps:** 3000
- **lora_r:** 24
- **lora_alpha:** 48
- **target_modules:** q_proj, v_proj, k_proj, o_proj
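For reference, the hyperparameters above map onto a standard `peft` + `transformers` configuration roughly as follows. This is a reconstruction from the values on this card, not the original training script; the dropout value and output path are assumptions.

```python
# A sketch of the LoRA configuration and training arguments implied by the
# hyperparameters above, using standard peft/transformers APIs.
from peft import LoraConfig
from transformers import TrainingArguments

lora_config = LoraConfig(
    r=24,                      # lora_r
    lora_alpha=48,             # lora_alpha
    target_modules=["q_proj", "v_proj", "k_proj", "o_proj"],
    lora_dropout=0.05,         # assumption: not stated on this card
    bias="none",
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    output_dir="code-assistant-v6",   # hypothetical path
    per_device_train_batch_size=72,
    max_steps=3000,
    learning_rate=1.5e-4,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    bf16=True,                        # bf16 mixed precision
)
```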
#### Speeds, Sizes, Times

The final training run completed in 2.04 hours on a single NVIDIA H200 GPU.
## Evaluation

[More Information Needed]
## Environmental Impact

Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute).

- **Hardware Type:** NVIDIA H200 (141 GB) for the final stage; NVIDIA L40S for earlier stages.
- **Hours used:** Approximately 3 hours for the final training run.
- **Cloud Provider:** Lightning AI
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Model Card Contact

Arko007
### Framework versions

- PEFT 0.17.1
- Transformers 4.41.0
- PyTorch 2.3.0
- Datasets 2.19.0