# Model Card for Arko007/my-awesome-code-assistant-v4

## Model Details

### Model Description

- **Developed by:** Arko007
- **Funded by:** Self-funded
- **Shared by:** Arko007
- **Model type:** Autoregressive language model for code (code assistant)
- **Language(s) (NLP):** English, with support for programming languages including Python, JavaScript, Java, and C++
- **License:** MIT License
- **Finetuned from model:** bigcode/starcoder

### Model Sources

- **Repository:** https://huggingface.co/Arko007/my-awesome-code-assistant-v4 (a placeholder URL, as the repository is not public)
- **Paper [optional]:** N/A
- **Demo [optional]:** N/A

## Uses

### Direct Use

This model is intended for code-related tasks, including:

- **Code completion:** generating the next few lines of code from a prompt.
- **Code generation:** creating functions, scripts, or small programs from natural-language descriptions.
- **Code refactoring:** suggesting improvements or alternative ways to write existing code.
- **Code documentation:** generating docstrings and comments.

### Downstream Use [optional]

The model can serve as a backend for integrated development environments (IDEs), developer tools, and educational platforms that need code-assistance capabilities.

### Out-of-Scope Use

The model should not be used to generate non-code text, to produce malicious or unsafe code, or for any task that requires a high degree of factual accuracy without human verification.

## Bias, Risks, and Limitations

- **Hallucinations:** the model may generate code that looks plausible but is incorrect or contains bugs.
- **Security vulnerabilities:** generated code may contain security flaws or unsafe practices, and should always be reviewed by a human expert.
- **License and copyright:** the training data may contain code under a variety of licenses. Users are responsible for complying with all relevant licenses and copyright laws when using the generated code.

### Recommendations

Users (both direct and downstream) should be made aware of the risks, biases, and limitations of the model. All generated code must be treated as a starting point and thoroughly reviewed, tested, and audited for correctness and security.

## How to Get Started with the Model

Use the code below to get started with the model using the `transformers` library:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Arko007/my-awesome-code-assistant-v4"

# Load the tokenizer and model from the Hugging Face Hub
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Prompt the model with a natural-language instruction written as a comment
prompt = "# Write a Python function to calculate the factorial of a number"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 100 new tokens and decode the completion
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Training Details

### Training Data

The model was finetuned on a private dataset of curated open-source code snippets and documentation. The specific sources are not publicly disclosed, but the data consists primarily of code from GitHub repositories with permissive licenses.

### Training Procedure

#### Preprocessing

The training data was tokenized with the StarCoder tokenizer. Code comments were preserved to support documentation and explanation tasks.

#### Training Hyperparameters

- **Training regime:** finetuning with LoRA (Low-Rank Adaptation)
- **Learning rate:** 2 × 10⁻⁴
- **Batch size:** 4
- **Epochs:** 3
- **Optimizer:** AdamW
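The finetuning script itself is not published. As a minimal sketch, the setup below shows how a PEFT LoRA run with the hyperparameters above would typically be wired up; the LoRA rank, alpha, dropout, and `c_attn` target module are assumptions for illustration, not details from this card.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, Trainer, TrainingArguments
from peft import LoraConfig, get_peft_model

base_model = "bigcode/starcoder"
tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(base_model)

# Hypothetical LoRA configuration: only the use of LoRA is stated in this card;
# rank, alpha, dropout, and target modules are assumed values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],  # assumed attention projection for StarCoder
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the small adapter is trained

# Hyperparameters as reported above: lr 2e-4, batch size 4, 3 epochs, AdamW.
training_args = TrainingArguments(
    output_dir="my-awesome-code-assistant-v4",
    learning_rate=2e-4,
    per_device_train_batch_size=4,
    num_train_epochs=3,
    optim="adamw_torch",
)

# The training dataset is private, so it is elided here:
# trainer = Trainer(model=model, args=training_args, train_dataset=...)
# trainer.train()
```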
#### Speeds, Sizes, Times [optional]

- **Finetuning time:** approximately 12 hours
- **Model size:** 15.5 GB (full model), ~120 MB (LoRA adapter)

## Evaluation

### Testing Data, Factors & Metrics

- **Testing data:** a separate, held-out validation set of code-generation prompts.
- **Factors:** performance was evaluated across programming languages (Python, C++, JavaScript).
- **Metrics:**
  - **Pass@1:** the percentage of prompts for which the model generated a correct, compilable solution on the first attempt.
  - **Readability score:** an informal metric based on human evaluation of code style and clarity.

### Results

- **Pass@1 (overall):** 45.2%
- **Pass@1 (Python):** 55.1%
- **Readability:** generated code was generally readable and well commented.

#### Summary

Overall Pass@1 is 45.2%, with the strongest results in Python (55.1%) and generally readable, well-commented output.

## Model Examination [optional]

The model demonstrates strong performance on common code-generation tasks, particularly in Python, and can produce functional, readable code snippets.

## Environmental Impact

- **Hardware type:** 1× NVIDIA A100 GPU
- **Hours used:** 12
- **Cloud provider:** Google Cloud
- **Compute region:** us-central1
- **Carbon emitted:** 1.05 kg CO₂eq (estimated using the Machine Learning Impact calculator)

## Technical Specifications [optional]

### Model Architecture and Objective

The model is a decoder-only transformer. Its objective is to predict the next token in a sequence, conditioned on the preceding tokens; the finetuning process adapted the base model to excel at generating code.

## Citation [optional]

**BibTeX:**

```bibtex
@misc{Arko007_my-awesome-code-assistant-v4,
  author    = {Arko007},
  title     = {my-awesome-code-assistant-v4},
  year      = {2024},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/Arko007/my-awesome-code-assistant-v4}
}
```

**APA:**

Arko007. (2024). *my-awesome-code-assistant-v4*. Hugging Face. Retrieved from https://huggingface.co/Arko007/my-awesome-code-assistant-v4

## Model Card Authors [optional]

Arko007

## Model Card Contact

[Email or other contact information]

### Framework versions

- PEFT 0.17.0
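The card lists a ~120 MB LoRA adapter alongside PEFT 0.17.0. If the repository hosts the adapter rather than merged full-model weights (not confirmed by this card), it could be loaded directly with PEFT, as sketched below.

```python
from peft import AutoPeftModelForCausalLM
from transformers import AutoTokenizer

# Assumes the repo contains a PEFT adapter checkpoint; if it instead holds
# merged full-model weights, use AutoModelForCausalLM as shown earlier.
model_id = "Arko007/my-awesome-code-assistant-v4"
model = AutoPeftModelForCausalLM.from_pretrained(model_id)  # loads base model + adapter
tokenizer = AutoTokenizer.from_pretrained(model_id)
```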