# Model Card for DeryFerd/phi2-finetuned-mbpp-clean
## Model Details

### Model Description
This model is a fine-tuned version of [microsoft/phi-2](https://huggingface.co/microsoft/phi-2), specifically adapted for Python code generation tasks. It was trained on a high-quality, curated subset of the MBPP (Mostly Basic Python Programming) dataset.

The primary goal of this project was to distill the coding style and capabilities of a larger "teacher" model ([Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct)) into the much more compact and efficient Phi-2 architecture. The model is designed to generate Python functions from natural language instructions, often including explanations and test cases in its output.
- **Developed by:** DeryFerd
- **Model type:** Causal Language Model
- **Language(s) (NLP):** English
- **License:** MIT
- **Finetuned from model:** [microsoft/phi-2](https://huggingface.co/microsoft/phi-2)
## Uses

### Direct Use
This model is intended for direct use in generating Python code snippets, particularly for creating standalone functions based on a descriptive prompt. It can be used for educational purposes, as a coding assistant, or for rapid prototyping.
- **Intended Use:** Generating Python functions from docstrings or natural language instructions.
### Out-of-Scope Use
This is a specialized model. It will not perform well on tasks outside of Python code generation, such as general conversation, translation, or creative writing. It has not been trained or evaluated for safety and may produce incorrect or insecure code.
## Bias, Risks, and Limitations
This model was trained on the MBPP dataset, which consists of basic programming problems. Its capabilities are limited to this domain. The model may generate code that is syntactically correct but logically flawed. Always review and test the generated code before use in production environments.
A notable limitation observed during development is a potential low-level GPU memory conflict: when this model is loaded into the same runtime as a significantly larger, architecturally different model (such as Qwen2.5-Coder-7B), its fine-tuned capabilities can be silently overridden, causing it to revert to the base model's behavior. Running this model in an isolated process is recommended; one way to do so is sketched below.
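The following is a minimal sketch of process isolation using only the standard library's `multiprocessing` module. It is one possible approach, not part of the original card: the prompt and function names are illustrative, and `device_map="auto"` assumes `accelerate` is installed.

```python
import multiprocessing as mp

def generate(prompt, queue):
    # Import and load inside the child so the model never shares a runtime
    # (or CUDA context) with any other model loaded in the parent process.
    from transformers import pipeline
    pipe = pipeline(
        "text-generation",
        model="DeryFerd/phi2-finetuned-mbpp-clean",
        device_map="auto",
        trust_remote_code=True,
    )
    out = pipe(prompt, max_new_tokens=256, do_sample=False)
    queue.put(out[0]["generated_text"])

if __name__ == "__main__":
    ctx = mp.get_context("spawn")  # fresh interpreter, no forked CUDA state
    queue = ctx.Queue()
    proc = ctx.Process(
        target=generate,
        args=("Instruct: Write a Python function that reverses a string.\nOutput:", queue),
    )
    proc.start()
    print(queue.get())
    proc.join()
```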
## How to Get Started with the Model

Use the code below to get started with the model using the `transformers` library.
```python
from transformers import pipeline, AutoModelForCausalLM, AutoTokenizer

model_id = "DeryFerd/phi2-finetuned-mbpp-clean"

# Load the tokenizer and model
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True,
)

# Create a text-generation pipeline
pipe = pipeline("text-generation", model=model, tokenizer=tokenizer)

# Define your instruction using Phi-2's "Instruct:/Output:" prompt format
instruction = "Write a Python function that takes a list of strings and returns a new list with all strings converted to uppercase."
prompt = f"Instruct: {instruction.strip()}\nOutput:"

# Generate the response deterministically (greedy decoding)
outputs = pipe(
    prompt,
    max_new_tokens=256,
    do_sample=False,
    pad_token_id=tokenizer.eos_token_id,
)

# Keep only the text generated after the "Output:" marker
response = outputs[0]["generated_text"].split("Output:")[1].strip()
print(response)
```
## Training Details

### Training Data
The model was fine-tuned on `mbpp_974_final.jsonl`, a curated dataset containing 974 high-quality instruction-response pairs for Python programming problems, derived from the MBPP dataset. The data was generated using [Qwen/Qwen2.5-Coder-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-Coder-7B-Instruct).
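The card does not document the file's schema; a record presumably pairs an instruction with a teacher-generated response, along the lines of the following (field names are hypothetical):

```python
import json

# A hypothetical record from mbpp_974_final.jsonl; actual field names may differ.
record = {
    "instruction": "Write a Python function to reverse a string.",
    "response": "def reverse_string(s):\n    return s[::-1]",
}
print(json.dumps(record))
```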
### Training Procedure

The model was fine-tuned using LoRA (Low-Rank Adaptation) for parameter-efficient fine-tuning (PEFT); a configuration sketch follows the hyperparameter list below.
### Training Hyperparameters

- **Framework:** `trl.SFTTrainer`
- **LoRA `r`:** 16
- **LoRA `alpha`:** 32
- **Target Modules:** `q_proj`, `k_proj`, `v_proj`, `dense`
- **Learning Rate:** 2e-4
- **LR Scheduler:** Cosine
- **Epochs:** 3
- **Batch Size:** 1 (with gradient accumulation of 8)
- **Optimizer:** Paged AdamW 8-bit
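For reference, the hyperparameters above correspond roughly to the `peft`/`trl` configuration below. This is a minimal sketch, not the author's actual training script: it assumes recent versions of `peft` and `trl` (whose `SFTTrainer` arguments vary across releases), and the dataset field names and output path are hypothetical.

```python
from datasets import load_dataset
from peft import LoraConfig
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

base_id = "microsoft/phi-2"
model = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype="auto", device_map="auto")

# LoRA adapter matching the hyperparameters listed above
peft_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "dense"],
    task_type="CAUSAL_LM",
)

dataset = load_dataset("json", data_files="mbpp_974_final.jsonl", split="train")

def formatting_func(example):
    # Field names are hypothetical; the card does not document the JSONL schema.
    return f"Instruct: {example['instruction']}\nOutput: {example['response']}"

args = SFTConfig(
    output_dir="phi2-mbpp-lora",  # hypothetical path
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    num_train_epochs=3,
    optim="paged_adamw_8bit",
)

trainer = SFTTrainer(
    model=model,
    args=args,
    train_dataset=dataset,
    peft_config=peft_config,
    formatting_func=formatting_func,
)
trainer.train()
```

Note that the `paged_adamw_8bit` optimizer requires the `bitsandbytes` package.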
### Compute Infrastructure

- **Hardware Type:** Single NVIDIA T4 GPU
- **Cloud Provider:** Kaggle Notebooks
## Citation
If you use this model, please consider citing the original Phi-2 and MBPP papers.