Model Card: Sai2076/LLLMA_FINETUNED_PROJEN

A LLaMA-3.2-based instruction-tuned model, fine-tuned with Unsloth + QLoRA using 🤗 Transformers.
This model is part of the ProjGen project, aimed at enhancing developer productivity through automated project generation and structured code scaffolding.


Model Details

Model Description

  • Base model: meta-llama/Llama-3.2-<SIZE>-Instruct
  • Finetuning method: Unsloth + QLoRA (LoRA adapters)
  • Precision (train): 4-bit NF4 quantization (bitsandbytes) + bf16 compute
  • Context length: 4096
  • Task(s): Instruction following & project/code generation
  • License: Inherits from Meta’s LLaMA-3.2 license
  • Developed by: Sai Praneeth (UAB, ProjGen Project)
  • Finetuned from: meta-llama/Llama-3.2-<SIZE>-Instruct
  • Shared by: Sai2076

Model Sources

  • Repository: Sai2076/LLLMA_FINETUNED_PROJEN
  • Project Paper: ProjGen – Enhanced Developer Productivity for Flask Project Generation with a RAG-Enhanced Fine-Tuned Local LLM
  • Demo (optional): [link to demo if available]

Intended Uses & Limitations

Direct Use

  • Generating Flask/Django/Streamlit project structures automatically.
  • Instruction-following tasks related to software engineering and code generation.

Downstream Use

  • Further fine-tuning on domain-specific datasets (e.g., medical imaging, finance); see the adapter-loading sketch after this list.
  • Integration into developer assistants and productivity tools.
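To continue fine-tuning from this checkpoint, the published LoRA adapter can be attached to the base model in trainable form. A minimal sketch, assuming this repo hosts PEFT adapter weights and that you have access to the gated base model (the <SIZE> placeholder mirrors the rest of this card):

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

base_id = "meta-llama/Llama-3.2-<SIZE>-Instruct"   # fill in the actual base size
adapter_id = "Sai2076/LLLMA_FINETUNED_PROJEN"

base = AutoModelForCausalLM.from_pretrained(base_id, device_map="auto", torch_dtype="auto")
# is_trainable=True keeps the LoRA weights unfrozen for further fine-tuning
model = PeftModel.from_pretrained(base, adapter_id, is_trainable=True)
tok = AutoTokenizer.from_pretrained(adapter_id)
# ...then pass `model` to your usual transformers/trl training loop.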

Out-of-Scope / Limitations

  • Not suitable for medical, legal, or financial decision-making without human review.
  • May hallucinate or produce insecure/inefficient code if not monitored.

Bias, Risks, and Limitations

The model inherits risks from the base LLaMA-3.2 model:

  • Possible hallucinations and factual inaccuracies.
  • Dataset/domain biases reflected in responses.
  • Outputs should be validated before production deployment.

Recommendation: Always pair outputs with testing, validation, and human oversight.


Getting Started

Inference (PEFT adapter form)

import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig

model_id = "Sai2076/LLLMA_FINETUNED_PROJEN"

tok = AutoTokenizer.from_pretrained(model_id)

# 4-bit NF4 quantization with double quantization, matching the training setup
bnb = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# Loading an adapter-only repo through AutoModelForCausalLM requires `peft` to be installed
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb,
    device_map="auto",
    torch_dtype="auto",
)

prompt = "Generate a Flask project with login, dashboard, and reports."
inputs = tok(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tok.decode(outputs[0], skip_special_tokens=True))
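
Because the base model is an Instruct variant, wrapping the request in the tokenizer's chat template generally matches the training format more closely than a raw prompt. A short sketch (the system message is illustrative, not part of this card):

messages = [
    {"role": "system", "content": "You are ProjGen, an assistant that scaffolds web projects."},  # illustrative
    {"role": "user", "content": "Generate a Flask project with login, dashboard, and reports."},
]
input_ids = tok.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
outputs = model.generate(input_ids, max_new_tokens=512)
print(tok.decode(outputs[0], skip_special_tokens=True))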

Training Details

Data

  • Dataset: Custom ProjGen dataset built from structured Flask/Django/Streamlit projects and instructions.
  • Size: [Fill in #samples / tokens]
  • Preprocessing: Chat-style instruction formatting (system/user/assistant; see the example below), deduplication, truncation at 4096 tokens.
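
For reference, one plausible shape of a preprocessed training sample is sketched below; the field names and contents are assumptions, since the exact ProjGen schema is not published here.

sample = {
    "messages": [
        {"role": "system", "content": "You are ProjGen, a project-scaffolding assistant."},
        {"role": "user", "content": "Create a Streamlit app with a sidebar filter and a results table."},
        {"role": "assistant", "content": "project/\n  app.py\n  requirements.txt\n  ..."},
    ]
}
# Each sample is rendered with the LLaMA-3.2 chat template and truncated to 4096 tokens.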

Training Procedure

  • Quantization: 4-bit NF4 + double quantization (bitsandbytes)
  • LoRA Config:
    • r: 16
    • alpha: 32
    • dropout: 0.05
    • Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Optimizer: Paged AdamW (32-bit)
  • LR / Schedule: 2e-4 with cosine decay + warmup
  • Batch size: [fill in effective batch size]
  • Epochs/Steps: [fill in from ipynb]
  • Precision: bf16 mixed precision
  • Grad checkpointing: Enabled
  • Flash attention: Enabled (Unsloth optimization); a configuration sketch follows this list
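
Putting the settings above together, a minimal Unsloth + QLoRA training sketch might look as follows. The dataset variable, batch size, warmup ratio, and epoch count are placeholders (the card leaves them unfilled), and exact SFTTrainer arguments vary with the trl version:

from unsloth import FastLanguageModel
from transformers import TrainingArguments
from trl import SFTTrainer

# Load the base model in 4-bit NF4 with double quantization (bitsandbytes under the hood)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="meta-llama/Llama-3.2-<SIZE>-Instruct",
    max_seq_length=4096,
    load_in_4bit=True,
    dtype=None,  # auto-selects bf16 on supported GPUs
)

# Attach LoRA adapters with the configuration listed above
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing=True,
)

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=train_ds,               # placeholder: the formatted ProjGen split
    max_seq_length=4096,
    args=TrainingArguments(
        output_dir="projgen-qlora",
        per_device_train_batch_size=2,    # placeholder; effective batch size not published
        gradient_accumulation_steps=8,    # placeholder
        learning_rate=2e-4,
        lr_scheduler_type="cosine",
        warmup_ratio=0.03,                # placeholder
        optim="paged_adamw_32bit",
        bf16=True,
        num_train_epochs=1,               # placeholder
        logging_steps=10,
    ),
)
trainer.train()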

Training Hardware

  • GPU: RTX 4070 (12GB VRAM) [replace with actual if different]
  • Training time: [fill in hours]
  • Checkpoint size: LoRA adapter ~200 MB; merged model size depends on the base LLaMA size

Evaluation

Data & Metrics

  • Validation set: Held-out portion of ProjGen dataset
  • Metrics:
    • Instruction Following: Exact Match, ROUGE-L
    • Code Generation: Pass@k via unit-test evaluation (estimator sketched below)
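
For context, Pass@k is typically computed with the unbiased estimator from Chen et al. (2021): generate n samples per task, count the c that pass the unit tests, and estimate pass@k = 1 - C(n-c, k) / C(n, k). A small sketch:

from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: n generations per task, c of which pass the unit tests."""
    if n - c < k:
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

print(pass_at_k(n=20, c=7, k=1))  # 0.35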

Results

Metric             | Value | Notes
Validation Loss    | ___   | From training logs
Exact Match / F1   | ___   |
ROUGE-L / BLEU     | ___   |
Pass@1             | ___   |

Environmental Impact (estimate)

  • Hardware: RTX 4070 (12GB VRAM) [replace with actual]
  • Hours: [fill in H]
  • Region/Provider: [cloud/on-prem]
  • Estimated CO₂e: estimate with the ML CO₂ Impact calculator (https://mlco2.github.io/impact/)

Citation

If you use this model, please cite the base model and this project:

BibTeX (base, example):

@article{touvron2023llama,
  title={LLaMA: Open and Efficient Foundation Language Models},
  author={Touvron, Hugo and others},
  journal={arXiv preprint arXiv:XXXX.XXXXX},
  year={2023}
}

Your work (fill in):

@misc{projgen2025,
  title = {ProjGen: Enhanced Developer Productivity for Flask Project Generation with a RAG-Enhanced Fine-Tuned Local LLM},
  author = {Renduchinthala, Sai Praneeth},
  year = {2025},
  howpublished = {\url{https://huggingface.co/Sai2076/LLLMA_FINETUNED_PROJEN}}
}

Contact

  • Author: Sai Praneeth Kumar (UAB)
  • HF Profile: Sai2076