---
library_name: transformers
tags:
- roblox
- luau
- code-generation
- fine-tuning
license: mit
---
# Model Card for Roblox-Coder-Llama-7B-v1

This model is a fine-tuned version of `codellama/CodeLlama-7b-instruct-hf`, specialized in generating and understanding Luau code for development on the Roblox platform. It was trained on a custom dataset of instructions and responses in Spanish and English, with the goal of acting as an expert programming assistant for Roblox creators.
## Model Details

### Model Description

Roblox-Coder-Llama-7B-v1 is a language model designed to assist Roblox developers. It can generate Luau scripts from natural language descriptions, explain complex concepts of the Roblox API, and help optimize code. The goal of this project is to democratize game development on Roblox, making it more accessible for beginners and more efficient for experienced developers.
- Developed by: Sergio Belenguer, with the assistance of a conversational AI.
- Shared by: Sergio Belenguer (Hash0x)
- Model type: Causal Language Model (CLM)
- Language(s) (NLP): Spanish (es), English (en), Luau (code)
- License: Apache 2.0
- Finetuned from model: `codellama/CodeLlama-7b-instruct-hf`
### Model Sources

- Repository: https://huggingface.co/Hash0x/Roblox-Coder-Llama-7B-v1
- Dataset Used: https://huggingface.co/datasets/Hash0x/Roblox-Luau-Instruct-V1
## Uses

### Direct Use

This model is intended for direct use via a `text-generation` pipeline for:

- Code Generation: Asking it to write complete scripts or specific functions in Luau.
- Tutoring and Explanation: Asking questions about how Roblox APIs work (`DataStoreService`, `CFrame`, etc.).
- Debugging: Asking it to find errors or suggest improvements in existing code snippets.
### Downstream Use
The model can be the foundation for creating more complex tools, such as:
- A Roblox Studio plugin that acts as a "Copilot" for Roblox.
- A Discord bot for a developer community server.
- A Visual Studio Code extension offering intelligent autocompletion and suggestions for Luau.
### Out-of-Scope Use
This model should not be used to generate malicious code, exploits, or scripts that violate the Roblox Terms of Service. The generated code must always be reviewed by a human, as it may contain unintentional errors or vulnerabilities.
## Bias, Risks, and Limitations
The model was trained on a limited dataset, which entails certain risks and limitations:
- Dataset Bias: The model's knowledge is limited to the examples in the training dataset. It may have poor knowledge of areas of the Roblox API that were not well-represented.
- Hallucinations: The model may invent functions or methods that do not exist in Luau.
- Context Contamination: Due to the base model's pre-trained knowledge, it may occasionally become confused and generate code in other video game programming languages (like C# for Unity), especially if the instruction is ambiguous or the fine-tuning dataset is not large enough.
### Recommendations
Never blindly trust the generated code! Treat the model as a very fast junior assistant. Always review, understand, and test the code it produces before implementing it in a real project. The best way to improve the model is by expanding the training dataset with more high-quality examples.
## How to Get Started with the Model

Use the code below to get started with the model using the `transformers` library.
```python
import torch
from transformers import pipeline

# Make sure you are logged in with your HF token
# from huggingface_hub import login
# login()

pipe = pipeline(
    "text-generation",
    model="Hash0x/Roblox-Coder-Llama-7B-v1",
    torch_dtype="auto",
    device_map="auto",
)

prompt = "Create a script that makes a part spin constantly on its Y-axis."

# CodeLlama uses a specific prompt format
formatted_prompt = f"<s>[INST] {prompt} [/INST]"

result = pipe(
    formatted_prompt,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_k=50,
    top_p=0.95,
)

print(result[0]["generated_text"])
```
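Note that by default the `text-generation` pipeline returns the prompt concatenated with the completion. A minimal helper to keep only the model's answer (a sketch; the `extract_response` name is ours, not part of any API):

```python
def extract_response(generated_text: str) -> str:
    """Return only the completion that follows the CodeLlama [/INST] tag."""
    marker = "[/INST]"
    idx = generated_text.rfind(marker)
    if idx == -1:
        # No instruction tag found; return the text unchanged.
        return generated_text.strip()
    return generated_text[idx + len(marker):].strip()
```

You could then use `print(extract_response(result[0]["generated_text"]))`; alternatively, passing `return_full_text=False` to the pipeline call drops the echoed prompt at the source.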
## Training Details

### Training Data

The model was trained on the `Hash0x/Roblox-Luau-Instruct-V1` dataset, which was created from several sources:
- Official Roblox Documentation: Code samples and API explanations rewritten in an instruction format.
- Open-Source Projects: Code snippets from GitHub repositories with permissive licenses.
- Developer Community: Inspiration from real-world problems and solutions on the Roblox Developer Forum.
### Training Procedure
The model was fine-tuned using the QLoRA (Quantized Low-Rank Adaptation) technique to make training efficient on a single GPU.
#### Preprocessing

The instructions and responses from the dataset were formatted into a prompt that follows the format expected by the base model: `<s>[INST] {instruction} [/INST] {output}`.
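This preprocessing step can be sketched as a small formatting function (`format_example` is a hypothetical helper name, shown only to illustrate the template):

```python
def format_example(instruction: str, output: str) -> str:
    """Render one dataset record into the CodeLlama-instruct prompt template."""
    return f"<s>[INST] {instruction} [/INST] {output}"

# Example record:
# format_example("Print hello", 'print("hello")')
# -> '<s>[INST] Print hello [/INST] print("hello")'
```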
#### Training Hyperparameters

- `per_device_train_batch_size`: 1
- `gradient_accumulation_steps`: 4 (effective batch size of 4)
- `learning_rate`: 2e-4
- `num_train_epochs`: 1-3
- `optim`: paged_adamw_32bit
- QLoRA `r`: 64
- QLoRA `alpha`: 16
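As an illustration only, these hyperparameters roughly correspond to a `peft`/`bitsandbytes` configuration like the one below. This is a sketch, not the exact training script; details such as the quantization dtype are assumptions.

```python
import torch
from transformers import BitsAndBytesConfig, TrainingArguments
from peft import LoraConfig

# 4-bit quantization of the frozen base model (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,  # assumed; not stated in the card
)

# Low-rank adapters with the r/alpha values listed above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=16,
    task_type="CAUSAL_LM",
)

training_args = TrainingArguments(
    per_device_train_batch_size=1,
    gradient_accumulation_steps=4,  # effective batch size of 4
    learning_rate=2e-4,
    num_train_epochs=3,  # the card reports 1-3 epochs
    optim="paged_adamw_32bit",
)
```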
## Evaluation
The model's evaluation to date has been qualitative, testing its ability to respond to a variety of prompts and analyzing the quality of the generated code. No formal quantitative evaluation with standard metrics has been performed.
## Environmental Impact
- Hardware Type: NVIDIA T4
- Hours used: ~1-2 hours (including experimentation and troubleshooting)
- Cloud Provider: Google Colab
- Compute Region: Variable (assigned by Google)
- Carbon Emitted: Estimated to be low, due to the short QLoRA training run on a moderately-powered GPU.
## Technical Specifications

### Model Architecture and Objective

The base model, `codellama/CodeLlama-7b-instruct-hf`, is a causal language model based on the Llama 2 architecture. The fine-tuning objective was standard causal language modeling, optimized for Luau code generation.
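The causal language modeling objective can be illustrated with a toy example: the labels are simply the input sequence shifted one position to the left, so the model learns to predict each next token.

```python
# Toy illustration of the causal-LM objective on a short Luau snippet.
tokens = ["local", "part", "=", "script", ".", "Parent"]

# Each input token is paired with the token that follows it as its label.
inputs = tokens[:-1]
labels = tokens[1:]

pairs = list(zip(inputs, labels))
# pairs[0] is ("local", "part"): given "local", predict "part".
```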
### Compute Infrastructure

#### Hardware

Training was performed in the Google Colab environment, using a single NVIDIA T4 GPU with ~15 GB of VRAM.

#### Software

- Libraries: `transformers`, `datasets`, `accelerate`, `peft`, `bitsandbytes`, `trl`
- Framework: PyTorch
- Environment: Google Colaboratory
## Model Card Authors
- Sergio Belenguer (Hash0x)
## Model Card Contact
For questions or feedback, please contact through the Hash0x Hugging Face profile.