# Qwen2.5 Coder 1.5B Roblox

A specialized code generation model fine-tuned for Roblox Luau programming.
## Run in Google Colab (Recommended)

No setup required: click the badge above and run the chatbot instantly in your browser.

- Pre-configured environment
- GPU-accelerated inference
- Interactive chat interface
- Ready in ~3 minutes
## Overview

Qwen2.5 Coder 1.5B Roblox is a parameter-efficient fine-tuned model designed specifically for Roblox Luau development. Built on top of Qwen2.5-Coder-1.5B-Instruct, this model excels at generating, completing, and understanding Luau code patterns commonly used in Roblox game development.

### What Makes This Special?

- Roblox-Native: Trained exclusively on authentic Luau code from the official Roblox corpus
- Context-Aware: Understands Roblox-specific APIs, patterns, and best practices
## Model Architecture
| Component | Details |
|---|---|
| Base Model | Qwen/Qwen2.5-Coder-1.5B-Instruct |
| Adapter Type | LoRA (Low-Rank Adaptation) |
| LoRA Rank | 8 |
| LoRA Alpha | 32 |
| Target Modules | q_proj, v_proj |
| Training Hardware | TPU v5e-8 (Multi-core) |
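
For reference, the adapter configuration in the table above could be expressed with peft's `LoraConfig` roughly as follows. Only the rank, alpha, and target modules come from the table; the dropout value is an assumption.

```python
# Sketch of the adapter setup from the table above, using peft.
# Only r, lora_alpha, and target_modules are taken from the model card;
# lora_dropout is an assumed placeholder.
from peft import LoraConfig, TaskType

lora_config = LoraConfig(
    r=8,                                   # LoRA rank
    lora_alpha=32,                         # scaling factor (alpha / r = 4)
    target_modules=["q_proj", "v_proj"],   # attention query/value projections
    lora_dropout=0.05,                     # assumption, not listed in the card
    task_type=TaskType.CAUSAL_LM,
)
```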
## Training Details

### Dataset
- Source: Roblox/luau_corpus
- Filtering: Quality-filtered for code length (20-5000 characters) and Luau keyword presence (see the sketch below)
- Split: 90% train / 10% validation
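
A rough sketch of that filtering with the `datasets` library is shown below. The column name and the exact keyword list are assumptions; the card only states the length bounds, the keyword check, and the 90/10 split.

```python
# Illustrative filtering pass over Roblox/luau_corpus.
# The "code" field name and LUAU_KEYWORDS list are assumptions.
from datasets import load_dataset

LUAU_KEYWORDS = ("local ", "function", "end", "game:GetService")  # assumed keyword list

def keep(example):
    code = example["code"]  # assumed field name
    return 20 <= len(code) <= 5000 and any(kw in code for kw in LUAU_KEYWORDS)

ds = load_dataset("Roblox/luau_corpus", split="train")
ds = ds.filter(keep)
splits = ds.train_test_split(test_size=0.1, seed=42)  # 90% train / 10% validation
train_ds, val_ds = splits["train"], splits["test"]
```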
### Training Configuration

```json
{
  "max_length": 1024,
  "batch_size": 4,
  "gradient_accumulation_steps": 32,
  "learning_rate": 3e-5,
  "scheduler": "cosine_annealing",
  "epochs": 1,
  "optimizer": "AdamW"
}
```
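
As an illustration only, this configuration maps onto Hugging Face `TrainingArguments` roughly as below. The actual training script is not published here; the output directory and precision flag are assumptions, and transformers exposes cosine annealing as `"cosine"`.

```python
# Illustrative mapping of the JSON config above onto TrainingArguments.
# Not the exact training script; output_dir and bf16 are assumptions.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./qwen-luau-lora",       # placeholder
    per_device_train_batch_size=4,
    gradient_accumulation_steps=32,      # effective batch of 128 sequences per device
    learning_rate=3e-5,
    lr_scheduler_type="cosine",          # closest built-in to "cosine_annealing"
    num_train_epochs=1,
    optim="adamw_torch",                 # AdamW
    bf16=True,                           # assumed for accelerator training
)
```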
## Quick Start

### Installation

```bash
pip install transformers peft torch
```

### Basic Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model and tokenizer
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "umjunsik1323/Qwen2.5-Coder-1.5B-roblox")

# Generate Luau code
messages = [
    {"role": "system", "content": "You are a Roblox Luau programming expert."},
    {"role": "user", "content": "Create a function to make a part glow"}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)
# do_sample=True so the temperature setting actually takes effect
outputs = model.generate(**inputs, max_new_tokens=256, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
### Advanced: Merge and Export

```python
# Merge LoRA weights into the base model
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./qwen-luau-merged")
```
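
The merged checkpoint is a plain Qwen2.5 model, so it can be reloaded without peft. A short follow-up sketch, reusing the `tokenizer` from Basic Usage:

```python
# Save the tokenizer alongside the merged weights, then reload without peft.
from transformers import AutoModelForCausalLM

tokenizer.save_pretrained("./qwen-luau-merged")
reloaded = AutoModelForCausalLM.from_pretrained(
    "./qwen-luau-merged", torch_dtype="auto", device_map="auto"
)
```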
## Features

### Supported Tasks

- Code Completion: Finish partial Luau scripts intelligently
- Function Generation: Create Roblox-specific functions from descriptions
- Code Explanation: Understand and document existing Luau code
- Error Fixing: Suggest corrections for common Luau mistakes
- API Usage: Generate proper Roblox API calls
### Example Prompts

```
-- Completion
"local function teleportPlayer(player, position)"
→ Generates complete teleportation logic

-- Generation
"Create a tween that smoothly moves a part to a new position"
→ Generates TweenService implementation

-- Context-Aware
"Handle player damage with a cooldown system"
→ Generates debounce pattern with Humanoid health management
```
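
For the completion-style prompt above, one option is to skip the chat template and feed the raw Luau snippet directly. A minimal sketch reusing the `model` and `tokenizer` from Quick Start; the sampling settings are just reasonable defaults:

```python
# Raw code-completion prompting, as opposed to the chat-style prompt in Basic Usage.
prompt = "local function teleportPlayer(player, position)"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.7)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```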
## Use Cases

### Game Development
- Quick prototyping of Roblox mechanics
- Learning Luau programming patterns
- Code review and suggestions
### Education
- Teaching Roblox development
- Demonstrating best practices
- Interactive coding assistance
### Productivity
- Accelerating development workflows
- Reducing boilerplate code
- Standardizing team coding styles
## Limitations

- Scope: Specialized for Luau only, not general-purpose programming
- Context Window: Fine-tuned with a 1024-token maximum sequence length, so longer prompts fall outside the training distribution (see the check below)
- Recency: Training data may not include the latest Roblox API updates
- Validation: Always test generated code in Roblox Studio
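
A simple way to stay within that training window, reusing the `tokenizer` and chat-formatted `text` from Basic Usage; the 1024 figure comes from the training configuration above:

```python
# Budget check against the 1024-token training sequence length.
MAX_TRAIN_CONTEXT = 1024
prompt_tokens = len(tokenizer(text)["input_ids"])
max_new = max(0, MAX_TRAIN_CONTEXT - prompt_tokens)
print(f"{prompt_tokens} prompt tokens; up to {max_new} new tokens fit the training window")
```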
## Citation

```bibtex
@misc{youngseong_kim_2025,
  author    = {Youngseong Kim},
  title     = {Qwen2.5-Coder-1.5B-roblox (Revision 63e9452)},
  year      = 2025,
  url       = {https://huggingface.co/umjunsik1323/Qwen2.5-Coder-1.5B-roblox},
  doi       = {10.57967/hf/7093},
  publisher = {Hugging Face}
}
```
## License

This LoRA adapter is released under the Apache 2.0 License, maintaining compatibility with the base Qwen2.5-Coder model.
## Acknowledgments

- Qwen Team at Alibaba Cloud for the base model
- Roblox for providing the Luau corpus dataset
- Kaggle for providing the computational resources

Made with ❤️ for the Roblox Developer Community