๐ŸŽฎ Qwen2.5 Coder 1.5B Roblox


A specialized code generation model fine-tuned for Roblox Luau programming


โšก Run in Google Colab (Recommended)

Open In Colab

No setup required. Click the badge above to run the chatbot instantly in your browser.

  • ๐ŸŽฏ Pre-configured environment
  • ๐Ÿ”ฅ GPU-accelerated inference
  • ๐Ÿ’ฌ Interactive chat interface
  • โฑ๏ธ Ready in ~3 minutes

๐Ÿ“– Overview

Qwen2.5 Coder 1.5B Roblox is a parameter-efficient LoRA fine-tune designed specifically for Roblox Luau development. Built on top of Qwen2.5-Coder-1.5B-Instruct, the model excels at generating, completing, and understanding Luau code patterns commonly used in Roblox game development.

๐ŸŽฏ What Makes This Special?

  • ๐ŸŽฎ Roblox-Native: Trained exclusively on authentic Luau code from the official Roblox corpus
  • ๐Ÿง  Context-Aware: Understands Roblox-specific APIs, patterns, and best practices

๐Ÿ—๏ธ Model Architecture

  • Base Model: Qwen/Qwen2.5-Coder-1.5B-Instruct
  • Adapter Type: LoRA (Low-Rank Adaptation)
  • LoRA Rank: 8
  • LoRA Alpha: 32
  • Target Modules: q_proj, v_proj
  • Training Hardware: TPU v5e-8 (Multi-core)
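
For reference, these adapter settings correspond to a PEFT LoRA configuration roughly like the sketch below. It is a minimal reconstruction, not the exact training script; the dropout value, bias setting, and task type are assumptions, since only the rank, alpha, and target modules are documented above.

from peft import LoraConfig

# LoRA settings taken from the table above; lora_dropout and bias are
# illustrative assumptions, not documented values.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "v_proj"],
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)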

๐Ÿ“š Training Details

Dataset

  • Source: Roblox/luau_corpus
  • Filtering: Quality-filtered for code length (20–5000 characters) and Luau keyword presence (see the sketch after this list)
  • Split: 90% train / 10% validation
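
A rough sketch of that preprocessing with the Hugging Face datasets library is shown below. The split name, the text column name ("content"), and the keyword list are assumptions; the card only specifies the length range, the keyword check, and the 90/10 split.

from datasets import load_dataset

LUAU_KEYWORDS = ("local", "function", "game", "script", "wait")  # illustrative list

def keep(example):
    code = example["content"]  # hypothetical column name
    return 20 <= len(code) <= 5000 and any(kw in code for kw in LUAU_KEYWORDS)

corpus = load_dataset("Roblox/luau_corpus", split="train")
corpus = corpus.filter(keep)
splits = corpus.train_test_split(test_size=0.1, seed=42)  # 90% train / 10% validation
train_ds, val_ds = splits["train"], splits["test"]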

Training Configuration

{
    "max_length": 1024,
    "batch_size": 4,
    "gradient_accumulation_steps": 32,
    "learning_rate": 3e-5,
    "scheduler": "cosine_annealing",
    "epochs": 1,
    "optimizer": "AdamW"
}
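
With gradient accumulation, the effective batch size is 4 × 32 = 128 sequences per optimizer step. A minimal PyTorch sketch of the optimizer and scheduler this config implies is shown below; the stand-in model and the T_max value are placeholders, not details from the actual run.

import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import CosineAnnealingLR

model = torch.nn.Linear(8, 8)  # stand-in for the LoRA-wrapped model
optimizer = AdamW(model.parameters(), lr=3e-5)
scheduler = CosineAnnealingLR(optimizer, T_max=1000)  # T_max = total optimizer steps (placeholder)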

๐Ÿš€ Quick Start

Installation

pip install transformers peft torch

Basic Usage

from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Load base model
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-1.5B-Instruct",
    torch_dtype="auto",
    device_map="auto"
)

tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-1.5B-Instruct")

# Load LoRA adapter
model = PeftModel.from_pretrained(base_model, "umjunsik1323/Qwen2.5-Coder-1.5B-roblox")

# Generate Luau code
messages = [
    {"role": "system", "content": "You are a Roblox Luau programming expert."},
    {"role": "user", "content": "Create a function to make a part glow"}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# temperature only takes effect when sampling is enabled
outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Advanced: Merge and Export

# Merge LoRA weights into the base model for standalone use
merged_model = model.merge_and_unload()
merged_model.save_pretrained("./qwen-luau-merged")
tokenizer.save_pretrained("./qwen-luau-merged")  # keep the tokenizer alongside the weights
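
Once merged, the checkpoint loads like any ordinary Transformers model, with no peft dependency. A quick sketch, assuming the save path used above:

from transformers import AutoModelForCausalLM, AutoTokenizer

merged = AutoModelForCausalLM.from_pretrained(
    "./qwen-luau-merged",
    torch_dtype="auto",
    device_map="auto"
)
merged_tokenizer = AutoTokenizer.from_pretrained("./qwen-luau-merged")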

๐Ÿ’ก Features

Supported Tasks

  • โœจ Code Completion: Finish partial Luau scripts intelligently
  • ๐Ÿ”ง Function Generation: Create Roblox-specific functions from descriptions
  • ๐Ÿ“ Code Explanation: Understand and document existing Luau code
  • ๐Ÿ› Error Fixing: Suggest corrections for common Luau mistakes
  • ๐ŸŽฏ API Usage: Generate proper Roblox API calls

Example Prompts

-- Completion
"local function teleportPlayer(player, position)"
โ†’ Generates complete teleportation logic

-- Generation
"Create a tween that smoothly moves a part to a new position"
โ†’ Generates TweenService implementation

-- Context-Aware
"Handle player damage with a cooldown system"
โ†’ Generates debounce pattern with Humanoid health management

๐ŸŽฏ Use Cases

Game Development

  • Quick prototyping of Roblox mechanics
  • Learning Luau programming patterns
  • Code review and suggestions

Education

  • Teaching Roblox development
  • Demonstrating best practices
  • Interactive coding assistance

Productivity

  • Accelerating development workflows
  • Reducing boilerplate code
  • Standardizing team coding styles

โš ๏ธ Limitations

  • Scope: Specialized for Luau only, not general-purpose programming
  • Context Window: Fine-tuned on sequences capped at 1024 tokens, so quality may degrade on longer prompts
  • Recency: Training data may not include latest Roblox API updates
  • Validation: Always test generated code in Roblox Studio

๐Ÿ“„ Citation

@misc{youngseong_kim_2025,
    author       = { Youngseong Kim },
    title        = { Qwen2.5-Coder-1.5B-roblox (Revision 63e9452) },
    year         = 2025,
    url          = { https://huggingface.co/umjunsik1323/Qwen2.5-Coder-1.5B-roblox },
    doi          = { 10.57967/hf/7093 },
    publisher    = { Hugging Face }
}

๐Ÿ“œ License

This LoRA adapter is released under the Apache 2.0 License, maintaining compatibility with the base Qwen2.5-Coder model.


๐Ÿค Acknowledgments

  • Qwen Team at Alibaba Cloud for the base model
  • Roblox for providing the Luau corpus dataset
  • Kaggle for providing the computational resources

Made with โค๏ธ for the Roblox Developer Community
