---
license: mit
license_link: >-
  https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Qwen-14B/blob/main/LICENSE
base_model: deepseek-ai/DeepSeek-R1-Distill-Qwen-14B
finetuned_by: boatbomber
pipeline_tag: text-generation
tags:
- chat
- reasoning
- roblox
- luau
language:
- en
datasets:
- boatbomber/roblox-info-dump
- boatbomber/the-luau-stack
---
# R1-Distill-Qwen-14B-Roblox-Luau
A fine-tune of deepseek-ai/DeepSeek-R1-Distill-Qwen-14B, trained on boatbomber/roblox-info-dump and boatbomber/the-luau-stack for Roblox domain knowledge.
## Recommended inference settings

| Parameter | Value | Notes |
|---|---|---|
| System Prompt | You are an expert Roblox developer and Luau software engineer. | Model was fine-tuned with this prompt. |
| temperature | 0.5-0.7 | The underlying R1 Distill uses this range. I've found best results with 0.55. |
| top_p | 0.95 | The underlying R1 Distill uses this. |
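
For reference, here's a minimal sketch of applying these settings with llama-cpp-python against one of the GGUF quants listed below. The model filename is an assumption; substitute whichever quant you actually downloaded.

```python
# Minimal sketch: recommended inference settings via llama-cpp-python.
# The GGUF filename is an assumption -- use the quant file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="R1-Distill-Qwen-14B-Roblox-Luau-Q4_K_M.gguf",  # assumed filename
    n_ctx=4096,       # context window; raise or lower to fit your hardware
    n_gpu_layers=-1,  # offload all layers to GPU if one is available
)

SYSTEM_PROMPT = "You are an expert Roblox developer and Luau software engineer."

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Write a Luau function that debounces a Touched event."},
    ],
    temperature=0.55,  # within the recommended 0.5-0.7 range
    top_p=0.95,
)
print(response["choices"][0]["message"]["content"])
```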
Quantization was done using Unsloth.

## Available quants

| Quant | Size | Notes |
|---|---|---|
| F16 | 29.55GB | Retains 100% accuracy. Slow and memory hungry. |
| Q8_0 | 15.70GB | High resource use, but generally acceptable. Use when accuracy is crucial. |
| Q6_K | 12.12GB | Uses Q6_K for all tensors. Good for high-end GPUs. |
| Q5_K_M | 10.51GB | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q5_K. |
| Q4_K_M | 8.99GB | Recommended. Uses Q6_K for half of the attention.wv and feed_forward.w2 tensors, else Q4_K. |
| Q3_K_M | 7.34GB | Uses Q4_K for the attention.wv, attention.wo, and feed_forward.w2 tensors, else Q3_K. Quality is noticeably degraded. |
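
To fetch one of these quants programmatically, a sketch using huggingface_hub is below. The repo id and filename are assumptions based on this model's name; check the repository's file listing for the exact names.

```python
# Minimal sketch: download a GGUF quant with huggingface_hub.
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="boatbomber/R1-Distill-Qwen-14B-Roblox-Luau",    # assumed repo id
    filename="R1-Distill-Qwen-14B-Roblox-Luau-Q5_K_M.gguf",  # assumed filename
)
print(model_path)  # local path to the cached GGUF file
```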