# Qwen-Coder-14B-LoRA-16k - Fine-tuned for Programming

This model is a fine-tuned version of Qwen2.5-Coder-14B, trained on Arma Reforger data with context handling up to 16k tokens. It was trained using Low-Rank Adaptation (LoRA) on a curated dataset of ~43K Reforger programming tasks, modding questions, and code examples.
## Model Description
- Base Model: Qwen2.5-Coder-14B
- Training Method: Fine-tuned with LoRA (rank 16, alpha 32); see the configuration sketch after this list
- Context Length: Extended to support up to 16,384 tokens
- Quantization: Released in F16; lower-bit quantizations can be derived from it
- Training Hardware: Trained on an RTX 5090 GPU with gradient accumulation
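
For reference, a minimal sketch of a LoRA setup matching the stated hyperparameters (rank 16, alpha 32), using the `peft` library. The target modules and dropout value are illustrative assumptions, not confirmed training settings:

```python
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen2.5-Coder-14B")

lora_cfg = LoraConfig(
    r=16,                # LoRA rank, as stated in this card
    lora_alpha=32,       # scaling alpha, as stated in this card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed attention targets
    lora_dropout=0.05,   # assumed; not specified in this card
    task_type="CAUSAL_LM",
)

# Wrap the base model so only the low-rank adapter weights are trainable
model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()
```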
## Capabilities
This fine-tuned model retains the general capabilities of the base Qwen-Coder model while improving, for Arma Reforger modding:
- Long-context code comprehension and generation
- Understanding of Reforger project structures
- Task-specific coding assistance
## Usage
The model is available as follows:
- GGUF: Use with `llama.cpp`, `text-generation-webui`, `LM Studio`, etc. (see the sketch after this list)
- LoRA Adapter Only: Can be applied to the original Qwen2.5-Coder-14B base model (see the adapter sketch after the example below)
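
For the GGUF route, a minimal sketch using the `llama-cpp-python` bindings; the model file name and prompt are placeholders, and `n_ctx` matches the 16,384-token context stated above:

```python
from llama_cpp import Llama

# Load a GGUF export of this model (placeholder file name)
llm = Llama(
    model_path="qwen-coder-14b-lora-16k.Q4_K_M.gguf",
    n_ctx=16384,  # extended context length from this card
)

out = llm("Write an Enforce Script class for an Arma Reforger mod:", max_tokens=256)
print(out["choices"][0]["text"])
```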
### Example (Hugging Face)
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the fine-tuned model and tokenizer (replace with the actual repo name)
model = AutoModelForCausalLM.from_pretrained(
    "your-username/qwen-coder-14b-lora-16k",
    torch_dtype="auto",
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("your-username/qwen-coder-14b-lora-16k")

# Generate text
prompt = "Write a Python function to calculate Fibonacci numbers using memoization:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=500)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
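
For the adapter-only route, a hedged sketch using `peft` to apply the LoRA adapter on top of the base model; the adapter repo name is a placeholder:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original base model, then attach the LoRA adapter weights
base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-Coder-14B",
    torch_dtype="auto",
    device_map="auto",
)
model = PeftModel.from_pretrained(base, "your-username/qwen-coder-14b-lora-16k")  # placeholder adapter repo
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-Coder-14B")
```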