# Wraith Coder 7B

Wraith Coder 7B is a signal-dense code generation model fine-tuned from Qwen2.5-Coder-7B-Instruct.
## Quick Start

### Installation

```bash
pip install transformers torch
```
### Basic Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/wraith-coder-7b",
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/wraith-coder-7b")

messages = [
    {"role": "user", "content": "Implement binary search with complexity analysis."}
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# do_sample=True is required for temperature to have any effect
outputs = model.generate(**inputs, max_new_tokens=512, temperature=0.7, do_sample=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
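For memory-constrained GPUs, the model can also be loaded in 4-bit precision with bitsandbytes. A minimal sketch, assuming a CUDA device, `pip install bitsandbytes accelerate`, and common NF4 defaults (the quantization settings below are assumptions, not a published config for this model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# NF4 4-bit quantization: roughly quarters weight memory at a small quality cost
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "vanta-research/wraith-coder-7b",
    quantization_config=quant_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("vanta-research/wraith-coder-7b")
```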
### Ollama Deployment

```bash
# Build from a Modelfile that points at a GGUF export (Q4_K_M recommended)
ollama create wraith-coder:7b -f Modelfile

# Run inference
ollama run wraith-coder:7b "Implement an LRU cache with O(1) operations"
```
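The `ollama create` step expects a Modelfile, which is not shown above. An illustrative sketch, assuming you have already exported the weights to GGUF (the filename is hypothetical) and using the standard ChatML template that Qwen2.5 models follow:

```text
# Modelfile (sketch; the GGUF path is an assumption)
FROM ./wraith-coder-7b-Q4_K_M.gguf

TEMPLATE """<|im_start|>system
{{ .System }}<|im_end|>
<|im_start|>user
{{ .Prompt }}<|im_end|>
<|im_start|>assistant
"""

PARAMETER temperature 0.7
PARAMETER stop "<|im_end|>"
```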
## Key Features

- 62.6% more concise than base Qwen2.5-Coder-7B while maintaining correctness (see the style sketch after this list)
- 60% complexity analysis coverage across diverse coding challenges
- Multiple solution approaches with trade-off discussions
- Systems programming knowledge integrated throughout
- Production-ready for senior engineering applications
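As a style sketch (hand-written for illustration, not captured model output), the kind of concise, complexity-annotated answer the fine-tune targets for the Quick Start prompt looks like:

```python
def binary_search(arr, target):
    """Return the index of target in sorted arr, or -1 if absent.

    Time:  O(log n) -- the search interval halves each iteration.
    Space: O(1)     -- iterative, no recursion stack.
    """
    lo, hi = 0, len(arr) - 1
    while lo <= hi:
        mid = (lo + hi) // 2  # no overflow risk with Python ints
        if arr[mid] == target:
            return mid
        if arr[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1
```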
## Performance Highlights

| Metric | Base Qwen2.5-Coder-7B | Wraith Coder 7B | Relative Improvement |
|---|---|---|---|
| Avg. response length | 2,900 chars | 1,084 chars | 62.6% shorter |
| Responses with complexity analysis | 40% | 60% | +50% |
| Responses with multiple approaches | 35% | 65% | +86% |
| Responses with trade-off discussion | 45% | 75% | +67% |
## Documentation

Full documentation is available in README.md.
## License

Apache 2.0
## Citation

```bibtex
@misc{wraith-coder-7b,
  author    = {Vanta Research},
  title     = {Wraith Coder 7B: Signal-Dense Code Generation through Iterative Fine-Tuning},
  year      = {2025},
  publisher = {Hugging Face}
}
```