Seed-Coder-8B-Instruct-exl2
Original model: Seed-Coder-8B-Instruct by ByteDance Seed
Quants
- 4bpw h6 (main)
- 4.5bpw h6
- 5bpw h6
- 6bpw h6
- 8bpw h8
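As a rough sizing guide (back-of-the-envelope, not a measured figure), weights quantized to b bits per weight occupy about 8×10⁹ × b / 8 bytes for an 8B model: roughly 4 GB at 4bpw and 8 GB at 8bpw, before KV cache and activation overhead.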
Quantization notes
Made with Exllamav2 0.2.9 dev using the default calibration dataset.
These quants can be used on NVIDIA RTX GPUs (Windows) or NVIDIA RTX and AMD ROCm GPUs (Linux) with TabbyAPI or Text-Generation-WebUI.
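For scripted use outside those frontends, the quant can also be loaded directly with the ExLlamaV2 Python API. A minimal sketch, assuming ExLlamaV2 0.2.x and a locally downloaded copy of the quant (the path is a placeholder):

# Minimal ExLlamaV2 loading sketch; the model path is a placeholder.
from exllamav2 import ExLlamaV2, ExLlamaV2Cache, ExLlamaV2Config, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("/path/to/Seed-Coder-8B-Instruct-exl2")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)  # allocated as the weights load
model.load_autosplit(cache)               # split layers across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a quick sort algorithm.", max_new_tokens=256))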
Original model card
Seed-Coder-8B-Instruct
Introduction
We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder contributes to promoting the evolution of open code models through the following highlights:
- Model-centric: Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
- Transparent: We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
- Powerful: Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.
This repo contains the Seed-Coder-8B-Instruct model, which has the following features:
- Type: Causal language model
- Training Stage: Pretraining & Post-training
- Data Source: Public datasets, synthetic data
- Context Length: 32,768 tokens
Model Downloads
Model Name | Context Length | Download | Notes |
---|---|---|---|
Seed-Coder-8B-Base | 32K | 🤗 Model | Pretrained on our model-centric code data. |
👉 Seed-Coder-8B-Instruct | 32K | 🤗 Model | Instruction-tuned for alignment with user intent. |
Seed-Coder-8B-Reasoning | 32K | 🤗 Model | RL trained to boost reasoning capabilities. |
Requirements
You will need to install the latest versions of transformers and accelerate:
pip install -U transformers accelerate
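To verify that the upgrade took effect, a quick check (nothing model-specific, just printing the installed versions):

import transformers, accelerate
# Both libraries expose __version__; any recent release should work.
print(transformers.__version__, accelerate.__version__)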
Quickstart
Here is a simple example demonstrating how to load the model and generate code using the Hugging Face Transformers API:
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Instruct"

# Load the tokenizer and model; device_map="auto" places weights on available
# GPUs automatically, and bfloat16 halves the memory footprint of full precision.
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]

# Render the conversation through the model's chat template and tokenize it.
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

# Generate, then decode only the newly produced tokens (skipping the prompt).
outputs = model.generate(input_ids, max_new_tokens=512)
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
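The call above relies on the model's default generation settings. A variant with explicit sampling parameters; the temperature and top_p values below are illustrative assumptions, not settings recommended by the model card:

# Sampling variant of the generate call; parameter values are illustrative.
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.6,
    top_p=0.9,
)
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)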
Evaluation
Seed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.
Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 – 2502) |
---|---|---|---|---|---|---|
CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 21.9 | 3.4 | 3.6 |
DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 35.5 | 10.1 | 9.6 |
CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 39.6 | 18.9 | 3.0 |
Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 38.1 | 11.5 | 17.5 |
Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 36.6 | 13.5 | 11.5 |
OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 40.3 | 16.9 | 17.1 |
Qwen2.5-Coder-7B-Instruct | 88.4 | 82.0 | 26.7 | 41.0 | 18.2 | 17.3 |
Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |
Seed-Coder-8B-Instruct | 84.8 | 85.2 | 36.2 | 53.3 | 20.5 | 24.7 |
For detailed benchmark performance, please refer to our 📑 Technical Report.
License
This project is licensed under the MIT License. See the LICENSE file for details.