---
license: mit
library_name: exllamav2
base_model:
- ByteDance-Seed/Seed-Coder-8B-Instruct
---
# Seed-Coder-8B-Instruct-exl2
Original model: [Seed-Coder-8B-Instruct](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) by [ByteDance Seed](https://huggingface.co/ByteDance-Seed)
## Quants
- [4bpw h6 (main)](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/main)
- [4.5bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/4.5bpw-h6)
- [5bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/5bpw-h6)
- [6bpw h6](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/6bpw-h6)
- [8bpw h8](https://huggingface.co/cgus/Seed-Coder-8B-Instruct-exl2/tree/8bpw-h8)
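Each quant lives on its own branch, so a specific bitrate can be fetched with `huggingface_hub` by passing the branch name as `revision`. A minimal sketch (the local directory path is illustrative):
```python
# Sketch: download one quant branch with huggingface_hub (pip install huggingface_hub).
# The revision names match the branch links above; local_dir is just an example path.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="cgus/Seed-Coder-8B-Instruct-exl2",
    revision="6bpw-h6",  # branch holding the 6bpw h6 quant
    local_dir="models/Seed-Coder-8B-Instruct-exl2-6bpw",
)
```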
## Quantization notes
Made with Exllamav2 0.2.9 dev using the default calibration dataset.
These quants can be used on NVIDIA RTX GPUs (Windows or Linux) or AMD GPUs with ROCm (Linux), via TabbyAPI or Text-Generation-WebUI.
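For a quick local test outside those frontends, a minimal loading sketch with the exllamav2 Python API (current as of the 0.2.x releases; the model directory is illustrative) might look like:
```python
# Minimal sketch of loading an exl2 quant directly with the exllamav2 Python API.
# The model directory path is illustrative; point it at a downloaded quant branch.
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

config = ExLlamaV2Config("models/Seed-Coder-8B-Instruct-exl2-6bpw")
model = ExLlamaV2(config)
cache = ExLlamaV2Cache(model, lazy=True)   # allocate the KV cache as layers load
model.load_autosplit(cache)                # split weights across available GPUs
tokenizer = ExLlamaV2Tokenizer(config)

generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)
print(generator.generate(prompt="Write a quick sort algorithm.", max_new_tokens=256))
```
TabbyAPI wraps essentially this loading path and serves the model over an OpenAI-compatible endpoint, so the frontends above need only the model directory.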
# Original model card
# Seed-Coder-8B-Instruct
<div align="left" style="line-height: 1;">
<a href="https://bytedance-seed-coder.github.io/" target="_blank" style="margin: 2px;">
<img alt="Homepage" src="https://img.shields.io/badge/Seed--Coder-Homepage-a468fe?color=a468fe&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf" target="_blank" style="margin: 2px;">
<img alt="Technical Report" src="https://img.shields.io/badge/(upcoming)-Technical%20Report-brightgreen?logo=arxiv&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://huggingface.co/ByteDance-Seed" target="_blank" style="margin: 2px;">
<img alt="Hugging Face" src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-ByteDance%20Seed-536af5?color=536af5&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
<a href="https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE" style="margin: 2px;">
<img alt="License" src="https://img.shields.io/badge/License-MIT-f5de53?color=f5de53&logoColor=white" style="display: inline-block; vertical-align: middle;"/>
</a>
</div>
## Introduction
We are thrilled to introduce Seed-Coder, a powerful, transparent, and parameter-efficient family of open-source code models at the 8B scale, featuring base, instruct, and reasoning variants. Seed-Coder promotes the evolution of open code models through the following highlights.
- **Model-centric:** Seed-Coder predominantly leverages LLMs instead of hand-crafted rules for code data filtering, minimizing manual effort in pretraining data construction.
- **Transparent:** We openly share detailed insights into our model-centric data pipeline, including methods for curating GitHub data, commits data, and code-related web data.
- **Powerful:** Seed-Coder achieves state-of-the-art performance among open-source models of comparable size across a diverse range of coding tasks.
<p align="center">
<img width="100%" src="imgs/seed-coder_intro_performance.jpg">
</p>
This repo contains the **Seed-Coder-8B-Instruct** model, which has the following features:
- Type: Causal language model
- Training Stage: Pretraining & Post-training
- Data Source: Public datasets, synthetic data
- Context Length: 32,768 tokens
## Model Downloads
| Model Name | Context Length | Download | Notes |
|---------------------------------------------------------|----------------|------------------------------------|-----------------------|
| Seed-Coder-8B-Base | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Base) | Pretrained on our model-centric code data. |
| 👉 **Seed-Coder-8B-Instruct** | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Instruct) | Instruction-tuned for alignment with user intent. |
| Seed-Coder-8B-Reasoning | 32K | 🤗 [Model](https://huggingface.co/ByteDance-Seed/Seed-Coder-8B-Reasoning) | RL trained to boost reasoning capabilities. |
## Requirements
You will need to install the latest versions of `transformers` and `accelerate`:
```bash
pip install -U transformers accelerate
```
## Quickstart
Here is a simple example demonstrating how to load the model and generate code using the Hugging Face `transformers` API:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_id = "ByteDance-Seed/Seed-Coder-8B-Instruct"

# Load the tokenizer and model (bfloat16 weights, automatic device placement)
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

messages = [
    {"role": "user", "content": "Write a quick sort algorithm."},
]

# Apply the chat template and move the input tensors to the model's device
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    return_tensors="pt",
    add_generation_prompt=True,
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, skipping the prompt
response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
print(response)
```
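The snippet above decodes greedily; `generate` also accepts the usual sampling arguments if you want more varied completions (the values below are illustrative, not tuned recommendations from the model authors):
```python
outputs = model.generate(
    input_ids,
    max_new_tokens=512,
    do_sample=True,    # enable sampling instead of greedy decoding
    temperature=0.6,   # illustrative values, not official recommendations
    top_p=0.9,
)
```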
## Evaluation
Seed-Coder-8B-Instruct has been evaluated on a wide range of coding tasks, including code generation, code reasoning, code editing, and software engineering, achieving state-of-the-art performance among ~8B open-source models.
| Model | HumanEval | MBPP | MHPP | BigCodeBench (Full) | BigCodeBench (Hard) | LiveCodeBench (2410 – 2502) |
|:-----------------------------:|:---------:|:----:|:----:|:-------------------:|:-------------------:|:-------------------------:|
| CodeLlama-7B-Instruct | 40.9 | 54.0 | 6.7 | 21.9 | 3.4 | 3.6 |
| DeepSeek-Coder-6.7B-Instruct | 74.4 | 74.9 | 20.0 | 35.5 | 10.1 | 9.6 |
| CodeQwen1.5-7B-Chat | 83.5 | 77.7 | 17.6 | 39.6 | 18.9 | 3.0 |
| Yi-Coder-9B-Chat | 82.3 | 82.0 | 26.7 | 38.1 | 11.5 | 17.5 |
| Llama-3.1-8B-Instruct | 68.3 | 70.1 | 17.1 | 36.6 | 13.5 | 11.5 |
| OpenCoder-8B-Instruct | 83.5 | 79.1 | 30.5 | 40.3 | 16.9 | 17.1 |
| Qwen2.5-Coder-7B-Instruct | 88.4 | 82.0 | 26.7 | 41.0 | 18.2 | 17.3 |
| Qwen3-8B | 84.8 | 77.0 | 32.8 | 51.7 | 23.0 | 23.5 |
| Seed-Coder-8B-Instruct | 84.8 | 85.2 | 36.2 | 53.3 | 20.5 | 24.7 |
For detailed benchmark performance, please refer to our [📑 Technical Report](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/Seed-Coder.pdf).
## License
This project is licensed under the MIT License. See the [LICENSE file](https://github.com/ByteDance-Seed/Seed-Coder/blob/master/LICENSE) for details.