👋 Hi, everyone!
We are ByteDance Seed Team.

You can get to know us better through the following channels👇

Seed-OSS Open-Source Models


This model card is dedicated to the Seed-OSS-36B-Base-woSyn model.

News

  • [2025/08/20] 🔥 We release Seed-OSS-36B-Base (both with and without synthetic data versions) and Seed-OSS-36B-Instruct.

Introduction

Seed-OSS is a series of open-source large language models developed by ByteDance's Seed Team, designed for strong long-context, reasoning, agentic, and general capabilities, along with versatile, developer-friendly features. Although trained with only 12T tokens, Seed-OSS achieves excellent performance on several popular open benchmarks.

We release this series of models to the open-source community under the Apache-2.0 license.

Seed-OSS is primarily optimized for international (i18n) use cases.

Key Features

  • Flexible Control of Thinking Budget: Users can flexibly adjust the reasoning length as needed. This dynamic control of reasoning length improves inference efficiency in practical application scenarios.
  • Enhanced Reasoning Capability: Specifically optimized for reasoning tasks while maintaining balanced and excellent general capabilities.
  • Agentic Intelligence: Performs exceptionally well on agentic tasks such as tool use and issue resolution.
  • Research-Friendly: Because including synthetic instruction data in pre-training can affect post-training research, we release pre-trained models both with and without instruction data, giving the research community more diverse options.
  • Native Long Context: Natively trained with a context length of up to 512K tokens.

Model Summary

Seed-OSS adopts the popular causal language model architecture with RoPE, GQA attention, RMSNorm and SwiGLU activation.
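For readers unfamiliar with SwiGLU, here is a minimal PyTorch sketch of a gated feed-forward block of this kind. It is illustrative only: the class and dimension names are assumptions, not the model's actual module code.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFeedForward(nn.Module):
    """Illustrative SwiGLU block: down(silu(gate(x)) * up(x))."""
    def __init__(self, hidden_size: int, intermediate_size: int):
        super().__init__()
        self.gate_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.up_proj = nn.Linear(hidden_size, intermediate_size, bias=False)
        self.down_proj = nn.Linear(intermediate_size, hidden_size, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # SwiGLU: the SiLU-activated gate modulates the up projection elementwise
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))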

|  | Seed-OSS-36B |
| --- | --- |
| Parameters | 36B |
| Attention | GQA |
| Activation Function | SwiGLU |
| Number of Layers | 64 |
| Number of QKV Heads | 80 / 8 / 8 |
| Head Size | 128 |
| Hidden Size | 5120 |
| Vocabulary Size | 155K |
| Context Length | 512K |
| RoPE Base Frequency | 1e7 |
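If you want to check these numbers against the released checkpoint, the Hugging Face config exposes most of them. A sketch follows; the attribute names assume the common Llama-style config schema (an assumption for this architecture), and loading requires a transformers build with Seed-OSS support (see Quick Start below).

from transformers import AutoConfig

config = AutoConfig.from_pretrained("ByteDance-Seed/Seed-OSS-36B-Base-woSyn")
# Attribute names assumed to follow the usual Llama-style schema
print(config.num_hidden_layers)        # expected: 64
print(config.num_attention_heads)      # expected: 80
print(config.num_key_value_heads)      # expected: 8
print(config.head_dim)                 # expected: 128
print(config.hidden_size)              # expected: 5120
print(config.vocab_size)               # expected: ~155K
print(config.rope_theta)               # expected: 1e7
print(config.max_position_embeddings)  # expected: 512K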

Evaluation Results

Seed-OSS-36B-Base

Incorporating synthetic instruction data into pretraining leads to improved performance on most benchmarks. We adopt the version augmented with synthetic instruction data (i.e., w/ syn.) as Seed-OSS-36B-Base. We also release Seed-OSS-36B-Base-woSyn trained without such data (i.e., w/o syn.), offering the community a high-performance foundation model unaffected by synthetic instruction data.

| Benchmark | Seed1.6-Base | Qwen3-30B-A3B-Base-2507* | Qwen2.5-32B-Base* | Seed-OSS-36B-Base (w/ syn.) | Seed-OSS-36B-Base-woSyn (w/o syn.) |
| --- | --- | --- | --- | --- | --- |
| **Knowledge** | | | | | |
| MMLU-Pro | 70 | 59.8 | 58.5 (55.1) | **65.1** | 60.4 |
| MMLU | 88.8 | 82.7 | 84 (83.3) | **84.9** | 84.8 |
| TriviaQA | 91 | 76.2 | 76 | **82.1** | 81.9 |
| GPQA-D | 43.4 | **37** | 29.3 | 31.7 | 35.2 |
| SimpleQA | 17.1 | 7.2 | 6.1 | 5.8 | **7.4** |
| **Reasoning** | | | | | |
| BBH | 92.1 | 81.4 | 79.1 (84.5) | **87.7** | 87.2 |
| AGIEval-en | 78 | 66.4 | 65.6 | **70.7** | 70.1 |
| **Math** | | | | | |
| GSM8K | 93.1 | 87 | 87.5 (92.9) | **90.8** | 90.3 |
| MATH | 72.9 | 61.1 | 63.5 (57.7) | **81.7** | 61.3 |
| **Coding** | | | | | |
| MBPP | 83.6 | 78.8 | 77.8 (84.5) | **80.6** | 74.6 |
| HumanEval | 78 | 70.7 | 47.6 (58.5) | **76.8** | 75.6 |
- Bold denotes open-source SOTA.
- "*" indicates that the results in this column are presented in the format of "reproduced_results (reported_results_if_any)".

Seed-OSS-36B-Instruct

| Benchmark | Seed1.6-Thinking-0715 | OAI-OSS-20B* | Qwen3-30B-A3B-Thinking-2507* | Qwen3-32B* | Gemma3-27B | Seed-OSS-36B-Instruct |
| --- | --- | --- | --- | --- | --- | --- |
| **Knowledge** | | | | | | |
| MMLU-Pro | 86.6 | 76.2 | <u>81.9</u> (80.9) | 81.8 | 67.5 | **82.7** |
| MMLU | 90.6 | 81.7 (85.3) | <u>86.9</u> | 86.2 | 76.9 | **87.4** |
| GPQA-D | 80.7 | **72.2** (71.5) | 71.4 (73.4) | 66.7 (68.4) | 42.4 | <u>71.4</u> |
| SuperGPQA | 63.4 | 50.1 | **57.3** (56.8) | 49.3 | - | <u>55.7</u> |
| SimpleQA | 23.7 | 6.7 | **23.6** | 8.6 | <u>10</u> | 9.7 |
| **Math** | | | | | | |
| AIME24 | 90.3 | **92.7** (92.1) | 87.7 | 82.7 (81.4) | - | <u>91.7</u> |
| AIME25 | 86 | **90.3** (91.7) | 81.3 (85) | 73.3 (72.9) | - | <u>84.7</u> |
| BeyondAIME | 60 | **69** | 56 | 29 | - | <u>65</u> |
| **Reasoning** | | | | | | |
| ArcAGI V2 | 50.3 | **41.7** | 37.8 | 14.4 | - | <u>40.6</u> |
| KORBench | 74.8 | **72.3** | 70.2 | 65.4 | - | <u>70.6</u> |
| **Coding** | | | | | | |
| LiveCodeBench v6 (02/2025-05/2025) | 66.8 | <u>63.8</u> | 60.3 (66) | 53.4 | - | **67.4** |
| HLE | 13.9 | **12.7** (10.9) | 8.7 | 6.9 | - | <u>10.1</u> |
| **Instruction Following** | | | | | | |
| IFEval | 86.3 | **92.8** | 88 (88.9) | 88.4 (85) | <u>90.4</u> | 85.8 |
| **Agent** | | | | | | |
| TAU1-Retail | 63 | (54.8) | <u>58.7</u> (67.8) | 40.9 | - | **70.4** |
| TAU1-Airline | 49 | (38) | **47** (48) | 38 | - | <u>46</u> |
| SWE-Bench Verified (OpenHands) | 41.8 | (60.7) | <u>31</u> | 23.4 | - | **56** |
| SWE-Bench Verified (AgentLess 4*10) | 48.4 | - | 33.5 | <u>39.7</u> | - | **47** |
| Multi-SWE-Bench | 17.7 | - | <u>9.5</u> | 7.7 | - | **17** |
| **Multilingualism** | | | | | | |
| MMMLU | 84.3 | 77.4 (75.7) | **79** | **79** (80.6) | - | <u>78.4</u> |
| **Long Context** | | | | | | |
| RULER (128K) | 94.5 | 78.7 | <u>94.5</u> | 77.5 | - | **94.6** |
| **Safety** | | | | | | |
| AIR-Bench | - | - | - | - | - | 75.6 |
- Bold denotes open-source SOTA. Underlined indicates second place among open-source models.
- "*" indicates that the results in this column are presented in the format of "reproduced_results (reported_results_if_any)". Some results have been omitted because the evaluation run failed.
- The results of Gemma3-27B are sourced directly from its technical report.
- Generation configs for Seed-OSS-36B-Instruct: temperature=1.1, top_p=0.95. Specifically, for TAU-Bench, temperature=1.0, top_p=0.7.

We recommend sampling with temperature=1.1 and top_p=0.95.
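For the Transformers Quick Start below, these defaults can be expressed as a standard GenerationConfig. This is a sketch; pass it to generation via model.generate(..., generation_config=generation_config).

from transformers import GenerationConfig

# Recommended sampling settings for Seed-OSS-36B-Instruct
generation_config = GenerationConfig(
    do_sample=True,
    temperature=1.1,
    top_p=0.95,
)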

Thinking Budget

Users can flexibly specify the model's thinking budget. The figure below shows the performance curves across different tasks as the thinking budget varies. For simpler tasks (such as IFEval), the model's chain of thought (CoT) is shorter, and the score exhibits fluctuations as the thinking budget increases. For more challenging tasks (such as AIME and LiveCodeBench), the model's CoT is longer, and the score improves with an increase in the thinking budget.

[Figure: performance curves across tasks as the thinking budget varies]

Here is an example with a thinking budget set to 512: during the reasoning process, the model periodically triggers self-reflection to estimate the consumed and remaining budget, and delivers the final response once the budget is exhausted or the reasoning concludes.

<seed:think>
Got it, let's try to solve this problem step by step. The problem says ... ...
<seed:cot_budget_reflect>I have used 129 tokens, and there are 383 tokens remaining for use.</seed:cot_budget_reflect>
Using the power rule, ... ...
<seed:cot_budget_reflect>I have used 258 tokens, and there are 254 tokens remaining for use.</seed:cot_budget_reflect>
Alternatively, remember that ... ...
<seed:cot_budget_reflect>I have used 393 tokens, and there are 119 tokens remaining for use.</seed:cot_budget_reflect>
Because if ... ...
<seed:cot_budget_reflect>I have exhausted my token budget, and now I will start answering the question.</seed:cot_budget_reflect>
</seed:think>
To solve the problem, we start by using the properties of logarithms to simplify the given equations: (full answer omitted).

If no thinking budget is set (default mode), Seed-OSS will initiate thinking with unlimited length. If a thinking budget is specified, users are advised to prefer values that are integer multiples of 512 (e.g., 512, 1K, 2K, 4K, 8K, or 16K), since the model has been extensively trained on these intervals. The model is instructed to output a direct response when the thinking budget is 0, and we recommend rounding any budget below 512 down to 0.
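A small helper that encodes these rules might look as follows (a hypothetical convenience function, not part of the released tooling):

def normalize_thinking_budget(budget: int) -> int:
    """Snap a requested thinking budget to the values recommended above."""
    if budget < 0:
        return -1  # default mode: unlimited thinking
    if budget < 512:
        return 0   # budgets below 512 fall back to a direct response
    return round(budget / 512) * 512  # nearest trained multiple of 512

print(normalize_thinking_budget(-1))    # -1 (unlimited)
print(normalize_thinking_budget(300))   # 0 (direct response)
print(normalize_thinking_budget(1000))  # 1024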

Quick Start

pip3 install -r requirements.txt
pip install git+ssh://[email protected]/Fazziekey/transformers.git@seed-oss
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name_or_path = "ByteDance-Seed/Seed-OSS-36B-Instruct"

tokenizer = AutoTokenizer.from_pretrained(model_name_or_path)
model = AutoModelForCausalLM.from_pretrained(model_name_or_path, device_map="auto")  # device_map="auto" places the model on available GPUs; consider torch_dtype="bfloat16" to halve memory
messages = [
    {"role": "user", "content": "How to make pasta?"},
]
tokenized_chat = tokenizer.apply_chat_template(
  messages, 
  tokenize=True, 
  add_generation_prompt=True, 
  return_tensors="pt", 
  thinking_budget=512 # control the thinking budget
)

outputs = model.generate(tokenized_chat.to(model.device), max_new_tokens=2048)

output_text = tokenizer.decode(outputs[0])
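The decoded text still contains the reasoning section shown in the Thinking Budget example; a simple way to keep only the final response (assuming the <seed:think>...</seed:think> markup illustrated above):

# Everything after the closing think tag is the final answer
if "</seed:think>" in output_text:
    final_answer = output_text.split("</seed:think>", 1)[1]
else:
    final_answer = output_text
print(final_answer.strip())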

Inference

Download Model

Download the Seed-OSS checkpoint to ./Seed-OSS-36B-Instruct.
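One way to do this is with huggingface_hub (a sketch; any download method works, with the repo and target directory as above):

from huggingface_hub import snapshot_download

# Fetch the full checkpoint into ./Seed-OSS-36B-Instruct
snapshot_download(
    repo_id="ByteDance-Seed/Seed-OSS-36B-Instruct",
    local_dir="./Seed-OSS-36B-Instruct",
)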

Transformers

The generate.py script provides a simple interface for model inference with configurable options.

Basic Usage

cd inference
python3 generate.py --model_path /path/to/model

Key Parameters

| Parameter | Description |
| --- | --- |
| --model_path | Path to the pretrained model directory (required) |
| --prompts | Input prompts (default: sample cooking/code questions) |
| --max_new_tokens | Maximum tokens to generate (default: 4096) |
| --attn_implementation | Attention implementation: flash_attention_2 (default) or eager |
| --load_in_4bit/8bit | Enable 4-bit/8-bit quantization (reduces memory usage) |
| --thinking_budget | Thinking budget in tokens (default: -1 for unlimited budget) |

Quantization Examples

# 8-bit quantization
python3 generate.py --model_path /path/to/model --load_in_8bit True

# 4-bit quantization
python3 generate.py --model_path /path/to/model --load_in_4bit True
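Thinking Budget

The --thinking_budget flag from the parameter table above can be combined with any of these invocations, e.g. to cap reasoning at 512 tokens:

# Cap the thinking budget at 512 tokens (0 = direct response, -1 = unlimited)
python3 generate.py --model_path /path/to/model --thinking_budget 512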

Custom Prompts

python3 generate.py --model_path /path/to/model --prompts "['What is machine learning?', 'Explain quantum computing']"

vLLM

Use vLLM 0.10.0 or higher for inference.

  • First, install the vLLM build with Seed-OSS support:
VLLM_USE_PRECOMPILED=1 VLLM_TEST_USE_PRECOMPILED_NIGHTLY_WHEEL=1 pip install git+ssh://[email protected]/FoolPlayer/vllm.git@seed-oss
  • Start vLLM API server:
python3 -m vllm.entrypoints.openai.api_server \
    --host localhost \
    --port 4321 \
    --enable-auto-tool-choice \
    --tool-call-parser seed_oss \
    --trust-remote-code \
    --model ./Seed-OSS-36B-Instruct \
    --chat-template ./Seed-OSS-36B-Instruct/chat_template.jinja \
    --tensor-parallel-size 8 \
    --dtype bfloat16 \
    --served-model-name seed_oss
  • Test with OpenAI client:

Chat

python3 inference/vllm_chat.py
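inference/vllm_chat.py wraps this; a minimal equivalent with the openai client looks roughly like the following. The host, port, and served model name come from the server command above; forwarding the thinking budget through chat_template_kwargs is an assumption about this fork, not a documented guarantee.

from openai import OpenAI

# Point the client at the local vLLM server started above
client = OpenAI(base_url="http://localhost:4321/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="seed_oss",  # matches --served-model-name
    messages=[{"role": "user", "content": "How to make pasta?"}],
    temperature=1.1,   # recommended sampling settings
    top_p=0.95,
    extra_body={"chat_template_kwargs": {"thinking_budget": 512}},  # assumption: forwarded to the chat template
)
print(response.choices[0].message.content)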

Tool Call

python3 inference/vllm_tool_call.py
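Likewise, a rough sketch of a tool-call request: the weather tool below is hypothetical and purely for illustration, and while the server's --tool-call-parser seed_oss should populate tool_calls, the exact parsed shape is an assumption.

from openai import OpenAI

client = OpenAI(base_url="http://localhost:4321/v1", api_key="EMPTY")

# Hypothetical tool schema in standard OpenAI function-calling format
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather in a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model="seed_oss",
    messages=[{"role": "user", "content": "What's the weather in Paris?"}],
    tools=tools,
)
print(response.choices[0].message.tool_calls)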

Model Card

See MODEL_CARD.

License

This project is licensed under Apache-2.0. See the LICENSE file for details.

Citation

@misc{seed2025seed-oss,
  author={ByteDance Seed Team},
  title={Seed-OSS Open-Source Models},
  year={2025},
  howpublished={\url{https://github.com/ByteDance-Seed/seed-oss}}
}

About ByteDance Seed Team

Founded in 2023, ByteDance Seed Team is dedicated to crafting the industry's most advanced AI foundation models. The team aspires to become a world-class research team and make significant contributions to the advancement of science and society.
