# Fast-Math-Qwen3-14B

Fast-Math-Qwen3-14B is an efficiency-optimized version of Qwen3-14B, developed following the two-stage recipe of Supervised Fine-Tuning (SFT) and Reinforcement Learning with GRPO (Group Relative Policy Optimization) presented in the paper:

*A Practical Two-Stage Recipe for Mathematical LLMs: Maximizing Accuracy with SFT and Efficiency with Reinforcement Learning*

Compared to the base Qwen3-14B, this model achieves approximately 65% faster inference on average with minimal loss in accuracy.

Technical details can be found in our GitHub repository.

Note: This model likely inherits the ability to perform inference in TIR (tool-integrated reasoning) mode from the original model. However, all of our experiments were conducted in CoT (chain-of-thought) mode, and its performance in TIR mode has not been evaluated.

# Evaluation

| Model | Token budget | AIME 2024 Pass@1 (avg. 64) | AIME 2024 mean output tokens | AIME 2025 Pass@1 (avg. 64) | AIME 2025 mean output tokens |
| --- | --- | --- | --- | --- | --- |
| Qwen3-14B | 32000 | 79.3 | 13669 | 69.5 | 16481 |
| | 24000 | 75.9 | 13168 | 65.6 | 15235 |
| | 16000 | 64.5 | 11351 | 50.4 | 12522 |
| | 12000 | 49.7 | 9746 | 36.3 | 10353 |
| | 8000 | 28.4 | 7374 | 19.5 | 7485 |
| Fast-Math-Qwen3-14B | 32000 | 77.6 | 9740 | 66.6 | 12281 |
| | 24000 | 76.5 | 9634 | 65.3 | 11847 |
| | 16000 | 72.6 | 8793 | 60.1 | 10195 |
| | 12000 | 65.1 | 7775 | 49.4 | 8733 |
| | 8000 | 50.7 | 6260 | 36.0 | 6618 |
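
The headline 65% speedup is an average over the authors' benchmark settings; the table shows where the savings come from. As a rough illustration, the sketch below (using only the AIME 2024 figures copied from the table above) computes the reduction in mean output tokens at each token budget. The exact wall-clock speedup will additionally depend on the serving configuration.

```python
# Mean output tokens on AIME 2024, copied from the evaluation table above.
budgets = [32000, 24000, 16000, 12000, 8000]
qwen3_14b = [13669, 13168, 11351, 9746, 7374]
fast_math = [9740, 9634, 8793, 7775, 6260]

for budget, base, fast in zip(budgets, qwen3_14b, fast_math):
    reduction = 100 * (1 - fast / base)
    print(f'budget={budget}: {reduction:.1f}% fewer output tokens')
```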

# Inference

## vLLM

```python
from vllm import LLM, SamplingParams
from transformers import AutoTokenizer

model_path = 'RabotniKuma/Fast-Math-Qwen3-14B'
vllm_engine = LLM(
    model=model_path,
    max_model_len=16000,
    gpu_memory_utilization=0.9,
    trust_remote_code=True,
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

sampling_params = SamplingParams(
    temperature=1.0,
    top_p=0.90,
    min_p=0.05,
    max_tokens=8192,
    # For even faster inference, stop early at the </think> tag and extract
    # the final boxed content from the reasoning trace.
    stop='</think>',
)
messages = [
    {
        'role': 'user',
        'content': (
            'Solve the problem, and put the answer in \\boxed{}. '
            'Sarah is twice as old as her youngest brother. If the difference '
            'between their ages is 15 years, how old is her youngest brother?'
        )
    }
]
prompt = tokenizer.apply_chat_template(
    conversation=messages,
    tokenize=False,
    add_generation_prompt=True
)
response = vllm_engine.generate(prompt, sampling_params=sampling_params)
```
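
As noted in the `stop='</think>'` comment above, stopping at the end of the reasoning trace means the final answer must be recovered from the last `\boxed{...}` in the generated text. A minimal sketch of that extraction follows; the regex is an illustrative assumption and does not handle nested braces inside `\boxed{...}`.

```python
import re

# Continuing from the example above: vLLM returns a list of RequestOutput
# objects; the generated text lives in .outputs[0].text.
text = response[0].outputs[0].text

# Illustrative pattern only: captures the content of \boxed{...} provided
# it contains no nested braces.
matches = re.findall(r'\\boxed\{([^{}]*)\}', text)
answer = matches[-1] if matches else None
print(answer)  # expected '15' for the age problem above
```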