---
base_model: LGAI-EXAONE/EXAONE-4.0-32B
base_model_relation: finetune
license: other
license_name: exaone
license_link: LICENSE
language:
- en
- ko
- es
tags:
- lg-ai
- exaone
- exaone-4.0
pipeline_tag: text-generation
library_name: transformers
---
📢 EXAONE 4.0 is officially supported by HuggingFace `transformers`! Please check out the guide below.
# EXAONE-4.0.1-32B
*Version 4.0.1 is a patch release that reduces unintended or inappropriate responses.*
## Introduction
We introduce **EXAONE 4.0**, which integrates a **Non-reasoning mode** and **Reasoning mode** to achieve both the excellent usability of [EXAONE 3.5](https://github.com/LG-AI-EXAONE/EXAONE-3.5) and the advanced reasoning abilities of [EXAONE Deep](https://github.com/LG-AI-EXAONE/EXAONE-Deep). To pave the way for the agentic AI era, EXAONE 4.0 incorporates essential features such as agentic tool use, and its multilingual capabilities are extended
to support Spanish in addition to English and Korean.
The EXAONE 4.0 model series consists of two sizes: a mid-size **32B** model optimized for high performance, and a small-size **1.2B** model designed for on-device applications.
The EXAONE 4.0 architecture introduces the following changes compared to previous EXAONE models:
1. **Hybrid Attention**: For the 32B model, we adopt a hybrid attention scheme that combines *local attention (sliding window attention)* with *global attention (full attention)* in a 3:1 ratio. We do not use RoPE (Rotary Positional Embedding) for global attention, for better global context understanding.
2. **QK-Reorder-Norm**: We reorder the LayerNorm position from the traditional Pre-LN scheme, applying LayerNorm directly to the attention and MLP outputs, and we add RMS normalization right after the Q and K projections. This yields better performance on downstream tasks at the cost of additional computation (see the sketch below).
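As a rough illustration of item 2, here is a minimal PyTorch sketch of the QK-Reorder-Norm idea. Module names, dimensions, and the MLP shape are illustrative assumptions, not the released EXAONE implementation:

```python
# Illustrative sketch of QK-Reorder-Norm (not the official EXAONE code):
# LayerNorm is applied to the attention/MLP *outputs* instead of the Pre-LN inputs,
# and RMSNorm is applied right after the Q and K projections.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))
        self.eps = eps

    def forward(self, x):
        return x * torch.rsqrt(x.pow(2).mean(-1, keepdim=True) + self.eps) * self.weight

class QKReorderNormAttention(nn.Module):
    """Self-attention with per-head RMSNorm on Q and K right after projection."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, dim, bias=False)
        self.k_proj = nn.Linear(dim, dim, bias=False)
        self.v_proj = nn.Linear(dim, dim, bias=False)
        self.o_proj = nn.Linear(dim, dim, bias=False)
        self.q_norm = RMSNorm(self.head_dim)  # QK-norm, applied after the projection
        self.k_norm = RMSNorm(self.head_dim)

    def forward(self, x):
        B, T, C = x.shape
        q = self.q_norm(self.q_proj(x).view(B, T, self.n_heads, self.head_dim))
        k = self.k_norm(self.k_proj(x).view(B, T, self.n_heads, self.head_dim))
        v = self.v_proj(x).view(B, T, self.n_heads, self.head_dim)
        q, k, v = (t.transpose(1, 2) for t in (q, k, v))  # (B, H, T, D)
        out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
        return self.o_proj(out.transpose(1, 2).reshape(B, T, C))

class Block(nn.Module):
    """Transformer block with norms on the sublayer outputs (reordered from Pre-LN)."""
    def __init__(self, dim: int, n_heads: int):
        super().__init__()
        self.attn = QKReorderNormAttention(dim, n_heads)
        self.attn_norm = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.SiLU(), nn.Linear(4 * dim, dim))
        self.mlp_norm = nn.LayerNorm(dim)

    def forward(self, x):
        x = x + self.attn_norm(self.attn(x))  # norm on attention output
        x = x + self.mlp_norm(self.mlp(x))    # norm on MLP output
        return x
```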
For more details, please refer to our [technical report](https://arxiv.org/abs/2507.11407), [HuggingFace paper](https://huggingface.co/papers/2507.11407), [blog](https://www.lgresearch.ai/blog/view?seq=576), and [GitHub](https://github.com/LG-AI-EXAONE/EXAONE-4.0).
### Model Configuration
- Number of Parameters (without embeddings): 30.95B
- Number of Layers: 64
- Number of Attention Heads: GQA with 40-heads and 8-KV heads
- Vocab Size: 102,400
- Context Length: 131,072 tokens
## Quickstart
You need the `transformers` library at version `4.54.0` or newer.
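If you want to verify this programmatically, a quick check (using `packaging`, which ships as a `transformers` dependency) could look like this:

```python
# Sanity-check the installed transformers version before loading the model.
import transformers
from packaging.version import Version

assert Version(transformers.__version__) >= Version("4.54.0"), (
    f"transformers {transformers.__version__} is too old; EXAONE 4.0 needs >= 4.54.0"
)
```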
### Non-reasoning mode
For general use, you can use the EXAONE 4.0 models with the following example:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "LGAI-EXAONE/EXAONE-4.0.1-32B"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="bfloat16",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# choose your prompt
prompt = "Explain how wonderful you are"
prompt = "Explica lo increíble que eres"
prompt = "너가 얼마나 대단한지 설명해 봐"

messages = [
    {"role": "user", "content": prompt}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt"
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=False,
)
print(tokenizer.decode(output[0]))
```
### Reasoning mode
The EXAONE 4.0 models have reasoning capabilities for handling complex problems. You can activate reasoning mode by passing the `enable_thinking=True` argument to the tokenizer's `apply_chat_template`, which opens a reasoning block that starts with the `<think>` tag without closing it.
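A minimal sketch of reasoning-mode generation, reusing `model` and `tokenizer` from the example above; the prompt is illustrative, and the sampling values follow the Usage Guideline at the end of this card:

```python
messages = [
    {"role": "user", "content": "Which one is bigger, 3.12 vs 3.9?"}
]
input_ids = tokenizer.apply_chat_template(
    messages,
    tokenize=True,
    add_generation_prompt=True,
    return_tensors="pt",
    enable_thinking=True,  # opens the reasoning (<think>) block
)

output = model.generate(
    input_ids.to(model.device),
    max_new_tokens=128,
    do_sample=True,
    temperature=0.6,
    top_p=0.95,
)
print(tokenizer.decode(output[0]))
```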
## Performance

### 32B Reasoning Mode

| | EXAONE 4.0.1 32B | Phi 4 reasoning-plus | Magistral Small-2506 | Qwen 3 32B | Qwen 3 235B | DeepSeek R1-0528 |
|:---|---:|---:|---:|---:|---:|---:|
| **Model Size** | 32.0B | 14.7B | 23.6B | 32.8B | 235B | 671B |
| **Hybrid Reasoning** | ✅ | | | ✅ | ✅ | |
| **World Knowledge** | | | | | | |
| MMLU-Pro | 81.8 | 76.0 | 73.4 | 80.0 | 83.0 | 85.0 |
| GPQA-Diamond | 74.3 | 68.9 | 68.2 | 68.4 | 71.1 | 81.0 |
| **Math/Coding** | | | | | | |
| AIME 2025 | 84.5 | 78.0 | 62.8 | 72.9 | 81.5 | 87.5 |
| LiveCodeBench v6 | 67.7 | 47.1 | 47.4 | 60.1 | 58.9 | 70.3 |
| **Instruction Following** | | | | | | |
| IFEval | 82.3 | 84.9 | 37.9 | 85.0 | 83.4 | 80.8 |
| **Agentic Tool Use** | | | | | | |
| BFCL-v3 | 60.7 | N/A | 40.4 | 70.3 | 70.8 | 64.7 |
| Tau-Bench (Airline) | 48.0 | N/A | 38.5 | 34.5 | 37.5 | 53.5 |
| Tau-Bench (Retail) | 65.4 | N/A | 10.2 | 55.2 | 58.3 | 63.9 |
| **Multilinguality** | | | | | | |
| KMMLU-Pro | 65.7 | 55.8 | 51.5 | 61.4 | 68.1 | 71.7 |
| KSM | 87.0 | 79.8 | 71.9 | 82.8 | 86.2 | 86.7 |
| MMMLU (ES) | 85.4 | 84.3 | 68.9 | 82.8 | 86.7 | 88.2 |
### 32B Non-Reasoning Mode

| | EXAONE 4.0.1 32B | Phi 4 | Mistral-Small-2506 | Gemma3 27B | Qwen3 32B | Qwen3 235B | Llama-4-Maverick | DeepSeek V3-0324 |
|:---|---:|---:|---:|---:|---:|---:|---:|---:|
| **Model Size** | 32.0B | 14.7B | 24.0B | 27.4B | 32.8B | 235B | 402B | 671B |
| **Hybrid Reasoning** | ✅ | | | | ✅ | ✅ | | |
| **World Knowledge** | | | | | | | | |
| MMLU-Pro | 77.4 | 70.4 | 69.1 | 67.5 | 74.4 | 77.4 | 80.5 | 81.2 |
| GPQA-Diamond | 61.6 | 56.1 | 46.1 | 42.4 | 54.6 | 62.9 | 69.8 | 68.4 |
| **Math/Coding** | | | | | | | | |
| AIME 2025 | 36.3 | 17.8 | 30.2 | 23.8 | 20.2 | 24.7 | 18.0 | 50.0 |
| LiveCodeBench v6 | 43.3 | 27.4 | 26.9 | 29.7 | 28.0 | 31.4 | 32.7 | 44.0 |
| **Instruction Following** | | | | | | | | |
| IFEval | 84.7 | 63.0 | 77.8 | 82.6 | 83.2 | 83.2 | 85.4 | 81.2 |
| **Agentic Tool Use** | | | | | | | | |
| BFCL-v3 | 63.9 | N/A | 57.7 | N/A | 63.0 | 68.0 | 52.9 | 63.8 |
| Tau-Bench (Airline) | 18.5 | N/A | 36.1 | N/A | 16.0 | 27.0 | 38.0 | 40.5 |
| Tau-Bench (Retail) | 52.0 | N/A | 35.5 | N/A | 47.6 | 56.5 | 6.5 | 68.5 |
| **Multilinguality** | | | | | | | | |
| KMMLU-Pro | 59.8 | 44.8 | 51.0 | 50.7 | 58.3 | 64.4 | 68.8 | 67.3 |
| KSM | 56.3 | 29.1 | 35.5 | 36.1 | 41.3 | 46.6 | 40.6 | 63.5 |
| MMMLU (ES) | 80.3 | 81.2 | 78.4 | 78.7 | 82.1 | 83.7 | 86.9 | 86.7 |

## Usage Guideline

> [!IMPORTANT]
> To achieve the expected performance, we recommend using the following configurations:
>
> - For non-reasoning mode, use a lower temperature value such as `temperature<0.6` for better performance.
> - For reasoning mode (using the `<think>` block), use `temperature=0.6` and `top_p=0.95`.