---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
- Qwen/Qwen3-4B-Base
tags:
- konanllm
language:
- ko
- en
---

# Konan-LLM-OND

## **Overview**
|
|
**Konan-LLM-OND**, a large language model from Konan Technology Inc., is based on [Qwen3-4B-Base](https://huggingface.co/Qwen/Qwen3-4B-Base). It has been specifically optimized for the Korean language through vocabulary expansion, continual pre-training, and instruction tuning to enhance both performance and efficiency.

* **Languages**: Primarily Korean, with support for English.
* **Key Features:**
    * **Expanded Korean Vocabulary:** The vocabulary has been extended with additional Korean tokens to improve tokenization efficiency. As a result, Konan-LLM-OND is approximately 30% more token-efficient on Korean input than Qwen3, leading to greater cost-effectiveness and faster processing (see the tokenizer comparison sketch after this list).
    * **Continual Pre-training:** The model underwent continual pre-training on a large-scale Korean corpus with the expanded vocabulary, strengthening its fundamental understanding and text generation capabilities in Korean.
    * **Supervised Fine-Tuning (SFT):** The model was fine-tuned on a high-quality Korean instruction dataset to improve its ability to understand and execute a wide variety of real-world tasks.
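The tokenization gain can be sanity-checked directly with the Hugging Face tokenizers. The snippet below is a minimal illustrative sketch, not part of the official evaluation: it assumes both tokenizers are downloadable from the Hub and uses an arbitrary Korean sample sentence.

```python
from transformers import AutoTokenizer

# Arbitrary Korean sample sentence (illustrative only):
# "Konan Technology develops language models optimized for Korean."
text = "코난테크놀로지는 한국어에 최적화된 언어 모델을 개발합니다."

konan_tok = AutoTokenizer.from_pretrained("konantech/Konan-LLM-OND")
qwen_tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Base")

# Fewer tokens for the same Korean text means lower serving cost and faster generation.
print("Konan-LLM-OND tokens:", len(konan_tok.encode(text)))
print("Qwen3-4B-Base tokens:", len(qwen_tok.encode(text)))
```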
|
|
|
## Benchmark Results
|
|
|
#### **Model Performance (< 5B)**
|
|
|
<table border="1" style="border-collapse: collapse; width: 100%;">
  <thead>
    <tr>
      <th rowspan="2" style="text-align: center; padding: 8px;">Model</th>
      <th rowspan="2" style="text-align: center; padding: 8px;">Model size</th>
      <th colspan="3" style="text-align: center; padding: 8px;">Korean</th>
      <th colspan="3" style="text-align: center; padding: 8px;">English</th>
    </tr>
    <tr>
      <th style="text-align: center; padding: 8px;">KMMLU</th>
      <th style="text-align: center; padding: 8px;">HRM8K</th>
      <th style="text-align: center; padding: 8px;">Ko-IFEval</th>
      <th style="text-align: center; padding: 8px;">MMLU</th>
      <th style="text-align: center; padding: 8px;">GSM8K</th>
      <th style="text-align: center; padding: 8px;">IFEval</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="padding: 8px;"><strong>Konan-LLM-OND</strong></td>
      <td style="text-align: center; padding: 8px;">4.0B</td>
      <td style="text-align: center; padding: 8px;"><strong>50.6</strong></td>
      <td style="text-align: center; padding: 8px;"><strong>46.4</strong></td>
      <td style="text-align: center; padding: 8px;">68.4</td>
      <td style="text-align: center; padding: 8px;"><strong>68.8</strong></td>
      <td style="text-align: center; padding: 8px;"><strong>86.8</strong></td>
      <td style="text-align: center; padding: 8px;">73.3</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>EXAONE-3.5-2.4B-Instruct</strong></td>
      <td style="text-align: center; padding: 8px;">2.4B</td>
      <td style="text-align: center; padding: 8px;">44.2</td>
      <td style="text-align: center; padding: 8px;">31.8</td>
      <td style="text-align: center; padding: 8px;">60.5</td>
      <td style="text-align: center; padding: 8px;">59.1</td>
      <td style="text-align: center; padding: 8px;">81.5</td>
      <td style="text-align: center; padding: 8px;">77.7</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>kanana-1.5-2.1b-instruct-2505</strong></td>
      <td style="text-align: center; padding: 8px;">2.1B</td>
      <td style="text-align: center; padding: 8px;">32.7</td>
      <td style="text-align: center; padding: 8px;">27.2</td>
      <td style="text-align: center; padding: 8px;">56.0</td>
      <td style="text-align: center; padding: 8px;">52.9</td>
      <td style="text-align: center; padding: 8px;">68.8</td>
      <td style="text-align: center; padding: 8px;">64.6</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>Midm-2.0-Mini-Instruct</strong></td>
      <td style="text-align: center; padding: 8px;">2.3B</td>
      <td style="text-align: center; padding: 8px;">42.4</td>
      <td style="text-align: center; padding: 8px;">36.2</td>
      <td style="text-align: center; padding: 8px;">66.8</td>
      <td style="text-align: center; padding: 8px;">57.4</td>
      <td style="text-align: center; padding: 8px;">74.8</td>
      <td style="text-align: center; padding: 8px;">68.3</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>Qwen3-4B (w/o reasoning)</strong></td>
      <td style="text-align: center; padding: 8px;">4.0B</td>
      <td style="text-align: center; padding: 8px;">0.0(*)</td>
      <td style="text-align: center; padding: 8px;">37.5</td>
      <td style="text-align: center; padding: 8px;">68.4</td>
      <td style="text-align: center; padding: 8px;">29.4(*)</td>
      <td style="text-align: center; padding: 8px;">83.9</td>
      <td style="text-align: center; padding: 8px;"><strong>80.0</strong></td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>gemma-3-4b-it</strong></td>
      <td style="text-align: center; padding: 8px;">4.3B</td>
      <td style="text-align: center; padding: 8px;">38.7</td>
      <td style="text-align: center; padding: 8px;">32.7</td>
      <td style="text-align: center; padding: 8px;"><strong>69.2</strong></td>
      <td style="text-align: center; padding: 8px;">59.1</td>
      <td style="text-align: center; padding: 8px;">82.2</td>
      <td style="text-align: center; padding: 8px;">78.3</td>
    </tr>
  </tbody>
</table>
|
|
|
#### **Model Performance (≥ 7B)**
|
|
|
<table border="1" style="border-collapse: collapse; width: 100%;">
  <thead>
    <tr>
      <th rowspan="2" style="text-align: center; padding: 8px;">Model</th>
      <th rowspan="2" style="text-align: center; padding: 8px;">Model size</th>
      <th colspan="3" style="text-align: center; padding: 8px;">Korean</th>
      <th colspan="3" style="text-align: center; padding: 8px;">English</th>
    </tr>
    <tr>
      <th style="text-align: center; padding: 8px;">KMMLU</th>
      <th style="text-align: center; padding: 8px;">HRM8K</th>
      <th style="text-align: center; padding: 8px;">Ko-IFEval</th>
      <th style="text-align: center; padding: 8px;">MMLU</th>
      <th style="text-align: center; padding: 8px;">GSM8K</th>
      <th style="text-align: center; padding: 8px;">IFEval</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td style="padding: 8px;"><strong>Konan-LLM-OND</strong></td>
      <td style="text-align: center; padding: 8px;">4.0B</td>
      <td style="text-align: center; padding: 8px;">50.6</td>
      <td style="text-align: center; padding: 8px;"><strong>46.4</strong></td>
      <td style="text-align: center; padding: 8px;">68.4</td>
      <td style="text-align: center; padding: 8px;">68.8</td>
      <td style="text-align: center; padding: 8px;">86.8</td>
      <td style="text-align: center; padding: 8px;">73.3</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>A.X-4.0-Light</strong></td>
      <td style="text-align: center; padding: 8px;">7.2B</td>
      <td style="text-align: center; padding: 8px;"><strong>55.3</strong></td>
      <td style="text-align: center; padding: 8px;">44.6</td>
      <td style="text-align: center; padding: 8px;">71.5</td>
      <td style="text-align: center; padding: 8px;"><strong>70.6</strong></td>
      <td style="text-align: center; padding: 8px;">87.3</td>
      <td style="text-align: center; padding: 8px;">81.3</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>EXAONE-3.5-7.8B-Instruct</strong></td>
      <td style="text-align: center; padding: 8px;">7.8B</td>
      <td style="text-align: center; padding: 8px;">48.0</td>
      <td style="text-align: center; padding: 8px;">39.3</td>
      <td style="text-align: center; padding: 8px;">66.8</td>
      <td style="text-align: center; padding: 8px;">66.8</td>
      <td style="text-align: center; padding: 8px;"><strong>91.4</strong></td>
      <td style="text-align: center; padding: 8px;">79.9</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>kanana-1.5-8b-instruct-2505</strong></td>
      <td style="text-align: center; padding: 8px;">8.0B</td>
      <td style="text-align: center; padding: 8px;">40.4</td>
      <td style="text-align: center; padding: 8px;">35.5</td>
      <td style="text-align: center; padding: 8px;">71.1</td>
      <td style="text-align: center; padding: 8px;">63.1</td>
      <td style="text-align: center; padding: 8px;">79.3</td>
      <td style="text-align: center; padding: 8px;">76.8</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>Midm-2.0-Base-Instruct</strong></td>
      <td style="text-align: center; padding: 8px;">11.5B</td>
      <td style="text-align: center; padding: 8px;">54.2</td>
      <td style="text-align: center; padding: 8px;">46.0</td>
      <td style="text-align: center; padding: 8px;"><strong>75.0</strong></td>
      <td style="text-align: center; padding: 8px;">70.2</td>
      <td style="text-align: center; padding: 8px;">88.9</td>
      <td style="text-align: center; padding: 8px;">79.7</td>
    </tr>
    <tr>
      <td style="padding: 8px;"><strong>Qwen3-8B (w/o reasoning)</strong></td>
      <td style="text-align: center; padding: 8px;">8.1B</td>
      <td style="text-align: center; padding: 8px;">0.0(*)</td>
      <td style="text-align: center; padding: 8px;">40.0</td>
      <td style="text-align: center; padding: 8px;">70.9</td>
      <td style="text-align: center; padding: 8px;">7.4(*)</td>
      <td style="text-align: center; padding: 8px;">84.0</td>
      <td style="text-align: center; padding: 8px;"><strong>82.8</strong></td>
    </tr>
  </tbody>
</table>
|
|
|
Note:

* The highest scores are shown in bold.
* (*) Qwen3 models often failed to follow the required answer format in the few-shot setting. As a result, their MMLU and KMMLU scores are markedly lower than expected and should be considered unreliable.
|
|
|
## **Benchmark Setup**

All benchmarks were executed in the following standardized environment.

* **Evaluation Framework**: `lm-evaluation-harness v0.4.9`
* **Runtime & Hardware**: All models were served with `vLLM v0.9.1` on a single NVIDIA GPU.
* **Inference Mode**: For every benchmark, we invoked the `chat_completions` API, and scores were computed solely from the generated responses (a minimal invocation sketch follows this list).
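The sketch below illustrates how such a run might be launched. It is not the exact evaluation script: the `vllm serve` command, the endpoint URL and port, the concurrency setting, and the choice of task are assumptions for illustration, and it relies on the `local-chat-completions` backend of `lm-evaluation-harness`.

```python
# Minimal sketch (assumed setup): first serve the model with an OpenAI-compatible API, e.g.
#   vllm serve konantech/Konan-LLM-OND --port 8000
# then score the chat_completions endpoint with lm-evaluation-harness.
import lm_eval

results = lm_eval.simple_evaluate(
    model="local-chat-completions",
    model_args=(
        "model=konantech/Konan-LLM-OND,"
        "base_url=http://localhost:8000/v1/chat/completions,"
        "num_concurrent=8"
    ),
    tasks=["gsm8k"],           # one benchmark shown for brevity
    num_fewshot=5,             # 5-shot, matching the protocol table below
    apply_chat_template=True,  # score from chat-formatted generations
)
print(results["results"]["gsm8k"])
```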
|
|
|
#### **Metric Adjustments**
|
|
|
* MMLU was evaluated following the KMMLU protocol.
* Ko-IFEval was evaluated using the original IFEval protocol, with the dataset sourced from [allganize/IFEval-Ko](https://huggingface.co/datasets/allganize/IFEval-Ko).
|
|
|
#### **Evaluation Protocol**
|
|
|
<table>
  <thead>
    <tr>
      <th>Benchmark</th>
      <th>Scoring Method</th>
      <th>Few-shot</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td><strong>KMMLU</strong></td>
      <td><code>exact_match</code></td>
      <td>5-shot</td>
    </tr>
    <tr>
      <td><strong>HRM8K</strong></td>
      <td>mean of <code>hrm8k_gsm8k</code>, <code>hrm8k_ksm</code>, <code>hrm8k_math</code>, <code>hrm8k_mmmlu</code>, <code>hrm8k_omni_math</code></td>
      <td>5-shot</td>
    </tr>
    <tr>
      <td><strong>Ko-IFEval</strong></td>
      <td>mean of <code>prompt_level_strict_acc</code>, <code>inst_level_strict_acc</code>, <code>prompt_level_loose_acc</code>, <code>inst_level_loose_acc</code></td>
      <td>0-shot</td>
    </tr>
    <tr>
      <td><strong>MMLU</strong></td>
      <td><code>exact_match</code></td>
      <td>5-shot</td>
    </tr>
    <tr>
      <td><strong>GSM8K</strong></td>
      <td><code>exact_match</code> &amp; <code>flexible-extract</code></td>
      <td>5-shot</td>
    </tr>
    <tr>
      <td><strong>IFEval</strong></td>
      <td>mean of <code>prompt_level_strict_acc</code>, <code>inst_level_strict_acc</code>, <code>prompt_level_loose_acc</code>, <code>inst_level_loose_acc</code></td>
      <td>0-shot</td>
    </tr>
  </tbody>
</table>
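For the composite entries above, the reported score is the mean of the listed sub-metrics. The snippet below illustrates this aggregation with placeholder numbers; the values are invented for illustration and are not actual measurements.

```python
# Placeholder sub-scores (invented for illustration; not real results).
ifeval_submetrics = {
    "prompt_level_strict_acc": 0.70,
    "inst_level_strict_acc": 0.78,
    "prompt_level_loose_acc": 0.74,
    "inst_level_loose_acc": 0.81,
}
hrm8k_subtasks = {
    "hrm8k_gsm8k": 0.80,
    "hrm8k_ksm": 0.30,
    "hrm8k_math": 0.50,
    "hrm8k_mmmlu": 0.55,
    "hrm8k_omni_math": 0.20,
}

def mean(scores: dict) -> float:
    """Unweighted mean over the sub-metric values."""
    return sum(scores.values()) / len(scores)

print(f"IFEval (reported): {100 * mean(ifeval_submetrics):.1f}")
print(f"HRM8K  (reported): {100 * mean(hrm8k_subtasks):.1f}")
```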
|
|
|
## Quickstart

**Konan-LLM-OND** is supported in `transformers v4.52.0` and later.

```bash
pip install "transformers>=4.52.0"
```

The code example below shows how to load the model and generate a response to a chat-style prompt.
|
```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "konantech/Konan-LLM-OND"

# Load the model in bfloat16 and place it on the available device(s).
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "대한민국 수도는?"},  # "What is the capital of South Korea?"
]

# Build the chat prompt and move it to the model's device.
input_ids = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=64,
        do_sample=False,
    )

# Decode only the newly generated tokens, skipping the prompt.
len_input_prompt = len(input_ids[0])
response = tokenizer.decode(output[0][len_input_prompt:], skip_special_tokens=True)
print(response)
# 대한민국 수도는 서울입니다. ("The capital of South Korea is Seoul.")
```
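For quick experiments, the same chat-style generation can also be run through the high-level `pipeline` API. This is an optional alternative sketch, not part of the official example; the generation settings mirror the ones above.

```python
import torch
from transformers import pipeline

# High-level alternative to the explicit generate() call above.
pipe = pipeline(
    "text-generation",
    model="konantech/Konan-LLM-OND",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "대한민국 수도는?"},  # "What is the capital of South Korea?"
]

out = pipe(messages, max_new_tokens=64, do_sample=False)
# The pipeline returns the full conversation; the last turn is the model's reply.
print(out[0]["generated_text"][-1]["content"])
```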
|
|
|
## Citation

```
@misc{Konan-LLM-OND-2025,
  author = {Konan Technology Inc.},
  title  = {Konan-LLM-OND},
  year   = {2025},
  url    = {https://huggingface.co/konantech/Konan-LLM-OND}
}
```