---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
base_model:
  - Qwen/Qwen3-4B-Base
tags:
  - konanllm
language:
  - ko
  - en
---

# Konan-LLM-OND

## Overview

Konan-LLM-OND, a large language model from Konan Technology Inc., is based on Qwen3-4B-Base. It has been specifically optimized for the Korean language through vocabulary expansion, continual pre-training, and instruction tuning to enhance performance and efficiency.

- **Languages:** Primarily Korean, with support for English.
- **Key Features:**
  - **Expanded Korean Vocabulary:** The model's vocabulary was expanded with additional Korean tokens to improve tokenization efficiency. As a result, Konan-LLM-OND is approximately 30% more token-efficient on Korean input than Qwen3, which translates into lower cost and faster processing (see the tokenizer comparison sketch after this list).
  - **Continual Pre-training:** The model underwent continual pre-training on a large-scale Korean corpus with the expanded vocabulary, strengthening its fundamental understanding and text-generation capabilities in Korean.
  - **Supervised Fine-Tuning (SFT):** The model was fine-tuned on a high-quality Korean instruction dataset to improve its ability to understand and execute a wide variety of real-world tasks.
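
As a rough illustration of the vocabulary effect, the minimal sketch below counts the tokens produced by the Konan-LLM-OND and Qwen3-4B-Base tokenizers on a short Korean sentence. The sample sentence is our own and the exact reduction varies by text; the ~30% figure above is the reported average for Korean input.

```python
from transformers import AutoTokenizer

# Load both tokenizers from the Hugging Face Hub.
konan_tok = AutoTokenizer.from_pretrained("konantech/Konan-LLM-OND")
qwen_tok = AutoTokenizer.from_pretrained("Qwen/Qwen3-4B-Base")

# Sample Korean sentence (illustration only; savings vary by text).
text = "대한민국의 수도는 서울입니다. 한국어 토큰 효율을 간단히 비교해 봅니다."

konan_len = len(konan_tok(text)["input_ids"])
qwen_len = len(qwen_tok(text)["input_ids"])

print(f"Konan-LLM-OND tokens: {konan_len}")
print(f"Qwen3-4B-Base tokens: {qwen_len}")
print(f"Token reduction: {1 - konan_len / qwen_len:.1%}")
```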

## Benchmark Results

### Model Performance (< 5B)

| Model | Model size | KMMLU (Korean) | HRM8K (Korean) | Ko-IFEval (Korean) | MMLU (English) | GSM8K (English) | IFEval (English) |
|---|---|---|---|---|---|---|---|
| Konan-LLM-OND | 4.0B | **50.6** | **46.4** | 68.4 | **68.8** | **86.8** | 73.3 |
| EXAONE-3.5-2.4B-Instruct | 2.4B | 44.2 | 31.8 | 60.5 | 59.1 | 81.5 | 77.7 |
| kanana-1.5-2.1b-instruct-2505 | 2.1B | 32.7 | 27.2 | 56.0 | 52.9 | 68.8 | 64.6 |
| Midm-2.0-Mini-Instruct | 2.3B | 42.4 | 36.2 | 66.8 | 57.4 | 74.8 | 68.3 |
| Qwen3-4B (w/o reasoning) | 4.0B | 0.0(*) | 37.5 | 68.4 | 29.4(*) | 83.9 | 80.0 |
| gemma-3-4b-it | 4.3B | 38.7 | 32.7 | **69.2** | 59.1 | 82.2 | 78.3 |

### Model Performance (≥ 7B)

| Model | Model size | KMMLU (Korean) | HRM8K (Korean) | Ko-IFEval (Korean) | MMLU (English) | GSM8K (English) | IFEval (English) |
|---|---|---|---|---|---|---|---|
| Konan-LLM-OND | 4.0B | 50.6 | **46.4** | 68.4 | 68.8 | 86.8 | 73.3 |
| A.X-4.0-Light | 7.2B | **55.3** | 44.6 | 71.5 | **70.6** | 87.3 | 81.3 |
| EXAONE-3.5-7.8B-Instruct | 7.8B | 48.0 | 39.3 | 66.8 | 66.8 | **91.4** | 79.9 |
| kanana-1.5-8b-instruct-2505 | 8.0B | 40.4 | 35.5 | 71.1 | 63.1 | 79.3 | 76.8 |
| Midm-2.0-Base-Instruct | 11.5B | 54.2 | 46.0 | **75.0** | 70.2 | 88.9 | 79.7 |
| Qwen3-8B (w/o reasoning) | 8.1B | 0.0(*) | 40.0 | 70.9 | 7.4(*) | 84.0 | **82.8** |

Note:

- The highest scores are shown in **bold**.
- (*) Qwen3 models often failed to follow the required answer format in the few-shot setting. As a result, their MMLU and KMMLU scores are markedly lower than expected and should be considered unreliable.

### Benchmark Setup

All benchmarks were executed in the following standardized environment:

- **Evaluation Framework:** lm-evaluation-harness v0.4.9
- **Runtime & Hardware:** All models were served with vLLM v0.9.1 on a single NVIDIA GPU.
- **Inference Mode:** For every benchmark, we invoked the chat_completions API, and scores were computed solely from the generated responses. A minimal serving and request sketch is shown after this list.
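
As an illustration of this inference mode (not the evaluation harness itself), the sketch below assumes the model is being served by vLLM's OpenAI-compatible server, e.g. `vllm serve konantech/Konan-LLM-OND --port 8000`, and sends a single chat-completions request. The port, API key, and sampling settings are placeholder assumptions.

```python
# Assumes the model is already served, e.g.:
#   vllm serve konantech/Konan-LLM-OND --port 8000
from openai import OpenAI

# vLLM exposes an OpenAI-compatible endpoint; the API key is a placeholder.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="konantech/Konan-LLM-OND",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "대한민국 수도는?"},  # "What is the capital of South Korea?"
    ],
    temperature=0.0,
    max_tokens=64,
)
print(response.choices[0].message.content)
```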

### Metric Adjustments

- MMLU was evaluated following the KMMLU protocol.
- Ko-IFEval was evaluated using the original IFEval protocol, with the dataset sourced from allganize/IFEval-Ko.

### Evaluation Protocol

| Benchmark | Scoring Method | Few-shot |
|---|---|---|
| KMMLU | exact_match | 5-shot |
| HRM8K | mean of hrm8k_gsm8k, hrm8k_ksm, hrm8k_math, hrm8k_mmmlu, hrm8k_omni_math | 5-shot |
| Ko-IFEval | mean of prompt_level_strict_acc, inst_level_strict_acc, prompt_level_loose_acc, inst_level_loose_acc | 0-shot |
| MMLU | exact_match | 5-shot |
| GSM8K | exact_match & flexible-extract | 5-shot |
| IFEval | mean of prompt_level_strict_acc, inst_level_strict_acc, prompt_level_loose_acc, inst_level_loose_acc | 0-shot |
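
Where the scoring method is a mean of sub-metrics, the reported number is a plain unweighted average. The sketch below illustrates that aggregation with hypothetical sub-scores; the values are placeholders, not actual results.

```python
from statistics import mean

# Hypothetical sub-scores (illustration only, not actual benchmark results).
ifeval_submetrics = {
    "prompt_level_strict_acc": 70.0,
    "inst_level_strict_acc": 76.0,
    "prompt_level_loose_acc": 72.0,
    "inst_level_loose_acc": 78.0,
}
hrm8k_subtasks = {
    "hrm8k_gsm8k": 80.0,
    "hrm8k_ksm": 30.0,
    "hrm8k_math": 50.0,
    "hrm8k_mmmlu": 55.0,
    "hrm8k_omni_math": 20.0,
}

# The reported score is the unweighted mean of the listed sub-metrics / sub-tasks.
print(f"IFEval / Ko-IFEval: {mean(ifeval_submetrics.values()):.1f}")
print(f"HRM8K: {mean(hrm8k_subtasks.values()):.1f}")
```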

## Quickstart

Konan-LLM-OND is supported in transformers v4.52.0 and later.

```bash
pip install "transformers>=4.52.0"
```

The example below shows how to load the model and generate a response to a chat prompt.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "konantech/Konan-LLM-OND"
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
model.eval()
tokenizer = AutoTokenizer.from_pretrained(model_name)

messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "대한민국 수도는?"}  # "What is the capital of South Korea?"
]

# Apply the chat template and move the prompt tokens to the model's device.
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(
        input_ids,
        max_new_tokens=64,
        do_sample=False,  # greedy decoding
    )

# Decode only the newly generated tokens, skipping the prompt.
len_input_prompt = len(input_ids[0])
response = tokenizer.decode(output[0][len_input_prompt:], skip_special_tokens=True)
print(response)
# 대한민국 수도는 서울입니다. ("The capital of South Korea is Seoul.")
```

## Citation

```bibtex
@misc{Konan-LLM-OND-2025,
  author = {Konan Technology Inc.},
  title  = {Konan-LLM-OND},
  year   = {2025},
  url    = {https://huggingface.co/konantech/Konan-LLM-OND}
}
```