---
license: gemma
language:
- ko
- en
tags:
- korean
- reasoning
- instruction-tuning
- fine-tuning
- gemma3
- sft
---
# gemma-3-12b-it-Ko-Reasoning
> A large-scale Korean reasoning model fine-tuned from **google/gemma-3-12b-it**, designed to excel in logical and multi-hop reasoning tasks in Korean.
---
## Overview
**gemma-3-12b-it-Ko-Reasoning** is a fine-tuned version of [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it), specifically optimized for **logical reasoning in Korean**. This model is part of a broader research initiative to explore:
- The **transition from multilingual reasoning LLMs** to **Korean-specialized reasoning models**
- The enhancement of **non-reasoning Korean language models** into **reasoning-capable variants**
- The development of open-access models that rival proprietary alternatives in complex reasoning tasks
This model was fine-tuned using a large-scale Korean-English instruction dataset containing diverse multi-hop questions, symbolic logic tasks, and human-crafted reasoning steps.
---
## Benchmark Results
> - All benchmarks were measured with the **0-shot CoT (Chain-of-Thought)** method.
> - The **Score** is either the **accuracy (%)** of correct answers or a **1-10** rating from a judge model.
> - **LLM-as-a-judge** benchmarks were evaluated with **GPT-4o (2024-08-01-preview)**.
| **Benchmark** | **Score** |
|------------------|---------------|
| GPQA diamond | 61.3 |
| GSM8K | 59.6 |
| HAERAE | 73.9 |
| KSM | 66.7 |
| LogicKor | 8.56 |
| Math500 | 77.8 |
| MT-Bench | 8.54 |
| MT-Bench(Ko) | 8.80 |
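For reference, the sketch below shows how a 0-shot CoT prompt and a 1-10 LLM-as-a-judge rating could be wired together; the judge prompt and helper names are illustrative assumptions, not the actual evaluation harness used to produce these scores.

```python
from openai import OpenAI  # any GPT-4o-compatible endpoint; assumes OPENAI_API_KEY is set

# 0-shot CoT: the question is asked directly with a generic "think step by step" cue.
COT_SUFFIX = "\n\nLet's think step by step."

# Illustrative 1-10 judge prompt in the MT-Bench / LogicKor style (not the official one).
JUDGE_PROMPT = (
    "You are an impartial judge. Rate the assistant's answer to the question "
    "on a scale of 1 to 10 and reply with the number only.\n\n"
    "Question:\n{question}\n\nAnswer:\n{answer}"
)

def judge_score(question: str, answer: str, judge_model: str = "gpt-4o") -> float:
    """Ask a judge model for a 1-10 rating of a single answer (sketch)."""
    client = OpenAI()
    resp = client.chat.completions.create(
        model=judge_model,
        messages=[{"role": "user", "content": JUDGE_PROMPT.format(question=question, answer=answer)}],
        temperature=0.0,
    )
    return float(resp.choices[0].message.content.strip())
```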
---
## Usage
Install Transformers >= 4.50:
```bash
pip install -U transformers
```
Basic text-only example:
```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration
import torch

model_id = "DimensionSTP/gemma-3-12b-it-Ko-Reasoning"

# Load the model in bfloat16 and shard it across available devices.
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, device_map="auto", torch_dtype=torch.bfloat16
).eval()
processor = AutoProcessor.from_pretrained(model_id)

messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            # "Which is bigger, Seoul or Busan?"
            {"type": "text", "text": "서울과 부산 중 어디가 더 커?"}
        ]
    }
]

# Build the chat prompt and move the tensors to the model's device.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)
input_len = inputs["input_ids"].shape[-1]

# Greedy decoding; only the newly generated tokens are decoded.
with torch.inference_mode():
    generation = model.generate(**inputs, max_new_tokens=8192, do_sample=False)
    generation = generation[0][input_len:]

decoded = processor.decode(generation, skip_special_tokens=True)
print(decoded)
```
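If you want tokens printed as they are generated, a `TextStreamer` can be passed to `generate`; this minimal sketch reuses `model`, `processor`, `inputs`, and `torch` from the example above.

```python
from transformers import TextStreamer

# Stream decoded tokens to stdout as they are generated, skipping the prompt.
streamer = TextStreamer(processor.tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.inference_mode():
    model.generate(**inputs, max_new_tokens=8192, do_sample=False, streamer=streamer)
```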
---
## Base Model: google/gemma-3-12b-it
The base model, [google/gemma-3-12b-it](https://huggingface.co/google/gemma-3-12b-it), is a vision-language model (VLM) developed by Google.
For more technical details, refer to the [Gemma 3 Technical Report](https://arxiv.org/abs/2503.19786).
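Since the base model is a VLM, the fine-tuned checkpoint also accepts image inputs through the same chat template. The snippet below is a sketch of the standard Gemma 3 multimodal message format, reusing `model`, `processor`, and `torch` from the usage example above; the image URL is a placeholder, not a real asset.

```python
# Sketch: image + text chat input (the image URL is a placeholder).
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://example.com/sample.jpg"},
            {"type": "text", "text": "Describe this image in detail."}
        ]
    }
]

inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt"
).to(model.device, dtype=torch.bfloat16)

with torch.inference_mode():
    output = model.generate(**inputs, max_new_tokens=256, do_sample=False)

print(processor.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```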
---
## Model Architecture
| Property | Value |
|------------------|--------------------------------------|
| Architecture | Gemma3ForConditionalGeneration |
| Parameters | 12B |
| Context Length | 128,000 tokens |
| Tokenizer | Gemma3Tokenizer (BPE) |
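These values can be cross-checked against the configuration published with the checkpoint; the short sketch below assumes the current Gemma 3 config layout in transformers (a multimodal config with the language-model settings under `text_config`), which may change across versions.

```python
from transformers import AutoConfig

config = AutoConfig.from_pretrained("DimensionSTP/gemma-3-12b-it-Ko-Reasoning")

# Gemma 3 uses a multimodal config; language-model hyperparameters live in text_config.
print(config.model_type)                           # model family, e.g. "gemma3"
print(config.text_config.num_hidden_layers)        # transformer depth
print(config.text_config.max_position_embeddings)  # maximum context length in tokens
```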
---
## Release Date
**March 2025**
This model was released in March 2025 as part of the **Ko-Reasoning Series**, which focuses on pushing the boundaries of open-source reasoning in Korean using modern LLMs.
---
## Contact
For questions, collaborations, or deployment inquiries, please contact:
- Hugging Face: [https://huggingface.co/DimensionSTP](https://huggingface.co/DimensionSTP)
- Email: [email protected]
---
## Available Checkpoints
- `main`: final stable version from the `last` branch (see the loading sketch below)
- All training artifacts available (tokenizer, config, model weights)
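To load from an explicit branch, the Hub `revision` argument of `from_pretrained` can be used; a minimal sketch with the branch names listed above:

```python
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

repo_id = "DimensionSTP/gemma-3-12b-it-Ko-Reasoning"

# revision selects a branch, tag, or commit on the Hub ("main" is the default).
model = Gemma3ForConditionalGeneration.from_pretrained(repo_id, revision="main", device_map="auto")
processor = AutoProcessor.from_pretrained(repo_id, revision="main")
```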