## Model
base_model: beomi/OPEN-SOLAR-KO-10.7B
## Dataset
- Collected publicly available data
- Deduplicated using the algorithm from *Deduplicating Training Data Makes Language Models Better* (see the sketch below)
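The cited paper's near-duplicate filter (NearDup) is based on MinHash. Below is a minimal illustrative sketch of that idea, not the pipeline actually used for this model; the function names, shingle size, number of hash functions, and threshold are all assumptions, and a real pipeline would use locality-sensitive hashing rather than the O(n²) pairwise loop shown here.

```python
import hashlib

def shingles(text, n=5):
    # Character n-gram shingles of a document (n=5 is an arbitrary choice)
    return {text[i:i + n] for i in range(max(len(text) - n + 1, 1))}

def minhash_signature(doc, num_hashes=64):
    # For each seeded hash function, keep the minimum hash over the shingles
    sig = []
    for seed in range(num_hashes):
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(f"{seed}:{s}".encode(), digest_size=8).digest(),
                "big",
            )
            for s in shingles(doc)
        ))
    return tuple(sig)

def near_duplicates(docs, threshold=0.8):
    # Signature agreement estimates Jaccard similarity; flag pairs above threshold.
    # Illustrative only: real dedup at corpus scale uses LSH, not all pairs.
    sigs = [minhash_signature(d) for d in docs]
    pairs = []
    for i in range(len(docs)):
        for j in range(i + 1, len(docs)):
            agree = sum(a == b for a, b in zip(sigs[i], sigs[j])) / len(sigs[i])
            if agree >= threshold:
                pairs.append((i, j))
    return pairs
```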
## Code
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_name = "jingyeom/SOLAR_KO_1.3_deup"

# Load the model weights and the matching tokenizer from the Hugging Face Hub
model = AutoModelForCausalLM.from_pretrained(
    model_name,
)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
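As a quick check that the model loads and generates, a minimal example might look like this; the prompt and generation settings are illustrative assumptions, not settings published with the model.

```python
# Illustrative only: prompt and max_new_tokens are assumptions
inputs = tokenizer("๋Œ€ํ•œ๋ฏผ๊ตญ์˜ ์ˆ˜๋„๋Š”", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```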
## Benchmark
Ko-LLM-Leaderboard (ranked 11th on the leaderboard as of 2024-01-29)
| Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|
| 53.63 | 52.65 | 60.92 | 50.9 | 45.14 | 58.56 |