Quantization made by Richard Erkhov.
Linkbricks-Horizon-AI-Korean-Pro-12B - GGUF
- Model creator: https://huggingface.co/Saxo/
- Original model: https://huggingface.co/Saxo/Linkbricks-Horizon-AI-Korean-Pro-12B/
Original model description:
library_name: transformers
license: apache-2.0
base_model:
- Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B
datasets:
- Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
- Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
- Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
- Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
- Saxo/ko-news-corpus-1
- Saxo/ko-news-corpus-2
- Saxo/ko-news-corpus-3
- Saxo/ko-news-corpus-4
- Saxo/ko-news-corpus-5
- Saxo/ko-news-corpus-6
- Saxo/ko-news-corpus-7
- Saxo/ko-news-corpus-8
- Saxo/ko-news-corpus-9
- maywell/ko_Ultrafeedback_binarized
- youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
- lilacai/glaive-function-calling-v2-sharegpt
- kuotient/gsm8k-ko
language:
- ko
- en
- jp
- cn
pipeline_tag: text-generation
Model Card for Model ID

Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, developed this model from Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B, a Korean language model built on the mistralai/Mistral-Nemo-Instruct-2407 base model by training about 20% of its parameters on 8x H100-80G GPUs through Korean CPT (Continued Pretraining) -> SFT -> DPO.
It was then further trained on additional Korean, English, Japanese, and Chinese data, cross-lingual Korean-Chinese-English-Japanese training data across a variety of tasks, and math and logic-judgment data, so that it can handle cross-lingual augmentation and complex logic problems.
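For reference, here is a minimal sketch of trying the original (non-quantized) model with the transformers text-generation pipeline, matching the card's pipeline_tag; the prompt, device placement, and sampling settings are illustrative assumptions, not values from the model card:

```python
# Minimal sketch: load the original model via the transformers
# text-generation pipeline. Prompt and sampling settings are assumptions.
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Saxo/Linkbricks-Horizon-AI-Korean-Pro-12B",
    device_map="auto",  # requires accelerate; assumed for GPU placement
)

# Mistral-Nemo-Instruct derivatives ship a chat template, so a chat-style
# message list can be passed straight to the pipeline.
messages = [{"role": "user", "content": "Summarize this article in Korean: ..."}]
out = pipe(messages, max_new_tokens=256, do_sample=True, temperature=0.7)
print(out[0]["generated_text"][-1]["content"])  # the assistant's reply
```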
- Reinforced Korean, English, Japanese, and Chinese language processing
- Tokenizer uses the base model's vocabulary without word expansion
- Enhanced for high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, decision making, and complex inference
- 128k context window
- Trained with DeepSpeed Stage 3, rsLoRA, and BAdam layer mode (a configuration sketch follows below)
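The authors' exact training configuration is not published here; as a rough sketch of what "rsLoRA on roughly 20% of parameters" could look like with the peft library, one might write the following. The rank, alpha, and target modules are assumptions, and BAdam plus DeepSpeed Stage 3 would be configured separately in the training framework:

```python
# Hypothetical rsLoRA adapter setup with peft, matching the card's mention
# of rsLoRA. All hyperparameters are assumptions, not the authors' values.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-Nemo-Instruct-2407")

lora_cfg = LoraConfig(
    r=64,                 # assumed rank
    lora_alpha=64,        # assumed scaling factor
    use_rslora=True,      # rank-stabilized LoRA, as named on the card
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed targets
    task_type="CAUSAL_LM",
)

model = get_peft_model(base, lora_cfg)
model.print_trainable_parameters()  # reports the trainable-parameter fraction
```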
www.linkbricks.com, www.linkbricks.vc
Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.
Available quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
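To run one of these GGUF files locally, a minimal llama-cpp-python sketch follows; the quant filename below is an assumption, so substitute whichever file you actually download from this repo:

```python
# Minimal sketch: run a downloaded GGUF quant with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="Linkbricks-Horizon-AI-Korean-Pro-12B.Q4_K_M.gguf",  # assumed name
    n_ctx=8192,       # the model supports up to 128k; smaller contexts save RAM
    n_gpu_layers=-1,  # offload all layers to GPU when one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Introduce yourself in Korean."}]
)
print(out["choices"][0]["message"]["content"])
```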