
Quantization made by Richard Erkhov.


Linkbricks-Horizon-AI-Korean-Pro-12B - GGUF

Original model description:

library_name: transformers
license: apache-2.0
base_model:
  - Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B
datasets:
  - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
  - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
  - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
  - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
  - Saxo/ko-news-corpus-1
  - Saxo/ko-news-corpus-2
  - Saxo/ko-news-corpus-3
  - Saxo/ko-news-corpus-4
  - Saxo/ko-news-corpus-5
  - Saxo/ko-news-corpus-6
  - Saxo/ko-news-corpus-7
  - Saxo/ko-news-corpus-8
  - Saxo/ko-news-corpus-9
  - maywell/ko_Ultrafeedback_binarized
  - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
  - lilacai/glaive-function-calling-v2-sharegpt
  - kuotient/gsm8k-ko
language:
  - ko
  - en
  - jp
  - cn
pipeline_tag: text-generation

Model Card for Model ID

AI ์™€ ๋น…๋ฐ์ดํ„ฐ ๋ถ„์„ ์ „๋ฌธ ๊ธฐ์—…์ธ Linkbricks์˜ ๋ฐ์ดํ„ฐ์‚ฌ์ด์–ธํ‹ฐ์ŠคํŠธ์ธ ์ง€์œค์„ฑ(Saxo) ์ด์‚ฌ๊ฐ€
mistralai/Mistral-Nemo-Instruct-2407 ๋ฒ ์ด์Šค๋ชจ๋ธ์„ ์‚ฌ์šฉํ•ด์„œ H100-80G 8๊ฐœ๋ฅผ ํ†ตํ•ด ์•ฝ 20%์ •๋„์˜ ํŒŒ๋ผ๋ฏธํ„ฐ๋ฅผ ํ•œ๊ตญ์–ด CPT(Continued-Pretraining)->SFT->DPO ํ•œ
ํ•œ๊ธ€ ์–ธ์–ด ๋ชจ๋ธ์ธ Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B์„ ์ถ”๊ฐ€์ ์ธ ํ•œ๊ธ€, ์˜์–ด, ์ผ์–ด, ์ค‘๊ตญ์–ด ๊ต์ฐจ ๋ฐ์ดํ„ฐ๋“ค์„ ํ™œ์šฉํ•ด์„œ ๋‹ค์–‘ํ•œ ํ…Œ์Šคํฌ๋ณ„ ํ•œ๊ตญ์–ด-์ค‘๊ตญ์–ด-์˜์–ด-์ผ๋ณธ์–ด ๊ต์ฐจ ํ•™์Šต ๋ฐ์ดํ„ฐ์™€ ์ˆ˜ํ•™ ๋ฐ
๋…ผ๋ฆฌํŒ๋‹จ ๋ฐ์ดํ„ฐ๋ฅผ ํ†ตํ•˜์—ฌ ํ•œ์ค‘์ผ์˜ ์–ธ์–ด ๊ต์ฐจ ์ฆ๊ฐ• ์ฒ˜๋ฆฌ์™€ ๋ณต์žกํ•œ ๋…ผ๋ฆฌ ๋ฌธ์ œ ์—ญ์‹œ ๋Œ€์‘ ๊ฐ€๋Šฅํ•˜๋„๋ก ํ›ˆ๋ จํ•œ ๋ชจ๋ธ์ด๋‹ค.
-ํ•œ๊ธ€, ์˜์–ด, ์ค‘๊ตญ์–ด, ์ผ๋ณธ์–ด ๊ต์ฐจ ์ฒ˜๋ฆฌ ๊ฐ•ํ™” ๋ฒ„์ „
-ํ† ํฌ๋‚˜์ด์ €๋Š” ๋‹จ์–ด ํ™•์žฅ ์—†์ด ๋ฒ ์ด์Šค ๋ชจ๋ธ ๊ทธ๋Œ€๋กœ ์‚ฌ์šฉ
-๊ณ ๊ฐ ๋ฆฌ๋ทฐ๋‚˜ ๋ณต์žกํ•œ ํ•œ๊ธ€ ์ถ”๋ก  ๋ฐ ์†Œ์…œ ํฌ์ŠคํŒ… ๊ณ ์ฐจ์› ๋ถ„์„ ๋ฐ ์ฝ”๋”ฉ๊ณผ ์ž‘๋ฌธ, ์ˆ˜ํ•™, ๋…ผ๋ฆฌํŒ๋‹จ ๋“ฑ์ด ๊ฐ•ํ™”๋œ ๋ชจ๋ธ
-128k-Context Window


Dr. Yunsung Ji (Saxo), a data scientist at Linkbricks, a company specializing in AI and big data analytics, has developed a Korean language model
using the Saxo/Linkbricks-Horizon-AI-Korean-Advanced-12B, a Korean language model that uses the mistralai/Mistral-Nemo-Instruct-2407 basemodel to train about 20% of the parameters through 8 H100-80Gs
using Korean CPT (Continued-Pretraining)->SFT->DPO. It is a model trained to handle cross-lingual augmentation and complex logic problems by utilizing additional Korean, Engliash, Japanese and Chinese Language data, cross-training data of Korean, Chinese, English, and Japanese by various tasks, and math and logic judgment data.

Translated with DeepL.com (free version)
-Reinforced Korean, Engliash, Japanese, Chinese Language processing
-Tokenizer uses the base model without word expansion
-Models enhanced with high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, decision making and complex inference
-128k-Context Window
-Deepspeed Stage=3, use rslora and BAdam Layer Mode
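The card notes that rsLoRA (rank-stabilized LoRA) was used during fine-tuning. As a rough illustration (a minimal sketch of the scaling rule, not the authors' training code), rsLoRA replaces the standard LoRA scaling factor alpha/r with alpha/sqrt(r), so the adapter update does not shrink as the rank grows:

```python
import math

def lora_scaling(alpha: float, rank: int, use_rslora: bool = False) -> float:
    """Scaling factor applied to the low-rank update BA.

    Standard LoRA scales by alpha / r; rank-stabilized LoRA (rsLoRA)
    scales by alpha / sqrt(r), keeping update magnitudes stable as the
    rank r increases.
    """
    return alpha / math.sqrt(rank) if use_rslora else alpha / rank

# With alpha=16 and rank=64, standard LoRA scales the update by 0.25,
# while rsLoRA scales it by 2.0.
print(lora_scaling(16, 64))                   # 0.25
print(lora_scaling(16, 64, use_rslora=True))  # 2.0
```

In practice this is a single flag in common fine-tuning libraries (e.g. `use_rslora=True` in peft's `LoraConfig`); the exact configuration used here is not published in the card.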


www.linkbricks.com, www.linkbricks.vc

Additional thanks to @nicoboss for giving me access to his private supercomputer, enabling me to provide many more quants, at much higher speed, than I would otherwise be able to.

Downloads last month: 64
Format: GGUF
Model size: 12.2B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
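When choosing among these quantizations, a back-of-the-envelope estimate of weight memory is parameter count times bits per weight divided by 8. This sketch (approximate figures only; real GGUF files add per-block quantization overhead, and inference also needs KV-cache and activation memory) shows the footprint of the 12.2B-parameter model at a few of the listed bit widths:

```python
def approx_gguf_size_gb(n_params: float, bits_per_weight: float) -> float:
    """Rough weight-memory footprint in GB (1e9 bytes), ignoring
    quantization block overhead, the KV cache, and activations."""
    return n_params * bits_per_weight / 8 / 1e9

# 12.2B parameters at a few of the listed bit widths:
for bits in (2, 4, 8):
    print(f"{bits}-bit: ~{approx_gguf_size_gb(12.2e9, bits):.1f} GB")
```

For example, the 4-bit quant works out to roughly 6 GB of weights, which is why 4-bit and 5-bit files are popular middle grounds between quality and memory use.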
