lbourdois committed on
Commit 0e46381 · verified · 1 Parent(s): bc14fbd

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13.
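For context, the old metadata used `jp` and `cn`, which are country codes rather than ISO 639 language codes; this revision switches the tag to ISO 639-3 codes (`jpn`, `zho`, etc.). A minimal sketch of normalizing such tags — the mapping below is illustrative only, not part of this PR:

```python
# Map common nonstandard language tags to ISO 639-3 codes,
# matching the codes used in the updated front matter.
NONSTANDARD_TO_ISO639_3 = {
    "jp": "jpn",  # "jp" is a country code; the language code for Japanese is "jpn"
    "cn": "zho",  # "cn" is a country code; the language code for Chinese is "zho"
    "ko": "kor",
    "en": "eng",
}

def normalize_tags(tags):
    """Return the tag list with known nonstandard codes replaced."""
    return [NONSTANDARD_TO_ISO639_3.get(t, t) for t in tags]

print(normalize_tags(["ko", "en", "jp", "cn"]))  # → ['kor', 'eng', 'jpn', 'zho']
```

Codes already in ISO 639-3 form (e.g. `fra`, `spa`) pass through unchanged.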

Files changed (1)
  1. README.md +85 -76
README.md CHANGED
@@ -1,76 +1,85 @@
- ---
- library_name: transformers
- license: apache-2.0
- base_model: Qwen/Qwen2.5-32B-Instruct
- datasets:
- - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
- - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
- - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
- - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
- - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
- - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
- - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
- - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
- - Saxo/ko-news-corpus-1
- - Saxo/ko-news-corpus-2
- - Saxo/ko-news-corpus-3
- - Saxo/ko-news-corpus-4
- - Saxo/ko-news-corpus-5
- - Saxo/ko-news-corpus-6
- - Saxo/ko-news-corpus-7
- - Saxo/ko-news-corpus-8
- - Saxo/ko-news-corpus-9
- - maywell/ko_Ultrafeedback_binarized
- - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
- - lilacai/glaive-function-calling-v2-sharegpt
- - kuotient/gsm8k-ko
- language:
- - ko
- - en
- - jp
- - cn
- pipeline_tag: text-generation
- ---
-
- # Model Card for Model ID
-
- <div align="center">
- <img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" />
- </div>
- <br>
- <a href="https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/">Open LLM Leaderboard</a> 🏆 <32B class, Rank 2 since 2025/02/24
- <br>
- <br>
- <br>
-
- AI専門企業であるLinkbricks Horizon-AI のデータサイエンティストであるジ・ユンソン(Saxo)代表が<br>
- Qwen/Qwen2.5-32B-Instructベースモデルを使用し、H100-80G 8個で約35%程度のパラメータをCPT->SFT->DPO->ORPOした多言語強化言語モデル。<br>
- 8千万件の様々な言語圏のニュースやウィキコーパスを基に、様々なタスク別の日本語・韓国語・中国語・英語クロス学習データと数学や論理判断データを通じて、日中韓英言語のクロスエンハンスメント処理と複雑な論理問題にも対応できるように訓練したモデルである。
- -トークナイザーは、単語拡張なしでベースモデルのまま使用します。<br>
- -カスタマーレビューやソーシャル投稿の高次元分析及びコーディングとライティング、数学、論理判断などが強化されたモデル。<br>
- -Function Call<br>
- -Deepspeed Stage=3、rslora及びBAdam Layer Modeを使用 <br>
- -「transformers_version」: 「4.46.3」<br>
-
- <br><br>
-
- AI 전문 기업인 Linkbricks Horizon-AI 의 데이터사이언티스트인 지윤성(Saxo) 대표가
- Qwen/Qwen2.5-32B-Instruct 베이스모델을 사용해서 H100-80G 8개를 통해 약 35%정도의 파라미터를 CPT->SFT->DPO->ORPO 한 다국어 강화 언어 모델<br>
- 8천만건의 다양한 언어권의 뉴스 및 위키 코퍼스를 기준으로 다양한 테스크별 일본어-한국어-중국어-영어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br>
- -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br>
- -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br>
- -Function Call 및 Tool Calling 지원<br>
- -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br>
- -"transformers_version": "4.46.3"<br>
- <br><br>
-
- Finetuned by Mr. Yunsung Ji (Saxo), a data scientist and CEO at Linkbricks Horizon-AI, a company specializing in AI and big data analytics.<br>
- A multilingual, capability-boosted language model based on Qwen/Qwen2.5-32B-Instruct, with about 35% of the total parameters trained via CPT->SFT->DPO->ORPO on 8 H100-80G GPUs.<br>
- It was trained on Japanese-Korean-Chinese-English cross-lingual data for various tasks, an 80M-document multilingual news and wiki corpus, and math and logic-judgment data, enabling cross-lingual enhancement and the handling of complex Korean logic and math problems.<br>
- -The tokenizer is used as-is from the base model, without vocabulary expansion<br>
- -Enhanced for high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, and logical reasoning<br>
- -Function Calling<br>
- -DeepSpeed Stage 3, with rsLoRA and BAdam Layer Mode<br>
- <br><br>
-
- <a href="https://www.horizonai.ai">www.horizonai.ai</a>, <a href="https://www.linkbricks.com">www.linkbricks.com</a>, <a href="https://www.linkbricks.vc">www.linkbricks.vc</a>
 
+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: Qwen/Qwen2.5-32B-Instruct
+ datasets:
+ - Saxo/ko_cn_translation_tech_social_science_linkbricks_single_dataset
+ - Saxo/ko_jp_translation_tech_social_science_linkbricks_single_dataset
+ - Saxo/en_ko_translation_tech_science_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/en_ko_translation_social_science_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/ko_aspect_sentiment_sns_mall_sentiment_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/ko_summarization_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/OpenOrca_cleaned_kor_linkbricks_single_dataset_with_prompt_text_huggingface
+ - Saxo/ko_government_qa_total_linkbricks_single_dataset_with_prompt_text_huggingface_sampled
+ - Saxo/ko-news-corpus-1
+ - Saxo/ko-news-corpus-2
+ - Saxo/ko-news-corpus-3
+ - Saxo/ko-news-corpus-4
+ - Saxo/ko-news-corpus-5
+ - Saxo/ko-news-corpus-6
+ - Saxo/ko-news-corpus-7
+ - Saxo/ko-news-corpus-8
+ - Saxo/ko-news-corpus-9
+ - maywell/ko_Ultrafeedback_binarized
+ - youjunhyeok/ko-orca-pair-and-ultrafeedback-dpo
+ - lilacai/glaive-function-calling-v2-sharegpt
+ - kuotient/gsm8k-ko
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ ---
+
+ # Model Card for Model ID
+
+ <div align="center">
+ <img src="http://www.linkbricks.com/wp-content/uploads/2024/11/fulllogo.png" />
+ </div>
+ <br>
+ <a href="https://huggingface.co/spaces/open-llm-leaderboard/open_llm_leaderboard#/">Open LLM Leaderboard</a> 🏆 <32B class, Rank 2 since 2025/02/24
+ <br>
+ <br>
+ <br>
+
+ AI専門企業であるLinkbricks Horizon-AI のデータサイエンティストであるジ・ユンソン(Saxo)代表が<br>
+ Qwen/Qwen2.5-32B-Instructベースモデルを使用し、H100-80G 8個で約35%程度のパラメータをCPT->SFT->DPO->ORPOした多言語強化言語モデル。<br>
+ 8千万件の様々な言語圏のニュースやウィキコーパスを基に、様々なタスク別の日本語・韓国語・中国語・英語クロス学習データと数学や論理判断データを通じて、日中韓英言語のクロスエンハンスメント処理と複雑な論理問題にも対応できるように訓練したモデルである。
+ -トークナイザーは、単語拡張なしでベースモデルのまま使用します。<br>
+ -カスタマーレビューやソーシャル投稿の高次元分析及びコーディングとライティング、数学、論理判断などが強化されたモデル。<br>
+ -Function Call<br>
+ -Deepspeed Stage=3、rslora及びBAdam Layer Modeを使用 <br>
+ -「transformers_version」: 「4.46.3」<br>
+
+ <br><br>
+
+ AI 전문 기업인 Linkbricks Horizon-AI 의 데이터사이언티스트인 지윤성(Saxo) 대표가
+ Qwen/Qwen2.5-32B-Instruct 베이스모델을 사용해서 H100-80G 8개를 통해 약 35%정도의 파라미터를 CPT->SFT->DPO->ORPO 한 다국어 강화 언어 모델<br>
+ 8천만건의 다양한 언어권의 뉴스 및 위키 코퍼스를 기준으로 다양한 테스크별 일본어-한국어-중국어-영어 교차 학습 데이터와 수학 및 논리판단 데이터를 통하여 한중일영 언어 교차 증강 처리와 복잡한 논리 문제 역시 대응 가능하도록 훈련한 모델이다.<br>
+ -토크나이저는 단어 확장 없이 베이스 모델 그대로 사용<br>
+ -고객 리뷰나 소셜 포스팅 고차원 분석 및 코딩과 작문, 수학, 논리판단 등이 강화된 모델<br>
+ -Function Call 및 Tool Calling 지원<br>
+ -Deepspeed Stage=3, rslora 및 BAdam Layer Mode 사용 <br>
+ -"transformers_version": "4.46.3"<br>
+ <br><br>
+
+ Finetuned by Mr. Yunsung Ji (Saxo), a data scientist and CEO at Linkbricks Horizon-AI, a company specializing in AI and big data analytics.<br>
+ A multilingual, capability-boosted language model based on Qwen/Qwen2.5-32B-Instruct, with about 35% of the total parameters trained via CPT->SFT->DPO->ORPO on 8 H100-80G GPUs.<br>
+ It was trained on Japanese-Korean-Chinese-English cross-lingual data for various tasks, an 80M-document multilingual news and wiki corpus, and math and logic-judgment data, enabling cross-lingual enhancement and the handling of complex Korean logic and math problems.<br>
+ -The tokenizer is used as-is from the base model, without vocabulary expansion<br>
+ -Enhanced for high-dimensional analysis of customer reviews and social posts, as well as coding, writing, math, and logical reasoning<br>
+ -Function Calling<br>
+ -DeepSpeed Stage 3, with rsLoRA and BAdam Layer Mode<br>
+ <br><br>
+
+ <a href="https://www.horizonai.ai">www.horizonai.ai</a>, <a href="https://www.linkbricks.com">www.linkbricks.com</a>, <a href="https://www.linkbricks.vc">www.linkbricks.vc</a>
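The card repeatedly states that about 35% of the model's total parameters were updated during CPT->SFT->DPO->ORPO. As a back-of-the-envelope sketch of what such a trainable-parameter budget means, here is a toy freeze-plan check; the layer names and sizes are made up for illustration and are not the real Qwen2.5-32B layout:

```python
# Hypothetical per-layer parameter counts (NOT the actual model architecture):
# freeze everything except a chosen subset of layers, then verify the
# trainable fraction against the ~35% budget mentioned in the card.
layer_params = {f"block_{i}": 1_000_000 for i in range(20)}

trainable_layers = {f"block_{i}" for i in range(13, 20)}  # train the last 7 of 20

trainable = sum(n for name, n in layer_params.items() if name in trainable_layers)
total = sum(layer_params.values())
print(f"trainable fraction: {trainable / total:.0%}")  # → trainable fraction: 35%
```

In a real training setup this kind of budget is typically enforced by setting `requires_grad = False` on frozen parameters; the card attributes the partial-parameter training to rsLoRA and BAdam Layer Mode rather than to any specific freeze plan.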