lbourdois committed on
Commit 69b2763 · verified · 1 Parent(s): d976a5d

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the `language` tag to improve referencing. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1):
1. README.md +83 -71
README.md CHANGED
@@ -1,71 +1,83 @@
- ---
- base_model:
- - Qwen/Qwen2.5-32B
- - zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
- library_name: transformers
- tags:
- - mergekit
- - merge
- - qwen2.5
- - TIES
- license: apache-2.0
- language:
- - en
- pipeline_tag: text-generation
- ---
- # merge
-
- This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
-
- ## Merge Details
- ### Merge Method
-
- This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
-
- ### Models Merged
-
- The following models were included in the merge:
- * [zetasepic/Qwen2.5-32B-Instruct-abliterated-v2](https://huggingface.co/zetasepic/Qwen2.5-32B-Instruct-abliterated-v2)
-
- ### Configuration
-
- The following YAML configuration was used to produce this model:
-
- ```yaml
- models:
-   - model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
-     parameters:
-       weight: 1
-       density: 1
- merge_method: ties
- base_model: Qwen/Qwen2.5-32B
- parameters:
-   weight: 1
-   density: 1
- normalize: true
- int8_mask: true
- dtype: bfloat16
- ```
-
- ## Citations
-
- The merge is based on the technique posted [here](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-14b/discussions/1#67098eecdf3b26954feb2eab).
-
-
- ```
- @misc{qwen2.5,
-     title = {Qwen2.5: A Party of Foundation Models},
-     url = {https://qwenlm.github.io/blog/qwen2.5/},
-     author = {Qwen Team},
-     month = {September},
-     year = {2024}
- }
-
- @article{qwen2,
-     title={Qwen2 Technical Report},
-     author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
-     journal={arXiv preprint arXiv:2407.10671},
-     year={2024}
- }
- ```
-
+ ---
+ base_model:
+ - Qwen/Qwen2.5-32B
+ - zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ - qwen2.5
+ - TIES
+ license: apache-2.0
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ pipeline_tag: text-generation
+ ---
+ # merge
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [TIES](https://arxiv.org/abs/2306.01708) merge method using [Qwen/Qwen2.5-32B](https://huggingface.co/Qwen/Qwen2.5-32B) as a base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [zetasepic/Qwen2.5-32B-Instruct-abliterated-v2](https://huggingface.co/zetasepic/Qwen2.5-32B-Instruct-abliterated-v2)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ models:
+   - model: zetasepic/Qwen2.5-32B-Instruct-abliterated-v2
+     parameters:
+       weight: 1
+       density: 1
+ merge_method: ties
+ base_model: Qwen/Qwen2.5-32B
+ parameters:
+   weight: 1
+   density: 1
+ normalize: true
+ int8_mask: true
+ dtype: bfloat16
+ ```
+
+ ## Citations
+
+ The merge is based on the technique posted [here](https://huggingface.co/rombodawg/Rombos-LLM-V2.5-Qwen-14b/discussions/1#67098eecdf3b26954feb2eab).
+
+
+ ```
+ @misc{qwen2.5,
+     title = {Qwen2.5: A Party of Foundation Models},
+     url = {https://qwenlm.github.io/blog/qwen2.5/},
+     author = {Qwen Team},
+     month = {September},
+     year = {2024}
+ }
+
+ @article{qwen2,
+     title={Qwen2 Technical Report},
+     author={An Yang and Baosong Yang and Binyuan Hui and Bo Zheng and Bowen Yu and Chang Zhou and Chengpeng Li and Chengyuan Li and Dayiheng Liu and Fei Huang and Guanting Dong and Haoran Wei and Huan Lin and Jialong Tang and Jialin Wang and Jian Yang and Jianhong Tu and Jianwei Zhang and Jianxin Ma and Jin Xu and Jingren Zhou and Jinze Bai and Jinzheng He and Junyang Lin and Kai Dang and Keming Lu and Keqin Chen and Kexin Yang and Mei Li and Mingfeng Xue and Na Ni and Pei Zhang and Peng Wang and Ru Peng and Rui Men and Ruize Gao and Runji Lin and Shijie Wang and Shuai Bai and Sinan Tan and Tianhang Zhu and Tianhao Li and Tianyu Liu and Wenbin Ge and Xiaodong Deng and Xiaohuan Zhou and Xingzhang Ren and Xinyu Zhang and Xipin Wei and Xuancheng Ren and Yang Fan and Yang Yao and Yichang Zhang and Yu Wan and Yunfei Chu and Yuqiong Liu and Zeyu Cui and Zhenru Zhang and Zhihao Fan},
+     journal={arXiv preprint arXiv:2407.10671},
+     year={2024}
+ }
+ ```
+
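For context, the YAML shown in the card's Configuration section is a mergekit config and can be executed either with the `mergekit-yaml` CLI or through mergekit's Python API. A minimal sketch of the latter, assuming mergekit is installed (`pip install mergekit`); the config filename and output directory are illustrative, not from the card:

```python
# Minimal sketch: run the TIES merge described by the card's YAML config.
# "ties-config.yaml" and "./merged-qwen2.5-32b" are illustrative names.
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the merge configuration (the YAML from the Configuration section,
# saved to a local file).
with open("ties-config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge; model weights are fetched from the Hub on demand.
run_merge(
    merge_config,
    "./merged-qwen2.5-32b",              # output directory for merged weights
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is present
        copy_tokenizer=True,             # copy the base tokenizer into the output
    ),
)
```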