lbourdois committed on
Commit fe74202 · verified · 1 Parent(s): 470f90e

Improve language tag


Hi! As the model is multilingual, this PR adds languages other than English to the language tag to improve discoverability. Note that 29 languages are announced in the README, but only 13 are explicitly listed, so I was only able to add those 13 languages.

Files changed (1)
README.md +82 -82
README.md CHANGED
@@ -1,82 +1,82 @@
+ ---
+ base_model:
+ - Qwen/Qwen2.5-7B
+ - Qwen/Qwen2.5-Coder-7B
+ - Qwen/Qwen2.5-7B-Instruct
+ - Qwen/Qwen2.5-Math-7B
+ library_name: transformers
+ tags:
+ - mergekit
+ - merge
+ language:
+ - zho
+ - eng
+ - fra
+ - spa
+ - por
+ - deu
+ - ita
+ - rus
+ - jpn
+ - kor
+ - vie
+ - tha
+ - ara
+ ---
+ # nthehai01/Qwen2.5-7B-Instruct-Math-Code-breadcrumbs
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Performance
+ | Metric                          |Value|
+ |---------------------------------|----:|
+ |GSM8k (zero-shot)                |90.06|
+ |HellaSwag (zero-shot)            |82.77|
+ |MBPP (zero-shot)                 |62.21|
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [Model Breadcrumbs](https://arxiv.org/abs/2312.06795) merge method, with [Qwen/Qwen2.5-7B](https://huggingface.co/Qwen/Qwen2.5-7B) as the base.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [Qwen/Qwen2.5-Coder-7B](https://huggingface.co/Qwen/Qwen2.5-Coder-7B)
+ * [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct)
+ * [Qwen/Qwen2.5-Math-7B](https://huggingface.co/Qwen/Qwen2.5-Math-7B)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ base_model: Qwen/Qwen2.5-7B
+ dtype: bfloat16
+ merge_method: breadcrumbs
+ parameters:
+   lambda: 0.9075603207928135
+   normalize: 1.0
+ slices:
+ - sources:
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-7B
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-Math-7B
+     parameters:
+       density: 0.11722197443445775
+       gamma: 0.07547691839721048
+       weight: 0.17267293536872041
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-Coder-7B
+     parameters:
+       density: 0.48352747334554935
+       gamma: 0.0753405327865558
+       weight: 0.11164770709858211
+   - layer_range: [0, 28]
+     model: Qwen/Qwen2.5-7B-Instruct
+     parameters:
+       density: 0.8190520808683315
+       gamma: 0.022307694128235696
+       weight: 0.7626295102691242
+ ```