snigdhachandan committed
Commit 1f783f6
1 Parent(s): 38c3733

Upload folder using huggingface_hub

Files changed (1):
  1. README.md +32 -18
README.md CHANGED
@@ -3,38 +3,52 @@ tags:
 - merge
 - mergekit
 - lazymergekit
-- upaya07/Arithmo2-Mistral-7B
 - WizardLMTeam/WizardMath-7B-V1.1
+- microsoft/rho-math-7b-interpreter-v0.1
+- meta-math/MetaMath-Mistral-7B
 base_model:
-- upaya07/Arithmo2-Mistral-7B
 - WizardLMTeam/WizardMath-7B-V1.1
+- microsoft/rho-math-7b-interpreter-v0.1
+- meta-math/MetaMath-Mistral-7B
 ---
 
 # ganeet-V4
 
 ganeet-V4 is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
-* [upaya07/Arithmo2-Mistral-7B](https://huggingface.co/upaya07/Arithmo2-Mistral-7B)
 * [WizardLMTeam/WizardMath-7B-V1.1](https://huggingface.co/WizardLMTeam/WizardMath-7B-V1.1)
+* [microsoft/rho-math-7b-interpreter-v0.1](https://huggingface.co/microsoft/rho-math-7b-interpreter-v0.1)
+* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)
 
 ## 🧩 Configuration
 
 ```yaml
-slices:
-  - sources:
-      - model: upaya07/Arithmo2-Mistral-7B
-        layer_range: [0, 32]
-      - model: WizardLMTeam/WizardMath-7B-V1.1
-        layer_range: [0, 32]
-merge_method: slerp
-base_model: upaya07/Arithmo2-Mistral-7B
+models:
+  - model: WizardLMTeam/WizardMath-7B-V1.1
+    layer_range: [0, 30]
+    parameters:
+      density: 0.5  # fraction of weights in differences from the base model to retain
+      weight:       # weight gradient
+        - filter: mlp
+          value: 0.5
+        - value: 0
+  - model: deepseek-ai/deepseek-math-7b-rl
+    layer_range: [0, 30]
+  - model: microsoft/rho-math-7b-interpreter-v0.1
+    layer_range: [0, 30]
+    parameters:
+      density: 0.5
+      weight: 0.5
+  - model: meta-math/MetaMath-Mistral-7B
+    layer_range: [0, 30]
+    parameters:
+      density: 0.5
+      weight: 0.5
+merge_method: ties
+base_model: deepseek-ai/deepseek-math-7b-rl
 parameters:
-  t:
-    - filter: self_attn
-      value: [0, 0.5, 0.3, 0.7, 1]
-    - filter: mlp
-      value: [1, 0.5, 0.7, 0.3, 0]
-    - value: 0.4
-dtype: bfloat16
+  normalize: true
+  int8_mask: true
+dtype: float16
 ```
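The `merge_method: ties` in the new configuration merges each model's *difference* from `base_model`, keeping only the top `density` fraction of each difference by magnitude and resolving sign conflicts per parameter (with `normalize: true` averaging over the agreeing models). A toy sketch of that idea, with plain Python lists standing in for weight tensors — illustrative only, not mergekit's implementation:

```python
# Toy sketch of the TIES merge idea behind `merge_method: ties`.
# Lists stand in for weight tensors; this is NOT mergekit's code.

def trim(delta, density):
    """Keep only the top `density` fraction of entries by magnitude."""
    k = max(1, round(len(delta) * density))
    keep = sorted(range(len(delta)), key=lambda i: abs(delta[i]), reverse=True)[:k]
    return [d if i in keep else 0.0 for i, d in enumerate(delta)]

def ties_merge(base, models, density=0.5, weights=None):
    weights = weights or [1.0] * len(models)
    # 1) task vectors = differences from the base model
    deltas = [[m - b for m, b in zip(model, base)] for model in models]
    # 2) trim each task vector to the requested density
    deltas = [trim(d, density) for d in deltas]
    merged = []
    for j in range(len(base)):
        # 3) elect a sign per parameter by total weighted mass
        total = sum(w * d[j] for w, d in zip(weights, deltas))
        sign = 1.0 if total >= 0 else -1.0
        # 4) average only the entries agreeing with the elected sign
        #    (dividing by the summed weights ~ `normalize: true`)
        agree = [(w, d[j]) for w, d in zip(weights, deltas) if d[j] * sign > 0]
        if agree:
            num = sum(w * v for w, v in agree)
            den = sum(w for w, _ in agree)
            merged.append(base[j] + num / den)
        else:
            merged.append(base[j])
    return merged

base = [0.0, 0.0, 0.0, 0.0]
m1 = [1.0, -0.5, 0.25, 0.0]
m2 = [0.5, 0.75, -1.0, 0.0]
print(ties_merge(base, [m1, m2], density=0.5))  # → [1.0, 0.75, -1.0, 0.0]
```

Note how the conflicting second coordinate (-0.5 vs. 0.75) resolves to the sign with more mass, and only the agreeing model contributes to the average.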
 
 ## 💻 Usage
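The usage section is truncated in this view. LazyMergekit model cards typically close with a 🤗 Transformers snippet; a minimal sketch, assuming the merged weights are published under the hypothetical repo id `snigdhachandan/ganeet-V4`:

```python
# Sketch: query the merged model via a 🤗 Transformers pipeline.
# `snigdhachandan/ganeet-V4` is an assumed repo id; substitute the real one.
from transformers import pipeline
import torch

generator = pipeline(
    "text-generation",
    model="snigdhachandan/ganeet-V4",  # hypothetical repo id
    torch_dtype=torch.float16,         # matches `dtype: float16` in the merge config
    device_map="auto",
)

result = generator("Question: What is 12 * (7 + 5)?\nAnswer:", max_new_tokens=128)
print(result[0]["generated_text"])
```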