Tarek07 committed · Commit 65df0c9 · verified · 1 Parent(s): 9a0e3a7

Update README.md

Files changed (1): README.md (+27 −19)
README.md CHANGED
@@ -1,11 +1,19 @@
 ---
-base_model: []
+base_model:
+- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
+- SicariusSicariiStuff/Negative_LLAMA_70B
+- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
+- TheDrummer/Anubis-70B-v1
+- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
+- TareksLab/Genesis-R1-L3.3-70B
+- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
 library_name: transformers
 tags:
 - mergekit
 - merge
-
+license: llama3.3
 ---
+My ideal vision for Dungeonmaster was these 7 models. However, I was concerned about combining that many models in a single merge, so I decided to try both and see...
 # Della
 
 This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
@@ -13,18 +21,18 @@ This is a merge of pre-trained language models created using [mergekit](https://
 ## Merge Details
 ### Merge Method
 
-This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using downloads/Genesis-R1-L3.3-70B as a base.
+This model was merged using the [Linear DELLA](https://arxiv.org/abs/2406.11617) merge method using TareksLab/Genesis-R1-L3.3-70B as a base.
 
 ### Models Merged
 
 The following models were included in the merge:
-* downloads/Llama-3.3-70B-ArliAI-RPMax-v1.4
-* downloads/Negative_LLAMA_70B
-* downloads/Wayfarer-Large-70B-Llama-3.3
-* downloads/Anubis-70B-v1
-* downloads/Fallen-Llama-3.3-R1-70B-v1
-* downloads/70B-L3.3-mhnnn-x1
-* downloads/EVA-LLaMA-3.33-70B-v0.1
+* ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
+* SicariusSicariiStuff/Negative_LLAMA_70B
+* LatitudeGames/Wayfarer-Large-70B-Llama-3.3
+* TheDrummer/Anubis-70B-v1
+* TheDrummer/Fallen-Llama-3.3-R1-70B-v1
+* TareksLab/Genesis-R1-L3.3-70B
+* EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
 
 ### Configuration
 
@@ -32,16 +40,16 @@ The following YAML configuration was used to produce this model:
 
 ```yaml
 models:
-  - model: downloads/Wayfarer-Large-70B-Llama-3.3
-  - model: downloads/Llama-3.3-70B-ArliAI-RPMax-v1.4
-  - model: downloads/70B-L3.3-mhnnn-x1
-  - model: downloads/Anubis-70B-v1
-  - model: downloads/EVA-LLaMA-3.33-70B-v0.1
-  - model: downloads/Negative_LLAMA_70B
-  - model: downloads/Fallen-Llama-3.3-R1-70B-v1
+  - model: LatitudeGames/Wayfarer-Large-70B-Llama-3.3
+  - model: ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
+  - model: Sao10K/70B-L3.3-mhnnn-x1
+  - model: TheDrummer/Anubis-70B-v1
+  - model: EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
+  - model: SicariusSicariiStuff/Negative_LLAMA_70B
+  - model: TheDrummer/Fallen-Llama-3.3-R1-70B-v1
 merge_method: della_linear
 chat_template: llama3
-base_model: downloads/Genesis-R1-L3.3-70B
+base_model: TareksLab/Genesis-R1-L3.3-70B
 parameters:
   weight: 0.14
   density: 0.7
@@ -50,5 +58,5 @@ parameters:
   normalize: true
 dtype: bfloat16
 tokenizer:
-  source: downloads/Genesis-R1-L3.3-70B
+  source: TareksLab/Genesis-R1-L3.3-70B
 ```
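For intuition about what the `della_linear` settings above do, here is a toy sketch on flat parameter lists: each model's delta from the base is pruned to the top `density` fraction of entries by magnitude, the survivors are rescaled by `1/density`, and the deltas are added back with weights normalized to sum to 1 (`normalize: true`). This is an illustrative assumption, not mergekit's actual implementation; real DELLA (MAGPRUNE) drops delta entries stochastically with magnitude-dependent probabilities, while this variant is a deterministic top-k simplification, and the function name is hypothetical.

```python
def della_linear_toy(base, models, weight=0.14, density=0.7):
    """Toy DELLA-style linear merge over flat lists of parameters.

    Deterministic simplification: keep the top `density` fraction of each
    delta's entries by magnitude, rescale survivors by 1/density, then add
    the weight-normalized sum of pruned deltas back onto the base.
    """
    n = len(base)
    keep = max(1, round(density * n))
    deltas = []
    for m in models:
        delta = [m[i] - base[i] for i in range(n)]
        # indices of the `keep` largest-magnitude delta entries
        top = set(sorted(range(n), key=lambda i: abs(delta[i]), reverse=True)[:keep])
        deltas.append([delta[i] / density if i in top else 0.0 for i in range(n)])
    # uniform weight per model, normalized so the weights sum to 1
    w = weight / (weight * len(models))
    return [base[i] + sum(w * d[i] for d in deltas) for i in range(n)]

base = [1.0, 0.0, -1.0, 2.0]
m1 = [1.5, 0.1, -1.0, 2.0]   # changed the first two parameters
m2 = [1.0, 0.0, -0.5, 2.4]   # changed the last two parameters
merged = della_linear_toy(base, [m1, m2], density=0.5)
# → approximately [1.5, 0.1, -0.5, 2.4]
```

Because the two toy models edited disjoint parameters, pruning keeps exactly each model's changes and the rescale cancels the 0.5 averaging weight, so both edits survive at full strength; with overlapping edits the weighted average would blend them instead.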