---
base_model: []
library_name: transformers
tags:
- mergekit
- merge

---
# L3.3-70b-Amalgamma-V2

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
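
The card tags `transformers` and the config below sets a Llama 3 chat template, so the merged model should load like any causal LM. A minimal loading sketch; the model path is a placeholder for wherever the merged weights live, not a published repo id:

```python
# Minimal loading sketch; the path is a placeholder, not a published repo id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./L3.3-70b-Amalgamma-V2"  # placeholder
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches out_dtype in the config below
    device_map="auto",
)

messages = [{"role": "user", "content": "Write a two-sentence story."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)
out = model.generate(inputs, max_new_tokens=128)
print(tokenizer.decode(out[0][inputs.shape[-1]:], skip_special_tokens=True))
```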

## Merge Details
### Merge Method

This model was merged using the [DELLA](https://arxiv.org/abs/2406.11617) merge method, with /media/administrator/oiseauxai1data1/modelout/Smart-base-v2 as the base.
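
For intuition, DELLA prunes each fine-tuned model's delta from the base by randomly dropping parameters, with keep probabilities ranked by magnitude, then rescales the survivors so the merged sum stays unbiased in expectation. Below is a schematic sketch of that drop-and-rescale step; `magprune` is a hypothetical helper and the exact probability schedule is an assumption, not mergekit's actual implementation:

```python
# Schematic DELLA-style drop-and-rescale, not mergekit's actual code.
# `density` is the expected fraction of delta parameters kept; `epsilon`
# spreads keep probabilities around `density` according to magnitude rank.
import torch

def magprune(delta: torch.Tensor, density: float, epsilon: float) -> torch.Tensor:
    ranks = delta.abs().flatten().argsort().argsort().float()  # 0 = smallest |delta|
    n = ranks.numel()
    keep_p = density - epsilon / 2 + epsilon * ranks / max(n - 1, 1)
    keep_p = keep_p.clamp(1e-6, 1.0).reshape(delta.shape)
    mask = torch.bernoulli(keep_p)   # low-magnitude deltas are dropped more often
    return delta * mask / keep_p     # rescale survivors by 1 / p_keep

# The merge then combines the pruned deltas, roughly:
#   merged = base + lambda * (w_1 * magprune(m_1 - base, d_1, eps_1) + ...)
```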

### Models Merged

The following models were included in the merge:
* /media/administrator/oiseauxai1data1/modelout/Dark-Base-V1
* /media/administrator/oiseauxai1data1/modelout/Middle-Base-V1
* /media/administrator/oiseauxai1data/modelout/Story-Base-V2

### Configuration

The following YAML configuration was used to produce this model:

```yaml
#--- Mergekit example: della ---
# Method: DELLA (Drop and rEscaLe via sampLing with mAgnitude) prunes each
#         model's delta from the base by magnitude-ranked random dropping,
#         rescales the surviving deltas, and merges the results into the base.

base_model: /media/administrator/oiseauxai1data1/modelout/Smart-base-v2
models:
  - model: /media/administrator/oiseauxai1data1/modelout/Dark-Base-V1
    parameters:
      weight: 0.6
      density: 0.95
      epsilon: 0.018 # <-- Epsilon for this model
  - model: /media/administrator/oiseauxai1data/modelout/Story-Base-V2
    parameters:
      weight: 0.3
      density: 0.80
      epsilon: 0.018 # <-- Epsilon for this model
  - model: /media/administrator/oiseauxai1data1/modelout/Middle-Base-V1
    parameters:
      weight: 0.3
      density: 0.80
      epsilon: 0.018 # <-- Epsilon for this model
model_name: L3.3-70b-Amalgamma-V2 # Name of the merged model
dtype: float32 # Dtype used while computing the merge (float32, float16, bfloat16)
out_dtype: bfloat16 # Dtype the merged weights are written in (float32, float16, bfloat16)
merge_method: della
parameters: # These are global parameters for the merge_method itself
  normalize: false
  # epsilon is set per-model above; no global epsilon is used here
  lambda: 1.20
tokenizer_source: base # Use the base model's tokenizer; 'union' merges all vocabularies (use with care)
chat_template: llama3 # Chat template to embed (chatml, llama3, etc.)
license: apache-2.0 # License type
```
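
To reproduce a merge like this, mergekit can consume the YAML above either through its `mergekit-yaml` CLI (e.g. `mergekit-yaml della_config.yaml ./L3.3-70b-Amalgamma-V2 --cuda`) or through its Python entry point. A minimal sketch of the latter, following the pattern in mergekit's README; the config filename and output path are placeholders:

```python
# Run the merge described by the YAML above; paths are placeholders.
import yaml
import torch
from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

with open("della_config.yaml", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    out_path="./L3.3-70b-Amalgamma-V2",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when available
        copy_tokenizer=True,             # honor tokenizer_source
        lazy_unpickle=True,              # lower peak memory while reading shards
    ),
)
```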