# merge
An interesting merge; testing has just started.
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method
This model was merged using the SLERP merge method, with Undi95/PsyMedRP-v1-20B as the base model.
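
SLERP (spherical linear interpolation) blends two weight tensors along the arc between them on a hypersphere rather than along a straight line, which better preserves the magnitude of the weights than plain averaging. The sketch below is a minimal PyTorch illustration of the core math for one pair of tensors, not mergekit's actual implementation; the function name, the flatten-and-normalize approach, and the `eps` fallback are assumptions for the example.

```python
import torch

def slerp(t: float, a: torch.Tensor, b: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t=0 returns `a` (here playing the role of the base model),
    t=1 returns `b`; intermediate values move along the great-circle
    arc between the normalized, flattened tensors.
    """
    a_flat = a.flatten().float()
    b_flat = b.flatten().float()
    a_norm = a_flat / (a_flat.norm() + eps)
    b_norm = b_flat / (b_flat.norm() + eps)
    # Angle between the two parameter vectors.
    omega = torch.acos(torch.clamp(torch.dot(a_norm, b_norm), -1.0, 1.0))
    so = torch.sin(omega)
    if so.abs() < eps:
        # Nearly parallel tensors: fall back to plain linear interpolation.
        return ((1.0 - t) * a_flat + t * b_flat).reshape(a.shape).to(a.dtype)
    coeff_a = torch.sin((1.0 - t) * omega) / so
    coeff_b = torch.sin(t * omega) / so
    return (coeff_a * a_flat + coeff_b * b_flat).reshape(a.shape).to(a.dtype)

# Example: t=0.25 leans toward `a`, matching the config's comments below.
a = torch.randn(64, 64)
b = torch.randn(64, 64)
merged = slerp(0.25, a, b)
```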
### Models Merged
The following models were included in the merge:
* [Undi95/PsyMedRP-v1-20B](https://huggingface.co/Undi95/PsyMedRP-v1-20B)
* [Undi95/MXLewd-L2-20B](https://huggingface.co/Undi95/MXLewd-L2-20B)
### Configuration
The following YAML configuration was used to produce this model:
```yaml
merge_method: slerp
dtype: bfloat16
slices:
  - sources:
      - model: Undi95/PsyMedRP-v1-20B
        layer_range: [0, 62]
      - model: Undi95/MXLewd-L2-20B
        layer_range: [0, 62]
base_model: Undi95/PsyMedRP-v1-20B
parameters:
  t:
    - 0.25 # layer 0 (favor PsyMedRP)
    - 0.5  # layer 1 (balanced)
    - 0.75 # layer 2 (favor MXLewd)
    - 0.5  # layer 3 (balanced)
    - 0.25 # layer 4
    - 0.5  # layer 5
    - 0.75 # layer 6
    - 0.5  # layer 7
    - 0.25 # layer 8
    - 0.5  # layer 9
    - 0.75 # layer 10
    - 0.5  # layer 11
    - 0.25 # layer 12
    - 0.5  # layer 13
    - 0.75 # layer 14
    - 0.5  # layer 15
    - 0.25 # layer 16
    - 0.5  # layer 17
    - 0.75 # layer 18
    - 0.5  # layer 19
    - 0.25 # layer 20
    - 0.5  # layer 21
    - 0.75 # layer 22
    - 0.5  # layer 23
    - 0.25 # layer 24
    - 0.5  # layer 25
    - 0.75 # layer 26
    - 0.5  # layer 27
    - 0.25 # layer 28
    - 0.5  # layer 29
    - 0.75 # layer 30
    - 0.5  # layer 31
    - 0.25 # layer 32
    - 0.5  # layer 33
    - 0.75 # layer 34
    - 0.5  # layer 35
    - 0.25 # layer 36
    - 0.5  # layer 37
    - 0.75 # layer 38
    - 0.5  # layer 39
    - 0.25 # layer 40
    - 0.5  # layer 41
    - 0.75 # layer 42
    - 0.5  # layer 43
    - 0.25 # layer 44
    - 0.5  # layer 45
    - 0.75 # layer 46
    - 0.5  # layer 47
    - 0.25 # layer 48
    - 0.5  # layer 49
    - 0.75 # layer 50
    - 0.5  # layer 51
    - 0.25 # layer 52
    - 0.5  # layer 53
    - 0.75 # layer 54
    - 0.5  # layer 55
    - 0.25 # layer 56
    - 0.5  # layer 57
    - 0.75 # layer 58
    - 0.5  # layer 59
    - 0.25 # layer 60
    - 0.5  # layer 61
    - 0.75 # layer 62
```
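
The `t` values cycle every four layers between favoring PsyMedRP (0.25), an even mix (0.5), and favoring MXLewd (0.75), so neither parent model dominates any long run of layers. Running this config through mergekit (for example with its `mergekit-yaml` CLI, assuming a current install) produces a standard Transformers checkpoint. A minimal loading sketch, where the output directory `./merged-model` is a hypothetical local path:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Path where mergekit wrote the merged model (assumed for this example).
model_path = "./merged-model"

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the dtype used for the merge
    device_map="auto",
)

prompt = "Once upon a time"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```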