Nohobby committed · Commit ae49cd2 · verified · 1 Parent(s): 1d8aaa8

Update README.md

Files changed (1):
  1. README.md +9 -3
README.md CHANGED
@@ -14,11 +14,17 @@ tags:
 
 > There's no 'I' in 'brain damage'
 
-![]()
+![](https://files.catbox.moe/9k5p1v.png)
 
 ### Overview
 
-I'll write something here later
+An attempt to make QwentileSwap write better by merging it with RP-Ink. And DeepSeek, because why not. However, I screwed up the first merge step by accidentally setting an extremely high epsilon value. Step2 wasn't planned, but due to a wonky tensor size mismatch error, I couldn't merge Step1 into QwentileSwap using sce, so I just threw in some random model. And that did, in fact, solve the issue.
+
+The result? Well, it's usable, I guess. The slop is reduced and more details are brought up, but said details sometimes get messed up. A few swipes fix it, and there's a chance it's caused by my sampler settings, but I'll just leave them as they are.
+
+Prompt format: ChatML
+
+Settings: [This kinda works but I'm weird](https://files.catbox.moe/hh551f.json)
 
 ### Quants
 
@@ -35,7 +41,7 @@ tokenizer_source: base
 merge_method: della_linear
 parameters:
   density: 0.5
-  epsilon: 0.4 #I'm a dumbass, it was supposed to be 0.04 😭
+  epsilon: 0.4 #was supposed to be 0.04
   lambda: 1.1
 base_model: allura-org/Qwen2.5-32b-RP-Ink
 models:
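
For reference, below is a minimal mergekit-style sketch of what the della_linear step would look like with the intended epsilon of 0.04. In DELLA-style merges, epsilon roughly controls how far the per-weight drop probabilities may deviate from the base density, which is why 0.4 instead of 0.04 is such a drastic change. Only merge_method, density, epsilon, lambda, base_model, and tokenizer_source come from the diff above; the model list, weights, and dtype are placeholders, not the actual recipe.

```yaml
# Sketch only: entries marked "placeholder" or "assumed" are not from the real config.
tokenizer_source: base
merge_method: della_linear
parameters:
  density: 0.5
  epsilon: 0.04        # intended value; 0.4 was set by accident in the original run
  lambda: 1.1
base_model: allura-org/Qwen2.5-32b-RP-Ink
models:
  - model: placeholder/qwentile-swap       # placeholder: the real model list isn't shown in this diff
    parameters:
      weight: 0.5                          # placeholder weight
  - model: placeholder/deepseek-model      # placeholder
    parameters:
      weight: 0.5                          # placeholder weight
dtype: bfloat16                            # assumed; not shown in the diff
```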
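
The ChatML prompt format mentioned in the overview is the standard Qwen-style chat template; a minimal example of the expected layout:

```
<|im_start|>system
{system prompt}<|im_end|>
<|im_start|>user
{user message}<|im_end|>
<|im_start|>assistant
```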