ArtusDev committed
Commit 054dc28 · verified · 1 parent: a955731

Upload folder using huggingface_hub
README.md ADDED
@@ -0,0 +1,57 @@
---
license: other
license_name: mrl
language:
- en
tags:
- chat
pipeline_tag: text-generation

library_name: transformers
---
# Monstral 123B v2
A Mistral-Large merge
![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a531bc7ec6af0f95c707b1/sf_mh-yR7V7ghi7M8UnPS.png)

This model is a hybrid merge of Behemoth 1.2, Tess, and Magnum V4. The intention was to do a three-way slerp merge, which is technically not possible. To simulate the effect of a menage-a-slerp, I slerped B1.2 with Tess, then separately slerped B1.2 with Magnum. I then did a model stock merge of those two slerps using B1.2 as the base. Somehow, it worked out spectacularly well. Sometimes dumb ideas pay off.

Mergefuel:
- TheDrummer/Behemoth-123B-v1.2
- anthracite-org/magnum-v4-123b
- migtissera/Tess-3-Mistral-Large-2-123B

See recipe.txt for full details.
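The slerp steps in the recipe blend each pair of weight tensors along a great arc rather than a straight line, which better preserves weight magnitudes. A minimal sketch of the operation on a single tensor (a hypothetical standalone helper using NumPy, not mergekit's actual implementation):

```python
import numpy as np

def slerp(a: np.ndarray, b: np.ndarray, t: float, eps: float = 1e-8) -> np.ndarray:
    """Spherical linear interpolation between two weight tensors of the same shape."""
    a_flat, b_flat = a.ravel(), b.ravel()
    a_unit = a_flat / (np.linalg.norm(a_flat) + eps)
    b_unit = b_flat / (np.linalg.norm(b_flat) + eps)
    # angle between the two tensors, treated as flat vectors
    omega = np.arccos(np.clip(np.dot(a_unit, b_unit), -1.0, 1.0))
    if omega < eps:
        # nearly parallel: fall back to ordinary linear interpolation
        return (1 - t) * a + t * b
    so = np.sin(omega)
    return (np.sin((1 - t) * omega) / so) * a + (np.sin(t * omega) / so) * b
```

At `t=0` this returns the first model's weights and at `t=1` the second's; mergekit stretches the recipe's `t: [0.1, 0.3, 0.5, 0.3, 0.1]` curve across layer depth, so the middle layers blend most heavily.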

Improvements over Monstral v1: Drummer's 1.2 tune of Behemoth is a marked improvement over the original, and the addition of Tess to the mix really makes the creativity pop. I seem to have dialed out the rapey Magnum influence without stripping the model of the ability to get mean and/or dirty when the situation actually calls for it. The RP output of this model shows a lot more flowery and "literary" description of scenes and activities. It's more colorful and vibrant. Repetition is dramatically reduced, as is slop (though to a lesser extent). The annoying tendency to double-describe things with "it was X, almost Y" is virtually gone. Do you like a slow-burn story that builds over time? Well, good fucking news, because v2 excels at that.

The only complaint I've received is occasional user impersonation with certain cards. I've not seen this myself on any of my cards, so I have to assume it's down to the specific formatting of specific cards. I don't want to say it's a skill issue, but...

This model is uncensored and perfectly capable of generating objectionable material. I have not observed it injecting NSFW content into SFW scenarios, but no guarantees can be made. As with any LLM, no factual claims made by the model should be taken at face value. You know that boilerplate safety disclaimer that most professional models have? Assume this has it too. This model is for entertainment purposes only.

GGUFs: https://huggingface.co/MarsupialAI/Monstral-123B-v2_GGUF


# Prompt Format
Metharme seems to work flawlessly. In theory, Mistral V3 or possibly even ChatML should work to some extent, but meth was providing such high-quality output that I couldn't even be bothered to test the others. Just do meth, kids.

If you really want to kick it up a notch, use Konnect's Methception prompt. It's available as an all-in-one SillyTavern preset, and as an abridged plaintext prompt to use as a sysprompt or character card insertion. https://huggingface.co/Konnect1221/Methception-Llamaception-SillyTavern-Preset
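For reference, Metharme wraps turns in `<|system|>`, `<|user|>`, and `<|model|>` role tags, ending on `<|model|>` so generation continues as the character. A minimal sketch of assembling such a prompt (a hypothetical helper; verify the exact template against your frontend's Metharme preset):

```python
def metharme_prompt(system: str, history: list[tuple[str, str]], user_msg: str) -> str:
    """Build a Metharme-style prompt from a system prompt, prior (user, model)
    turn pairs, and the latest user message."""
    parts = [f"<|system|>{system}"]
    for user_turn, model_turn in history:
        parts.append(f"<|user|>{user_turn}")
        parts.append(f"<|model|>{model_turn}")
    parts.append(f"<|user|>{user_msg}")
    # trailing <|model|> tag cues the model to write the next reply
    parts.append("<|model|>")
    return "".join(parts)
```

Frontends such as SillyTavern emit these same tags when the Metharme/Pygmalion instruct preset is selected, so in practice you rarely assemble this by hand.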

# Braggadocio
As of 1/14/25, this model is #4 on the UGI leaderboard overall, and #2 for open-weight models (just behind a 405B finetune). Imagine how well it would score if I knew what I was doing.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/65a531bc7ec6af0f95c707b1/Y63OcwnPNrRO2JcBOrLvK.png)
config.json ADDED
@@ -0,0 +1,38 @@
{
  "_name_or_path": "I:\\raw\\behemoth12",
  "architectures": [
    "MistralForCausalLM"
  ],
  "attention_dropout": 0.0,
  "bos_token_id": 1,
  "eos_token_id": 2,
  "head_dim": 128,
  "hidden_act": "silu",
  "hidden_size": 12288,
  "initializer_range": 0.02,
  "intermediate_size": 28672,
  "max_position_embeddings": 131072,
  "model_type": "mistral",
  "num_attention_heads": 96,
  "num_hidden_layers": 88,
  "num_key_value_heads": 8,
  "rms_norm_eps": 1e-05,
  "rope_theta": 1000000.0,
  "sliding_window": null,
  "tie_word_embeddings": false,
  "torch_dtype": "float16",
  "transformers_version": "4.46.1",
  "use_cache": true,
  "vocab_size": 32768,
  "quantization_config": {
    "quant_method": "exl3",
    "version": "0.0.2",
    "bits": 1.4,
    "head_bits": 4,
    "calibration": {
      "rows": 100,
      "cols": 2048
    },
    "out_scales": "auto"
  }
}
mergekit_config.yml ADDED
@@ -0,0 +1,6 @@
models:
  - model: I:\raw\monstral2m
  - model: I:\raw\monstral2t
merge_method: model_stock
base_model: I:\raw\behemoth12
dtype: float16
model-00001-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:f2362e1a50dd11d69c106058f3df9cea22013c8a528db9434ababc29b6ae9720
size 8561310976
model-00002-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:56adf0c05c088e8da233e6ae4dfb7ddbb05a24d629353271916506e05cf4d237
size 8499741600
model-00003-of-00003.safetensors ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:d062920c1187c50440287a114816e65d7b7b3573bfef2478271f8dcf2271a750
size 5284928512
model.safetensors.index.json ADDED
The diff for this file is too large to render. See raw diff
 
quantization_config.json ADDED
The diff for this file is too large to render. See raw diff
 
recipe.txt ADDED
@@ -0,0 +1,26 @@
models:
  - model: behemoth12
  - model: tess123
merge_method: slerp
base_model: behemoth12
parameters:
  t: [0.1, 0.3, 0.5, 0.3, 0.1]
dtype: float16
name: btess
---
models:
  - model: behemoth12
  - model: magnum123b_v4
merge_method: slerp
base_model: behemoth12
parameters:
  t: [0.1, 0.3, 0.5, 0.3, 0.1]
dtype: float16
name: bmag
---
models:
  - model: btess
  - model: bmag
merge_method: model_stock
base_model: behemoth12
dtype: float16
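The final model_stock step of the recipe averages the two slerp outputs and pulls the result back toward the base, with the interpolation ratio set by how well the task vectors (finetune minus base) agree, per the Model Stock paper's `t = k·cosθ / (1 + (k−1)·cosθ)` formula. A simplified per-tensor sketch under those assumptions, not mergekit's actual code:

```python
import numpy as np

def model_stock(base: np.ndarray, finetunes: list[np.ndarray]) -> np.ndarray:
    """Simplified Model Stock merge for one weight tensor: interpolate between
    the base and the average of the fine-tuned tensors, weighting by the mean
    pairwise cosine similarity of the task vectors."""
    k = len(finetunes)
    deltas = [f - base for f in finetunes]
    cos_vals = []
    for i in range(k):
        for j in range(i + 1, k):
            a, b = deltas[i].ravel(), deltas[j].ravel()
            cos_vals.append(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))
    cos_t = float(np.mean(cos_vals)) if cos_vals else 1.0
    # high agreement -> t near 1 (trust the average); disagreement -> fall back to base
    t = k * cos_t / (1 + (k - 1) * cos_t)
    return t * np.mean(finetunes, axis=0) + (1 - t) * base
```

With k=2 slerps that mostly agree, t approaches 1 and the merge is close to their plain average; where btess and bmag disagree, the result is pulled back toward behemoth12.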
special_tokens_map.json ADDED
@@ -0,0 +1,23 @@
{
  "bos_token": {
    "content": "<s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "eos_token": {
    "content": "</s>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  },
  "unk_token": {
    "content": "<unk>",
    "lstrip": false,
    "normalized": false,
    "rstrip": false,
    "single_word": false
  }
}
tokenizer.json ADDED
The diff for this file is too large to render. See raw diff
 
tokenizer.model ADDED
@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:59f95e28944c062244741268596badc900df86c7f5ded05088d2da22a7379e06
size 587583
tokenizer_config.json ADDED
The diff for this file is too large to render. See raw diff