---
language:
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
- instruct
- conversational
- roleplay
license: cc-by-4.0
---

<h1 style="text-align: center">Erosumika-7B-v3</h1>

<div style="display: flex; justify-content: center;">
<img src="https://cdn-uploads.huggingface.co/production/uploads/6512681f4151fb1fa719e033/ZX5NLfB2CctdwuctS9W8A.gif" alt="Header GIF">
</div>

7.1bpw exl2 quant; great for 16k context on 8GB GPUs!

## Model Details
A DARE TIES merge between Nitral's [Kunocchini-7b](https://huggingface.co/Nitral-AI/Kunocchini-7b), Endevor's [InfinityRP-v1-7B](https://huggingface.co/Endevor/InfinityRP-v1-7B) and my [FlatErosAlpha](https://huggingface.co/localfultonextractor/FlatErosAlpha), a flattened (in order to keep the vocab size at 32000) version of tavtav's [eros-7B-ALPHA](https://huggingface.co/tavtav/eros-7B-ALPHA). The Alpaca and ChatML prompt formats work best.
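For illustration, the two recommended prompt formats can be sketched as plain template strings. This is only a sketch: the Alpaca preamble and the system prompt below are common defaults, not something this model card specifies.

```python
# Sketch of the two prompt formats recommended above (Alpaca and ChatML).
# The preamble/system text is an assumption, not part of this model card.

def alpaca_prompt(instruction: str) -> str:
    # Classic single-turn Alpaca layout.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n### Response:\n"
    )

def chatml_prompt(system: str, user: str) -> str:
    # ChatML wraps each turn in <|im_start|>role ... <|im_end|> markers,
    # then opens an assistant turn for the model to complete.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )
```

Either string can be passed as-is to a completion endpoint; for ChatML, stopping on `<|im_end|>` keeps the model from writing the next turn.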

[GGUF quants](https://huggingface.co/localfultonextractor/Erosumika-7B-v3-GGUF)

## Limitations and biases
The intended use case for this model is fictional writing for entertainment purposes; any other use is out of scope. The model may produce socially unacceptable or undesirable text even when the prompt contains nothing explicitly offensive, and its outputs can be factually wrong or misleading.

```yaml
base_model: localfultonextractor/FlatErosAlpha
models:
  - model: localfultonextractor/FlatErosAlpha
  - model: Epiculous/InfinityRP-v1-7B
    parameters:
      density: 0.4
      weight: 0.25
  - model: Nitral-AI/Kunocchini-7b
    parameters:
      density: 0.3
      weight: 0.35
merge_method: dare_ties
dtype: bfloat16
```
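To make the `dare_ties` recipe above less opaque, here is a toy numeric sketch of what DARE + TIES does per parameter: each model's delta from the base is randomly sparsified and rescaled by its `density`, scaled by its `weight`, and then only contributions agreeing with the majority sign are summed. This uses plain lists and a simplified sign election; it is not mergekit's actual implementation.

```python
import random

def dare_sparsify(delta, density, rng):
    # DARE: drop each delta entry with probability (1 - density),
    # rescale the survivors by 1/density to keep expectation unchanged.
    return [d / density if rng.random() < density else 0.0 for d in delta]

def dare_ties_merge(base, deltas_with_params, rng):
    # deltas_with_params: list of (delta_vector, density, weight) per model.
    sparsified = [
        [w * x for x in dare_sparsify(d, dens, rng)]
        for d, dens, w in deltas_with_params
    ]
    merged = []
    for i in range(len(base)):
        vals = [s[i] for s in sparsified]
        # Simplified TIES sign election: keep only contributions that
        # agree with the sign of the summed contributions.
        sign = 1.0 if sum(vals) >= 0 else -1.0
        kept = [v for v in vals if v * sign > 0]
        merged.append(base[i] + sum(kept))
    return merged
```

With `density: 1.0` nothing is dropped, so the toy merge reduces to a weighted sum of deltas; lower densities (like the 0.4 and 0.3 above) keep only a random fraction of each model's changes.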
Note: the tokenizer was copied from InfinityRP-v1-7B.