OwenArli committed (verified) · Commit 839eafe · 1 Parent(s): 802aab9

Update README.md

Files changed (1): README.md +54 -3
README.md CHANGED
@@ -1,3 +1,54 @@
---
license: apache-2.0
---
# InternLM2_5-20B-ArliAI-RPMax-v1.1

## RPMax Series Overview

| [2B](https://huggingface.co/ArliAI/Gemma-2-2B-ArliAI-RPMax-v1.1) |
[3.8B](https://huggingface.co/ArliAI/Phi-3.5-mini-3.8B-ArliAI-RPMax-v1.1) |
[8B](https://huggingface.co/ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.1) |
[9B](https://huggingface.co/ArliAI/Gemma-2-9B-ArliAI-RPMax-v1.1) |
[12B](https://huggingface.co/ArliAI/Mistral-Nemo-12B-ArliAI-RPMax-v1.1) |
[20B](https://huggingface.co/ArliAI/InternLM2_5-20B-ArliAI-RPMax-v1.1) |
[70B](https://huggingface.co/ArliAI/Llama-3.1-70B-ArliAI-RPMax-v1.1) |

RPMax is a series of models trained on a diverse set of curated creative-writing and RP datasets, with a focus on variety and deduplication. The models are designed to be highly creative and non-repetitive: no two entries in the dataset share repeated characters or situations, which keeps a model from latching on to a single personality and makes it capable of understanding and responding appropriately to any character or situation.

Early user tests report that these models do not feel like other RP models, having a distinct style and generally not feeling inbred.

You can access the models at https://arliai.com and ask questions at https://www.reddit.com/r/ArliAI/

We also have a model ranking page at https://www.arliai.com/models-ranking

Ask questions in our new Discord server! https://discord.gg/aDVx6FZN

## Model Description

InternLM2_5-20B-ArliAI-RPMax-v1.1 is a variant based on internlm2_5-20b-chat.

Unfortunately, InternLM uses so much VRAM during training that I could only train it at a 2048-token context length.

### Training Details

* **Sequence Length**: 2048
* **Training Duration**: Approximately 4 days on 2x RTX 3090 Ti
* **Epochs**: 1 epoch, to minimize repetition sickness
* **QLoRA**: rank 64, alpha 128, resulting in ~2% trainable weights
* **Learning Rate**: 0.00001
* **Gradient Accumulation**: A very low 32, for better learning

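To see why a rank-64 adapter ends up with only a few percent of trainable weights, here is a minimal back-of-the-envelope sketch. The matrix dimensions are illustrative assumptions, not the exact InternLM2.5-20B architecture; the arithmetic is the standard LoRA parameter count.

```python
# Sketch: why a rank-64 LoRA adapter trains only a small fraction of weights.
# The matrix shapes below are illustrative assumptions, not the exact
# InternLM2.5-20B architecture.

def lora_trainable_fraction(weight_shapes, rank):
    """Fraction of trainable LoRA parameters relative to the frozen
    weights they target.

    For a frozen matrix W of shape (d_out, d_in), LoRA trains two
    low-rank factors A (rank x d_in) and B (d_out x rank), i.e.
    rank * (d_in + d_out) parameters instead of d_out * d_in.
    """
    base = sum(d_out * d_in for d_out, d_in in weight_shapes)
    lora = sum(rank * (d_out + d_in) for d_out, d_in in weight_shapes)
    return lora / base

# Toy example: a single 4096x4096 attention projection.
frac = lora_trainable_fraction([(4096, 4096)], rank=64)
print(f"rank-64 adapter on a 4096x4096 matrix: {frac:.1%} trainable")
```

Across a full model (including large embedding and MLP matrices that dilute the adapter further), this lands in the low single digits, consistent with the ~2% figure above.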
## Suggested Prompt Format

ChatML Instruct Format

```
<|im_start|>system
Provide some context and/or instructions to the model.
<|im_end|>
<|im_start|>user
The user's message goes here
<|im_end|>
<|im_start|>assistant
```
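The template above can be assembled programmatically. This is a minimal stdlib sketch of the wire format; in practice you would typically let the `transformers` tokenizer's chat template do this for you, and `to_chatml` here is just a hypothetical helper for illustration.

```python
# Minimal ChatML formatter matching the template above.
# This is an illustrative sketch; a tokenizer's built-in chat template
# would normally produce this string for you.

def to_chatml(messages, add_generation_prompt=True):
    """messages: list of {"role": ..., "content": ...} dicts."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}\n<|im_end|>"
             for m in messages]
    if add_generation_prompt:
        # The model continues generating from this open assistant turn.
        parts.append("<|im_start|>assistant")
    return "\n".join(parts)

prompt = to_chatml([
    {"role": "system",
     "content": "Provide some context and/or instructions to the model."},
    {"role": "user", "content": "The user's message goes here"},
])
print(prompt)
```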