huihui-ai committed · 45fc849 · verified · 1 Parent(s): 0b9d54d

Update README.md

Files changed (1): README.md (+1 -0)
README.md CHANGED
@@ -16,6 +16,7 @@ language:
 
 ## Overview
 `Llama-3.1-8B-Fusion-9010` is a mixed model that combines the strengths of two powerful Llama-based models: [arcee-ai/Llama-3.1-SuperNova-Lite](https://huggingface.co/arcee-ai/Llama-3.1-SuperNova-Lite) and [mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated](https://huggingface.co/mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated). The weights are blended in a 9:1 ratio, with 90% of the weights from SuperNova-Lite and 10% from the abliterated Meta-Llama-3.1-8B-Instruct model.
 
+**Although it's a simple mix, the model is usable, and no gibberish has appeared.**
 This is an experiment. Later, I will test the 8:2, 7:3, 6:4, and 5:5 ratios separately to see how much impact they have on the model.
 
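For readers curious how such a blend works, here is a minimal sketch of the 9:1 linear interpolation described above. The `blend_state_dicts` helper and the toy values are assumptions for illustration, not the author's actual merge script; in practice the two input dicts would come from `model.state_dict()` on the two source models.

```python
def blend_state_dicts(sd_a, sd_b, alpha=0.9):
    """Return alpha * sd_a + (1 - alpha) * sd_b, key by key.

    Works on any mapping of parameter name -> tensor (or scalar).
    alpha=0.9 gives the 90/10 split used by Llama-3.1-8B-Fusion-9010.
    """
    if sd_a.keys() != sd_b.keys():
        raise ValueError("models must share the same parameter names")
    return {name: alpha * sd_a[name] + (1.0 - alpha) * sd_b[name]
            for name in sd_a}

# Toy scalars standing in for real weight tensors:
merged = blend_state_dicts({"w": 1.0}, {"w": 0.0}, alpha=0.9)
# merged["w"] is 0.9: 90% from the first model, 10% from the second
```

The same element-wise interpolation applies unchanged to full tensors, which is why the ratio experiments (8:2, 7:3, ...) only require changing `alpha`.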
  ## Model Details