---
base_model:
- upstage/SOLAR-10.7B-Instruct-v1.0
- NousResearch/Nous-Hermes-2-SOLAR-10.7B
tags:
- mergekit
- merge
- solar
- gguf
license: apache-2.0
---
# vicgalle/franken-SOLAR-18B-v1.0-GGUF

This is a SOLAR-like model upscaled to 18B parameters.
It is a frankenmerge created with mergekit by interleaving layer slices of Nous-Hermes-2-SOLAR-10.7B and SOLAR-10.7B-Instruct.

This repo contains the quantized GGUF versions of https://huggingface.co/vicgalle/franken-SOLAR-18B-v1.0
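
To fetch one of the quantized files programmatically, a minimal sketch with `huggingface_hub` (the filename below is a hypothetical example; check this repo's file list for the actual quant names):

```python
from huggingface_hub import hf_hub_download

# NOTE: the filename is a placeholder; pick a real one from this repo's "Files" tab
gguf_path = hf_hub_download(
    repo_id="vicgalle/franken-SOLAR-18B-v1.0-GGUF",
    filename="franken-SOLAR-18B-v1.0.Q4_K_M.gguf",
)
print(gguf_path)
```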

![image/png](https://cdn-uploads.huggingface.co/production/uploads/5fad8602b8423e1d80b8a965/mMyHMuuftG71_o4at5suy.png)

Evaluations coming soon! 

Compared to SOLAR-10.7B, this model has very good writing capabilities, especially for role-playing.

## Merge Details
### Merge Method

This model was merged using the passthrough merge method, which simply stacks the selected layer slices end to end. The seven slices in the configuration below total 81 transformer layers, versus 48 in each 10.7B base model, which roughly accounts for the ~18B parameter count (10.7B × 81/48 ≈ 18B).

### Models Merged

The following models were included in the merge:
* [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0)
* [NousResearch/Nous-Hermes-2-SOLAR-10.7B](https://huggingface.co/NousResearch/Nous-Hermes-2-SOLAR-10.7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [0, 12]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [6, 18]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [13, 25]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [19, 31]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [26, 38]
  - sources:
    - model: upstage/SOLAR-10.7B-Instruct-v1.0
      layer_range: [32, 44]
  - sources:
    - model: NousResearch/Nous-Hermes-2-SOLAR-10.7B
      layer_range: [39, 48]
    
merge_method: passthrough
dtype: float16

```
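
To reproduce the merge, the config above can be saved to a file and run with mergekit's `mergekit-yaml` entry point (assuming mergekit is installed; the output directory name is illustrative): `mergekit-yaml config.yaml ./franken-SOLAR-18B-v1.0`.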


### Usage

You can use the provided chat template, for example:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("vicgalle/franken-SOLAR-18B-v1.0")
# 4-bit loading requires the bitsandbytes package; drop load_in_4bit to load in plain fp16
model = AutoModelForCausalLM.from_pretrained(
    "vicgalle/franken-SOLAR-18B-v1.0", torch_dtype=torch.float16, load_in_4bit=True
)

# SYSTEM_PROMPT and USER_PROMPT are your own prompt strings
conversation = [{'role': 'system', 'content': SYSTEM_PROMPT}, {'role': 'user', 'content': USER_PROMPT}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, use_cache=True, max_new_tokens=1024, do_sample=True, temperature=0.8)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
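
Since this repo hosts the GGUF quantizations, here is a minimal local-inference sketch using `llama-cpp-python` (an assumption, not part of the original card; `n_ctx`, `n_gpu_layers`, and the prompts are illustrative values to adapt to your setup):

```python
from llama_cpp import Llama

# gguf_path: local path to one of the quantized files from this repo
# (e.g. as downloaded with hf_hub_download above)
llm = Llama(model_path=gguf_path, n_ctx=4096, n_gpu_layers=-1)  # -1 offloads all layers to GPU

out = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful writing assistant."},  # example prompts
        {"role": "user", "content": "Write the opening scene of a fantasy role-play."},
    ],
    max_tokens=512,
    temperature=0.8,
)
print(out["choices"][0]["message"]["content"])
```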