---
base_model:
- ce-lery/dolly-japanese-gpt-1b-clone
- rinna/japanese-gpt-1b
tags:
- merge
- mergekit
- lazymergekit
- ce-lery/dolly-japanese-gpt-1b-clone
- rinna/japanese-gpt-1b
---

# jp-gpt-ab-dareties

jp-gpt-ab-dareties is a merge of the following models using [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [ce-lery/dolly-japanese-gpt-1b-clone](https://huggingface.co/ce-lery/dolly-japanese-gpt-1b-clone)
* [rinna/japanese-gpt-1b](https://huggingface.co/rinna/japanese-gpt-1b)

## 🧩 Configuration

```yaml
slices:
  - sources:
      - layer_range: [0, 24]
        model: ce-lery/dolly-japanese-gpt-1b-clone
        parameters:
          density: [1, 0.7, 0.1]
          weight: 1.0
      - layer_range: [0, 24]
        model: rinna/japanese-gpt-1b
        parameters:
          density: 0.33
          weight:
            - filter: mlp
              value: 0.5
            - value: 0
merge_method: dare_ties
base_model: aipib/Tinyllama-moe3
parameters:
  normalize: true
  int8_mask: true
dtype: bfloat16
```
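
The merge can be reproduced by saving the configuration above as `config.yaml` and running mergekit on it (`pip install mergekit`, then `mergekit-yaml config.yaml ./jp-gpt-ab-dareties`). Below is a minimal sketch of the same run through mergekit's Python API; the output path and option values are illustrative, not part of the original merge:

```python
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the DARE-TIES configuration shown above (saved as config.yaml).
with open("config.yaml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Execute the merge and write the merged weights to ./jp-gpt-ab-dareties.
run_merge(
    merge_config,
    out_path="./jp-gpt-ab-dareties",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU when one is available
        copy_tokenizer=True,             # copy the base model's tokenizer into the output
    ),
)
```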

## 💻 Usage

```python
# In a notebook, install the dependencies first: !pip install -qU transformers accelerate
import torch
import transformers
from transformers import AutoTokenizer

model = "aipib/jp-gpt-ab-dareties"
messages = [{"role": "user", "content": "What is a large language model?"}]

# NOTE: the source models ship a plain SentencePiece tokenizer; if no chat
# template is defined, apply_chat_template will raise, and the prompt text
# can be passed to the pipeline directly instead.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
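
Because both parent models are Japanese GPT base models rather than chat-tuned models, plain Japanese text completion may work better than a chat-formatted prompt. A minimal sketch; the prompt here is illustrative, and `use_fast=False` follows the upstream rinna/japanese-gpt-1b model card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "aipib/jp-gpt-ab-dareties"

# SentencePiece tokenizer, loaded in slow mode as the rinna model cards recommend.
tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Illustrative prompt: "A large language model is ..." in Japanese.
prompt = "大規模言語モデルとは、"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        top_p=0.95,
    )

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```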