minlik committed · Commit 9bf7178 · 1 Parent(s): c847a2e

Create README.md

Files changed (1): README.md (+62, -0)
 
---
title: chinese-alpaca-33b-merged
emoji: 📚
colorFrom: gray
colorTo: red
sdk: gradio
sdk_version: 3.23.0
app_file: app.py
pinned: false
---

The Chinese Alpaca-33B model, obtained by extending the vocabulary with Chinese tokens, continuing pre-training of the Chinese embeddings, and then fine-tuning on instruction datasets on top of that.

base-model: elinas/llama-30b-hf-transformers-4.29
lora-model: ziqingyang/chinese-alpaca-lora-33b

For details, see: https://github.com/ymcui/Chinese-LLaMA-Alpaca/releases/tag/v4.0

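For reference only, a merge along these lines can in principle be reproduced with the `peft` library by applying the LoRA adapter to the base model. The sketch below is an assumption rather than the exact procedure used for this repo (the official merge script linked above is authoritative), and it assumes the adapter repo also hosts the extended Chinese tokenizer:

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

# Tokenizer with the extended Chinese vocabulary (assumed to live in the LoRA repo).
tokenizer = LlamaTokenizer.from_pretrained('ziqingyang/chinese-alpaca-lora-33b')

# Load the LLaMA-30B base weights and grow the embeddings to the extended
# vocabulary before applying the adapter.
base = LlamaForCausalLM.from_pretrained(
    'elinas/llama-30b-hf-transformers-4.29', torch_dtype=torch.float16)
base.resize_token_embeddings(len(tokenizer))

# Apply the Chinese Alpaca LoRA and fold its weights into the base model.
model = PeftModel.from_pretrained(base, 'ziqingyang/chinese-alpaca-lora-33b')
model = model.merge_and_unload()

model.save_pretrained('./chinese-alpaca-33b-merged')
tokenizer.save_pretrained('./chinese-alpaca-33b-merged')
```
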
### Usage
1. Install the required packages
```bash
pip install sentencepiece
pip install "transformers>=4.28.0"
```

2. Generate text
```python
import torch
from transformers import LlamaTokenizer, LlamaForCausalLM


def generate_prompt(text):
    return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{text}

### Response:"""


tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged')
model = LlamaForCausalLM.from_pretrained('minlik/chinese-alpaca-33b-merged').half().to('cuda')
model.eval()

text = '第一个登上月球的人是谁?'  # "Who was the first person to land on the Moon?"
prompt = generate_prompt(text)
input_ids = tokenizer.encode(prompt, return_tensors='pt').to('cuda')

with torch.no_grad():
    output_ids = model.generate(
        input_ids=input_ids,
        max_new_tokens=128,
        do_sample=True,  # enable sampling so temperature/top_k/top_p take effect
        temperature=1,
        top_k=40,
        top_p=0.9,
        repetition_penalty=1.15
    )

# Strip the prompt from the decoded output so only the generated response is printed.
output = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(output.replace(prompt, '').strip())
```
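
A 33B model in float16 needs roughly 65 GB of GPU memory, so the single-GPU `.half().to('cuda')` call above assumes a large accelerator. If that is not available, the following is a minimal sketch of lower-memory loading with 8-bit quantization and automatic device mapping; it is not part of the original instructions and assumes `accelerate` and `bitsandbytes` are installed:

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

# Lower-memory loading sketch (assumption, not the original recipe):
# 8-bit weights, sharded across the available GPUs/CPU by accelerate.
# Requires: pip install accelerate bitsandbytes
tokenizer = LlamaTokenizer.from_pretrained('minlik/chinese-alpaca-33b-merged')
model = LlamaForCausalLM.from_pretrained(
    'minlik/chinese-alpaca-33b-merged',
    load_in_8bit=True,
    device_map='auto',
)
model.eval()
```

Generation then proceeds as in the example above, with `input_ids` moved to `model.device` instead of `'cuda'`.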