Defetya committed on
Commit 5e8a851 · verified · 1 Parent(s): 6319464

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -8,7 +8,7 @@ Qwen 4B chat by Alibaba, SFTuned on Saiga dataset. Finetuned with EasyDeL framew
 
 To use the model, you need to set the eos token to <|im_end|>. Full code:
 
-import torch
+`import torch
 from transformers import AutoTokenizer, AutoModelForCausalLM
 model = AutoModelForCausalLM.from_pretrained('Defetya/qwen-4B-saiga', torch_dtype=torch.bfloat16, device_map='auto')
 tokenizer = AutoTokenizer.from_pretrained('Defetya/qwen-4B-saiga')
@@ -23,7 +23,7 @@ while True:
 generated_ids = model.generate(messages, max_new_tokens=512, do_sample=True, temperature=0.7, pad_token_id=tokenizer.eos_token_id, eos_token_id=tokenizer.eos_token_id)
 decoded = tokenizer.decode(generated_ids[0][len(messages[0]):])
 print(decoded)
-print("==============================")
+print("==============================")`
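The README snippet in this diff is fragmentary (the `while` loop body and the construction of `messages` are elided), but its key point is that Qwen chat models use the ChatML prompt format and that generation must stop at the `<|im_end|>` token — hence the instruction to set it as the eos token. A minimal, dependency-free sketch of that prompt format (the `build_chatml_prompt` helper is hypothetical, not part of the repo):

```python
def build_chatml_prompt(messages):
    # Assemble a ChatML-style prompt as used by Qwen chat models:
    # each turn is wrapped in <|im_start|>role ... <|im_end|> markers.
    # Generation should stop when the model emits <|im_end|>, which is
    # why the README sets it as the eos token for model.generate().
    parts = []
    for msg in messages:
        parts.append(f"<|im_start|>{msg['role']}\n{msg['content']}<|im_end|>\n")
    # Open the assistant turn; the model completes it up to <|im_end|>.
    parts.append("<|im_start|>assistant\n")
    return "".join(parts)

prompt = build_chatml_prompt([
    {"role": "user", "content": "Hello!"},
])
print(prompt)
```

With a real tokenizer, the same stop condition is expressed by passing `eos_token_id=tokenizer.eos_token_id` to `model.generate()`, as the diffed snippet does.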