KLGR123 committed
Commit 56f9db9 · verified · 1 parent: 78fec51

Update README.md

Files changed (1):
README.md (+2, -2)
README.md CHANGED
@@ -6,8 +6,8 @@ Below is the reference code for inference. First load the tokenizer and the mode
 
 ```
 from transformers import AutoTokenizer, AutoModelForCausalLM
-tokenizer = AutoTokenizer.from_pretrained("KLGR123/WEPO-mistral-7b", trust_remote_code=True)
-model = AutoModelForCausalLM.from_pretrained("KLGR123/WEPO-mistral-7b", trust_remote_code=True).to('cuda:0')
+tokenizer = AutoTokenizer.from_pretrained("KLGR123/WEPO-gemma-2b", trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained("KLGR123/WEPO-gemma-2b", trust_remote_code=True).to('cuda:0')
 ```
 
 Run a test-demo with random input.
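
The "test-demo with random input" step sits outside this hunk, so the demo code itself is not shown in the diff. Below is a minimal sketch of what such a test run could look like with the tokenizer and model loaded above; the prompt string and generation settings are illustrative assumptions, not the repository's actual demo code.

```
# Hypothetical test-demo sketch: prompt text and generation settings are
# placeholders, not taken from the README.
import torch

prompt = "Hello, world!"  # any short test input works here
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")

with torch.no_grad():
    output_ids = model.generate(**inputs, max_new_tokens=64)

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(output_ids[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```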