Update README.md
Below is the reference code for inference. First load the tokenizer and the model.

```
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("KLGR123/WEPO-gemma-2b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("KLGR123/WEPO-gemma-2b", trust_remote_code=True).to('cuda:0')
```

Run a test-demo with random input.
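A minimal sketch of such a test demo, assuming random token IDs as the input (the sequence length, `max_new_tokens`, and decoding settings below are illustrative choices, not from the original README):

```
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load the tokenizer and model as shown above.
tokenizer = AutoTokenizer.from_pretrained("KLGR123/WEPO-gemma-2b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("KLGR123/WEPO-gemma-2b", trust_remote_code=True).to('cuda:0')

# Build a random input: sample 16 token IDs from the tokenizer's vocabulary.
input_ids = torch.randint(0, tokenizer.vocab_size, (1, 16)).to('cuda:0')

# Generate a short continuation and decode it to text.
with torch.no_grad():
    output = model.generate(input_ids, max_new_tokens=32)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```

Since the input is random, the decoded output is not expected to be meaningful; the demo only checks that the model loads and generates end to end.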