zRzRzRzRzRzRzR committed on
Commit · b822ba4
1 Parent(s): 3a397f1
9b readme
README.md CHANGED
@@ -7,7 +7,11 @@ pipeline_tag: text-generation
 library_name: transformers
 ---
 
-# GLM-4-
+# GLM-4-9B-Chat-0414
+
+## Introduction
+
+Based on our latest technological advancements, we have trained a `GLM-4-0414` series model. During pretraining, we incorporated more code-related and reasoning-related data. In the alignment phase, we optimized the model specifically for agent capabilities. As a result, the model's performance in agent tasks such as tool use, web search, and coding has been significantly improved.
 
 ## Installation
 
@@ -22,7 +26,7 @@ pip install git+https://github.com/huggingface/transformers.git
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-MODEL_PATH = "THUDM/GLM-4-
+MODEL_PATH = "THUDM/GLM-4-9B-Chat-0414"
 
 tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
 model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")
@@ -44,9 +48,4 @@ generate_kwargs = {
 }
 out = model.generate(**generate_kwargs)
 print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
-
 ```
-
-## License
-
-The usage of this model’s weights is subject to the terms outlined in the [LICENSE](LICENSE).
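
The hunks above only show the edges of the README's usage snippet, so the middle of it (how `inputs` and `generate_kwargs` are built) is not visible in this diff. As a rough sketch only, the flow typically looks like the following with the `transformers` chat-template API; the user message and the generation settings (`max_new_tokens`, `temperature`, and so on) are illustrative assumptions, not the values from the repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_PATH = "THUDM/GLM-4-9B-Chat-0414"

tokenizer = AutoTokenizer.from_pretrained(MODEL_PATH)
model = AutoModelForCausalLM.from_pretrained(MODEL_PATH, device_map="auto")

# Build chat-formatted inputs; the user message here is only an illustrative placeholder.
messages = [{"role": "user", "content": "Give me a short introduction to large language models."}]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_tensors="pt",
    return_dict=True,
).to(model.device)

# Generation settings are placeholders; the README's actual generate_kwargs are not shown in the diff.
generate_kwargs = {
    **inputs,
    "max_new_tokens": 512,
    "do_sample": True,
    "temperature": 0.8,
}
out = model.generate(**generate_kwargs)

# Decode only the newly generated tokens, i.e. everything after the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```

Slicing the output at `inputs["input_ids"].shape[1]` mirrors the README's final line: it drops the echoed prompt tokens so only the newly generated completion is printed.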