Update README.md #1
by urroxyz - opened
README.md CHANGED
````diff
@@ -27,12 +27,12 @@ pip install qwen_vl_utils
 ```
 Then you could use our model:
 ```python
-from transformers import
+from transformers import Qwen2VLForConditionalGeneration, AutoTokenizer, AutoProcessor
 from qwen_vl_utils import process_vision_info
 
 model_path = "OpenGVLab/VideoChat-R1_7B_caption"
 # default: Load the model on the available device(s)
-model =
+model = Qwen2VLForConditionalGeneration.from_pretrained(
     model_path, torch_dtype="auto", device_map="auto",
     attn_implementation="flash_attention_2"
 )
````
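For context, the patched snippet can be exercised end to end with the standard Qwen2-VL inference pattern sketched below. This is a minimal sketch, not part of the PR: the video path, prompt text, and `max_new_tokens` value are placeholder assumptions.

```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

model_path = "OpenGVLab/VideoChat-R1_7B_caption"

# Load the model and its processor (as in the patched README snippet).
model = Qwen2VLForConditionalGeneration.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto",
    attn_implementation="flash_attention_2"
)
processor = AutoProcessor.from_pretrained(model_path)

# Placeholder video and prompt -- substitute your own inputs.
messages = [{
    "role": "user",
    "content": [
        {"type": "video", "video": "file:///path/to/video.mp4"},
        {"type": "text", "text": "Describe this video in detail."},
    ],
}]

# Build the chat-formatted prompt and extract the vision inputs.
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text], images=image_inputs, videos=video_inputs,
    padding=True, return_tensors="pt",
).to(model.device)

# Generate, then decode only the newly produced tokens.
generated_ids = model.generate(**inputs, max_new_tokens=256)
trimmed = [out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)]
print(processor.batch_decode(trimmed, skip_special_tokens=True)[0])
```

Without this PR's completion of the `from transformers import` line and the `model =` assignment, the README snippet does not run; the sketch above only works once the patched lines are in place.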