Commit 54521e8
Parent: 046a989

Add processor chat template (#7)

- Upload processor (0dbc5f3c38ee0f0909e1b208376615dc36f87389)
- Update README.md (a23b0b5dcca5a68ba421f24c9b2388c95e401b2d)
- Update README.md (ff8295e63aaaeb52212a5b072d5c11729c40896e)
- Update README.md (7f5b217b4a111b6bedb69d3e54fa8d89b5d3cbc7)
- Update README.md (460ee72562823640acf40dc3d849a9d15622e5bf)

Files changed:
- README.md              +61 -3
- chat_template.json      +3 -0
- special_tokens_map.json +21 -3
- tokenizer_config.json    +4 -1
README.md CHANGED

@@ -23,9 +23,10 @@ PROMPT = "<s>[INST]Describe the images.\n[IMG][IMG][IMG][IMG][/INST]"
 
 inputs = processor(text=PROMPT, images=IMG_URLS, return_tensors="pt").to("cuda")
 generate_ids = model.generate(**inputs, max_new_tokens=500)
-
+output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
 ```
-
+
+You should get an output similar to the below:
 ```
 
 """
@@ -50,4 +51,61 @@ Sure, let's break down each image description:
 
 Each image captures a different scene, from a close-up of a dog to expansive natural landscapes, showcasing various elements of nature and human interaction with it.
 """
-```
+```
+
+You can also use a chat template to format your chat history for Pixtral. Make sure that the `images` argument to the `processor` contains the images in the order
+that they appear in the chat, so that the model understands where each image is supposed to go.
+
+Here's an example with text and multiple images interleaved in the same message:
+
+```python
+from PIL import Image
+from transformers import AutoProcessor, LlavaForConditionalGeneration
+model_id = "mistral-community/pixtral-12b"
+model = LlavaForConditionalGeneration.from_pretrained(model_id)
+processor = AutoProcessor.from_pretrained(model_id)
+
+url_dog = "https://picsum.photos/id/237/200/300"
+url_mountain = "https://picsum.photos/seed/picsum/200/300"
+
+chat = [
+    {
+      "role": "user", "content": [
+        {"type": "text", "content": "Can this animal"},
+        {"type": "image"},
+        {"type": "text", "content": "live here?"},
+        {"type": "image"}
+      ]
+    }
+]
+
+prompt = processor.apply_chat_template(chat)
+inputs = processor(text=prompt, images=[url_dog, url_mountain], return_tensors="pt").to(model.device)
+generate_ids = model.generate(**inputs, max_new_tokens=500)
+output = processor.batch_decode(generate_ids, skip_special_tokens=True, clean_up_tokenization_spaces=False)[0]
+```
+
+You should get something like this:
+
+```
+Can this animallive here?Certainly! Here are some details about the images you provided:
+
+### First Image
+- **Description**: The image shows a black dog lying on a wooden surface. The dog has a curious expression with its head tilted slightly to one side.
+- **Details**: The dog appears to be a young puppy with soft, shiny fur. Its eyes are wide and alert, and it has a playful demeanor.
+- **Context**: This image could be used to illustrate a pet-friendly environment or to showcase the dog's personality.
+
+### Second Image
+- **Description**: The image depicts a serene landscape with a snow-covered hill in the foreground. The sky is painted with soft hues of pink, orange, and purple, indicating a sunrise or sunset.
+- **Details**: The hill is covered in a blanket of pristine white snow, and the horizon meets the sky in a gentle curve. The scene is calm and peaceful.
+- **Context**: This image could be used to represent tranquility, natural beauty, or a winter wonderland.
+
+### Combined Context
+If you're asking whether the dog can "live here," referring to the snowy landscape, it would depend on the breed and its tolerance to cold weather. Some breeds, like Huskies or Saint Bernards, are well-adapted to cold environments, while others might struggle. The dog in the first image appears to be a breed that might prefer warmer climates.
+
+Would you like more information on any specific aspect?
+```
+
+While it may appear that spacing in the input is disrupted, this is caused by us skipping special tokens for display, and actually "Can this animal" and "live here" are
+correctly separated by image tokens. Try decoding with special tokens included to see exactly what the model sees!
+
chat_template.json ADDED

@@ -0,0 +1,3 @@
+{
+  "chat_template": "{%- if messages[0][\"role\"] == \"system\" %}\n    {%- set system_message = messages[0][\"content\"] %}\n    {%- set loop_messages = messages[1:] %}\n{%- else %}\n    {%- set loop_messages = messages %}\n{%- endif %}\n\n{{- bos_token }}\n{%- for message in loop_messages %}\n    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}\n        {{- raise_exception('After the optional system message, conversation roles must alternate user/assistant/user/assistant/...') }}\n    {%- endif %}\n    {%- if message[\"role\"] == \"user\" %}\n        {%- if loop.last and system_message is defined %}\n            {{- \"[INST]\" + system_message + \"\n\n\" }}\n        {%- else %}\n            {{- \"[INST]\" }}\n        {%- endif %}\n        {%- if message[\"content\"] is not string %}\n            {%- for chunk in message[\"content\"] %}\n                {%- if chunk[\"type\"] == \"text\" %}\n                    {{- chunk[\"content\"] }}\n                {%- elif chunk[\"type\"] == \"image\" %}\n                    {{- \"[IMG]\" }}\n                {%- else %}\n                    {{- raise_exception(\"Unrecognized content type!\") }}\n                {%- endif %}\n            {%- endfor %}\n        {%- else %}\n            {{- message[\"content\"] }}\n        {%- endif %}\n        {{- \"[/INST]\" }}\n    {%- elif message[\"role\"] == \"assistant\" %}\n        {{- message[\"content\"] + eos_token}}\n    {%- else %}\n        {{- raise_exception(\"Only user and assistant roles are supported, with the exception of an initial optional system message!\") }}\n    {%- endif %}\n{%- endfor %}"
+}
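The template above can be sanity-checked without loading the processor by rendering it with plain Jinja2. Below is a sketch using an abridged copy of the template (the system-message branch is omitted, and `raise_exception` — normally injected by transformers — is replaced by a hand-rolled stand-in); it reproduces the prompt the README example produces:

```python
# Sketch: render an abridged copy of the Pixtral chat template with plain
# Jinja2 to see the prompt string it produces. The system-message handling
# from the full template is omitted here, and raise_exception (normally
# provided by transformers) is a hand-defined stand-in.
from jinja2 import Environment

TEMPLATE = """{{- bos_token }}
{%- for message in messages %}
    {%- if (message['role'] == 'user') != (loop.index0 % 2 == 0) %}
        {{- raise_exception('Conversation roles must alternate user/assistant!') }}
    {%- endif %}
    {%- if message["role"] == "user" %}
        {{- "[INST]" }}
        {%- if message["content"] is not string %}
            {%- for chunk in message["content"] %}
                {%- if chunk["type"] == "text" %}
                    {{- chunk["content"] }}
                {%- elif chunk["type"] == "image" %}
                    {{- "[IMG]" }}
                {%- endif %}
            {%- endfor %}
        {%- else %}
            {{- message["content"] }}
        {%- endif %}
        {{- "[/INST]" }}
    {%- elif message["role"] == "assistant" %}
        {{- message["content"] + eos_token }}
    {%- endif %}
{%- endfor %}"""

def raise_exception(msg):
    raise ValueError(msg)

env = Environment()
env.globals["raise_exception"] = raise_exception

# The same chat as in the README example above.
chat = [
    {
        "role": "user", "content": [
            {"type": "text", "content": "Can this animal"},
            {"type": "image"},
            {"type": "text", "content": "live here?"},
            {"type": "image"}
        ]
    }
]

prompt = env.from_string(TEMPLATE).render(
    messages=chat, bos_token="<s>", eos_token="</s>")
print(prompt)  # -> <s>[INST]Can this animal[IMG]live here?[IMG][/INST]
```

Because every tag opens with a `-` whitespace-control marker, the template's own indentation is stripped and the rendered prompt contains only the special tokens and the chat text.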
special_tokens_map.json CHANGED

@@ -1,5 +1,23 @@
 {
-  "bos_token": "<s>",
-  "eos_token": "</s>",
-  "unk_token": "<unk>"
+  "bos_token": {
+    "content": "<s>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "eos_token": {
+    "content": "</s>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  },
+  "unk_token": {
+    "content": "<unk>",
+    "lstrip": false,
+    "normalized": false,
+    "rstrip": false,
+    "single_word": false
+  }
 }
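The expanded entries follow the AddedToken-style serialization used by tokenizers, replacing the bare-string form. A minimal stdlib-only sanity check (the JSON literal below restates the new file contents, compacted to one line per token):

```python
import json

# The new special_tokens_map.json contents, restated inline (compacted).
SPECIAL_TOKENS_MAP = """{
  "bos_token": {"content": "<s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "eos_token": {"content": "</s>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false},
  "unk_token": {"content": "<unk>", "lstrip": false, "normalized": false, "rstrip": false, "single_word": false}
}"""

tokens = json.loads(SPECIAL_TOKENS_MAP)
# Each special token now carries the full flag set instead of a bare string.
for name in ("bos_token", "eos_token", "unk_token"):
    entry = tokens[name]
    assert not entry["normalized"] and not entry["lstrip"] and not entry["rstrip"]
print(tokens["bos_token"]["content"])  # -> <s>
```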
tokenizer_config.json CHANGED

@@ -8004,9 +8004,12 @@
 "bos_token": "<s>",
 "clean_up_tokenization_spaces": true,
 "eos_token": "</s>",
+"model_input_names": [
+  "input_ids",
+  "attention_mask"
+],
 "model_max_length": 1000000000000000019884624838656,
 "processor_class": "PixtralProcessor",
 "tokenizer_class": "PreTrainedTokenizerFast",
-"model_input_names": ["input_ids", "attention_mask"],
 "unk_token": "<unk>"
 }
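Aside from moving `model_input_names` into alphabetical position, this hunk only reflows the list from inline to multiline. A quick stdlib-only check that the two layouts parse to identical data:

```python
import json

# model_input_names as it appeared before (inline) and after (multiline).
before = '{"model_input_names": ["input_ids", "attention_mask"]}'
after = """{
  "model_input_names": [
    "input_ids",
    "attention_mask"
  ]
}"""

# JSON ignores whitespace layout: both forms decode to the same object.
assert json.loads(before) == json.loads(after)
print(json.loads(after)["model_input_names"])  # -> ['input_ids', 'attention_mask']
```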