Echo9Zulu committed (verified) · Commit bfcbe0b · Parent(s): 748da6a

Update README.md

Files changed (1): README.md (+109 -3)
---
license: apache-2.0
base_model:
- google/gemma-3-4b-it-qat-int4-unquantized
tags:
- OpenArc
- OpenVINO
- Optimum-Intel
- image-text-to-text
---

## Gemma 3 for OpenArc has landed!

My project [OpenArc](https://github.com/SearchSavior/OpenArc), an inference engine for OpenVINO, now supports this model and serves inference over OpenAI-compatible endpoints for text-to-text *and* text with vision! That release comes out today or tomorrow.

We have a growing Discord community of people interested in using Intel for AI/ML.

[![Discord](https://img.shields.io/discord/1341627368581628004?logo=Discord&logoColor=%23ffffff&label=Discord&link=https%3A%2F%2Fdiscord.gg%2FmaMY7QjG)](https://discord.gg/maMY7QjG)

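If you plan to reach the model through OpenArc rather than the local script further down, requests follow the usual OpenAI chat-completions shape. The snippet below is only a minimal sketch of that request format: the base URL, API key, and model name are placeholders I made up, not OpenArc defaults, so check the OpenArc docs for the real values.

```
# Hypothetical vision request against an OpenAI-compatible server.
# base_url, api_key, and model are placeholders, NOT OpenArc defaults.
import base64
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")  # placeholder endpoint

with open("test.png", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

response = client.chat.completions.create(
    model="gemma-3-4b-it-int8_asym-ov",  # placeholder id; use whatever the server reports
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image."},
                {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }
    ],
    max_tokens=512,
)
print(response.choices[0].message.content)
```
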
This model was converted to the OpenVINO IR format using the following Optimum-CLI command:

```
optimum-cli export openvino -m "input-model" --task image-text-to-text --weight-format int8 "converted-model"
```

- Find documentation on the Optimum-CLI export process [here](https://huggingface.co/docs/optimum/main/en/intel/openvino/export)
- Use my HF space [Echo9Zulu/Optimum-CLI-Tool_tool](https://huggingface.co/spaces/Echo9Zulu/Optimum-CLI-Tool_tool) to build export commands and execute them locally

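If you prefer to stay in Python, Optimum-Intel can perform an equivalent export at load time. This is a sketch, not the exact recipe used for this repo: the `OVWeightQuantizationConfig` settings are my assumption of what mirrors `--weight-format int8`, and the paths are placeholders.

```
# Sketch of an in-Python export, assumed to mirror the CLI command above.
from transformers import AutoProcessor
from optimum.intel import OVWeightQuantizationConfig
from optimum.intel.openvino import OVModelForVisualCausalLM

source_id = "google/gemma-3-4b-it-qat-int4-unquantized"  # input model (placeholder choice)
output_dir = "gemma-3-4b-it-int8_asym-ov"                # where the OpenVINO IR is written

# export=True converts the checkpoint to OpenVINO IR on load; the quantization
# config requests 8-bit asymmetric weight compression (assumed equivalent settings).
model = OVModelForVisualCausalLM.from_pretrained(
    source_id,
    export=True,
    quantization_config=OVWeightQuantizationConfig(bits=8, sym=False),
)
model.save_pretrained(output_dir)

# Save the processor alongside so the output folder is self-contained.
AutoProcessor.from_pretrained(source_id).save_pretrained(output_dir)
```
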
### What does the test code do?

Well, it demonstrates how to run inference in Python *and* which parts of that code matter when benchmarking performance.
Plain text generation poses different challenges than text generation with images; for example, vision encoders often use different strategies for handling the properties an image can have.
In practice this translates to higher memory usage, reduced throughput, or poor results.

To run the test code:

- Install device-specific drivers (a quick device check is sketched below)
- Build Optimum-Intel for OpenVINO from source
- Find your spiciest images to get that AGI refusal smell

```
pip install "optimum-intel[openvino] @ git+https://github.com/huggingface/optimum-intel.git"
```
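
Before running the full script, a quick sanity check confirms the source build imports cleanly and that your drivers expose the device you expect. This is a small sketch; device names depend on your hardware, and on older OpenVINO releases the `Core` class lives under `openvino.runtime` instead of the top-level package.

```
# Environment check: confirms the imports resolve and lists the devices OpenVINO
# can see. "GPU.0" only appears if the GPU drivers are installed correctly.
import openvino as ov
from optimum.intel.openvino import OVModelForVisualCausalLM  # noqa: F401  (import check only)

print("OpenVINO version :", ov.__version__)
print("Available devices:", ov.Core().available_devices)
```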

```
import time
from PIL import Image
from transformers import AutoProcessor
from optimum.intel.openvino import OVModelForVisualCausalLM


model_id = "Echo9Zulu/gemma-3-4b-it-int8_asym-ov" # Can be an HF id or a path

ov_config = {"PERFORMANCE_HINT": "LATENCY"} # Optimizes for first-token latency and locks execution to a single CPU socket

print("Loading model... this should get faster after the first generation due to caching behavior.")
print("")
start_load_time = time.time()
model = OVModelForVisualCausalLM.from_pretrained(model_id, export=False, device="CPU", ov_config=ov_config) # For GPU use "GPU.0"
processor = AutoProcessor.from_pretrained(model_id) # Instead of AutoTokenizer we use AutoProcessor, which routes to the appropriate input processor, i.e. how the model expects image tokens.
# Under the hood this takes care of model-specific preprocessing and has functionality overlap with AutoTokenizer.
end_load_time = time.time()

image_path = r"" # This script expects .png
image = Image.open(image_path)
image = image.convert("RGB") # Required by Gemma 3. In practice this would need to be handled at the engine level OR in model-specific pre-processing.

conversation = [
    {
        "role": "user",
        "content": [
            {
                "type": "image"
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

text_prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

inputs = processor(text=[text_prompt], images=[image], padding=True, return_tensors="pt")

input_token_count = len(inputs.input_ids[0])
print(f"Sum of image and text tokens: {input_token_count}")

start_time = time.time()
output_ids = model.generate(**inputs, max_new_tokens=1024)

# Strip the prompt tokens so only newly generated tokens are decoded.
generated_ids = [full_ids[len(prompt_ids):] for prompt_ids, full_ids in zip(inputs.input_ids, output_ids)]
output_text = processor.batch_decode(generated_ids, skip_special_tokens=True, clean_up_tokenization_spaces=True)

num_tokens_generated = len(generated_ids[0])
load_time = end_load_time - start_load_time
generation_time = time.time() - start_time
tokens_per_second = num_tokens_generated / generation_time
average_token_latency = generation_time / num_tokens_generated

print("\nPerformance Report:")
print("-"*50)
print(f"Input Tokens      : {input_token_count:>9}")
print(f"Generated Tokens  : {num_tokens_generated:>9}")
print(f"Model Load Time   : {load_time:>9.2f} sec")
print(f"Generation Time   : {generation_time:>9.2f} sec")
print(f"Throughput        : {tokens_per_second:>9.2f} t/s")
print(f"Avg Latency/Token : {average_token_latency:>9.3f} sec")

print(output_text)
```
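
The load message above mentions caching behavior; on GPU in particular, OpenVINO's compiled-model cache makes repeat loads noticeably faster. Below is a sketch of the same load call targeting a GPU with caching enabled; the "GPU.0" name and the cache path are placeholders for your system.

```
# Variation of the load above targeting a GPU, with OpenVINO model caching enabled.
# "GPU.0" and the cache path are placeholders; adjust for your hardware.
from optimum.intel.openvino import OVModelForVisualCausalLM

ov_config = {
    "PERFORMANCE_HINT": "LATENCY",
    "CACHE_DIR": "./ov_cache",  # compiled model blobs are stored here and reused on later loads
}

model = OVModelForVisualCausalLM.from_pretrained(
    "Echo9Zulu/gemma-3-4b-it-int8_asym-ov",
    export=False,
    device="GPU.0",
    ov_config=ov_config,
)
```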