InternVL3-1B

This version of InternVL3-1B has been converted to run on the Axera NPU using w8a16 quantization.

Compatible with Pulsar2 version: 4.1

Conversion tool links:

If you are interested in model conversion, you can try exporting the axmodel yourself from the original repo: https://huggingface.co/OpenGVLab/InternVL3-1B

How to Convert LLM from Huggingface to axmodel

AXera NPU HOST LLM Runtime

AXera NPU AXCL LLM Runtime

Supported Platforms

| Chips | image encoder 448 | ttft | w8a16 |
|-------|-------------------|------|-------|
| AX650 | 380 ms | 623 ms | 30 tokens/sec |
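Reading the table, end-to-end latency for one reply is roughly image-encode time plus TTFT plus decode time for the generated tokens (this assumes TTFT excludes the image-encode step, which matches how the run log below reports them separately). A back-of-the-envelope sketch with an assumed 100-token reply:

```python
# Rough latency estimate on AX650 (w8a16), using the numbers from the table above.
image_encode_s = 0.380    # 448x448 image encoder
ttft_s = 0.623            # time to first token (prefill)
decode_tok_per_s = 30.0   # steady-state decode throughput

new_tokens = 100          # assumed reply length, for illustration only
total_s = image_encode_s + ttft_s + new_tokens / decode_tok_per_s
print(f"~{total_s:.1f} s end-to-end for a {new_tokens}-token reply")  # ~4.3 s
```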

How to use

Download all files from this repository to the device
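One way to do this, assuming Python and network access on the device (or on a staging host you then copy from), is with huggingface_hub; scp or a USB drive works just as well. The target path below only mirrors the example directory used in this guide and is not required:

```python
from huggingface_hub import snapshot_download

# Fetch the axmodel weights, tokenizer assets, runtime binaries and launch
# scripts from this repository into the directory used in the examples below.
snapshot_download(
    repo_id="AXERA-TECH/InternVL3-1B",
    local_dir="/mnt/qtang/llm-test/internvl3-1b",  # example path, adjust as needed
)
```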

root@ax650:/mnt/qtang/llm-test/internvl3-1b# tree -L 1
.
|-- gradio_demo.py
|-- internvl3_1b_ax650
|-- internvl3_tokenizer
|-- internvl3_tokenizer.py
|-- main_api_ax650
|-- main_api_axcl_x86
|-- main_ax650
|-- main_axcl_x86
|-- post_config.json
|-- run_internvl_3_1b_448_api_ax650.sh
|-- run_internvl_3_1b_448_api_axcl_x86.sh
|-- run_internvl_3_1b_448_ax650.sh
|-- run_internvl_3_1b_448_axcl_x86.sh
`-- ssd_car.jpg

Install transformers

pip install transformers==4.41.1

Start the Tokenizer service

root@ax650:/mnt/qtang/llm-test/internvl3-1b# python3 internvl3_tokenizer.py
None None 151645 <|im_end|> 151665 151667
context_len is  256
prompt is <|im_start|>system
你是书生·万象, 英文名是InternVL, 是由上海人工智能实验室、清华大学及多家合作单位联合开发的多模态大语言模型.<|im_end|>
......
http://0.0.0.0:12345
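The script serves the tokenizer over HTTP on port 12345, and the main runtime connects to it during init (see the `connect http://0.0.0.0:12345 ok` line in the run log below). A minimal sketch, purely as an illustration, for confirming the port is reachable before launching the runtime:

```python
import socket

# Confirm the tokenizer service is accepting connections on port 12345
# before starting the LLM runtime in another terminal.
with socket.create_connection(("127.0.0.1", 12345), timeout=2) as sock:
    print("tokenizer service reachable at", sock.getpeername())
```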

Inference on an AX650 host, such as the M4N-Dock (爱芯派Pro) or the AX650 DEMO board

  • input text: 描述下图片 (Describe the image)
  • input image: ssd_car.jpg

Open another terminal and run ./run_internvl_3_1b_448_ax650.sh

root@ax650:/mnt/qtang/llm-test/internvl3-1b# ./run_internvl_3_1b_448_ax650.sh
[I][                            Init][ 134]: LLM init start
[I][                            Init][  34]: connect http://0.0.0.0:12345 ok
bos_id: -1, eos_id: 151645
img_start_token: 151665
img_context_token: 151667
  3% | ██                                |   1 /  27 [0.01s<0.32s, 83.33 count/s] tokenizer init ok
[I][                            Init][  45]: LLaMaEmbedSelector use mmap
  7% | ███                               |   2 /  27 [0.01s<0.19s, 142.86 count/s] embed_selector init ok
100% | ████████████████████████████████ |  27 /  27 [6.92s<6.92s, 3.90 count/s] init post axmodel ok,remain_cmm(11068 MB)
[I][                            Init][ 226]: IMAGE_CONTEXT_TOKEN: 151667, IMAGE_START_TOKEN: 151665
[I][                            Init][ 251]: image encoder input nchw@float32
[I][                            Init][ 281]: image encoder output float32
[I][                            Init][ 291]: image_encoder_height : 448, image_encoder_width: 448
[I][                            Init][ 293]: max_token_len : 2047
[I][                            Init][ 296]: kv_cache_size : 128, kv_cache_num: 2047
[I][                            Init][ 304]: prefill_token_num : 128
[I][                            Init][ 308]: grp: 1, prefill_max_token_num : 1
[I][                            Init][ 308]: grp: 2, prefill_max_token_num : 128
[I][                            Init][ 308]: grp: 3, prefill_max_token_num : 256
[I][                            Init][ 308]: grp: 4, prefill_max_token_num : 384
[I][                            Init][ 308]: grp: 5, prefill_max_token_num : 512
[I][                            Init][ 308]: grp: 6, prefill_max_token_num : 640
[I][                            Init][ 308]: grp: 7, prefill_max_token_num : 768
[I][                            Init][ 308]: grp: 8, prefill_max_token_num : 896
[I][                            Init][ 308]: grp: 9, prefill_max_token_num : 1024
[I][                            Init][ 312]: prefill_max_token_num : 1024
[I][                     load_config][ 282]: load config:
{
    "enable_repetition_penalty": false,
    "enable_temperature": true,
    "enable_top_k_sampling": true,
    "enable_top_p_sampling": false,
    "penalty_window": 20,
    "repetition_penalty": 1.2,
    "temperature": 0.9,
    "top_k": 10,
    "top_p": 0.8
}

[I][                            Init][ 321]: LLM init ok
Type "q" to exit, Ctrl+c to stop current running
prompt >> 描述下图片
image >> ssd_car.jpg
[I][                          Encode][ 415]: image encode time : 387.35 ms, size : 229376
[I][                          Encode][ 524]: idx:0 offset : 50 out_embed.size() : 279552
[I][                             Run][ 551]: input token num : 312, prefill_split_num : 3
[I][                             Run][ 566]: prefill grpid 4
[I][                             Run][ 593]: input_num_token:128
[I][                             Run][ 593]: input_num_token:128
[I][                             Run][ 593]: input_num_token:56
[I][                             Run][ 717]: ttft: 623.71 ms
图片中出现的物体包括:

1. 一辆红色的双层巴士,巴士上有一则广告,广告上写着“THINGS GET MORE EXCITING WHEN YOU SAY YES” (当你说“是”时,事情就更兴奋了)。
2. 一位微笑的女性站在巴士旁边。
3. 一辆黑色的汽车停在路边。
4. 一家商店的橱窗。
5. 一些建筑物的外墙和窗户。
6. 一根黑色的路灯杆。

这些是图片中实际存在的物体。

[N][                             Run][ 826]: hit eos,avg 28.78 token/s

prompt >> q
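The sampling behaviour printed at init comes from post_config.json in this repository. A hedged sketch of adjusting it with Python's json module; the key names are taken from the "load config" dump above, while the new values are only examples. The runtime reads the file at init, so re-run the launch script after editing:

```python
import json

# Adjust decoding parameters; the keys mirror the "load config" dump above.
with open("post_config.json") as f:
    cfg = json.load(f)

cfg["enable_repetition_penalty"] = True   # example: turn on repetition penalty
cfg["temperature"] = 0.7                  # example value only

with open("post_config.json", "w") as f:
    json.dump(cfg, f, indent=4)
```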