Improve model card: Add Transformers library, update pipeline tag, and link to InternVL3.5 paper
This PR enhances the model card for `InternViT-6B-448px-V2_5` by:
* Updating the `pipeline_tag` to `zero-shot-image-classification`, which better reflects its role as a vision encoder within the InternVL series (used for tasks such as zero-shot classification and cross-modal retrieval) and improves its discoverability on the Hugging Face Hub.
* Adding `library_name: transformers` to correctly identify its compatibility with the Transformers library, which enables the automated "How to use" widget on the model page (see the illustrative loading sketch after this list).
* Adding a prominent link to the main paper [InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency](https://huggingface.co/papers/2508.18265) at the top of the README.
* Updating the "Chat Demo" link to the latest URL (`https://chat.intern-ai.org.cn/`) as found in the project's GitHub README.
* Updating the "Citation" section to include the BibTeX entry for the `InternVL3.5` paper and other recent associated works.
The `README.md` diff:

````diff
@@ -1,15 +1,18 @@
 ---
-license: mit
-pipeline_tag: image-feature-extraction
 base_model: OpenGVLab/InternViT-6B-448px-V1-5
+license: mit
+pipeline_tag: zero-shot-image-classification
 base_model_relation: finetune
+library_name: transformers
 ---
 
 # InternViT-6B-448px-V2_5
 
+This vision encoder is part of the InternVL 3.5 family, as presented in the paper [InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency](https://huggingface.co/papers/2508.18265).
+
 [\[📂 GitHub\]](https://github.com/OpenGVLab/InternVL) [\[📜 InternVL 1.0\]](https://huggingface.co/papers/2312.14238) [\[📜 InternVL 1.5\]](https://huggingface.co/papers/2404.16821) [\[📜 Mini-InternVL\]](https://arxiv.org/abs/2410.16261) [\[📜 InternVL 2.5\]](https://huggingface.co/papers/2412.05271)
 
-[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://
+[\[🆕 Blog\]](https://internvl.github.io/blog/) [\[🗨️ Chat Demo\]](https://chat.intern-ai.org.cn/) [\[🤗 HF Demo\]](https://huggingface.co/spaces/OpenGVLab/InternVL) [\[🚀 Quick Start\]](#quick-start) [\[📖 Documents\]](https://internvl.readthedocs.io/en/latest/)
 
 <div align="center">
 <img width="500" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/64006c09330a45b03605bba3/zJsd2hqd3EevgXo6fNgC-.png">
````
````diff
@@ -58,11 +61,11 @@ The training pipeline for a single model in InternVL 2.5 is structured across th
 
 
 
--
+- **Stage 1: MLP Warmup.** In this stage, only the MLP projector is trained while the vision encoder and language model are frozen. A dynamic high-resolution training strategy is applied for better performance, despite increased cost. This phase ensures robust cross-modal alignment and prepares the model for stable multimodal training.
 
--
+- **Stage 1.5: ViT Incremental Learning (Optional).** This stage allows incremental training of the vision encoder and MLP projector using the same data as Stage 1. It enhances the encoder's ability to handle rare domains like multilingual OCR and mathematical charts. Once trained, the encoder can be reused across LLMs without retraining, making this stage optional unless new domains are introduced.
 
--
+- **Stage 2: Full Model Instruction Tuning.** The entire model is trained on high-quality multimodal instruction datasets. Strict data quality controls are enforced to prevent degradation of the LLM, as noisy data can cause issues like repetitive or incorrect outputs. After this stage, the training process is complete.
 
 ## Evaluation on Vision Capability
 
````
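As an aside, the freeze/unfreeze schedule that these three stage descriptions outline can be summarized in a schematic PyTorch sketch. This is not the authors' training code; the submodule names `vision_encoder`, `mlp_projector`, and `language_model` are placeholders:

```python
import torch.nn as nn

def set_trainable(module: nn.Module, trainable: bool) -> None:
    """Freeze or unfreeze every parameter of a module."""
    for param in module.parameters():
        param.requires_grad = trainable

def configure_stage(model: nn.Module, stage: str) -> None:
    """Schematic stage-wise schedule for an InternVL-style MLLM.

    `model` is assumed to expose `vision_encoder`, `mlp_projector`, and
    `language_model` submodules; real training code will differ.
    """
    if stage == "stage1_mlp_warmup":
        # Stage 1: train only the MLP projector; ViT and LLM stay frozen.
        set_trainable(model.vision_encoder, False)
        set_trainable(model.language_model, False)
        set_trainable(model.mlp_projector, True)
    elif stage == "stage1_5_vit_incremental":
        # Stage 1.5 (optional): additionally train the vision encoder.
        set_trainable(model.vision_encoder, True)
        set_trainable(model.language_model, False)
        set_trainable(model.mlp_projector, True)
    elif stage == "stage2_full_instruction_tuning":
        # Stage 2: the entire model is trained on instruction data.
        set_trainable(model.vision_encoder, True)
        set_trainable(model.language_model, True)
        set_trainable(model.mlp_projector, True)
    else:
        raise ValueError(f"unknown stage: {stage}")
```

In this scheme, Stage 1.5 reuses the Stage 1 data while unfreezing the ViT, which is why the card describes it as optional unless new domains are introduced.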
````diff
@@ -82,7 +85,7 @@ We present a comprehensive evaluation of the vision encoder's performance acro
 
 ## Quick Start
 
->
+> [!Warning]
 > 🚨 Note: In our experience, the InternViT V2.5 series is better suited for building MLLMs than traditional computer vision tasks.
 
 ```python
````
````diff
@@ -115,23 +118,49 @@ This project is released under the MIT License.
 If you find this project useful in your research, please consider citing:
 
 ```BibTeX
+@article{wang2025internvl3_5,
+title={InternVL3.5: Advancing Open-Source Multimodal Models in Versatility, Reasoning, and Efficiency},
+author={Wang, Weiyun and Gao, Zhangwei and Gu, Lixin and Pu, Hengjun and Cui, Long and Wei, Xingguang and Liu, Zhaoyang and Jing, Linglin and Ye, Shenglong and Shao, Jie and others},
+journal={arXiv preprint arXiv:2508.18265},
+year={2025}
+}
+@article{zhu2025internvl3,
+title={Internvl3: Exploring advanced training and test-time recipes for open-source multimodal models},
+author={Zhu, Jinguo and Wang, Weiyun and Chen, Zhe and Liu, Zhaoyang and Ye, Shenglong and Gu, Lixin and Tian, Hao and Duan, Yuchen and Su, Weijie and Shao, Jie and others},
+journal={arXiv preprint arXiv:2504.10479},
+year={2025}
+}
 @article{chen2024expanding,
 title={Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling},
 author={Chen, Zhe and Wang, Weiyun and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Cui, Erfei and Zhu, Jinguo and Ye, Shenglong and Tian, Hao and Liu, Zhaoyang and others},
 journal={arXiv preprint arXiv:2412.05271},
 year={2024}
 }
+@article{wang2024mpo,
+title={Enhancing the Reasoning Ability of Multimodal Large Language Models via Mixed Preference Optimization},
+author={Wang, Weiyun and Chen, Zhe and Wang, Wenhai and Cao, Yue and Liu, Yangzhou and Gao, Zhangwei and Zhu, Jinguo and Zhu, Xizhou and Lu, Lewei and Qiao, Yu and Dai, Jifeng},
+journal={arXiv preprint arXiv:2411.10442},
+year={2024}
+}
 @article{gao2024mini,
-title={Mini-
+title={Mini-InternVL: a flexible-transfer pocket multi-modal model with 5\% parameters and 90\% performance},
 author={Gao, Zhangwei and Chen, Zhe and Cui, Erfei and Ren, Yiming and Wang, Weiyun and Zhu, Jinguo and Tian, Hao and Ye, Shenglong and He, Junjun and Zhu, Xizhou and others},
-journal={
-
+journal={Visual Intelligence},
+volume={2},
+number={1},
+pages={1--17},
+year={2024},
+publisher={Springer}
 }
 @article{chen2024far,
-title={How
+title={How far are we to gpt-4v? closing the gap to commercial multimodal models with open-source suites},
 author={Chen, Zhe and Wang, Weiyun and Tian, Hao and Ye, Shenglong and Gao, Zhangwei and Cui, Erfei and Tong, Wenwen and Hu, Kongzhi and Luo, Jiapeng and Ma, Zheng and others},
-journal={
-
+journal={Science China Information Sciences},
+volume={67},
+number={12},
+pages={220101},
+year={2024},
+publisher={Springer}
 }
 @inproceedings{chen2024internvl,
 title={Internvl: Scaling up vision foundation models and aligning for generic visual-linguistic tasks},
````
````diff
@@ -141,3 +170,13 @@ If you find this project useful in your research, please consider citing:
 year={2024}
 }
 ```
+
+## Acknowledgement
+
+InternVL is built with reference to the code of the following projects: [OpenAI CLIP](https://github.com/openai/CLIP), [Open CLIP](https://github.com/mlfoundations/open_clip), [CLIP Benchmark](https://github.com/LAION-AI/CLIP_benchmark), [EVA](https://github.com/baaivision/EVA/tree/master), [InternImage](https://github.com/OpenGVLab/InternImage), [ViT-Adapter](https://github.com/czczup/ViT-Adapter), [MMSegmentation](https://github.com/open-mmlab/mmsegmentation), [Transformers](https://github.com/huggingface/transformers), [DINOv2](https://github.com/facebookresearch/dinov2), [BLIP-2](https://github.com/salesforce/LAVIS/tree/main/projects/blip2), [Qwen-VL](https://github.com/QwenLM/Qwen-VL/tree/master/eval_mm), and [LLaVA-1.5](https://github.com/haotian-liu/LLaVA). Thanks for their awesome work!
+
+______________________________________________________________________
+
+Scan the following QR Code, join our WeChat group.
+
+<p align="center"><img width="300" alt="image" src="https://github.com/user-attachments/assets/f776df09-ebba-4fd5-80c2-fec4ff1518be"></p>
````