Vintern-1B-v2 (Viet-InternVL2-1B-v2) - The LLaVA Challenger
We are excited to introduce Vintern-1B-v2, a Vietnamese multimodal model that combines the advanced language model Qwen2-0.5B-Instruct[1] with the latest visual model, InternViT-300M-448px[2] (CVPR 2024). The model excels at tasks such as OCR-VQA, Doc-VQA, and Chart-VQA. With only 1 billion parameters and a 4096-token context length, it is fine-tuned from the Viet-InternVL2-1B model on over 3 million specialized image-question-answer pairs covering optical character recognition, text recognition, document extraction, and general VQA. The model can be integrated into various on-device applications, demonstrating its versatility and robust capabilities.
Notably, the model can be easily fine-tuned on a single T4 GPU in Google Colab by following the instructions provided at the end of this section.
Model Details
| Model Name | Vision Part | Language Part |
|---|---|---|
| Vintern-1B-v2 | InternViT-300M-448px | Qwen2-0.5B-Instruct |
Vintern-1B-v2 is an instruction-tuned multimodal large language model optimized for multimodal tasks. It consists of InternViT-300M-448px, an MLP projector, and Qwen2-0.5B-Instruct.
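As a quick sanity check, these components can be read off the checkpoint's configuration. The snippet below is a minimal sketch; the field names `vision_config` and `llm_config` are an assumption based on the InternVL remote-code implementation that this checkpoint ships with.

```python
from transformers import AutoConfig

# Assumed field names (vision_config / llm_config) follow the InternVL
# remote-code configuration; adjust if the checkpoint differs.
config = AutoConfig.from_pretrained("5CD-AI/Vintern-1B-v2", trust_remote_code=True)
print(type(config.vision_config).__name__)  # vision encoder config (InternViT)
print(type(config.llm_config).__name__)     # language model config (Qwen2)
```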
Training details
The fine-tuning dataset was meticulously sampled in part from the following datasets:
Viet-OCR-VQA, Viet-Doc-VQA, Viet-Doc-VQA-II, Vista, Viet-Receipt-VQA, Viet-Sketches-VQA, Viet-Geometry-VQA, Viet-Wiki-Handwriting, Viet-ComputerScience-VQA, Viet-Handwriting-gemini-VQA, Viet-Menu-gemini-VQA, Viet-Vintext-gemini-VQA, Viet-OpenViVQA-gemini-VQA, Viet-Resume-VQA, Viet-ViTextVQA-gemini-VQA
Benchmarks
Since many different metrics still need to be tested, we first chose a quick and simple metric to guide the development of our model. Our metric is inspired by LaVy[4]. For the time being, we use GPT-4o to evaluate the quality of answers on two datasets: OpenViVQA and ViTextVQA; detailed results can be found at the link provided here. The inputs are the image, the question, the reference label, and the predicted answer, and the judge model returns a score from 0 to 10 for the quality of the corresponding answer. The results are shown in the table below, followed by a sketch of this scoring setup.
| Model | OpenViVQA-dev (GPT-4o score) | ViTextVQA-dev (GPT-4o score) |
|---|---|---|
| Vintern-1B | 7.1/10 | 7.6/10 |
| Vintern-1B-v2 | 7.7/10 | 7.7/10 |
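For reference, a minimal sketch of this kind of GPT-4o-as-judge scoring call is shown below. It is not the exact evaluation script: the prompt wording, the `score_answer` helper, and the use of the official `openai` Python client are illustrative assumptions.

```python
import base64
from openai import OpenAI  # assumes the openai>=1.x client

client = OpenAI()

def score_answer(image_path: str, question: str, label: str, predicted: str) -> str:
    """Ask GPT-4o to rate a predicted answer from 0 to 10 (illustrative prompt)."""
    with open(image_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    prompt = (
        "You are grading a VQA answer. Given the image, the question, the reference "
        "answer, and the model answer, reply with a single integer score from 0 to 10.\n"
        f"Question: {question}\nReference answer: {label}\nModel answer: {predicted}"
    )
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{image_b64}"}},
            ],
        }],
    )
    return response.choices[0].message.content.strip()
```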
Benchmark results on MTVQA (Vietnamese test set):
| Models | Open-Source | Vietnamese Score |
|---|---|---|
| Qwen2-VL 72B (Top 1) | ❌ | 41.6 |
| GPT-4o (Top 2) | ❌ | 34.2 |
| Vintern-1B-V2 (Top 3) | ✅ | 31.7 |
| Qwen2-VL 7B | ✅ | 30.0 |
| Claude3 Opus | ❌ | 29.1 |
| GPT-4o mini | ❌ | 29.1 |
| GPT-4V | ❌ | 28.9 |
| Gemini Ultra | ❌ | 28.6 |
| InternVL2 76B | ✅ | 26.9 |
| QwenVL Max | ❌ | 23.5 |
| Claude3 Sonnet | ❌ | 20.8 |
| QwenVL Plus | ❌ | 18.1 |
| MiniCPM-V2.5 | ✅ | 15.3 |
| InternVL-V1.5 | ✅ | 12.4 |
Examples
User: <image>
Describe the image in detail.
Assistant: The image is a page of squared (grid) paper with lines handwritten in black ink. The layout is simple, containing only text written from top to bottom. The lines are arranged in the following order, from top to bottom:
- Nghe thầy đọc thơ
- Em nghe thầy đọc bao ngày
- Tiếng thơ đỏ nắng xanh cây quanh nhà
- Mái chèo nghe vọng sông xa
- Em êm nghe tiếng của bà năm xưa
- Nghe trăng thở động tàu dừa
- Rào rào nghe chuyển con mưa giữa trời
- Thêm yêu tiếng hát nụ cười
- Nghe thơ em thấy đất trời đẹp ra.
- Trần Đăng Khoa
User: <image>
What is triangle ABC with respect to the circle with center O?
Assistant: Triangle ABC is inscribed in the circle with center O.
User: <image>
Describe the image in detail.
Assistant: The photo shows a river in Vietnam lined with old houses painted light yellow. In the foreground are a small red boat and a large blue boat.
A man is rowing on the boat. Both riverbanks are shaded by rows of green trees. The sky is clear blue with a few white clouds.
In the lower-left corner of the photo is the text "VNEXPRESS".
User: <image>
Describe the image in detail.
Assistant: The image is a recruitment notice from the company Mobifone. The layout is divided into three main parts:
- The first part is the title "TUYỂN DỤNG" ("RECRUITMENT"), printed in large, bold red letters, centered at the top of the image.
- The second part introduces the position and the candidate requirements, presented as a list:
- Position: Sales staff
- Location: The city and the districts of Yên Sơn, Sơn Dương, Hàm Yên, Chiêm Hóa
- Requirements: Education: intermediate-level diploma or higher; Age: no more than 35
- The third part presents the benefits, also as a list:
- Benefits:
- Salary: 7,000,000 VND - 17,000,000 VND
- Monthly and quarterly performance bonuses: bonuses based on sales
- Insurance in accordance with regulations
- Skills training: communication, persuasion, customer care...
The final part is the "NỘP HỒ SƠ" ("APPLICATION") section with the contact information for submitting applications, presented as a list:
- Recipient: Trần Thị Thúy
- Address: 547 Quang Trung Street, Tổ 25, Phường Phan Thiết, Tuyên Quang City
- Contact phone: 0901.452.888
- Application email: [email protected]
- Application deadline: through 28/5/2021
User: <image>
Which one will win?
Assistant: The rabbit will win.
User: <image>
Describe the image in detail.
Assistant: The image shows a snippet of source code written in the Python programming language. The code consists of two parts:
The first part is the heading "Example of avoiding complex expressions".
The main part consists of two functions, square_numbers and main().
The square_numbers function takes as a parameter a list of the numbers 1 through 4. It uses a for loop to iterate over each number in the list and append it to the squares list, then returns the squares list.
The main() function creates a variable numbers with the value [1, 2, 3, 4], then calls square_numbers and stores the result in the variable result. Finally, print(result) prints the result of main().
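For context, the snippet below reconstructs the kind of code described in this answer; it is inferred only from the description above, so the actual code in the image may differ.

```python
# Example of avoiding complex expressions (reconstructed from the description)
def square_numbers(numbers):
    squares = []
    for number in numbers:
        squares.append(number * number)  # append each square to the list
    return squares

def main():
    numbers = [1, 2, 3, 4]
    result = square_numbers(numbers)
    print(result)  # [1, 4, 9, 16]

main()
```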
Quickstart
Below is a code snippet showing how to load the tokenizer and the model, and how to generate content. To run inference with the model, follow the steps outlined in our Colab inference notebook.
```python
import numpy as np
import torch
import torchvision.transforms as T
# from decord import VideoReader, cpu
from PIL import Image
from torchvision.transforms.functional import InterpolationMode
from transformers import AutoModel, AutoTokenizer
IMAGENET_MEAN = (0.485, 0.456, 0.406)
IMAGENET_STD = (0.229, 0.224, 0.225)
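# Standard ImageNet-style preprocessing: convert to RGB, resize to input_size x input_size, normalize.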
def build_transform(input_size):
MEAN, STD = IMAGENET_MEAN, IMAGENET_STD
transform = T.Compose([
T.Lambda(lambda img: img.convert('RGB') if img.mode != 'RGB' else img),
T.Resize((input_size, input_size), interpolation=InterpolationMode.BICUBIC),
T.ToTensor(),
T.Normalize(mean=MEAN, std=STD)
])
return transform
def find_closest_aspect_ratio(aspect_ratio, target_ratios, width, height, image_size):
best_ratio_diff = float('inf')
best_ratio = (1, 1)
area = width * height
for ratio in target_ratios:
target_aspect_ratio = ratio[0] / ratio[1]
ratio_diff = abs(aspect_ratio - target_aspect_ratio)
if ratio_diff < best_ratio_diff:
best_ratio_diff = ratio_diff
best_ratio = ratio
elif ratio_diff == best_ratio_diff:
if area > 0.5 * image_size * image_size * ratio[0] * ratio[1]:
best_ratio = ratio
return best_ratio
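# Split the image into up to max_num tiles of image_size x image_size whose grid layout
# best matches the original aspect ratio (InternVL dynamic-resolution preprocessing),
# optionally appending a thumbnail of the whole image.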
def dynamic_preprocess(image, min_num=1, max_num=12, image_size=448, use_thumbnail=False):
orig_width, orig_height = image.size
aspect_ratio = orig_width / orig_height
# calculate the existing image aspect ratio
target_ratios = set(
(i, j) for n in range(min_num, max_num + 1) for i in range(1, n + 1) for j in range(1, n + 1) if
i * j <= max_num and i * j >= min_num)
target_ratios = sorted(target_ratios, key=lambda x: x[0] * x[1])
# find the closest aspect ratio to the target
target_aspect_ratio = find_closest_aspect_ratio(
aspect_ratio, target_ratios, orig_width, orig_height, image_size)
# calculate the target width and height
target_width = image_size * target_aspect_ratio[0]
target_height = image_size * target_aspect_ratio[1]
blocks = target_aspect_ratio[0] * target_aspect_ratio[1]
# resize the image
resized_img = image.resize((target_width, target_height))
processed_images = []
for i in range(blocks):
box = (
(i % (target_width // image_size)) * image_size,
(i // (target_width // image_size)) * image_size,
((i % (target_width // image_size)) + 1) * image_size,
((i // (target_width // image_size)) + 1) * image_size
)
# split the image
split_img = resized_img.crop(box)
processed_images.append(split_img)
assert len(processed_images) == blocks
if use_thumbnail and len(processed_images) != 1:
thumbnail_img = image.resize((image_size, image_size))
processed_images.append(thumbnail_img)
return processed_images
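# Load an image file, tile it with dynamic_preprocess, and return the normalized tiles stacked into a single tensor.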
def load_image(image_file, input_size=448, max_num=12):
image = Image.open(image_file).convert('RGB')
transform = build_transform(input_size=input_size)
images = dynamic_preprocess(image, image_size=input_size, use_thumbnail=True, max_num=max_num)
pixel_values = [transform(image) for image in images]
pixel_values = torch.stack(pixel_values)
return pixel_values
model = AutoModel.from_pretrained(
"5CD-AI/Vintern-1B-v2",
torch_dtype=torch.bfloat16,
low_cpu_mem_usage=True,
trust_remote_code=True,
).eval().cuda()
tokenizer = AutoTokenizer.from_pretrained("5CD-AI/Vintern-1B-v2", trust_remote_code=True, use_fast=False)
test_image = 'test-image.jpg'
pixel_values = load_image(test_image, max_num=12).to(torch.bfloat16).cuda()
generation_config = dict(max_new_tokens=1024, do_sample=False, num_beams=3, repetition_penalty=2.5)
question = '<image>\nMô tả hình ảnh một cách chi tiết.'  # "Describe the image in detail."
response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=None, return_history=True)
print(f'User: {question}\nAssistant: {response}')
#question = "Another question ......"
#response, history = model.chat(tokenizer, pixel_values, question, generation_config, history=history, return_history=True)
#print(f'User: {question}\nAssistant: {response}')
```
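Note: the snippet above loads the model in bfloat16; on GPUs without native bfloat16 support (such as the T4), you may need to load it in torch.float16 instead.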
Finetune on your Data
Citation
```bibtex
@misc{doan2024vintern1befficientmultimodallarge,
title={Vintern-1B: An Efficient Multimodal Large Language Model for Vietnamese},
author={Khang T. Doan and Bao G. Huynh and Dung T. Hoang and Thuc D. Pham and Nhat H. Pham and Quan T. M. Nguyen and Bang Q. Vo and Suong N. Hoang},
year={2024},
eprint={2408.12480},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2408.12480},
}
```
References
[1] Yang, An, et al. "Qwen2 technical report." arXiv preprint arXiv:2407.10671 (2024).
[2] Chen, Zhe, et al. "InternVL: Scaling up vision foundation models and aligning for generic visual-linguistic tasks." Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024.
[3] Chen, Zhe, et al. "How far are we to GPT-4V? Closing the gap to commercial multimodal models with open-source suites." arXiv preprint arXiv:2404.16821 (2024).
[4] Tran, Chi, and Huong Le Thanh. "LaVy: Vietnamese Multimodal Large Language Model." arXiv preprint arXiv:2404.07922 (2024).