KARAKURI VL 32B Instruct 2507


Model Details

Model Description

Usage

Use in πŸ€— Transformers

First, install the required dependencies:

pip install transformers accelerate qwen-vl-utils[decord]==0.0.8

Then, use the following code to load the model and generate responses:

from transformers import AutoModelForImageTextToText, AutoProcessor
from qwen_vl_utils import process_vision_info

model_name = "karakuri-ai/karakuri-vl-32b-instruct-2507"
# Load the model with automatic dtype selection and device placement
model = AutoModelForImageTextToText.from_pretrained(
    model_name, torch_dtype="auto", device_map="auto"
)
processor = AutoProcessor.from_pretrained(model_name)

messages = [
    {
        "role": "user",
        "content": [
            {
                "type": "image",
                "image": "https://qianwen-res.oss-cn-beijing.aliyuncs.com/Qwen-VL/assets/demo.jpeg",
            },
            {"type": "text", "text": "Describe this image."},
        ],
    }
]

# Preparation for inference
text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
)
inputs = inputs.to(model.device)

# Inference: generate the output
generated_ids = model.generate(**inputs, max_new_tokens=128)
# Trim the prompt tokens so only the newly generated text is decoded
generated_ids_trimmed = [
    out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
    generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
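
The `decord` extra installed above enables video decoding, so the same pipeline also accepts video inputs. The following is a minimal sketch, not part of the original example: it reuses the model and processor loaded above, and the video path is a hypothetical placeholder. Only the message content changes; `process_vision_info` returns the decoded frames as `video_inputs`.

# Minimal sketch of video input; the file path is a placeholder.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "video", "video": "file:///path/to/video.mp4"},
            {"type": "text", "text": "Describe this video."},
        ],
    }
]

text = processor.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=128)
# Trim the prompt tokens and decode exactly as in the image example above.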

Training Details

Training Infrastructure

  • Hardware: The model was trained on 20 Amazon EC2 trn1.32xlarge instances.
  • Software: We used training code based on neuronx-nemo-megatron.

Acknowledgments

This work was supported by the Ministry of Economy, Trade and Industry (METI) and the New Energy and Industrial Technology Development Organization (NEDO) through the Generative AI Accelerator Challenge (GENIAC).

Citation

@misc{karakuri_vl_32b_instruct_2507,
    author       = { {KARAKURI} {Inc.} },
    title        = { {KARAKURI} {VL} 32{B} {Instruct} 2507 },
    year         = { 2025 },
    url          = { https://huggingface.co/karakuri-ai/karakuri-vl-32b-instruct-2507 },
    publisher    = { {Hugging Face} },
    journal      = { {Hugging Face} repository }
}