---
base_model:
- ibm-granite/granite-vision-3.2-2b
---
  
  
# MISHANM/ibm-granite-vision-3.2-2b-fp16
  
The MISHANM/ibm-granite-vision-3.2-2b-fp16 model is a vision-language model for image-to-text generation, derived from ibm-granite/granite-vision-3.2-2b and stored in fp16 precision. It leverages advanced neural architectures to transform visual inputs into coherent textual descriptions.

## Model Details  
1. Language: English
2. Tasks: Image-to-Text Generation
  
### Example Model Output

Sample inference output from the model:
  
![image/png](https://cdn-uploads.huggingface.co/production/uploads/66851b2c4461866b07738832/QeQENKNaU9VoaFhYvdXBs.png)


## Getting Started

To begin using the model, ensure you have the necessary dependencies installed:

```shell
pip install "transformers>=4.49"
```
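
As an optional sanity check, you can confirm the installed `transformers` version and whether a CUDA device is visible. This is a minimal sketch that assumes `torch` was installed alongside `transformers`:

```python
# Optional sanity check: verify the transformers version and CUDA availability.
import torch
import transformers

print("transformers:", transformers.__version__)  # should be >= 4.49
print("CUDA available:", torch.cuda.is_available())
```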

## Usage

Use the code below to get started with the model.

### Using Gradio
  
```python
import gradio as gr
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForVision2Seq

device = "cuda" if torch.cuda.is_available() else "cpu"

model_path = "MISHANM/ibm-granite-vision-3.2-2b-fp16"
processor = AutoProcessor.from_pretrained(model_path)
# torch_dtype="auto" loads the checkpoint in its stored (fp16) precision.
model = AutoModelForVision2Seq.from_pretrained(
    model_path,
    torch_dtype="auto",
    ignore_mismatched_sizes=True,
).to(device)


def process_image_and_prompt(image_path, prompt):
    # Load the image
    image = Image.open(image_path).convert("RGB")

    # Prepare the conversation input (the PIL image is passed directly)
    conversation = [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image},
                {"type": "text", "text": prompt},
            ],
        },
    ]

    # Tokenize the conversation and move the tensors to the target device
    inputs = processor.apply_chat_template(
        conversation,
        add_generation_prompt=True,
        tokenize=True,
        return_dict=True,
        return_tensors="pt",
    ).to(device)

    # Generate the output
    output = model.generate(**inputs, max_new_tokens=100)
    return processor.decode(output[0], skip_special_tokens=True)


# Create the Gradio interface
iface = gr.Interface(
    fn=process_image_and_prompt,
    inputs=[
        gr.Image(type="filepath", label="Upload Image"),
        gr.Textbox(lines=2, placeholder="Enter your prompt here...", label="Prompt"),
    ],
    outputs="text",
    title="Granite Vision: Advanced Image-to-Text Generation Model",
    description="Upload an image and enter a text prompt to get a response from the model.",
)

# Launch the Gradio app
iface.launch(share=True)
```
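
If you only need a single caption without the web UI, a minimal sketch along the following lines should work, reusing the `processor`, `model`, and `device` objects loaded above. The image path `"example.jpg"` is a hypothetical placeholder:

```python
# Minimal sketch: one-off inference without Gradio, reusing `processor`,
# `model`, and `device` from the snippet above. "example.jpg" is a placeholder.
from PIL import Image

image = Image.open("example.jpg").convert("RGB")
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe this image."},
        ],
    },
]
inputs = processor.apply_chat_template(
    conversation,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(device)
output = model.generate(**inputs, max_new_tokens=100)
print(processor.decode(output[0], skip_special_tokens=True))
```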

## Uses  
  
### Direct Use  
  
This model is ideal for converting images into descriptive text, making it valuable for creative projects, content creation, and artistic exploration.
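
For instance, the `process_image_and_prompt()` helper defined in the Gradio example can be wrapped in a loop to caption a whole folder of images. This is a hedged sketch; the `"images"` directory and the prompt text are illustrative placeholders:

```python
# Hedged sketch: caption every .jpg in a folder using the
# process_image_and_prompt() helper from the Gradio example above.
# "images/" and the prompt are illustrative placeholders.
from pathlib import Path

for path in sorted(Path("images").glob("*.jpg")):
    caption = process_image_and_prompt(str(path), "Describe this image in one sentence.")
    print(f"{path.name}: {caption}")
```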

### Out-of-Scope Use  
  
The model is not intended for generating explicit or harmful content. It may also face challenges with highly abstract or nonsensical prompts.

## Bias, Risks, and Limitations  
  
The model may reflect biases present in its training data, potentially resulting in stereotypical or biased outputs. Users should be aware of these limitations and review generated content for accuracy and appropriateness.

### Recommendations  
  
Users are encouraged to critically evaluate the model's outputs, especially in sensitive contexts, to ensure they meet the desired standards of accuracy and appropriateness.

## Citation Information
```bibtex
@misc{MISHANM/ibm-granite-vision-3.2-2b-fp16,
  author    = {Mishan Maurya},
  title     = {Introducing Image to Text Generation model},
  year      = {2025},
  publisher = {Hugging Face},
  journal   = {Hugging Face repository}
}
```