model_id (string, 8-65 chars) · model_card (string, 0-15.7k chars) · model_labels (list)
microsoft/kosmos-2-patch14-224
# Kosmos-2: Grounding Multimodal Large Language Models to the World <a href="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" target="_blank"><figure><img src="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" width="384"><figcaption><b>[An image of a snowman warming himself by a fire.]</b></figcaption></figure></a> This Hub repository contains a HuggingFace's `transformers` implementation of [the original Kosmos-2 model](https://github.com/microsoft/unilm/tree/master/kosmos-2) from Microsoft. ## How to Get Started with the Model Use the code below to get started with the model. ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForVision2Seq model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224") processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224") prompt = "<grounding>An image of" url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) # The original Kosmos-2 demo saves the image first then reload it. For some images, this will give slightly different image input and change the generation outputs. image.save("new_image.jpg") image = Image.open("new_image.jpg") inputs = processor(text=prompt, images=image, return_tensors="pt") generated_ids = model.generate( pixel_values=inputs["pixel_values"], input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], image_embeds=None, image_embeds_position_mask=inputs["image_embeds_position_mask"], use_cache=True, max_new_tokens=128, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] # Specify `cleanup_and_extract=False` in order to see the raw model generation. processed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False) print(processed_text) # `<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>.` # By default, the generated text is cleanup and the entities are extracted. processed_text, entities = processor.post_process_generation(generated_text) print(processed_text) # `An image of a snowman warming himself by a fire.` print(entities) # `[('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])]` ``` ## Tasks This model is capable of performing different tasks through changing the prompts. First, let's define a function to run a prompt. 
<details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForVision2Seq model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224") processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224") url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) def run_example(prompt): inputs = processor(text=prompt, images=image, return_tensors="pt") generated_ids = model.generate( pixel_values=inputs["pixel_values"], input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], image_embeds=None, image_embeds_position_mask=inputs["image_embeds_position_mask"], use_cache=True, max_new_tokens=128, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] _processed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False) processed_text, entities = processor.post_process_generation(generated_text) print(processed_text) print(entities) print(_processed_text) ``` </details> Here are the tasks `Kosmos-2` could perform: <details> <summary> Click to expand </summary> ### Multimodal Grounding #### • Phrase Grounding ```python prompt = "<grounding><phrase> a snowman</phrase>" run_example(prompt) # a snowman is warming himself by the fire # [('a snowman', (0, 9), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('the fire', (32, 40), [(0.203125, 0.015625, 0.453125, 0.859375)])] # <grounding><phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> is warming himself by<phrase> the fire</phrase><object><patch_index_0006><patch_index_0878></object> ``` #### • Referring Expression Comprehension ```python prompt = "<grounding><phrase> a snowman next to a fire</phrase>" run_example(prompt) # a snowman next to a fire # [('a snowman next to a fire', (0, 24), [(0.390625, 0.046875, 0.984375, 0.828125)])] # <grounding><phrase> a snowman next to a fire</phrase><object><patch_index_0044><patch_index_0863></object> ``` ### Multimodal Referring #### • Referring expression generation ```python prompt = "<grounding><phrase> It</phrase><object><patch_index_0044><patch_index_0863></object> is" run_example(prompt) # It is snowman in a hat and scarf # [('It', (0, 2), [(0.390625, 0.046875, 0.984375, 0.828125)])] # <grounding><phrase> It</phrase><object><patch_index_0044><patch_index_0863></object> is snowman in a hat and scarf ``` ### Perception-Language Tasks #### • Grounded VQA ```python prompt = "<grounding> Question: What is special about this image? Answer:" run_example(prompt) # Question: What is special about this image? Answer: The image features a snowman sitting by a campfire in the snow. # [('a snowman', (71, 80), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a campfire', (92, 102), [(0.109375, 0.640625, 0.546875, 0.984375)])] # <grounding> Question: What is special about this image? Answer: The image features<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> sitting by<phrase> a campfire</phrase><object><patch_index_0643><patch_index_1009></object> in the snow. ``` #### • Grounded VQA with multimodal referring via bounding boxes ```python prompt = "<grounding> Question: Where is<phrase> the fire</phrase><object><patch_index_0005><patch_index_0911></object> next to? Answer:" run_example(prompt) # Question: Where is the fire next to? Answer: Near the snowman. 
# [('the fire', (19, 27), [(0.171875, 0.015625, 0.484375, 0.890625)]), ('the snowman', (50, 61), [(0.390625, 0.046875, 0.984375, 0.828125)])] # <grounding> Question: Where is<phrase> the fire</phrase><object><patch_index_0005><patch_index_0911></object> next to? Answer: Near<phrase> the snowman</phrase><object><patch_index_0044><patch_index_0863></object>. ``` ### Grounded Image captioning #### • Brief ```python prompt = "<grounding> An image of" run_example(prompt) # An image of a snowman warming himself by a campfire. # [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a campfire', (41, 51), [(0.109375, 0.640625, 0.546875, 0.984375)])] # <grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a campfire</phrase><object><patch_index_0643><patch_index_1009></object>. ``` #### • Detailed ```python prompt = "<grounding> Describe this image in detail:" run_example(prompt) # Describe this image in detail: The image features a snowman sitting by a campfire in the snow. He is wearing a hat, scarf, and gloves, with a pot nearby and a cup nearby. The snowman appears to be enjoying the warmth of the fire, and it appears to have a warm and cozy atmosphere. # [('a campfire', (71, 81), [(0.171875, 0.015625, 0.484375, 0.984375)]), ('a hat', (109, 114), [(0.515625, 0.046875, 0.828125, 0.234375)]), ('scarf', (116, 121), [(0.515625, 0.234375, 0.890625, 0.578125)]), ('gloves', (127, 133), [(0.515625, 0.390625, 0.640625, 0.515625)]), ('a pot', (140, 145), [(0.078125, 0.609375, 0.265625, 0.859375)]), ('a cup', (157, 162), [(0.890625, 0.765625, 0.984375, 0.984375)])] # <grounding> Describe this image in detail: The image features a snowman sitting by<phrase> a campfire</phrase><object><patch_index_0005><patch_index_1007></object> in the snow. He is wearing<phrase> a hat</phrase><object><patch_index_0048><patch_index_0250></object>,<phrase> scarf</phrase><object><patch_index_0240><patch_index_0604></object>, and<phrase> gloves</phrase><object><patch_index_0400><patch_index_0532></object>, with<phrase> a pot</phrase><object><patch_index_0610><patch_index_0872></object> nearby and<phrase> a cup</phrase><object><patch_index_0796><patch_index_1023></object> nearby. The snowman appears to be enjoying the warmth of the fire, and it appears to have a warm and cozy atmosphere. 
``` </details> ## Draw the bounding bboxes of the entities on the image Once you have the `entities`, you can use the following helper function to draw their bounding bboxes on the image: <details> <summary> Click to expand </summary> ```python import cv2 import numpy as np import os import requests import torch import torchvision.transforms as T from PIL import Image def is_overlapping(rect1, rect2): x1, y1, x2, y2 = rect1 x3, y3, x4, y4 = rect2 return not (x2 < x3 or x1 > x4 or y2 < y3 or y1 > y4) def draw_entity_boxes_on_image(image, entities, show=False, save_path=None): """_summary_ Args: image (_type_): image or image path collect_entity_location (_type_): _description_ """ if isinstance(image, Image.Image): image_h = image.height image_w = image.width image = np.array(image)[:, :, [2, 1, 0]] elif isinstance(image, str): if os.path.exists(image): pil_img = Image.open(image).convert("RGB") image = np.array(pil_img)[:, :, [2, 1, 0]] image_h = pil_img.height image_w = pil_img.width else: raise ValueError(f"invaild image path, {image}") elif isinstance(image, torch.Tensor): image_tensor = image.cpu() reverse_norm_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073])[:, None, None] reverse_norm_std = torch.tensor([0.26862954, 0.26130258, 0.27577711])[:, None, None] image_tensor = image_tensor * reverse_norm_std + reverse_norm_mean pil_img = T.ToPILImage()(image_tensor) image_h = pil_img.height image_w = pil_img.width image = np.array(pil_img)[:, :, [2, 1, 0]] else: raise ValueError(f"invaild image format, {type(image)} for {image}") if len(entities) == 0: return image new_image = image.copy() previous_bboxes = [] # size of text text_size = 1 # thickness of text text_line = 1 # int(max(1 * min(image_h, image_w) / 512, 1)) box_line = 3 (c_width, text_height), _ = cv2.getTextSize("F", cv2.FONT_HERSHEY_COMPLEX, text_size, text_line) base_height = int(text_height * 0.675) text_offset_original = text_height - base_height text_spaces = 3 for entity_name, (start, end), bboxes in entities: for (x1_norm, y1_norm, x2_norm, y2_norm) in bboxes: orig_x1, orig_y1, orig_x2, orig_y2 = int(x1_norm * image_w), int(y1_norm * image_h), int(x2_norm * image_w), int(y2_norm * image_h) # draw bbox # random color color = tuple(np.random.randint(0, 255, size=3).tolist()) new_image = cv2.rectangle(new_image, (orig_x1, orig_y1), (orig_x2, orig_y2), color, box_line) l_o, r_o = box_line // 2 + box_line % 2, box_line // 2 + box_line % 2 + 1 x1 = orig_x1 - l_o y1 = orig_y1 - l_o if y1 < text_height + text_offset_original + 2 * text_spaces: y1 = orig_y1 + r_o + text_height + text_offset_original + 2 * text_spaces x1 = orig_x1 + r_o # add text background (text_width, text_height), _ = cv2.getTextSize(f" {entity_name}", cv2.FONT_HERSHEY_COMPLEX, text_size, text_line) text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2 = x1, y1 - (text_height + text_offset_original + 2 * text_spaces), x1 + text_width, y1 for prev_bbox in previous_bboxes: while is_overlapping((text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2), prev_bbox): text_bg_y1 += (text_height + text_offset_original + 2 * text_spaces) text_bg_y2 += (text_height + text_offset_original + 2 * text_spaces) y1 += (text_height + text_offset_original + 2 * text_spaces) if text_bg_y2 >= image_h: text_bg_y1 = max(0, image_h - (text_height + text_offset_original + 2 * text_spaces)) text_bg_y2 = image_h y1 = image_h break alpha = 0.5 for i in range(text_bg_y1, text_bg_y2): for j in range(text_bg_x1, text_bg_x2): if i < image_h and j < image_w: if j < text_bg_x1 + 1.35 * c_width: # 
original color bg_color = color else: # white bg_color = [255, 255, 255] new_image[i, j] = (alpha * new_image[i, j] + (1 - alpha) * np.array(bg_color)).astype(np.uint8) cv2.putText( new_image, f" {entity_name}", (x1, y1 - text_offset_original - 1 * text_spaces), cv2.FONT_HERSHEY_COMPLEX, text_size, (0, 0, 0), text_line, cv2.LINE_AA ) # previous_locations.append((x1, y1)) previous_bboxes.append((text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2)) pil_image = Image.fromarray(new_image[:, :, [2, 1, 0]]) if save_path: pil_image.save(save_path) if show: pil_image.show() return new_image # (The same image from the previous code example) url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) # From the previous code example entities = [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])] # Draw the bounding bboxes draw_entity_boxes_on_image(image, entities, show=True) ``` </details> Here is the annotated image: <a href="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" target="_blank"><img src="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" width="500"></a> ## BibTex and citation info ``` @article{kosmos-2, title={Kosmos-2: Grounding Multimodal Large Language Models to the World}, author={Zhiliang Peng and Wenhui Wang and Li Dong and Yaru Hao and Shaohan Huang and Shuming Ma and Furu Wei}, journal={ArXiv}, year={2023}, volume={abs/2306} } @article{kosmos-1, title={Language Is Not All You Need: Aligning Perception with Language Models}, author={Shaohan Huang and Li Dong and Wenhui Wang and Yaru Hao and Saksham Singhal and Shuming Ma and Tengchao Lv and Lei Cui and Owais Khan Mohammed and Qiang Liu and Kriti Aggarwal and Zewen Chi and Johan Bjorck and Vishrav Chaudhary and Subhojit Som and Xia Song and Furu Wei}, journal={ArXiv}, year={2023}, volume={abs/2302.14045} } @article{metalm, title={Language Models are General-Purpose Interfaces}, author={Yaru Hao and Haoyu Song and Li Dong and Shaohan Huang and Zewen Chi and Wenhui Wang and Shuming Ma and Furu Wei}, journal={ArXiv}, year={2022}, volume={abs/2206.06336} } ```
null
merve/blip2-opt-6.7b
# BLIP-2, OPT-6.7b, pre-trained only

BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal of the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/>

This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as prompt to the model

## Direct Use and Downstream Use

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you.

## Bias, Risks, Limitations, and Ethical Considerations

BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card:

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context it is being deployed within.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
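As a minimal sketch of conditional generation, mirroring the standard BLIP-2 snippets from the `transformers` documentation, and assuming this repository is a straight mirror of `Salesforce/blip2-opt-6.7b` that loads with the same classes:

```python
# pip install accelerate
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Assumption: this repo mirrors Salesforce/blip2-opt-6.7b and works with the stock BLIP-2 classes.
processor = Blip2Processor.from_pretrained("merve/blip2-opt-6.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "merve/blip2-opt-6.7b", torch_dtype=torch.float16, device_map="auto"
)

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# Visual question answering; drop the text prompt for plain captioning.
question = "Question: how many dogs are in the picture? Answer:"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True))
```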
null
dblasko/blip-dalle3-img2prompt
# DALL·E 3 Image prompt reverse-engineering

Pre-trained image-captioning model BLIP fine-tuned on a mixture of `laion/dalle-3-dataset` and semi-automatically gathered `(image, prompt)` data from DALL·E 3. It takes a generated image as input and outputs a potential prompt for generating such an image, which can then be used as a base to generate similar images.

⚠️ Disclaimer: This model is **not intended for commercial use**, as the data it was trained on includes images generated by DALL·E 3. It is for educational purposes only.

### Usage:

Loading the model and preprocessor:

```python
import torch
from transformers import BlipForConditionalGeneration, AutoProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"

model = BlipForConditionalGeneration.from_pretrained("dblasko/blip-dalle3-img2prompt").to(device)
processor = AutoProcessor.from_pretrained("dblasko/blip-dalle3-img2prompt")
```

Inference example on an image from `laion/dalle-3-dataset`:

```python
from datasets import load_dataset

dataset = load_dataset("laion/dalle-3-dataset", split="train[0%:1%]")  # small slice for fast download in this toy example

img_index = 0
example = dataset[img_index]
image = example["image"]
caption = example["caption"]

inputs = processor(images=image, return_tensors="pt").to(device)
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
generated_caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]

print(f"Generated caption: {generated_caption}\nReal caption: {caption}")
```
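To reverse-engineer a prompt from your own generated image, the same pipeline applies; a minimal sketch reusing the `model`, `processor`, and `device` defined above (the file path is hypothetical):

```python
from PIL import Image

image = Image.open("my_dalle3_image.png").convert("RGB")  # hypothetical path to a generated image

inputs = processor(images=image, return_tensors="pt").to(device)
generated_ids = model.generate(pixel_values=inputs.pixel_values, max_length=50)
prompt_guess = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(prompt_guess)  # a candidate prompt that could produce a similar image
```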
null
merve/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. 
### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your usecase: #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
smdesai/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. 
### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your usecase: #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
Ayansk11/Image_Caption_using_ViT_GPT2
# The Illustrated Image Captioning using transformers ![](https://ankur3107.github.io/assets/images/vision-encoder-decoder.png) # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("Ayansk11/Image_Caption_using_ViT_GPT2") feature_extractor = ViTImageProcessor.from_pretrained("Ayansk11/Image_Caption_using_ViT_GPT2") tokenizer = AutoTokenizer.from_pretrained("Ayansk11/Image_Caption_using_ViT_GPT2") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['doctor.e16ba4e4.jpg']) # ['a woman in a hospital bed with a woman in a hospital bed'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="Ayansk11/Image_Caption_using_ViT_GPT2") image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png") # [{'generated_text': 'a soccer game with a player jumping to catch the ball '}] ```
null
mrvero/BT-Image-Captioning-Large
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
kaist-ai/volcano-7b
## Links for Reference - **Repository: https://github.com/kaistAI/Volcano** - **Paper: https://arxiv.org/abs/2311.07362** # Overview ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550c4f27bbfce1878f5f280/AnqbCNf6pRiQ_5uNX0r4d.png) Volcano employs a single LMM to generate initial responses, feedback, and revisions, as well as decisions to accept revisions. It follows a sequential procedure of an iterative critique-revision-decide loop. # Model details **Model type:** Volcano-7b is a multimodal self-feedback guided revision model that was fine-tuned by mixing the visual instruction tuning dataset used in [LLaVA-v1.5](https://llava-vl.github.io/) with multimodal feedback and revision data collected through [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5), applied to the [vicuna-7b-v1.5](https://huggingface.co/lmsys/vicuna-7b-v1.5) model. **Model date:** Volcano-7b was trained in October 2023. # Training dataset - **274K multimodal feedback and revision data** - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 450K academic-task-oriented VQA data mixture. - 40K ShareGPT data You can find [here](https://huggingface.co/datasets/kaist-ai/volcano-train) the dataset used to train Volcano, which includes all the aforementioned datasets. # Evaluation dataset A collection of three multimodal hallucination benchmarks ([MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), [Pope](https://github.com/RUCAIBox/POPE), [GAVIE](https://github.com/FuxiaoLiu/LRV-Instruction)) and two multimodal understanding benchmarks ([MM-Vet](https://github.com/yuweihao/MM-Vet), [MMBench](https://github.com/open-compass/MMBench)).
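The critique-revision-decide loop described above can be made concrete with a short sketch. The helper callables below are hypothetical stand-ins for prompting the same Volcano model in its different roles; see the linked repository for the actual implementation:

```python
def volcano_answer(image, question, generate, critique, revise, decide, max_iters=3):
    """Iterative critique-revision-decide loop (sketch).

    `generate`, `critique`, `revise`, and `decide` are callables that wrap
    prompts to the same underlying Volcano LMM in its different roles.
    """
    answer = generate(image, question)                        # initial response
    for _ in range(max_iters):
        feedback = critique(image, question, answer)          # self-feedback on the answer
        revised = revise(image, question, answer, feedback)   # answer rewritten per the feedback
        kept = decide(image, question, answer, revised)       # accept or reject the revision
        if kept == answer:                                    # revision rejected -> stop
            return answer
        answer = revised                                      # revision accepted -> iterate
    return answer
```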
null
kaist-ai/volcano-13b
## Links for Reference - **Repository: https://github.com/kaistAI/Volcano** - **Paper: https://arxiv.org/abs/2311.07362** # Overview ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6550c4f27bbfce1878f5f280/AnqbCNf6pRiQ_5uNX0r4d.png) Volcano employs a single LMM to generate initial responses, feedback, and revisions, as well as decisions to accept revisions. It follows a sequential procedure of an iterative critique-revision-decide loop. # Model details **Model type:** Volcano-13b is a multimodal self-feedback guided revision model that was fine-tuned by mixing the visual instruction tuning dataset used in [LLaVA-v1.5](https://llava-vl.github.io/) with multimodal feedback and revision data collected through [gpt-3.5-turbo](https://platform.openai.com/docs/models/gpt-3-5), applied to the [vicuna-13b-v1.5](https://huggingface.co/lmsys/vicuna-13b-v1.5) model. **Model date:** Volcano-13b was trained in October 2023. # Training dataset - **274K multimodal feedback and revision data** - 558K filtered image-text pairs from LAION/CC/SBU, captioned by BLIP. - 158K GPT-generated multimodal instruction-following data. - 450K academic-task-oriented VQA data mixture. - 40K ShareGPT data You can find [here](https://huggingface.co/datasets/kaist-ai/volcano-train) the dataset used to train Volcano, which includes all the aforementioned datasets. # Evaluation dataset A collection of three multimodal hallucination benchmarks ([MMHal-Bench](https://huggingface.co/datasets/Shengcao1006/MMHal-Bench), [Pope](https://github.com/RUCAIBox/POPE), [GAVIE](https://github.com/FuxiaoLiu/LRV-Instruction)) and two multimodal understanding benchmarks ([MM-Vet](https://github.com/yuweihao/MM-Vet), [MMBench](https://github.com/open-compass/MMBench)).
null
shadowlilac/visor
# Visor - Natural Language Anime Tagging

Visor is a natural-language image tagging model built on the BLIP architecture. A typical use case is captioning anime images to build training data for diffusion models.
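The card ships no usage snippet, so here is a minimal sketch that assumes the checkpoint loads with the standard `transformers` BLIP captioning classes (an unverified assumption) and uses a hypothetical local image path:

```python
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumption: the checkpoint is compatible with the stock BLIP captioning classes.
processor = BlipProcessor.from_pretrained("shadowlilac/visor")
model = BlipForConditionalGeneration.from_pretrained("shadowlilac/visor")

image = Image.open("anime_sample.png").convert("RGB")  # hypothetical local file
inputs = processor(image, return_tensors="pt")

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=64)
print(processor.decode(out[0], skip_special_tokens=True))  # natural-language tags / caption
```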
null
kpyu/eilev-blip2-opt-2.7b
# Model Card for EILEV BLIP-2-OPT-2.7B ![Teaser](teaser.png) [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) trained using [EILeV](https://github.com/yukw777/EILEV), a novel training method that can elicit in-context learning in vision-language models (VLMs) for videos without requiring massive, naturalistic video datasets. ## Model Details ### Model Description EILEV BLIP-2-OPT-2.7B is a VLM optimized for egocentric video. It can perform in-context learning over videos and texts. It was trained on Ego4D. ### Model Sources - **Repository:** https://github.com/yukw777/EILEV - **Paper:** https://arxiv.org/abs/2311.17041 - **Demo:** https://2e09-141-212-106-177.ngrok-free.app ## Bias, Risks, and Limitations EILEV BLIP-2-OPT-2.7B uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > EILEV BLIP-2-OPT-2.7B has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ## How to Get Started with the Model Please check out the official repository: https://github.com/yukw777/EILEV
null
kpyu/eilev-blip2-flan-t5-xl
# Model Card for EILEV BLIP-2-Flan-T5-xl

![Teaser](teaser.png)

[Salesforce/blip2-flan-t5-xl](https://huggingface.co/Salesforce/blip2-flan-t5-xl) trained using [EILeV](https://github.com/yukw777/EILEV), a novel training method that can elicit in-context learning in vision-language models (VLMs) for videos without requiring massive, naturalistic video datasets.

## Model Details

### Model Description

EILEV BLIP-2-Flan-T5-xl is a VLM optimized for egocentric video. It can perform in-context learning over videos and texts. It was trained on Ego4D.

### Model Sources

- **Repository:** https://github.com/yukw777/EILEV
- **Paper:** https://arxiv.org/abs/2311.17041
- **Demo:** https://2e09-141-212-106-177.ngrok-free.app

## Bias, Risks, and Limitations

EILEV BLIP-2-Flan-T5-xl uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf):

> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.

EILEV BLIP-2-Flan-T5-xl has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context it is being deployed within.

## How to Get Started with the Model

Please check out the official repository: https://github.com/yukw777/EILEV
null
toshi456/llava-jp-1.3b-v1.0
# LLaVA-JP Model Card ## Model detail **Model type:** LLaVA-JP is a vision-language model that can converse about input images.<br> This model was trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) using [LLaVA](https://llava-vl.github.io/) method. **Training:** This model was initially trained with the Vision Projector using [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA) and STAIR Captions. <br> In the second phase, it was fine-tuned with LLaVA-Instruct-150K-JA and Japanese Visual Genome. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git ``` **2. Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": parser = transformers.HfArgumentParser( (ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() model_path = 'toshi456/llava-jp-1.3b-v1.0' model_args.vision_tower = "openai/clip-vit-large-patch14-336" device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaGpt2ForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1024, padding_side="right", use_fast=False, ) model.eval() conv_mode = "v1" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor(image, return_tensors='pt')['pixel_values'].half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor(image, return_tensors='pt')['pixel_values'].to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" 
inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) input_ids = input_ids[:, :-1] # </sep>がinputの最後に入るので削除する stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # predict with torch.inference_mode(): model.generate( inputs=input_ids, images=image_tensor, do_sample=True, temperature=0.01, top_p=1.0, max_new_tokens=256, streamer=streamer, use_cache=True, ) """ノートパソコン""" ``` ## Training dataset **Stage1 Pretrain** - [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA) - [Japanese STAIR Captions](http://captions.stair.center/) **Stage2 Fine-tuning** - [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) - [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa) ## Acknowledgement - [LLaVA](https://llava-vl.github.io/) - [LLM-jp](https://llm-jp.nii.ac.jp/) ## License cc-by-nc-4.0
null
Vidensogende/image-captioning-with-blip
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
kmewhort/blip2-flan-t5-xxl-safetensors
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. 
### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your usecase: #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
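##### In 4-bit precision (`nf4`)

A further memory reduction is possible with 4-bit NF4 quantization. The snippet below is a minimal sketch, assuming a recent `transformers` release with `BitsAndBytesConfig` support and with `bitsandbytes` and `accelerate` installed.

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import BitsAndBytesConfig, Blip2Processor, Blip2ForConditionalGeneration

# quantize the linear layers to 4-bit NF4 while loading the checkpoint
quantization_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-flan-t5-xxl",
    quantization_config=quantization_config,
    device_map="auto",
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>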
null
unum-cloud/uform-gen
<Gallery /> <h1 align="center">UForm</h1> <h3 align="center"> Pocket-Sized Multimodal AI<br/> For Content Understanding and Generation<br/> </h3> ## Description UForm-Gen is a small generative vision-language model primarily designed for Image Captioning and Visual Question Answering. The model consists of two parts: 1. [`uform-vl-english`](https://huggingface.co/unum-cloud/uform-vl-english) visual encoder, 2. [`Sheared-LLaMA-1.3B`](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) language model tuned on instruction datasets. The model was pre-trained on: MSCOCO, SBU Captions, Visual Genome, VQAv2, GQA and a few internal datasets. ### Usage ```bash pip install uform ``` The generative model can be used to caption images, summarize their content, or answer questions about them. The exact behavior is controlled by prompts. ```python from uform.gen_model import VLMForCausalLM, VLMProcessor model = VLMForCausalLM.from_pretrained("unum-cloud/uform-gen") processor = VLMProcessor.from_pretrained("unum-cloud/uform-gen") # [cap] Narrate the contents of the image with precision. # [cap] Summarize the visual content of the image. # [vqa] What is the main subject of the image? prompt = "[cap] Summarize the visual content of the image." image = Image.open("zebra.jpg") inputs = processor(texts=[prompt], images=[image], return_tensors="pt") with torch.inference_mode(): output = model.generate( **inputs, do_sample=False, use_cache=True, max_new_tokens=128, eos_token_id=32001, pad_token_id=processor.tokenizer.pad_token_id ) prompt_len = inputs["input_ids"].shape[1] decoded_text = processor.batch_decode(output[:, prompt_len:])[0] ``` ## Evaluation For captioning evaluation we measure CLIPScore and RefCLIPScore¹. | Model | Size | Caption Length | CLIPScore | RefCLIPScore | | :---------------------------------- | ---: | -------------: | --------: | -----------: | | `llava-hf/llava-1.5-7b-hf` | 7B | Long | 0.878 | 0.529 | | `llava-hf/llava-1.5-7b-hf` | 7B | Short | 0.886 | 0.531 | | | | `Salesforce/instructblip-vicuna-7b` | 7B | Long | 0.902 | 0.534 | | `Salesforce/instructblip-vicuna-7b` | 7B | Short | 0.848 | 0.523 | | | | | `unum-cloud/uform-gen` | 1.5B | Long | 0.847 | 0.523 | | `unum-cloud/uform-gen` | 1.5B | Short | 0.842 | 0.522 | Results for VQAv2 evaluation. | Model | Size | Accuracy | | :------------------------- | ---: | -------: | | `llava-hf/llava-1.5-7b-hf` | 7B | 78.5 | | `unum-cloud/uform-gen` | 1.5B | 66.5 | ¹ We used `apple/DFN5B-CLIP-ViT-H-14-378` CLIP model. ## Speed On RTX 3090, the following performance is expected on text token generation using `float16`, equivalent PyTorch settings, and greedy decoding. | Model | Size | Speed | Speedup | | :---------------------------------- | ---: | ------------------: | --------: | | `llava-hf/llava-1.5-7b-hf` | 7B | ~ 40 tokens/second | | | `Salesforce/instructblip-vicuna-7b` | 7B | ~ 40 tokens/second | | | `unum-cloud/uform-gen` | 1.5B | ~ 140 tokens/second | __x 3.5__ |
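The usage snippet earlier in this card assumes `torch` and `PIL.Image` are already imported. For reference, here is a self-contained sketch of a question-answering call with the same generation settings; the local image path is only a placeholder.

```python
import torch
from PIL import Image
from uform.gen_model import VLMForCausalLM, VLMProcessor

model = VLMForCausalLM.from_pretrained("unum-cloud/uform-gen")
processor = VLMProcessor.from_pretrained("unum-cloud/uform-gen")

# [vqa] prompts ask a question about the image
prompt = "[vqa] What is the main subject of the image?"
image = Image.open("zebra.jpg")

inputs = processor(texts=[prompt], images=[image], return_tensors="pt")
with torch.inference_mode():
    output = model.generate(
        **inputs,
        do_sample=False,
        use_cache=True,
        max_new_tokens=128,
        eos_token_id=32001,
        pad_token_id=processor.tokenizer.pad_token_id,
    )

# decode only the newly generated tokens, not the prompt
prompt_len = inputs["input_ids"].shape[1]
answer = processor.batch_decode(output[:, prompt_len:])[0]
print(answer)
```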
null
unum-cloud/uform-gen-chat
<h1 align="center">UForm</h1> <h3 align="center"> Pocket-Sized Multimodal AI<br/> For Content Understanding and Generation<br/> </h3> ## Description UForm-Gen is a small generative vision-language model primarily designed for Image Captioning and Visual Question Answering. The model consists of two parts: 1. [UForm Vision Encoder](https://huggingface.co/unum-cloud/uform-vl-english) 2. [Sheared-LLaMA-1.3B](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) manually tuned on the instructions dataset The model was pre-trained on: MSCOCO, SBU Captions, Visual Genome, VQAv2, GQA and a few internal datasets. UForm-Gen-Chat is SFT version of [`UForm-Gen`](https://huggingface.co/unum-cloud/uform-gen) for multimodal chat. ### Usage ```bash pip install uform ``` For the CLI demo run the following: ```bash uform-chat --model unum-cloud/uform-gen-chat --image_path=zebra.jpg uform-chat --model unum-cloud/uform-gen-chat --image_path=zebra.jpg --device="cuda:0" --fp16 ``` Or if you want to use the model in your code: ```python from uform.gen_model import VLMForCausalLM, VLMProcessor model = VLMForCausalLM.from_pretrained("unum-cloud/uform-gen-chat") processor = VLMProcessor.from_pretrained("unum-cloud/uform-gen-chat") prompt = "What do you see?" image = Image.open("zebra.jpg") inputs = processor(texts=[prompt], images=[image], return_tensors="pt") with torch.inference_mode(): output = model.generate( **inputs, do_sample=False, use_cache=True, max_new_tokens=128, eos_token_id=32001, pad_token_id=processor.tokenizer.pad_token_id ) prompt_len = inputs["input_ids"].shape[1] decoded_text = processor.batch_decode(output[:, prompt_len:])[0] ``` ## Evaluation For captioning evaluation we measure CLIPScore and RefCLIPScore¹. | Model | Size | Caption Length | CLIPScore | RefCLIPScore | | :---------------------------------- | ---: | -------------: | --------: | -----------: | | `llava-hf/llava-1.5-7b-hf` | 7B | Long | 0.878 | 0.529 | | `llava-hf/llava-1.5-7b-hf` | 7B | Short | 0.886 | 0.531 | | | | `Salesforce/instructblip-vicuna-7b` | 7B | Long | 0.902 | 0.534 | | `Salesforce/instructblip-vicuna-7b` | 7B | Short | 0.848 | 0.523 | | | | | `unum-cloud/uform-gen-chat` | 1.5B | Long | 0.860 | 0.525 | | `unum-cloud/uform-gen-chat` | 1.5B | Short | 0.858 | 0.525 | ¹ We used `apple/DFN5B-CLIP-ViT-H-14-378` CLIP model.
null
mrSoul7766/git-base-instagram-cap
# git-base-instagram-cap

This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the [mrSoul7766/instagram_post_captions](https://huggingface.co/datasets/mrSoul7766/instagram_post_captions) dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0093

### Usage

You can use the model through the Hugging Face Transformers library. Here's a basic example in Python; a lower-level alternative to the pipeline is sketched below, after the framework versions.

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("image-to-text", model="mrSoul7766/git-base-instagram-cap")

# Generate a caption for a local image
caption = pipe("/content/download (12).png", max_new_tokens=100)

# Print the generated caption
print(caption[0]['generated_text'])
```
```
i love my blonde character in kim kardashian hollywood! i'm playing now who's playing with me?
```

### Framework versions

- Transformers 4.37.0.dev0
- Pytorch 2.1.0+cu121
- Datasets 2.16.1
- Tokenizers 0.15.0
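If you prefer the lower-level API over the pipeline, the sketch below assumes this checkpoint keeps the standard `microsoft/git-base` processor configuration; the image URL is only an arbitrary example.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("mrSoul7766/git-base-instagram-cap")
model = AutoModelForCausalLM.from_pretrained("mrSoul7766/git-base-instagram-cap")

# load any RGB image; the URL below is just a placeholder example
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values

# GIT is a causal language model conditioned on image patch embeddings
generated_ids = model.generate(pixel_values=pixel_values, max_new_tokens=100)
caption = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(caption)
```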
null
4bit/uform-gen
<Gallery /> <h1 align="center">UForm</h1> <h3 align="center"> Pocket-Sized Multimodal AI<br/> For Content Understanding and Generation<br/> </h3> ## Description UForm-Gen is a small generative vision-language model primarily designed for Image Captioning and Visual Question Answering. The model consists of two parts: 1. [`uform-vl-english`](https://huggingface.co/unum-cloud/uform-vl-english) visual encoder, 2. [`Sheared-LLaMA-1.3B`](https://huggingface.co/princeton-nlp/Sheared-LLaMA-1.3B) language model tuned on instruction datasets. The model was pre-trained on: MSCOCO, SBU Captions, Visual Genome, VQAv2, GQA and a few internal datasets. ### Usage ```bash pip install uform ``` The generative model can be used to caption images, summarize their content, or answer questions about them. The exact behavior is controlled by prompts. ```python from uform.gen_model import VLMForCausalLM, VLMProcessor model = VLMForCausalLM.from_pretrained("unum-cloud/uform-gen") processor = VLMProcessor.from_pretrained("unum-cloud/uform-gen") # [cap] Narrate the contents of the image with precision. # [cap] Summarize the visual content of the image. # [vqa] What is the main subject of the image? prompt = "[cap] Summarize the visual content of the image." image = Image.open("zebra.jpg") inputs = processor(texts=[prompt], images=[image], return_tensors="pt") with torch.inference_mode(): output = model.generate( **inputs, do_sample=False, use_cache=True, max_new_tokens=128, eos_token_id=32001, pad_token_id=processor.tokenizer.pad_token_id ) prompt_len = inputs["input_ids"].shape[1] decoded_text = processor.batch_decode(output[:, prompt_len:])[0] ``` ## Evaluation For captioning evaluation we measure CLIPScore and RefCLIPScore¹. | Model | Size | Caption Length | CLIPScore | RefCLIPScore | | :---------------------------------- | ---: | -------------: | --------: | -----------: | | `llava-hf/llava-1.5-7b-hf` | 7B | Long | 0.878 | 0.529 | | `llava-hf/llava-1.5-7b-hf` | 7B | Short | 0.886 | 0.531 | | | | `Salesforce/instructblip-vicuna-7b` | 7B | Long | 0.902 | 0.534 | | `Salesforce/instructblip-vicuna-7b` | 7B | Short | 0.848 | 0.523 | | | | | `unum-cloud/uform-gen` | 1.5B | Long | 0.847 | 0.523 | | `unum-cloud/uform-gen` | 1.5B | Short | 0.842 | 0.522 | Results for VQAv2 evaluation. | Model | Size | Accuracy | | :------------------------- | ---: | -------: | | `llava-hf/llava-1.5-7b-hf` | 7B | 78.5 | | `unum-cloud/uform-gen` | 1.5B | 66.5 | ¹ We used `apple/DFN5B-CLIP-ViT-H-14-378` CLIP model. ## Speed On RTX 3090, the following performance is expected on text token generation using `float16`, equivalent PyTorch settings, and greedy decoding. | Model | Size | Speed | Speedup | | :---------------------------------- | ---: | ------------------: | --------: | | `llava-hf/llava-1.5-7b-hf` | 7B | ~ 40 tokens/second | | | `Salesforce/instructblip-vicuna-7b` | 7B | ~ 40 tokens/second | | | `unum-cloud/uform-gen` | 1.5B | ~ 140 tokens/second | __x 3.5__ |
null
adasdimchom/blip2-opt-6.7b-coco
# BLIP-2, OPT-6.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
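As a concrete starting point beyond the linked documentation, here is a minimal half-precision captioning sketch. It assumes this checkpoint loads with the standard BLIP-2 classes (like the upstream `Salesforce/blip2-opt-6.7b-coco`) and that `accelerate` is installed.

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("adasdimchom/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "adasdimchom/blip2-opt-6.7b-coco", torch_dtype=torch.float16, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```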
null
adasdimchom/blip2-opt-2.7b-coco
# BLIP-2, OPT-2.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
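Beyond the linked documentation, the sketch below shows visual question answering with the `Question: ... Answer:` prompt format that BLIP-2 OPT checkpoints are trained with. It assumes this repository loads with the standard BLIP-2 classes (like the upstream `Salesforce/blip2-opt-2.7b-coco`) and that `accelerate` is installed.

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("adasdimchom/blip2-opt-2.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "adasdimchom/blip2-opt-2.7b-coco", torch_dtype=torch.float16, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# visual question answering with the OPT-style prompt
prompt = "Question: how many dogs are in the picture? Answer:"
inputs = processor(raw_image, prompt, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```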
null
adasdimchom/blip-image-captioning-large
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
null
adityarajkishan/ImageCaptioningTransformers
null
gizmo-ai/blip-image-captioning-large
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
null
gizmo-ai/blip-image-captioning-base
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # 
unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
null
toshi456/llava-jp-1.3b-v1.0-siglip-so400m-patch14-384
# LLaVA-JP Model Card ## Model detail **Model type:** LLaVA-JP is a vision-language model that can converse about input images.<br> This model was trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) using [LLaVA](https://llava-vl.github.io/) method and [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) is used as Image Encoder. **Training:** This model was initially trained with the Vision Projector using [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA) and STAIR Captions. <br> In the second phase, it was fine-tuned with LLaVA-Instruct-150K-JA and Japanese Visual Genome. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git ``` **2. Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": parser = transformers.HfArgumentParser( (ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() model_path = 'toshi456/llava-jp-1.3b-v1.0-siglip-so400m-patch14-384' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaGpt2ForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1024, padding_side="right", use_fast=False, ) model.eval() conv_mode = "v1" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor(image, return_tensors='pt')['pixel_values'].half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor(image, return_tensors='pt')['pixel_values'].to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" 
inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) input_ids = input_ids[:, :-1] # </sep>がinputの最後に入るので削除する stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # predict with torch.inference_mode(): model.generate( inputs=input_ids, images=image_tensor, do_sample=True, temperature=0.01, top_p=1.0, max_new_tokens=256, streamer=streamer, use_cache=True, ) """猫の隣にはノートパソコンがある。<EOD|LLM-jp>""" ``` ## Training dataset **Stage1 Pretrain** - [LLaVA-CC3M-Pretrain-595K-JA](https://huggingface.co/datasets/toshi456/LLaVA-CC3M-Pretrain-595K-JA) - [Japanese STAIR Captions](http://captions.stair.center/) **Stage2 Fine-tuning** - [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) - [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa) ## Acknowledgement - [LLaVA](https://llava-vl.github.io/) - [LLM-jp](https://llm-jp.nii.ac.jp/) ## License cc-by-nc-4.0
null
unum-cloud/uform-gen2-qwen-500m
<h1 align="center">UForm</h1> <h3 align="center"> Pocket-Sized Multimodal AI<br/> For Content Understanding and Generation<br/> </h3> ## Description UForm-Gen is a small generative vision-language model primarily designed for Image Captioning and Visual Question Answering. The model consists of two parts: 1. CLIP-like ViT-H/14 2. [Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) The model was pre-trained on the internal image captioning dataset and fine-tuned on public instructions datasets: SVIT, LVIS, VQAs datasets. The model took one day to train on a DGX-H100 with 8x H100 GPUs. Thanks to [Nebius.ai](https://nebius.ai) for providing the compute 🤗 ### Usage The generative model can be used to caption images, answer questions about them. Also it is suitable for a multimodal chat. ```python from transformers import AutoModel, AutoProcessor model = AutoModel.from_pretrained("unum-cloud/uform-gen2-qwen-500m", trust_remote_code=True) processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-qwen-500m", trust_remote_code=True) prompt = "Question or Instruction" image = Image.open("image.jpg") inputs = processor(text=[prompt], images=[image], return_tensors="pt") with torch.inference_mode(): output = model.generate( **inputs, do_sample=False, use_cache=True, max_new_tokens=256, eos_token_id=151645, pad_token_id=processor.tokenizer.pad_token_id ) prompt_len = inputs["input_ids"].shape[1] decoded_text = processor.batch_decode(output[:, prompt_len:])[0] ``` You can check examples of different prompts in our demo space. ## Evaluation | Model | LLM Size | SQA | MME | MMBench | Average¹ | | :---------------------------------- | -------: | -----:| ------:| --------:| --------:| | UForm-Gen2-Qwen-500m | 0.5B | 45.5 | 880.1 | 42.0 | 29.31 | | MobileVLM v2 | 1.4B | 52.1 | 1302.8 | 57.7 | 36.81 | | LLaVA-Phi | 2.7B | 68.4 | 1335.1 | 59.8 | 42.95 | ¹MME scores were divided by 2000 before averaging.
null
turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1
# Heron BLIP Japanese StableLM Base 7B v1 ## Model Details Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. ## Usage Follow [the installation guide](https://github.com/turingmotors/heron/). ```python import torch from heron.models.video_blip import VideoBlipForConditionalGeneration, VideoBlipProcessor from transformers import LlamaTokenizer device_id = 0 device = f"cuda:{device_id}" MODEL_NAME = "turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1" model = VideoBlipForConditionalGeneration.from_pretrained( MODEL_NAME, torch_dtype=torch.float16, ignore_mismatched_sizes=True ) model = model.half() model.eval() model.to(device) # prepare a processor processor = VideoBlipProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁']) processor.tokenizer = tokenizer import requests from PIL import Image # prepare inputs url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw) text = f"##human: この画像の面白い点は何ですか?\n##gpt: " # do preprocessing inputs = processor( text=text, images=image, return_tensors="pt", truncation=True, ) inputs = {k: v.to(device) for k, v in inputs.items()} inputs["pixel_values"] = inputs["pixel_values"].to(device, torch.float16) # set eos token eos_token_id_list = [ processor.tokenizer.pad_token_id, processor.tokenizer.eos_token_id, int(tokenizer.convert_tokens_to_ids("##")) ] # do inference with torch.no_grad(): out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list, no_repeat_ngram_size=2) # print result print(processor.tokenizer.batch_decode(out)) ``` ## Model Details * **Developed by**: [Turing Inc.](https://www.turing-motors.com/) * **Adaptor type**: [BLIP2](https://arxiv.org/abs/2301.12597) * **Lamguage Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b) * **Language(s)**: Japanese ### Training This model was fully fine-tuned with [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA). ### Training Dataset - [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) ## Use and Limitations ### Intended Use This model is intended for use in chat-like applications and for research purposes. ### Limitations The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage. ## How to cite ```bibtex @misc{BlipJapaneseStableLM, url = {[https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0)}, title = {Heron BLIP Japanese StableLM Base 7B}, author = {Kotaro Tanahashi, Yuichi Inoue, and Yu Yamaguchi} } ``` ## Citations ```bibtex @misc{JapaneseInstructBLIPAlpha, url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)}, title = {Japanese InstructBLIP Alpha}, author = {Shing, Makoto and Akiba, Takuya} } ``` --- license: cc-by-nc-4.0 ---
null
benferns/instructblip-flan-t5-xl_8bit_nf4
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / nf4 / Safetensors_ -_Mediocre_ 🥱 # InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xl") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
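To actually run the model with the reduced memory footprint this repository targets, one option is to quantize the upstream checkpoint on the fly to 4-bit NF4. The sketch below assumes `bitsandbytes` and `accelerate` are installed and a `transformers` release with `BitsAndBytesConfig`; it quantizes at load time rather than reading pre-quantized weights.

```python
import torch
import requests
from PIL import Image
from transformers import BitsAndBytesConfig, InstructBlipProcessor, InstructBlipForConditionalGeneration

# 4-bit NF4 quantization applied while loading the upstream weights
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-flan-t5-xl",
    quantization_config=bnb_config,
    device_map="auto",
)

url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, do_sample=False, num_beams=5, max_length=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())
```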
null
turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k
# Heron BLIP Japanese StableLM Base 7B llava-620k ## Model Details Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. ## Usage Follow [the installation guide](https://github.com/turingmotors/heron/). ```python import torch from heron.models.video_blip import VideoBlipForConditionalGeneration, VideoBlipProcessor from transformers import LlamaTokenizer device_id = 0 device = f"cuda:{device_id}" MODEL_NAME = "turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1" model = VideoBlipForConditionalGeneration.from_pretrained( MODEL_NAME, torch_dtype=torch.float16, ignore_mismatched_sizes=True ) model = model.half() model.eval() model.to(device) # prepare a processor processor = VideoBlipProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁']) processor.tokenizer = tokenizer import requests from PIL import Image # prepare inputs url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw) text = f"##human: この画像の面白い点は何ですか?\n##gpt: " # do preprocessing inputs = processor( text=text, images=image, return_tensors="pt", truncation=True, ) inputs = {k: v.to(device) for k, v in inputs.items()} inputs["pixel_values"] = inputs["pixel_values"].to(device, torch.float16) # set eos token eos_token_id_list = [ processor.tokenizer.pad_token_id, processor.tokenizer.eos_token_id, int(tokenizer.convert_tokens_to_ids("##")) ] # do inference with torch.no_grad(): out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list, no_repeat_ngram_size=2) # print result print(processor.tokenizer.batch_decode(out)) ``` ## Model Details * **Developed by**: [Turing Inc.](https://www.turing-motors.com/) * **Adaptor type**: [BLIP2](https://arxiv.org/abs/2301.12597) * **Lamguage Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b) * **Language(s)**: Japanese ### Training This model was fully fine-tuned with LLaVA-Instruct-620K-JA. ### Training Dataset - LLaVA-Instruct-620K-JA ## Use and Limitations ### Intended Use This model is intended for use in chat-like applications and for research purposes. ### Limitations The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage. ## How to cite ```bibtex @misc{BlipJapaneseStableLM, url = {[https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0)}, title = {Heron BLIP Japanese StableLM Base 7B}, author = {Kotaro Tanahashi, Yuichi Inoue, and Yu Yamaguchi} } ``` ## Citations ```bibtex @misc{JapaneseInstructBLIPAlpha, url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)}, title = {Japanese InstructBLIP Alpha}, author = {Shing, Makoto and Akiba, Takuya} } ``` --- license: cc-by-nc-4.0 ---
null
orzhan/git-base-minecraft
null
ishaangupta293/kosmos-2-patch14-24-dup-ms
# Kosmos-2: Grounding Multimodal Large Language Models to the World <a href="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" target="_blank"><figure><img src="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" width="384"><figcaption><b>[An image of a snowman warming himself by a fire.]</b></figcaption></figure></a> This Hub repository contains a HuggingFace's `transformers` implementation of [the original Kosmos-2 model](https://github.com/microsoft/unilm/tree/master/kosmos-2) from Microsoft. ## How to Get Started with the Model Use the code below to get started with the model. ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForVision2Seq model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224") processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224") prompt = "<grounding>An image of" url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) # The original Kosmos-2 demo saves the image first then reload it. For some images, this will give slightly different image input and change the generation outputs. image.save("new_image.jpg") image = Image.open("new_image.jpg") inputs = processor(text=prompt, images=image, return_tensors="pt") generated_ids = model.generate( pixel_values=inputs["pixel_values"], input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], image_embeds=None, image_embeds_position_mask=inputs["image_embeds_position_mask"], use_cache=True, max_new_tokens=128, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] # Specify `cleanup_and_extract=False` in order to see the raw model generation. processed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False) print(processed_text) # `<grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a fire</phrase><object><patch_index_0005><patch_index_0911></object>.` # By default, the generated text is cleanup and the entities are extracted. processed_text, entities = processor.post_process_generation(generated_text) print(processed_text) # `An image of a snowman warming himself by a fire.` print(entities) # `[('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])]` ``` ## Tasks This model is capable of performing different tasks through changing the prompts. First, let's define a function to run a prompt. 
<details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import AutoProcessor, AutoModelForVision2Seq model = AutoModelForVision2Seq.from_pretrained("microsoft/kosmos-2-patch14-224") processor = AutoProcessor.from_pretrained("microsoft/kosmos-2-patch14-224") url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) def run_example(prompt): inputs = processor(text=prompt, images=image, return_tensors="pt") generated_ids = model.generate( pixel_values=inputs["pixel_values"], input_ids=inputs["input_ids"], attention_mask=inputs["attention_mask"], image_embeds=None, image_embeds_position_mask=inputs["image_embeds_position_mask"], use_cache=True, max_new_tokens=128, ) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0] _processed_text = processor.post_process_generation(generated_text, cleanup_and_extract=False) processed_text, entities = processor.post_process_generation(generated_text) print(processed_text) print(entities) print(_processed_text) ``` </details> Here are the tasks `Kosmos-2` could perform: <details> <summary> Click to expand </summary> ### Multimodal Grounding #### • Phrase Grounding ```python prompt = "<grounding><phrase> a snowman</phrase>" run_example(prompt) # a snowman is warming himself by the fire # [('a snowman', (0, 9), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('the fire', (32, 40), [(0.203125, 0.015625, 0.453125, 0.859375)])] # <grounding><phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> is warming himself by<phrase> the fire</phrase><object><patch_index_0006><patch_index_0878></object> ``` #### • Referring Expression Comprehension ```python prompt = "<grounding><phrase> a snowman next to a fire</phrase>" run_example(prompt) # a snowman next to a fire # [('a snowman next to a fire', (0, 24), [(0.390625, 0.046875, 0.984375, 0.828125)])] # <grounding><phrase> a snowman next to a fire</phrase><object><patch_index_0044><patch_index_0863></object> ``` ### Multimodal Referring #### • Referring expression generation ```python prompt = "<grounding><phrase> It</phrase><object><patch_index_0044><patch_index_0863></object> is" run_example(prompt) # It is snowman in a hat and scarf # [('It', (0, 2), [(0.390625, 0.046875, 0.984375, 0.828125)])] # <grounding><phrase> It</phrase><object><patch_index_0044><patch_index_0863></object> is snowman in a hat and scarf ``` ### Perception-Language Tasks #### • Grounded VQA ```python prompt = "<grounding> Question: What is special about this image? Answer:" run_example(prompt) # Question: What is special about this image? Answer: The image features a snowman sitting by a campfire in the snow. # [('a snowman', (71, 80), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a campfire', (92, 102), [(0.109375, 0.640625, 0.546875, 0.984375)])] # <grounding> Question: What is special about this image? Answer: The image features<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> sitting by<phrase> a campfire</phrase><object><patch_index_0643><patch_index_1009></object> in the snow. ``` #### • Grounded VQA with multimodal referring via bounding boxes ```python prompt = "<grounding> Question: Where is<phrase> the fire</phrase><object><patch_index_0005><patch_index_0911></object> next to? Answer:" run_example(prompt) # Question: Where is the fire next to? Answer: Near the snowman. 
# [('the fire', (19, 27), [(0.171875, 0.015625, 0.484375, 0.890625)]), ('the snowman', (50, 61), [(0.390625, 0.046875, 0.984375, 0.828125)])] # <grounding> Question: Where is<phrase> the fire</phrase><object><patch_index_0005><patch_index_0911></object> next to? Answer: Near<phrase> the snowman</phrase><object><patch_index_0044><patch_index_0863></object>. ``` ### Grounded Image captioning #### • Brief ```python prompt = "<grounding> An image of" run_example(prompt) # An image of a snowman warming himself by a campfire. # [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a campfire', (41, 51), [(0.109375, 0.640625, 0.546875, 0.984375)])] # <grounding> An image of<phrase> a snowman</phrase><object><patch_index_0044><patch_index_0863></object> warming himself by<phrase> a campfire</phrase><object><patch_index_0643><patch_index_1009></object>. ``` #### • Detailed ```python prompt = "<grounding> Describe this image in detail:" run_example(prompt) # Describe this image in detail: The image features a snowman sitting by a campfire in the snow. He is wearing a hat, scarf, and gloves, with a pot nearby and a cup nearby. The snowman appears to be enjoying the warmth of the fire, and it appears to have a warm and cozy atmosphere. # [('a campfire', (71, 81), [(0.171875, 0.015625, 0.484375, 0.984375)]), ('a hat', (109, 114), [(0.515625, 0.046875, 0.828125, 0.234375)]), ('scarf', (116, 121), [(0.515625, 0.234375, 0.890625, 0.578125)]), ('gloves', (127, 133), [(0.515625, 0.390625, 0.640625, 0.515625)]), ('a pot', (140, 145), [(0.078125, 0.609375, 0.265625, 0.859375)]), ('a cup', (157, 162), [(0.890625, 0.765625, 0.984375, 0.984375)])] # <grounding> Describe this image in detail: The image features a snowman sitting by<phrase> a campfire</phrase><object><patch_index_0005><patch_index_1007></object> in the snow. He is wearing<phrase> a hat</phrase><object><patch_index_0048><patch_index_0250></object>,<phrase> scarf</phrase><object><patch_index_0240><patch_index_0604></object>, and<phrase> gloves</phrase><object><patch_index_0400><patch_index_0532></object>, with<phrase> a pot</phrase><object><patch_index_0610><patch_index_0872></object> nearby and<phrase> a cup</phrase><object><patch_index_0796><patch_index_1023></object> nearby. The snowman appears to be enjoying the warmth of the fire, and it appears to have a warm and cozy atmosphere. 
``` </details> ## Draw the bounding bboxes of the entities on the image Once you have the `entities`, you can use the following helper function to draw their bounding bboxes on the image: <details> <summary> Click to expand </summary> ```python import cv2 import numpy as np import os import requests import torch import torchvision.transforms as T from PIL import Image def is_overlapping(rect1, rect2): x1, y1, x2, y2 = rect1 x3, y3, x4, y4 = rect2 return not (x2 < x3 or x1 > x4 or y2 < y3 or y1 > y4) def draw_entity_boxes_on_image(image, entities, show=False, save_path=None): """_summary_ Args: image (_type_): image or image path collect_entity_location (_type_): _description_ """ if isinstance(image, Image.Image): image_h = image.height image_w = image.width image = np.array(image)[:, :, [2, 1, 0]] elif isinstance(image, str): if os.path.exists(image): pil_img = Image.open(image).convert("RGB") image = np.array(pil_img)[:, :, [2, 1, 0]] image_h = pil_img.height image_w = pil_img.width else: raise ValueError(f"invaild image path, {image}") elif isinstance(image, torch.Tensor): image_tensor = image.cpu() reverse_norm_mean = torch.tensor([0.48145466, 0.4578275, 0.40821073])[:, None, None] reverse_norm_std = torch.tensor([0.26862954, 0.26130258, 0.27577711])[:, None, None] image_tensor = image_tensor * reverse_norm_std + reverse_norm_mean pil_img = T.ToPILImage()(image_tensor) image_h = pil_img.height image_w = pil_img.width image = np.array(pil_img)[:, :, [2, 1, 0]] else: raise ValueError(f"invaild image format, {type(image)} for {image}") if len(entities) == 0: return image new_image = image.copy() previous_bboxes = [] # size of text text_size = 1 # thickness of text text_line = 1 # int(max(1 * min(image_h, image_w) / 512, 1)) box_line = 3 (c_width, text_height), _ = cv2.getTextSize("F", cv2.FONT_HERSHEY_COMPLEX, text_size, text_line) base_height = int(text_height * 0.675) text_offset_original = text_height - base_height text_spaces = 3 for entity_name, (start, end), bboxes in entities: for (x1_norm, y1_norm, x2_norm, y2_norm) in bboxes: orig_x1, orig_y1, orig_x2, orig_y2 = int(x1_norm * image_w), int(y1_norm * image_h), int(x2_norm * image_w), int(y2_norm * image_h) # draw bbox # random color color = tuple(np.random.randint(0, 255, size=3).tolist()) new_image = cv2.rectangle(new_image, (orig_x1, orig_y1), (orig_x2, orig_y2), color, box_line) l_o, r_o = box_line // 2 + box_line % 2, box_line // 2 + box_line % 2 + 1 x1 = orig_x1 - l_o y1 = orig_y1 - l_o if y1 < text_height + text_offset_original + 2 * text_spaces: y1 = orig_y1 + r_o + text_height + text_offset_original + 2 * text_spaces x1 = orig_x1 + r_o # add text background (text_width, text_height), _ = cv2.getTextSize(f" {entity_name}", cv2.FONT_HERSHEY_COMPLEX, text_size, text_line) text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2 = x1, y1 - (text_height + text_offset_original + 2 * text_spaces), x1 + text_width, y1 for prev_bbox in previous_bboxes: while is_overlapping((text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2), prev_bbox): text_bg_y1 += (text_height + text_offset_original + 2 * text_spaces) text_bg_y2 += (text_height + text_offset_original + 2 * text_spaces) y1 += (text_height + text_offset_original + 2 * text_spaces) if text_bg_y2 >= image_h: text_bg_y1 = max(0, image_h - (text_height + text_offset_original + 2 * text_spaces)) text_bg_y2 = image_h y1 = image_h break alpha = 0.5 for i in range(text_bg_y1, text_bg_y2): for j in range(text_bg_x1, text_bg_x2): if i < image_h and j < image_w: if j < text_bg_x1 + 1.35 * c_width: # 
original color bg_color = color else: # white bg_color = [255, 255, 255] new_image[i, j] = (alpha * new_image[i, j] + (1 - alpha) * np.array(bg_color)).astype(np.uint8) cv2.putText( new_image, f" {entity_name}", (x1, y1 - text_offset_original - 1 * text_spaces), cv2.FONT_HERSHEY_COMPLEX, text_size, (0, 0, 0), text_line, cv2.LINE_AA ) # previous_locations.append((x1, y1)) previous_bboxes.append((text_bg_x1, text_bg_y1, text_bg_x2, text_bg_y2)) pil_image = Image.fromarray(new_image[:, :, [2, 1, 0]]) if save_path: pil_image.save(save_path) if show: pil_image.show() return new_image # (The same image from the previous code example) url = "https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/snowman.png" image = Image.open(requests.get(url, stream=True).raw) # From the previous code example entities = [('a snowman', (12, 21), [(0.390625, 0.046875, 0.984375, 0.828125)]), ('a fire', (41, 47), [(0.171875, 0.015625, 0.484375, 0.890625)])] # Draw the bounding bboxes draw_entity_boxes_on_image(image, entities, show=True) ``` </details> Here is the annotated image: <a href="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" target="_blank"><img src="https://huggingface.co/microsoft/kosmos-2-patch14-224/resolve/main/annotated_snowman.jpg" width="500"></a> ## BibTex and citation info ``` @article{kosmos-2, title={Kosmos-2: Grounding Multimodal Large Language Models to the World}, author={Zhiliang Peng and Wenhui Wang and Li Dong and Yaru Hao and Shaohan Huang and Shuming Ma and Furu Wei}, journal={ArXiv}, year={2023}, volume={abs/2306} } @article{kosmos-1, title={Language Is Not All You Need: Aligning Perception with Language Models}, author={Shaohan Huang and Li Dong and Wenhui Wang and Yaru Hao and Saksham Singhal and Shuming Ma and Tengchao Lv and Lei Cui and Owais Khan Mohammed and Qiang Liu and Kriti Aggarwal and Zewen Chi and Johan Bjorck and Vishrav Chaudhary and Subhojit Som and Xia Song and Furu Wei}, journal={ArXiv}, year={2023}, volume={abs/2302.14045} } @article{metalm, title={Language Models are General-Purpose Interfaces}, author={Yaru Hao and Haoyu Song and Li Dong and Shaohan Huang and Zewen Chi and Wenhui Wang and Shuming Ma and Furu Wei}, journal={ArXiv}, year={2022}, volume={abs/2206.06336} } ```
null
advaitadasein/blip2-opt-6.7b
# BLIP-2, OPT-6.7b, pre-trained only BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
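Below is a minimal usage sketch (not part of the original card) for quick experimentation; it assumes this checkpoint follows the standard BLIP-2 format and loads with `Blip2Processor` / `Blip2ForConditionalGeneration`, like the Salesforce BLIP-2 releases: ```python
# Hedged sketch: assumes "advaitadasein/blip2-opt-6.7b" is a standard BLIP-2 checkpoint.
import requests
import torch
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("advaitadasein/blip2-opt-6.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "advaitadasein/blip2-opt-6.7b", torch_dtype=torch.float16, device_map="auto"
)

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# Prompted generation (VQA-style); omit `text` for plain captioning.
inputs = processor(images=raw_image, text="Question: what is in the picture? Answer:", return_tensors="pt").to("cuda", torch.float16)
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```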
null
SRDdev/Nebula
null
tarekziade/distilvit
This model is a variation of https://huggingface.co/nlpconnect/vit-gpt2-image-captioning - Read the blog post here: https://ziade.org/2024/03/17/distilvit-image-captioning-model - The training code is here: https://github.com/tarekziade/distilvit Results after 3 epochs (and ~45 hours of training): - eval_loss: 0.19939416646957397 - eval_rouge1: 43.006 - eval_rouge2: 16.9939 - eval_rougeL: 38.8923 - eval_rougeLsum: 38.8877 - eval_gen_len: 11.327256736227712 - eval_runtime: 1816.5255 - eval_samples_per_second: 13.77 - eval_steps_per_second: 1.721 - train_runtime: 46263.3695 - train_samples_per_second: 38.373 - train_steps_per_second: 4.797 - train_loss: 0.05974134062104816
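The card lists evaluation metrics but no inference snippet. A minimal sketch is shown below; it assumes the checkpoint is a standard `VisionEncoderDecoderModel` export (like its parent nlpconnect/vit-gpt2-image-captioning) and is therefore compatible with the `image-to-text` pipeline: ```python
# Hedged sketch: assumes compatibility with the image-to-text pipeline.
from transformers import pipeline

captioner = pipeline("image-to-text", model="tarekziade/distilvit")
result = captioner("https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg")
print(result)  # e.g. [{'generated_text': '...'}]
```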
null
tarekziade/deit-tiny-distilgpt2
Variation of https://huggingface.co/tarekziade/distilvit Trained on 270k images from Flickr10k and COCO. Training source code: https://github.com/tarekziade/distilvit Results: - eval_loss: 0.2305169701576233 - eval_rouge1: 39.511 - eval_rouge2: 14.7798 - eval_rougeL: 35.9476 - eval_rougeLsum: 35.9497 - eval_gen_len: 11.695219762592236
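As with the model above, no usage code is included in the card. The sketch below uses the generic encoder-decoder classes, under the assumption that this repo ships a `VisionEncoderDecoderModel` together with a compatible image processor and tokenizer: ```python
# Hedged sketch: assumes the repo contains a VisionEncoderDecoderModel plus
# matching image-processor and tokenizer configs.
import requests
from PIL import Image
from transformers import AutoImageProcessor, AutoTokenizer, VisionEncoderDecoderModel

model_id = "tarekziade/deit-tiny-distilgpt2"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
image_processor = AutoImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = image_processor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, max_new_tokens=20)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True))
```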
null
unum-cloud/uform-gen2-dpo
<img src="Captions.jpg"> ## Description UForm-Gen2-dpo is a small generative vision-language model aligned for Image Captioning and Visual Question Answering on the preference datasets VLFeedback and LLaVA-Human-Preference-10K using Direct Preference Optimization (DPO). The model consists of two parts: 1. CLIP-like ViT-H/14 2. [Qwen1.5-0.5B-Chat](https://huggingface.co/Qwen/Qwen1.5-0.5B-Chat) The model took less than one day to train on a DGX-H100 with 8x H100 GPUs. Thanks to [Nebius.ai](https://nebius.ai) for providing the compute 🤗 ### Usage The generative model can be used to caption images and answer questions about them. It is also suitable for multimodal chat. ```python import torch from PIL import Image from transformers import AutoModel, AutoProcessor model = AutoModel.from_pretrained("unum-cloud/uform-gen2-dpo", trust_remote_code=True) processor = AutoProcessor.from_pretrained("unum-cloud/uform-gen2-dpo", trust_remote_code=True) prompt = "Question or Instruction" image = Image.open("image.jpg") inputs = processor(text=[prompt], images=[image], return_tensors="pt") with torch.inference_mode():  output = model.generate( **inputs, do_sample=False, use_cache=True, max_new_tokens=256, eos_token_id=151645, pad_token_id=processor.tokenizer.pad_token_id ) prompt_len = inputs["input_ids"].shape[1] decoded_text = processor.batch_decode(output[:, prompt_len:])[0] ``` You can check examples of different prompts in our demo space. ## Evaluation MME Benchmark | Model | perception| reasoning | OCR | artwork | celebrity | code_reasoning | color | commonsense_reasoning | count | existence | landmark | numerical_calculation | position | posters | scene | text_translation | | :---------------------------------- | --------: | --------: | -----:| ----------:| ----------:| --------------:| -----:| ---------------------:| -----:| ---------:| --------:| ---------------------:| --------:| -------:| -----:| ----------------:| | uform-gen2-dpo | 1,048.75 | 224.64 | 72.50 | 97.25 | 62.65 | 67.50 | 123.33 | 57.14 | 136.67 | 195.00 | 104.00 | 50.00 | 51.67 | 59.18 | 146.50 | 50.00 | | uform-gen2-qwen-500m | 863.40 | 236.43 | 57.50 | 93.00 | 67.06 | 57.50 | 78.33 | 81.43 | 53.33 | 150.00 | 98.00 | 50.00 | 50.00 | 62.93 | 153.25 | 47.50 |
null
turing-motors/heron-chat-git-ja-stablelm-base-7b-v1
# Heron GIT Japanese StableLM Base 7B ## Model Details Heron GIT Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. ## Usage Follow [the installation guide](https://github.com/turingmotors/heron/). ```python import torch from heron.models.git_llm.git_japanese_stablelm_alpha import GitJapaneseStableLMAlphaForCausalLM from transformers import AutoProcessor, LlamaTokenizer device_id = 0 device = f"cuda:{device_id}" MODEL_NAME = "turing-motors/heron-chat-git-ja-stablelm-base-7b-v1" model = GitJapaneseStableLMAlphaForCausalLM.from_pretrained( MODEL_NAME, torch_dtype=torch.float16, ignore_mismatched_sizes=True ) model.eval() model.to(device) # prepare a processor processor = AutoProcessor.from_pretrained(MODEL_NAME) tokenizer = LlamaTokenizer.from_pretrained( "novelai/nerdstash-tokenizer-v1", padding_side="right", additional_special_tokens=["▁▁"], ) processor.tokenizer = tokenizer import requests from PIL import Image # prepare inputs url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw) text = f"##human: この画像の面白い点は何ですか?\n##gpt: " # do preprocessing inputs = processor( text=text, images=image, return_tensors="pt", truncation=True, ) inputs = {k: v.to(device) for k, v in inputs.items()} # do inference with torch.no_grad(): out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., no_repeat_ngram_size=2) # print result print(processor.tokenizer.batch_decode(out)) ``` ## Model Details * **Developed by**: [Turing Inc.](https://www.turing-motors.com/) * **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100) * **Lamguage Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b) * **Language(s)**: Japanese ### Training 1. The GIT adaptor was trained with LLaVA-Pratrain-JA. 2. The LLM and the adapter were fully fine-tuned with LLaVA-Instruct-620K-JA-v2. ### Training Dataset 1. LLaVA-Pratrain-JA 2. LLaVA-Instruct-620K-JA-v2 ## Use and Limitations ### Intended Use This model is intended for use in chat-like applications and for research purposes. ### Limitations The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage. ## How to cite ```bibtex @misc{inoue2024heronbench, title={Heron-Bench: A Benchmark for Evaluating Vision Language Models in Japanese}, author={Yuichi Inoue and Kento Sasaki and Yuma Ochi and Kazuki Fujii and Kotaro Tanahashi and Yu Yamaguchi}, year={2024}, eprint={2404.07824}, archivePrefix={arXiv}, primaryClass={cs.CV} } ``` --- license: cc-by-nc-4.0 ---
null
sashakunitsyn/vlrm-blip2-opt-2.7b
# VLRM This repository contains the weights of BLIP-2 OPT-2.7B model fine-tuned by reinforcement learning method introduced in the paper [VLRM: Vision-Language Models act as Reward Models for Image Captioning](https://arxiv.org/abs/2404.01911). The RL-tuned model is able to generate longer and more comprehensive descriptions with zero computational overhead compared to the original model. You can find other details in the [GitHub Repository (to be done)](https://github.com/papermsucode). # Running the model ## Option 1 <details> <summary> Load the whole model from this repo </summary> ```python import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("sashakunitsyn/vlrm-blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("sashakunitsyn/vlrm-blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs, max_new_tokens=60) processor.decode(out[0], skip_special_tokens=True).strip() >>> 'a woman in a plaid shirt shaking hands with a yellow labrador retriever sitting on the ground at sunset on a beach in florida' ``` </details> ## Option 2 Since the fine-tuned layers take small part of the whole model, you can first load the original model, then load the RL-tuned weights. <details> <summary> Step 1. Load the original model </summary> ```python import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs, max_new_tokens=60) processor.decode(out[0], skip_special_tokens=True).strip() >>> 'a woman sitting on the beach with a dog' ``` </details> <details> <summary> Step 2. Load the RL-tuned weights </summary> Available checkpoints: - `vlrm-blip2-opt-2.7b.pt` (VLRM in the paper) - `vlrm-rs-blip2-opt-2.7b.pt` (VLRM-RS in the paper) ```python from huggingface_hub import hf_hub_download finetuned_weights_state_dict = torch.load(hf_hub_download(repo_id="sashakunitsyn/vlrm-blip2-opt-2.7b", filename="vlrm-blip2-opt-2.7b.pt")) model.load_state_dict(finetuned_weights_state_dict, strict=False) out = model.generate(**inputs, max_new_tokens=60) processor.decode(out[0], skip_special_tokens=True).strip() >>> 'a woman in a plaid shirt shaking hands with a yellow labrador retriever sitting on the ground at sunset on a beach in florida' ``` </details>
null
Wang9738/blip-image-captioning-base-Opticalvehicle-finetuned
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone) - and fine-tuned on [football dataset](https://huggingface.co/datasets/ybelkada/football-dataset). Google Colab notebook for fine-tuning: https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. 
Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("ybelkada/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("ybelkada/blip-image-captioning-base") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu 
and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
atasoglu/vit-base-patch16-224-turkish-gpt2
# vit-base-patch16-224-turkish-gpt2 This vision encoder-decoder model utilizes the [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) as the encoder and [ytu-ce-cosmos/turkish-gpt2](https://huggingface.co/ytu-ce-cosmos/turkish-gpt2) as the decoder, and it has been fine-tuned on the [flickr8k-turkish](https://huggingface.co/datasets/atasoglu/flickr8k-turkish) dataset to generate image captions in Turkish. ## Usage ```py import torch from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer from PIL import Image device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_id = "atasoglu/vit-base-patch16-224-turkish-gpt2" img = Image.open("example.jpg") feature_extractor = ViTImageProcessor.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) model = VisionEncoderDecoderModel.from_pretrained(model_id) model.to(device) features = feature_extractor(images=[img], return_tensors="pt") pixel_values = features.pixel_values.to(device) generated_captions = tokenizer.batch_decode( model.generate(pixel_values, max_new_tokens=20), skip_special_tokens=True, ) print(generated_captions) ```
null
atasoglu/vit-base-patch16-224-turkish-gpt2-medium
# vit-base-patch16-224-turkish-gpt2-medium This vision encoder-decoder model utilizes the [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) as the encoder and [ytu-ce-cosmos/turkish-gpt2-medium](https://huggingface.co/ytu-ce-cosmos/turkish-gpt2-medium) as the decoder, and it has been fine-tuned on the [flickr8k-turkish](https://huggingface.co/datasets/atasoglu/flickr8k-turkish) dataset to generate image captions in Turkish. ## Usage ```py import torch from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer from PIL import Image device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_id = "atasoglu/vit-base-patch16-224-turkish-gpt2-medium" img = Image.open("example.jpg") feature_extractor = ViTImageProcessor.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) model = VisionEncoderDecoderModel.from_pretrained(model_id) model.to(device) features = feature_extractor(images=[img], return_tensors="pt") pixel_values = features.pixel_values.to(device) generated_captions = tokenizer.batch_decode( model.generate(pixel_values, max_new_tokens=20), skip_special_tokens=True, ) print(generated_captions) ```
null
aisak-ai/aisak-visual
# AISAK-Visual ## Overview: AISAK-Visual, part of the AISAK system, is a pretrained model for image captioning based on the BLIP framework. Altered by the AISAK team from the https://huggingface.co/Salesforce/blip-image-captioning-large model, this model utilizes a ViT base backbone for unified vision-language understanding and generation. ## Model Information: - **Model Name**: AISAK-Visual - **Version**: 2.0 - **Model Architecture**: Transformer with ViT base backbone - **Specialization**: AISAK-Visual is part of the broader AISAK system and is specialized in image captioning tasks. ## Intended Use: AISAK-Visual, as part of AISAK, is designed to provide accurate and contextually relevant captions for images. Whether used for conditional or unconditional image captioning tasks, AISAK-Visual offers strong performance across various vision-language understanding and generation tasks. ## Performance: AISAK-Visual, based on the BLIP framework, achieves state-of-the-art results on image captioning tasks, including image-text retrieval, image captioning, and VQA. Its generalization ability is demonstrated by its strong performance on video-language tasks in a zero-shot manner. ## Ethical Considerations: - **Bias Mitigation**: Efforts have been made to mitigate bias during training; however, users are encouraged to remain vigilant about potential biases in the model's output. - **Fair Use**: Users should exercise caution when using AISAK-Visual in sensitive contexts and ensure fair and ethical use of the generated image captions. ## Limitations: - While AISAK-Visual demonstrates proficiency in image captioning tasks, it may not be suitable for tasks requiring domain-specific knowledge. - Performance may vary when presented with highly specialized or out-of-domain images. ## Deployment: Inferencing for AISAK-Visual will be handled as part of the full deployment of the AISAK system in the future. The process is lengthy and intensive in many areas, emphasizing the goal of achieving the optimal system rather than the quickest. However, work is being done as fast as humanly possible. Updates will be provided as frequently as possible. ## Caveats: - Users should verify important decisions based on AISAK-Visual's image captions, particularly in critical or high-stakes scenarios. ## Model Card Information: - **Model Card Created**: February 1, 2024 - **Last Updated**: February 19, 2024 - **Contact Information**: For any inquiries or communication regarding AISAK, please contact me at [email protected]. **© 2024 Mandela Logan. All rights reserved.** No part of this model may be reproduced, distributed, or transmitted in any form or by any means, including photocopying, recording, or other electronic or mechanical methods, without the prior written permission of the copyright holder. Users are expressly prohibited from creating replications or spaces derived from this model, whether in whole or in part, without the explicit authorization of the copyright holder. Unauthorized use or reproduction of this model is strictly prohibited by copyright law.
null
abhijit2111/Pic2Story
This is the BLIP Salesforce large image captioning model with small adjustments to the parameters on the back end for testing - note in particular that the length of reply is increased. # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
atasoglu/vit-small-patch16-224-turkish-small-bert-uncased
# vit-small-patch16-224-turkish-small-bert-uncased This vision encoder-decoder model utilizes the [WinKawaks/vit-small-patch16-224](https://huggingface.co/WinKawaks/vit-small-patch16-224) as the encoder and [ytu-ce-cosmos/turkish-small-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-small-bert-uncased) as the decoder, and it has been fine-tuned on the [flickr8k-turkish](https://huggingface.co/datasets/atasoglu/flickr8k-turkish) dataset to generate image captions in Turkish. ## Usage ```py import torch from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer from PIL import Image device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_id = "atasoglu/vit-small-patch16-224-turkish-small-bert-uncased" img = Image.open("example.jpg") feature_extractor = ViTImageProcessor.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) model = VisionEncoderDecoderModel.from_pretrained(model_id) model.to(device) features = feature_extractor(images=[img], return_tensors="pt") pixel_values = features.pixel_values.to(device) generated_captions = tokenizer.batch_decode( model.generate(pixel_values, max_new_tokens=20), skip_special_tokens=True, ) print(generated_captions) ```
null
atasoglu/vit-tiny-patch16-224-turkish-small-bert-uncased
# vit-tiny-patch16-224-turkish-small-bert-uncased This vision encoder-decoder model utilizes the [WinKawaks/vit-tiny-patch16-224](https://huggingface.co/WinKawaks/vit-tiny-patch16-224) as the encoder and [ytu-ce-cosmos/turkish-small-bert-uncased](https://huggingface.co/ytu-ce-cosmos/turkish-small-bert-uncased) as the decoder, and it has been fine-tuned on the [flickr8k-turkish](https://huggingface.co/datasets/atasoglu/flickr8k-turkish) dataset to generate image captions in Turkish. ## Usage ```py import torch from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer from PIL import Image device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model_id = "atasoglu/vit-tiny-patch16-224-turkish-small-bert-uncased" img = Image.open("example.jpg") feature_extractor = ViTImageProcessor.from_pretrained(model_id) tokenizer = AutoTokenizer.from_pretrained(model_id) model = VisionEncoderDecoderModel.from_pretrained(model_id) model.to(device) features = feature_extractor(images=[img], return_tensors="pt") pixel_values = features.pixel_values.to(device) generated_captions = tokenizer.batch_decode( model.generate(pixel_values, max_new_tokens=20), skip_special_tokens=True, ) print(generated_captions) ```
null
unography/blip-large-long-cap
# LongCap: Finetuned [BLIP](https://huggingface.co/Salesforce/blip-image-captioning-large) for generating long captions of images, suitable for prompts for text-to-image generation and captioning text-to-image datasets ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt") pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue. ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue. ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue. ``` </details>
null
toshi456/llava-jp-1.3b-v1.1
# LLaVA-JP Model Card ## Model detail **Model type:** LLaVA-JP is a vision-language model that can converse about input images.<br> This model is an LVLM model trained using [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. supports the input of 768 x 768 high resolution images by scaling_on_scales method. **Training:** This model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br> In the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main **Comparing VLMs** |Model|JA-VG-VQA-500<br>(ROUGE-L)|JA-VLM-Bench-In-the-Wild<br>(ROUGE-L)|Heron-Bench(Detail)|Heron-Bench(Conv)|Heron-Bench(Complex)|Heron-Bench(Average) |-|-|-|-|-|-|-| |[Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm)|-|40.50|25.15|51.23|37.84|38.07| |[EvoVLM-JP-v1-7B](https://huggingface.co/SakanaAI/EvoVLM-JP-v1-7B)|**19.70**|**51.25**|50.31|44.42|40.47|45.07| |[Heron BLIP Japanese StableLM Base 7B llava-620k](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k)|14.51|33.26|49.09|41.51|45.72|45.44| |[Heron GIT Japanese StableLM Base 7B](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v1)|15.18|37.82|42.77|**54.20**|43.53|46.83| |[llava-jp-1.3b-v1.0-620k](https://huggingface.co/toshi456/llava-jp-1.3b-v1.0-620k)|12.69|44.58|**51.21**|41.05|45.95|44.84| |[llava-jp-1.3b-v1.1](https://huggingface.co/toshi456/llava-jp-1.3b-v1.1)|13.33|44.40|50.00|51.83|**48.98**|**50.39**| ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630af71ffaaea618ebc973db/rnzCN-LFpK4iDL5RZ9oyI.png) ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git ``` **2. 
Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": model_path = 'toshi456/llava-jp-1.3b-v1.1' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaGpt2ForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1532, padding_side="right", use_fast=False, ) model.eval() conv_mode = "v1" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') image_size = model.get_model().vision_tower.image_processor.size["height"] if model.get_model().vision_tower.scales is not None: image_size = model.get_model().vision_tower.image_processor.size["height"] * len(model.get_model().vision_tower.scales) if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) input_ids = input_ids[:, :-1] # </sep>がinputの最後に入るので削除する stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # predict with torch.inference_mode(): model.generate( inputs=input_ids, images=image_tensor, do_sample=True, temperature=0.1, top_p=1.0, max_new_tokens=256, streamer=streamer, use_cache=True, ) """猫の隣にはノートパソコンがあります。""" ``` ## Training dataset **Stage1 Pretrain** - [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA) **Stage2 Fine-tuning** - [LLaVA-v1.5-Instruct-620K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-v1.5-Instruct-620K-JA) ## Acknowledgement - [LLaVA](https://llava-vl.github.io/) - [LLM-jp](https://llm-jp.nii.ac.jp/) - [scaling_on_scales](https://github.com/bfshi/scaling_on_scales/tree/master) ## License cc-by-nc-4.0
null
toshi456/llava-jp-1.3b-v1.0-620k
# LLaVA-JP Model Card ## Model detail **Model type:** LLaVA-JP is a vision-language model that can converse about input images.<br> This model was trained by fine-tuning [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) using [LLaVA](https://llava-vl.github.io/) method and [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) is used as Image Encoder. **Training:** This model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br> In the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git ``` **2. Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": parser = transformers.HfArgumentParser( (ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() model_path = 'toshi456/llava-jp-1.3b-v1.0-620k' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaGpt2ForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1532, padding_side="right", use_fast=False, ) model.eval() conv_mode = "v1" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') image_size = model.get_model().vision_tower.image_processor.size["height"] if model.get_model().vision_tower.scales is not None: image_size = model.get_model().vision_tower.image_processor.size["height"] * len(model.get_model().vision_tower.scales) if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" 
inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) input_ids = input_ids[:, :-1] # </sep>がinputの最後に入るので削除する stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # predict with torch.inference_mode(): model.generate( inputs=input_ids, images=image_tensor, do_sample=True, temperature=0.01, top_p=1.0, max_new_tokens=256, streamer=streamer, use_cache=True, ) """猫の隣にはノートパソコンがあります。""" ``` ## Training dataset **Stage1 Pretrain** - [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA) **Stage2 Fine-tuning** - [LLaVA-v1.5-Instruct-620K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-v1.5-Instruct-620K-JA) ## Acknowledgement - [LLaVA](https://llava-vl.github.io/) - [LLM-jp](https://llm-jp.nii.ac.jp/) ## License cc-by-nc-4.0
null
unography/blip-large-long-cap-sam-llava
# LongCap: Finetuned [BLIP](https://huggingface.co/Salesforce/blip-image-captioning-large) for generating long captions of images, suitable for prompts for text-to-image generation and captioning text-to-image datasets ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt") pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue. ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue. ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-large-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-large-long-cap", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach, wearing a checkered shirt and a dog collar. the woman is interacting with the dog, which is positioned towards the left side of the image. the setting is a beachfront with a calm sea and a golden hue. ``` </details>
null
Revrse/icon-captioning-model
```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Revrse/icon-captioning-model")
model = BlipForConditionalGeneration.from_pretrained("Revrse/icon-captioning-model")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
null
unography/blip-long-cap
# LongCap: Finetuned [BLIP](https://huggingface.co/Salesforce/blip-image-captioning-base) for generating long captions of images, suitable for prompts for text-to-image generation and captioning text-to-image datasets ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-long-cap") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt") pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250, num_beams=3, repetition_penalty=2.5) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the sand, interacting with a dog wearing a blue and white checkered collar. the dog is positioned to the left of the woman, who is holding something in their hand. the background features a serene beach setting with waves crashing onto the shore. there are no other animals or people visible in the image. the time of day appears to be either early morning or late afternoon, based on the lighting and shadows. ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-long-cap").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250, num_beams=3, repetition_penalty=2.5) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the sand, interacting with a dog wearing a blue and white checkered collar. the dog is positioned to the left of the woman, who is holding something in their hand. the background features a serene beach setting with waves crashing onto the shore. there are no other animals or people visible in the image. the time of day appears to be either early morning or late afternoon, based on the lighting and shadows. 
``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("unography/blip-long-cap") model = BlipForConditionalGeneration.from_pretrained("unography/blip-long-cap", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) pixel_values = inputs.pixel_values out = model.generate(pixel_values=pixel_values, max_length=250, num_beams=3, repetition_penalty=2.5) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the sand, interacting with a dog wearing a blue and white checkered collar. the dog is positioned to the left of the woman, who is holding something in their hand. the background features a serene beach setting with waves crashing onto the shore. there are no other animals or people visible in the image. the time of day appears to be either early morning or late afternoon, based on the lighting and shadows. ``` </details>
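For quick experiments, the same checkpoint can also be driven through the `image-to-text` pipeline. This is a convenience sketch rather than part of the original card; the generation settings mirror the snippets above.

```python
from transformers import pipeline

# image-to-text pipeline wrapping the same checkpoint
captioner = pipeline("image-to-text", model="unography/blip-long-cap")

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
result = captioner(
    img_url,
    generate_kwargs={"max_length": 250, "num_beams": 3, "repetition_penalty": 2.5},
)
print(result[0]["generated_text"])
```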
null
toshi456/llava-jp-karasu-1.1b-v1.0-620k
# LLaVA-JP Model Card ## Model detail **Model type:** LLaVA-JP is a vision-language model that can converse about input images.<br> This model was trained by fine-tuning [lightblue/karasu-1.1B](https://huggingface.co/lightblue/karasu-1.1B) using [LLaVA](https://llava-vl.github.io/) method and [google/siglip-so400m-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) is used as Image Encoder. **Training:** This model was initially trained with the Vision Projector using LLaVA-Pretrain-JA.<br> In the second phase, it was fine-tuned with LLaVA-v1.5-Instruct-620K-JA. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main **Comparing VLMs:** |Model|JA-VG-VQA-500<br>(ROUGE-L)|JA-VLM-Bench-In-the-Wild<br>(ROUGE-L)|Heron-Bench(Detail)|Heron-Bench(Conv)|Heron-Bench(Complex)|Heron-Bench(Average) |-|-|-|-|-|-|-| |[Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm)|-|40.50|25.15|51.23|37.84|38.07| |[EvoVLM-JP-v1-7B](https://huggingface.co/SakanaAI/EvoVLM-JP-v1-7B)|**19.70**|**51.25**|50.31|44.42|40.47|45.07| |[Heron BLIP Japanese StableLM Base 7B llava-620k](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k)|14.51|33.26|49.09|41.51|45.72|45.44| |[Heron GIT Japanese StableLM Base 7B](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v1)|15.18|37.82|42.77|**54.20**|43.53|46.83| |[llava-jp-1.3b-v1.0-620k](https://huggingface.co/toshi456/llava-jp-1.3b-v1.0-620k)|12.69|44.58|**51.21**|41.05|45.95|44.84| |[llava-jp-1.3b-v1.1](https://huggingface.co/toshi456/llava-jp-1.3b-v1.1)|13.33|44.40|50.00|51.83|**48.98**|**50.39**| |[llava-jp-karasu-1.1b-v1.0-620k](https://huggingface.co/toshi456/llava-jp-karasu-1.1b-v1.0-620k)|13.23|44.59|42.16|43.79|40.35|42.16| ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git -b develop ``` **2. 
Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_llama import LlavaLlamaForCausalLM from llava.train.arguments_dataclass import ModelArguments, DataArguments, TrainingArguments from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": parser = transformers.HfArgumentParser( (ModelArguments, DataArguments, TrainingArguments)) model_args, data_args, training_args = parser.parse_args_into_dataclasses() model_path = 'toshi456/llava-jp-karasu-1.1b-v1.0-620k' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaLlamaForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1532, padding_side="right", use_fast=False, ) model.eval() conv_mode = "karasu" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') image_size = model.get_model().vision_tower.image_processor.size["height"] if model.get_model().vision_tower.scales is not None: image_size = model.get_model().vision_tower.image_processor.size["height"] * len(model.get_model().vision_tower.scales) if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor( image, return_tensors='pt', size={"height": image_size, "width": image_size} )['pixel_values'].to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) input_ids = input_ids[:, :-1] stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # predict with torch.inference_mode(): model.generate( inputs=input_ids, images=image_tensor, do_sample=True, temperature=0.1, top_p=1.0, max_new_tokens=512, streamer=streamer, use_cache=True, ) """猫の隣にはノートパソコンがあります。""" ``` ## Training dataset **Stage1 Pretrain** - [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA) **Stage2 Fine-tuning** - [LLaVA-v1.5-Instruct-620K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-v1.5-Instruct-620K-JA) ## Acknowledgement - [LLaVA](https://llava-vl.github.io/) ## License cc-by-nc-4.0
null
toshi456/chat-vector-llava-v1.5-7b-ja
# Chat-Vector-LLaVA-v1.5-7b-JA Model Card ## Model detail **Model type:** Chat-Vector-LLaVA-v1.5-7b-JA is a vision-language model that can converse about input images in Japanese.<br> This model was created by adding and subtracting the weights of the [llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b), [Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf), and [ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b) models using the Chat Vector method as follows. ``` ELYZA-japanese-Llama-2-7b + (llava-v1.5-7b - Llama-2-7b-hf) ``` Chat-Vector-LLaVA-v1.5-7b-JAは、入力画像について日本語で会話できるvision-language modelです。<br> このモデルはChat Vectorの手法で[llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b)と[Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf)と[ELYZA-japanese-Llama-2-7b](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b)のモデルの重みを以下の通り加減算することで作成しました。 ``` ELYZA-japanese-Llama-2-7b + (llava-v1.5-7b - Llama-2-7b-hf) ``` **Comparing VLMs** |Model|JA-VG-VQA-500<br>(ROUGE-L)|JA-VLM-Bench-In-the-Wild<br>(ROUGE-L)|Heron-Bench(Detail)|Heron-Bench(Conv)|Heron-Bench(Complex)|Heron-Bench(Average) |-|-|-|-|-|-|-| |[Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm)|-|40.50|25.15|51.23|37.84|38.07| |[EvoVLM-JP-v1-7B](https://huggingface.co/SakanaAI/EvoVLM-JP-v1-7B)|**19.70**|**51.25**|50.31|44.42|40.47|45.07| |[Heron BLIP Japanese StableLM Base 7B llava-620k](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k)|14.51|33.26|49.09|41.51|45.72|45.44| |[Heron GIT Japanese StableLM Base 7B](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v1)|15.18|37.82|42.77|**54.20**|43.53|46.83| |[llava-jp-1.3b-v1.0-620k](https://huggingface.co/toshi456/llava-jp-1.3b-v1.0-620k)|12.69|44.58|51.21|41.05|45.95|44.84| |[llava-jp-1.3b-v1.1](https://huggingface.co/toshi456/llava-jp-1.3b-v1.1)|13.33|44.40|50.00|51.83|**48.98**|**50.39**| |[chat-vector-llava-v1.5-7b-ja](https://huggingface.co/toshi456/chat-vector-llava-v1.5-7b-ja)|18.64|42.23|**53.61**|44.36|44.48|46.10| ![image/png](https://cdn-uploads.huggingface.co/production/uploads/630af71ffaaea618ebc973db/jSW9RYPccrxaqrxntwtUb.png) ## How to use the model > [!WARNING] > The code for the demo worked with 4.34.1 of transformers, but did not work properly with 4.37.2. We have not tested the code in between versions or in the latest version.<br><br> > デモ用のコードはtransformersの4.34.1では動作しましたが、4.37.2では正常に動作しませんでした。間のバージョンや最新のバージョンでは動作確認していません。 **1. Download dependencies** ``` git clone https://github.com/tosiyuki/vlm-chat-vector-ja.git ``` **2. 
Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM from llava.mm_utils import tokenizer_image_token, process_images if __name__ == "__main__": model_path = 'toshi456/chat-vector-llava-v1.5-7b-ja' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaLlamaForCausalLM.from_pretrained( model_path, device_map=device, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch.float16, ).eval() tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1024, padding_side="right", use_fast=False, ) model.get_model().vision_tower.load_model() model = model.to(device) eos_token_id_list = [ tokenizer.eos_token_id, tokenizer.bos_token_id, ] # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') if not isinstance(image, list): image = [image] image_tensor = process_images(image, model.get_model().vision_tower.image_processor, model.config) if type(image_tensor) is list: image_tensor = [image.to(model.device, dtype=torch.float16) for image in image_tensor] else: image_tensor = image_tensor.to(model.device, dtype=torch.float16) # create prompt # ユーザー: <image>\n{prompt} conv_mode = "llava_llama_2" conv = conv_templates[conv_mode].copy() prompt = "猫の隣には何がありますか?" inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # parameter temperature = 0.0 top_p = 1.0 max_new_tokens=256 # predict with torch.inference_mode(): model.generate( inputs=input_ids, images=image_tensor, do_sample=True if temperature > 0 else False, temperature=temperature, top_p=top_p, max_new_tokens=max_new_tokens, streamer=streamer, use_cache=True, eos_token_id=eos_token_id_list, ) """猫の隣には、コンピューター(パソコン)があります。<s>""" ``` ## Acknowledgement - [LLaVA](https://llava-vl.github.io/) - [Chat Vector](https://arxiv.org/abs/2310.04799) ## License cc-by-nc-4.0
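The merge recipe quoted above (`ELYZA-japanese-Llama-2-7b + (llava-v1.5-7b - Llama-2-7b-hf)`) is only described in prose. Below is a minimal sketch of that weight arithmetic. It is not the script actually used to build this checkpoint; it assumes the three checkpoints share parameter names and shapes for the language-model weights (anything that does not match is skipped), and it needs enough RAM to hold three 7B models at once.

```python
import torch
from transformers import AutoModelForCausalLM
# from the LLaVA code base cloned in step 1, as in the inference example above
from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM

# chat vector: japanese base + (llava - llama-2) == llava + (japanese base - llama-2)
llava = LlavaLlamaForCausalLM.from_pretrained("liuhaotian/llava-v1.5-7b", torch_dtype=torch.float16)
base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf", torch_dtype=torch.float16)
ja = AutoModelForCausalLM.from_pretrained("elyza/ELYZA-japanese-Llama-2-7b", torch_dtype=torch.float16)

llava_sd, base_sd, ja_sd = llava.state_dict(), base.state_dict(), ja.state_dict()

# start from the LLaVA checkpoint so the vision tower / projector weights are kept as-is
merged = dict(llava_sd)
for name, ja_param in ja_sd.items():
    if (
        name in llava_sd
        and name in base_sd
        and ja_param.shape == base_sd[name].shape == llava_sd[name].shape
    ):
        merged[name] = llava_sd[name] + (ja_param - base_sd[name])

llava.load_state_dict(merged, strict=False)
llava.save_pretrained("chat-vector-llava-v1.5-7b-ja-merged")
```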
null
evlinzxxx/my_model_ViTB-16
# Sample running code

The helper functions are defined inline below so the snippet runs as-is; the generation settings are illustrative.

```python
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, GPT2Tokenizer
import torch
from PIL import Image
from IPython.display import display

model = VisionEncoderDecoderModel.from_pretrained("evlinzxxx/my_model_ViTB-16")
feature_extractor = ViTImageProcessor.from_pretrained("evlinzxxx/my_model_ViTB-16")
tokenizer = GPT2Tokenizer.from_pretrained("evlinzxxx/my_model_ViTB-16")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

def get_caption(model, image_processor, tokenizer, image_path):
    # encode the image, generate, and decode the caption (generation settings are illustrative)
    image = Image.open(image_path).convert("RGB")
    pixel_values = image_processor(images=image, return_tensors="pt").pixel_values.to(device)
    output_ids = model.generate(pixel_values, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def show_image_and_captions(image_path):
    # show the image (when running in a notebook)
    display(Image.open(image_path))
    # get the caption and print it
    our_caption = get_caption(model, feature_extractor, tokenizer, image_path)
    print(f"Our caption: {our_caption}")

show_image_and_captions("/content/drive/MyDrive/try/test_400/gl_16.jpg")
# ['navigate around the obstacle ahead adjusting your route to bypass the parked car.']
```
null
anonymoussubmission2024/vlrm-blip2-opt-2.7b
# VLRM This repository contains the weights of BLIP-2 OPT-2.7B model fine-tuned by reinforcement learning method introduced in the paper VLRM: Vision-Language Models Act as Reward Models for Image Captioning. The RL-tuned model is able to generate longer and more comprehensive descriptions with zero computational overhead compared to the original model. # CLIP Recall CLIP Recall calculation scripts are provided in `validate` folder together with `README.md` and `captions.txt`. # Running the model ## Option 1 <details> <summary> Load the whole model from this repo </summary> ```python import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("anonymoussubmission2024/vlrm-blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("anonymoussubmission2024/vlrm-blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs, max_new_tokens=60) processor.decode(out[0], skip_special_tokens=True).strip() >>> 'a woman in a plaid shirt shaking hands with a yellow labrador retriever sitting on the ground at sunset on a beach in florida' ``` </details> ## Option 2 Since the fine-tuned layers take small part of the whole model, you can first load the original model, then load the RL-tuned weights. <details> <summary> Step 1. Load the original model </summary> ```python import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs, max_new_tokens=60) processor.decode(out[0], skip_special_tokens=True).strip() >>> 'a woman sitting on the beach with a dog' ``` </details> <details> <summary> Step 2. Load the RL-tuned weights </summary> Available checkpoints: - `vlrm-blip2-opt-2.7b.pt` (VLRM in the paper) - `vlrm-rs-blip2-opt-2.7b.pt` (VLRM-RS in the paper) ```python from huggingface_hub import hf_hub_download finetuned_weights_state_dict = torch.load(hf_hub_download(repo_id="anonymoussubmission2024/vlrm-blip2-opt-2.7b", filename="vlrm-blip2-opt-2.7b.pt")) model.load_state_dict(finetuned_weights_state_dict, strict=False) out = model.generate(**inputs, max_new_tokens=60) processor.decode(out[0], skip_special_tokens=True).strip() >>> 'a woman in a plaid shirt shaking hands with a yellow labrador retriever sitting on the ground at sunset on a beach in florida' ``` </details>
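The card refers to CLIP Recall without defining it here. One common way to compute caption-to-image recall@1 with CLIP is sketched below; the `openai/clip-vit-base-patch32` backbone, the pairing of one generated caption per image, and the exact metric definition are assumptions, and the scripts in the `validate` folder may differ.

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

device = "cuda" if torch.cuda.is_available() else "cpu"
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32").to(device)
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# hypothetical inputs: one generated caption per image, in matching order
image_paths = ["img_0.jpg", "img_1.jpg"]
captions = ["a woman in a plaid shirt on a beach", "a dog running on the sand"]

images = [Image.open(p).convert("RGB") for p in image_paths]
inputs = clip_processor(
    text=captions, images=images, return_tensors="pt", padding=True, truncation=True
).to(device)

with torch.no_grad():
    out = clip(**inputs)
    image_emb = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    text_emb = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)

# caption-to-image cosine similarities; recall@1 = fraction of captions whose top match is their own image
sim = text_emb @ image_emb.T
recall_at_1 = (sim.argmax(dim=-1) == torch.arange(len(captions), device=device)).float().mean()
print(f"CLIP recall@1: {recall_at_1.item():.3f}")
```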
null
aayushgs/Salesforce-blip-image-captioning-large-custom-handler
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
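The repository name mentions a custom handler, which the card itself does not document. The sketch below shows what a generic `handler.py` for a BLIP captioning Inference Endpoint typically looks like; it is not the handler shipped in this repository, and the request fields (`inputs`, `text`) are assumptions.

```python
# handler.py -- generic sketch of a custom Inference Endpoints handler for BLIP captioning.
# NOT the handler shipped in this repository; the payload layout is an assumption.
from typing import Any, Dict
import base64
import io

import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration


class EndpointHandler:
    def __init__(self, path: str = ""):
        # `path` points at the repository contents when the endpoint starts
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        self.processor = BlipProcessor.from_pretrained(path)
        self.model = BlipForConditionalGeneration.from_pretrained(path).to(self.device)
        self.model.eval()

    def __call__(self, data: Dict[str, Any]) -> Dict[str, str]:
        # expects {"inputs": "<base64-encoded image>", "text": "<optional caption prefix>"}
        image_bytes = base64.b64decode(data["inputs"])
        image = Image.open(io.BytesIO(image_bytes)).convert("RGB")
        text = data.get("text")

        if text:
            inputs = self.processor(image, text, return_tensors="pt").to(self.device)
        else:
            inputs = self.processor(image, return_tensors="pt").to(self.device)

        with torch.no_grad():
            out = self.model.generate(**inputs, max_new_tokens=60)
        return {"caption": self.processor.decode(out[0], skip_special_tokens=True)}
```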
null
tarekziade/vit-base-patch16-224-in21k-distilgpt2
null
evlinzxxx/best_model_ViTB16_GPT2
# Sample running code

The helper functions are defined inline below so the snippet runs as-is; the generation settings are illustrative.

```python
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, GPT2Tokenizer
import torch
from PIL import Image
from IPython.display import display

model = VisionEncoderDecoderModel.from_pretrained("evlinzxxx/best_model_ViTB16_GPT2")
feature_extractor = ViTImageProcessor.from_pretrained("evlinzxxx/best_model_ViTB16_GPT2")
tokenizer = GPT2Tokenizer.from_pretrained("evlinzxxx/best_model_ViTB16_GPT2")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

def get_caption(model, image_processor, tokenizer, image_path):
    # encode the image, generate, and decode the caption (generation settings are illustrative)
    image = Image.open(image_path).convert("RGB")
    pixel_values = image_processor(images=image, return_tensors="pt").pixel_values.to(device)
    output_ids = model.generate(pixel_values, max_length=64, num_beams=4)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)

def show_image_and_captions(image_path):
    # show the image (when running in a notebook)
    display(Image.open(image_path))
    # get the caption and print it
    our_caption = get_caption(model, feature_extractor, tokenizer, image_path)
    print(f"Our caption: {our_caption}")

show_image_and_captions("/content/drive/MyDrive/try/test_400/gl_16.jpg")
# ['navigate around the obstacle ahead adjusting your route to bypass the parked car.']
```
null
dxkrnn/blipball
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone) - and fine-tuned on [football dataset](https://huggingface.co/datasets/ybelkada/football-dataset). Google Colab notebook for fine-tuning: https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. 
Code, models, and datasets are released.*

## Usage

You can use this model for conditional and un-conditional image captioning

### Using the Pytorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("ybelkada/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("ybelkada/blip-image-captioning-base")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on the beach with her dog
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on the beach with her dog
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
>>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu
and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
pltnhan311/image-captioning_vit-gpt2
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning using transformers ![](https://ankur3107.github.io/assets/images/vision-encoder-decoder.png) * https://ankur3107.github.io/blogs/the-illustrated-image-captioning-using-transformers/ # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") feature_extractor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['doctor.e16ba4e4.jpg']) # ['a woman in a hospital bed with a woman in a hospital bed'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning") image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png") # [{'generated_text': 'a soccer game with a player jumping to catch the ball '}] ``` # Contact for any help * https://huggingface.co/ankur310794 * https://twitter.com/ankur310794 * http://github.com/ankur3107 * https://www.linkedin.com/in/ankur310794
null
pltnhan311/image-captioning-vit-gpt2-flick8k
null
shinyice/chatvector-llava-v1.5-plus-houou-v3-7b
# Chatvector-llava-v1.5-plus-Houou-v3-7b Model Card # Model Details ※好奇心から生まれたモデルです。精度は保証できませんが、v1.6を用いたものよりは良い気がしています。<br> chatvector-llava-v1.5-plus-houou-v3-7bは日本語で画像を説明することが可能なVLMです。<br> [Chat Vector](https://arxiv.org/abs/2310.04799)の手法に影響を受けています。 このモデルはChat Vectorを参考に[llava-v1.5-7b](https://huggingface.co/liuhaotian/llava-v1.5-7b)と[houou-instruction-7b-v3](https://huggingface.co/moneyforward/houou-instruction-7b-v3)、[Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) の重みを以下のように加減算することで作成してみました。<br> ``` houou-instruction-7b-v3 + (llava-v1.5-7b - Llama-2-7b-hf) ``` 次のプログラムは引用させていただいたサイトにあったものをベースにしています。以下文献もぜひご覧ください。 ## Uses ```sh git clone https://github.com/haotian-liu/LLaVA.git cd LLaVA pip install -e . ``` ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.language_model.llava_llama import LlavaLlamaForCausalLM from llava.mm_utils import tokenizer_image_token, process_images model_path = "shinyice/chatvector-llava-v1.5-plus-houou-v3-7b" device = "cuda" if torch.cuda.is_available() else "cpu" model = LlavaLlamaForCausalLM.from_pretrained( model_path, device_map=device, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch.float16, ).eval() tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1024, padding_side="right", use_fast=False, ) model.get_model().vision_tower.load_model() model = model.to(device) eos_token_id_list = [ tokenizer.eos_token_id, tokenizer.bos_token_id, ] image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') if not isinstance(image, list): image = [image] image_tensor = process_images(image, model.get_model().vision_tower.image_processor, model.config) image_sizes = [img.size for img in image] if isinstance(image_tensor, list): image_tensor = [img.to(model.device, dtype=torch.float16) for img in image_tensor] else: image_tensor = image_tensor.to(device, dtype=torch.float16) image_sizes_tensor = torch.tensor(image_sizes, dtype=torch.int32, device=device) conv_mode = "v1" conv = conv_templates[conv_mode].copy() prompt = "猫の隣には何がありますか?" inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) temperature = 0.0 top_p = 1.0 max_new_tokens = 256 with torch.inference_mode(): output = model.generate( inputs=input_ids, images=image_tensor, image_sizes=image_sizes_tensor, do_sample=True if temperature > 0 else False, temperature=temperature, top_p=top_p, max_new_tokens=max_new_tokens, use_cache=True, eos_token_id=eos_token_id_list, ) print(tokenizer.decode(output[0])) ``` ## Bibliography - [Chat VectorでLLaVAを日本語対応させる](https://zenn.dev/toshi_456/articles/0166a6eaa81c7b) - [Chat Vectorを使って日本語LLMをチャットモデルに改造する](https://qiita.com/jovyan/items/ee6affa5ee5bdaada6b4)
null
toshi456/ConvLLaVA-JP-1.3b-768
# ConvLLaVA-JP Model Card ## Model detail **Model type:** ConvLLaVA-JP is a vision-language model that can converse about input images.<br> This model is an LVLM model trained using [laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. Input of 768 x 768 high resolution. **Training:** This model was initially trained with Vision Projector and Stage 5 using LLaVA-Pretrain-JA.<br> In the second phase, it was trained Image Encoder, Vision Projector, Stage 5 and LLM using LLaVA-Pretrain-JA.<br> In the third phase, it was fine-tuned with Vision Projector and LLM using LLaVA-v1.5-Instruct-620K-JA. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main **Comparing VLMs** |Model|JA-VG-VQA-500<br>(ROUGE-L)|JA-VLM-Bench-In-the-Wild<br>(ROUGE-L)|Heron-Bench(Detail)|Heron-Bench(Conv)|Heron-Bench(Complex)|Heron-Bench(Average) |-|-|-|-|-|-|-| |[Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm)|-|40.50|25.15|51.23|37.84|38.07| |[EvoVLM-JP-v1-7B](https://huggingface.co/SakanaAI/EvoVLM-JP-v1-7B)|**19.70**|**51.25**|50.31|44.42|40.47|45.07| |[Heron BLIP Japanese StableLM Base 7B llava-620k](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k)|14.51|33.26|49.09|41.51|45.72|45.44| |[Heron GIT Japanese StableLM Base 7B](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v1)|15.18|37.82|42.77|**54.20**|43.53|46.83| |[llava-jp-1.3b-v1.0-620k](https://huggingface.co/toshi456/llava-jp-1.3b-v1.0-620k)|12.69|44.58|**51.21**|41.05|45.95|44.84| |[llava-jp-1.3b-v1.1](https://huggingface.co/toshi456/llava-jp-1.3b-v1.1)|13.33|44.40|50.00|51.83|**48.98**|**50.39**| |[ConvLLaVA-JP-1.3b-768](https://huggingface.co/toshi456/ConvLLaVA-JP-1.3b-768)|12.05|42.80|44.24|40.00|48.16|44.96| |[ConvLLaVA-JP-1.3b-1280](https://huggingface.co/toshi456/ConvLLaVA-JP-1.3b-1280)|11.88|43.64|38.95|44.79|41.24|42.31| ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git ``` **2. 
Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": model_path = 'toshi456/ConvLLaVA-JP-1.3b-768' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaGpt2ForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1532, padding_side="right", use_fast=False, ) model.eval() conv_mode = "v1" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor(image).unsqueeze(0).half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor(image).unsqueeze(0).to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) input_ids = input_ids[:, :-1] # </sep>がinputの最後に入るので削除する stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # predict with torch.inference_mode(): output_id = model.generate( inputs=input_ids, images=image_tensor, do_sample=False, temperature=1.0, top_p=1.0, max_new_tokens=256, streamer=streamer, use_cache=True, ) """猫の隣にはノートパソコンがあります。""" ``` ## Training dataset **Stage1 and Stage2 Pretrain** - [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA) **Stage3 Fine-tuning** - [LLaVA-v1.5-Instruct-620K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-v1.5-Instruct-620K-JA) ## Acknowledgement - [ConvLLaVA](https://arxiv.org/abs/2405.15738) - [LLM-jp](https://llm-jp.nii.ac.jp/) - [Open CLIP](https://github.com/mlfoundations/open_clip) ## License cc-by-nc-4.0
null
pniedziela96/blip-image-captioning-base-pokemon-finetune
# Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> This is the model card of a 🤗 transformers model that has been pushed on the Hub. This model card has been automatically generated. - **Developed by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Dataset Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Dataset Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
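The sections above are still placeholders. Since the repository name indicates a fine-tune of `Salesforce/blip-image-captioning-base`, the standard BLIP captioning interface should apply; the snippet below is a sketch based on that assumption rather than an officially provided example. If the processor files are not present in this repository, load the processor from the base checkpoint instead.

```python
import requests
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

device = "cuda" if torch.cuda.is_available() else "cpu"
processor = BlipProcessor.from_pretrained("pniedziela96/blip-image-captioning-base-pokemon-finetune")
model = BlipForConditionalGeneration.from_pretrained(
    "pniedziela96/blip-image-captioning-base-pokemon-finetune"
).to(device)

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to(device)
out = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(out[0], skip_special_tokens=True))
```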
null
toshi456/ConvLLaVA-JP-1.3b-1280
# ConvLLaVA-JP Model Card ## Model detail **Model type:** ConvLLaVA-JP is a vision-language model that can converse about input images.<br> This model is an LVLM model trained using [laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft](https://huggingface.co/laion/CLIP-convnext_large_d_320.laion2B-s29B-b131K-ft) as the image encoder and [llm-jp/llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) as the text decoder. Input of 1280 x 1280 high resolution. **Training:** This model was initially trained with Vision Projector and Stage 5 using LLaVA-Pretrain-JA.<br> In the second phase, it was trained Image Encoder, Vision Projector, Stage 5 and LLM using LLaVA-Pretrain-JA.<br> In the third phase, it was fine-tuned with Vision Projector and LLM using LLaVA-v1.5-Instruct-620K-JA. resources for more information: https://github.com/tosiyuki/LLaVA-JP/tree/main **Comparing VLMs** |Model|JA-VG-VQA-500<br>(ROUGE-L)|JA-VLM-Bench-In-the-Wild<br>(ROUGE-L)|Heron-Bench(Detail)|Heron-Bench(Conv)|Heron-Bench(Complex)|Heron-Bench(Average) |-|-|-|-|-|-|-| |[Japanese Stable VLM](https://huggingface.co/stabilityai/japanese-stable-vlm)|-|40.50|25.15|51.23|37.84|38.07| |[EvoVLM-JP-v1-7B](https://huggingface.co/SakanaAI/EvoVLM-JP-v1-7B)|**19.70**|**51.25**|50.31|44.42|40.47|45.07| |[Heron BLIP Japanese StableLM Base 7B llava-620k](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v1-llava-620k)|14.51|33.26|49.09|41.51|45.72|45.44| |[Heron GIT Japanese StableLM Base 7B](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v1)|15.18|37.82|42.77|**54.20**|43.53|46.83| |[llava-jp-1.3b-v1.0-620k](https://huggingface.co/toshi456/llava-jp-1.3b-v1.0-620k)|12.69|44.58|**51.21**|41.05|45.95|44.84| |[llava-jp-1.3b-v1.1](https://huggingface.co/toshi456/llava-jp-1.3b-v1.1)|13.33|44.40|50.00|51.83|**48.98**|**50.39**| |[ConvLLaVA-JP-1.3b-768](https://huggingface.co/toshi456/ConvLLaVA-JP-1.3b-768)|12.05|42.80|44.24|40.00|48.16|44.96| |[ConvLLaVA-JP-1.3b-1280](https://huggingface.co/toshi456/ConvLLaVA-JP-1.3b-1280)|11.88|43.64|38.95|44.79|41.24|42.31| ## How to use the model **1. Download dependencies** ``` git clone https://github.com/tosiyuki/LLaVA-JP.git ``` **2. 
Inference** ```python import requests import torch import transformers from PIL import Image from transformers.generation.streamers import TextStreamer from llava.constants import DEFAULT_IMAGE_TOKEN, IMAGE_TOKEN_INDEX from llava.conversation import conv_templates, SeparatorStyle from llava.model.llava_gpt2 import LlavaGpt2ForCausalLM from llava.train.dataset import tokenizer_image_token if __name__ == "__main__": model_path = 'toshi456/ConvLLaVA-JP-1.3b-1280' device = "cuda" if torch.cuda.is_available() else "cpu" torch_dtype = torch.bfloat16 if device=="cuda" else torch.float32 model = LlavaGpt2ForCausalLM.from_pretrained( model_path, low_cpu_mem_usage=True, use_safetensors=True, torch_dtype=torch_dtype, device_map=device, ) tokenizer = transformers.AutoTokenizer.from_pretrained( model_path, model_max_length=1532, padding_side="right", use_fast=False, ) model.eval() conv_mode = "v1" conv = conv_templates[conv_mode].copy() # image pre-process image_url = "https://huggingface.co/rinna/bilingual-gpt-neox-4b-minigpt4/resolve/main/sample.jpg" image = Image.open(requests.get(image_url, stream=True).raw).convert('RGB') if device == "cuda": image_tensor = model.get_model().vision_tower.image_processor(image).unsqueeze(0).half().cuda().to(torch_dtype) else: image_tensor = model.get_model().vision_tower.image_processor(image).unsqueeze(0).to(torch_dtype) # create prompt # ユーザー: <image>\n{prompt} prompt = "猫の隣には何がありますか?" inp = DEFAULT_IMAGE_TOKEN + '\n' + prompt conv.append_message(conv.roles[0], inp) conv.append_message(conv.roles[1], None) prompt = conv.get_prompt() input_ids = tokenizer_image_token( prompt, tokenizer, IMAGE_TOKEN_INDEX, return_tensors='pt' ).unsqueeze(0) if device == "cuda": input_ids = input_ids.to(device) input_ids = input_ids[:, :-1] # </sep>がinputの最後に入るので削除する stop_str = conv.sep if conv.sep_style != SeparatorStyle.TWO else conv.sep2 keywords = [stop_str] streamer = TextStreamer(tokenizer, skip_prompt=True, timeout=20.0) # predict with torch.inference_mode(): output_id = model.generate( inputs=input_ids, images=image_tensor, do_sample=False, temperature=1.0, top_p=1.0, max_new_tokens=256, streamer=streamer, use_cache=True, ) """猫の隣にはノートパソコンがあります。""" ``` ## Training dataset **Stage1 and Stage2 Pretrain** - [LLaVA-Pretrain-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Pretrain-JA) **Stage3 Fine-tuning** - [LLaVA-v1.5-Instruct-620K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-v1.5-Instruct-620K-JA) ## Acknowledgement - [ConvLLaVA](https://arxiv.org/abs/2405.15738) - [LLM-jp](https://llm-jp.nii.ac.jp/) - [Open CLIP](https://github.com/mlfoundations/open_clip) ## License cc-by-nc-4.0
null
tarekziade/vit-base-patch16-224-distilgpt2
# distilvit

This model is a work in progress. Fine-tuned version of these base models:

- a ViT model for the image encoder: https://huggingface.co/google/vit-base-patch16-224-in21k
- a distilled GPT-2 model for the text decoder: https://huggingface.co/distilbert/distilgpt2

This model was trained on:

- Flickr30k: https://huggingface.co/datasets/nlphuji/flickr30k
- COCO 2017: https://cocodataset.org

You can get that checkpoint using the 3083a3cef6e3c8dd90df3f088074bbe836b0f403 commit.

It was then further fine-tuned on:

- [Flickr30k debiased](https://huggingface.co/datasets/Mozilla/flickr30k-transformed-captions)
- [DocOrNot](https://huggingface.co/datasets/Mozilla/docornot)
- [Alt Text Validation](https://huggingface.co/datasets/Mozilla/alt-text-validation)

For the latter, the dataset was annotated by our team to correct the alt text generated by the model, using the [checkvite tool](https://github.com/mozila/checkvite).

You can find the code used to create the model here: https://github.com/mozilla/distilvit
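The card above has no usage snippet. Because the model is a standard `VisionEncoderDecoderModel` (ViT encoder, DistilGPT-2 decoder), it should work with the `image-to-text` pipeline; the sketch below relies on that assumption and reuses the demo image from the other cards in this collection.

```python
from transformers import pipeline

# image-to-text pipeline built on the ViT encoder / DistilGPT-2 decoder described above
captioner = pipeline("image-to-text", model="tarekziade/vit-base-patch16-224-distilgpt2")

result = captioner("https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg")
print(result[0]["generated_text"])
```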
null
sooh-j/blip-image-captioning-base
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone).

| ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|

## TL;DR

Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*

## Usage

You can use this model for conditional and unconditional image captioning.

### Using the PyTorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
null
tarekziade/test-push
# distilvit

This model is a work in progress. It is a fine-tuned version of these base models:

- a ViT model for the image encoder: https://huggingface.co/google/vit-base-patch16-224-in21k
- a distilled GPT-2 model for the text decoder: https://huggingface.co/distilbert/distilgpt2

This model was trained on:

- Flickr30k: https://huggingface.co/datasets/nlphuji/flickr30k
- COCO 2017: https://cocodataset.org

You can get that checkpoint using the 3083a3cef6e3c8dd90df3f088074bbe836b0f403 commit.

It was then further fine-tuned on:

- Flickr30k debiased: https://huggingface.co/datasets/Mozilla/flickr30k-transformed-captions
- DocOrNot: https://huggingface.co/datasets/Mozilla/docornot

You can find the code used to create the model here: https://github.com/mozilla/distilvit

### Framework versions

- Transformers 4.40.2
- Pytorch 2.3.0+cu121
- Datasets 2.19.1
- Tokenizers 0.19.1
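The card does not include inference code; the snippet below is a minimal usage sketch under the assumption that this checkpoint loads with the standard `transformers` image-to-text pipeline (the usual interface for ViT + GPT-2 caption models on the Hub). The image path is a placeholder.

```python
from transformers import pipeline

# Assumption: the repository exposes a standard vision-encoder-decoder captioning checkpoint.
captioner = pipeline("image-to-text", model="tarekziade/test-push")

# "example.jpg" is a placeholder path to any local image.
print(captioner("example.jpg")[0]["generated_text"])
```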
null
zayed/P2
This is the Salesforce BLIP large image captioning model with small adjustments to the parameters on the back end for testing; note in particular that the reply length is increased (see the sketch at the end of this card).

# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone).

| ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|

## TL;DR

Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*

## Usage

You can use this model for conditional and unconditional image captioning.

### Using the PyTorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
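As a point of reference for the longer replies mentioned at the top of this card, the sketch below shows how longer captions can be requested at generation time through the standard `generate` parameters. The card does not document the exact backend changes, so the specific values here are illustrative assumptions only; it reuses `model`, `processor`, and `inputs` from the examples above.

```python
# Illustrative only: the parameter values are assumptions, not this repository's actual settings.
out = model.generate(**inputs, max_new_tokens=100, min_length=20, num_beams=3)
print(processor.decode(out[0], skip_special_tokens=True))
```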
null
nanxiz/zcabnzh-bp
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone).

| ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|

## TL;DR

Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to video-language tasks in a zero-shot manner. Code, models, and datasets are released.*

## Usage

You can use this model for conditional and unconditional image captioning.

### Using the PyTorch model

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
import torch
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# conditional image captioning
text = "a photography of"
inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a photography of a woman and her dog

# unconditional image captioning
inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
# >>> a woman sitting on the beach with her dog
```
</details>

## BibTex and citation info

```
@misc{https://doi.org/10.48550/arxiv.2201.12086,
  doi = {10.48550/ARXIV.2201.12086},
  url = {https://arxiv.org/abs/2201.12086},
  author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven},
  keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences},
  title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation},
  publisher = {arXiv},
  year = {2022},
  copyright = {Creative Commons Attribution 4.0 International}
}
```
null
tarekziade/distilvit-pexels-frozen
# distilvit

This model is a work in progress. It is a fine-tuned version of these base models:

- a ViT model for the image encoder: https://huggingface.co/google/vit-base-patch16-224-in21k
- a distilled GPT-2 model for the text decoder: https://huggingface.co/distilbert/distilgpt2

This model was trained on:

- Flickr30k: https://huggingface.co/datasets/nlphuji/flickr30k
- COCO 2017: https://cocodataset.org

You can get that checkpoint using the 3083a3cef6e3c8dd90df3f088074bbe836b0f403 commit.

It was then further fine-tuned on:

- [Flickr30k debiased](https://huggingface.co/datasets/Mozilla/flickr30k-transformed-captions)
- [DocOrNot](https://huggingface.co/datasets/Mozilla/docornot)
- [Alt Text Validation](https://huggingface.co/datasets/Mozilla/alt-text-validation)

For the latter, the dataset was annotated by our team to correct the alt text generated by the model, using the [checkvite tool](https://github.com/mozila/checkvite).

You can find the code used to create the model here: https://github.com/mozilla/distilvit
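No inference example is provided in this card; here is a minimal sketch, assuming the checkpoint works with the standard `transformers` image-to-text pipeline (the image path is a placeholder).

```python
from PIL import Image
from transformers import pipeline

# Assumption: standard vision-encoder-decoder captioning checkpoint.
captioner = pipeline("image-to-text", model="tarekziade/distilvit-pexels-frozen")

image = Image.open("photo.jpg")  # placeholder path to any local image
print(captioner(image)[0]["generated_text"])
```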
null
cristianglezm/ViT-GPT2-FlowerCaptioner
# ViT-GPT2-FlowerCaptioner

This model is a fine-tuned version of [nlpconnect/vit-gpt2-image-captioning](https://huggingface.co/nlpconnect/vit-gpt2-image-captioning) on the [FlowerEvolver-Dataset](https://huggingface.co/datasets/cristianglezm/FlowerEvolver-Dataset).
It achieves the following results on the evaluation set:
- Loss: 0.4930
- Rouge1: 68.3498
- Rouge2: 46.7534
- Rougel: 62.3763
- Rougelsum: 65.9575
- Gen Len: 49.82

## Sample running code

With Python:

```python
import torch
from transformers import pipeline

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
FlowerCaptioner = pipeline("image-to-text", model="cristianglezm/ViT-GPT2-FlowerCaptioner", device=device)
FlowerCaptioner(["flower1.png"])
# A flower with 12 petals in a smooth gradient of green and blue.
# The center is green with black accents. The stem is long and green.
```

With JavaScript:

```javascript
import { pipeline } from '@xenova/transformers';

// Allocate a pipeline for image-to-text
let pipe = await pipeline('image-to-text', 'cristianglezm/ViT-GPT2-FlowerCaptioner-ONNX');
let out = await pipe('flower image url');
// A flower with 12 petals in a smooth gradient of green and blue.
// The center is green with black accents. The stem is long and green.
```

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 25

### Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.6986 | 1.0 | 100 | 0.5339 | 64.9813 | 42.4686 | 58.2586 | 63.3933 | 47.25 |
| 0.3408 | 2.0 | 200 | 0.3263 | 67.5461 | 46.5219 | 62.7962 | 65.6509 | 47.39 |
| 0.2797 | 3.0 | 300 | 0.2829 | 65.0704 | 42.0682 | 58.4268 | 63.2368 | 56.8 |
| 0.2584 | 4.0 | 400 | 0.2588 | 65.5074 | 45.227 | 60.2469 | 63.4253 | 52.25 |
| 0.2589 | 5.0 | 500 | 0.2607 | 66.7346 | 45.8264 | 61.7373 | 64.8857 | 50.64 |
| 0.2179 | 6.0 | 600 | 0.2697 | 63.8334 | 42.997 | 58.1585 | 61.7704 | 52.43 |
| 0.1662 | 7.0 | 700 | 0.2631 | 68.6188 | 48.3329 | 63.9474 | 66.6006 | 46.94 |
| 0.161 | 8.0 | 800 | 0.2749 | 69.0046 | 48.1421 | 63.7844 | 66.8317 | 49.74 |
| 0.1207 | 9.0 | 900 | 0.3117 | 70.0357 | 48.9002 | 64.416 | 67.7582 | 48.66 |
| 0.0909 | 10.0 | 1000 | 0.3408 | 65.9578 | 45.2324 | 60.2838 | 63.7493 | 46.92 |
| 0.0749 | 11.0 | 1100 | 0.3516 | 67.4244 | 46.1985 | 61.6408 | 65.5371 | 46.61 |
| 0.0665 | 12.0 | 1200 | 0.3730 | 68.6911 | 47.7089 | 63.0381 | 66.6956 | 47.89 |
| 0.0522 | 13.0 | 1300 | 0.3891 | 67.2365 | 45.4165 | 61.4063 | 64.857 | 48.91 |
| 0.0355 | 14.0 | 1400 | 0.4128 | 69.1494 | 47.9278 | 63.3334 | 66.5969 | 50.55 |
| 0.0309 | 15.0 | 1500 | 0.4221 | 66.2447 | 44.937 | 60.1403 | 63.8541 | 50.71 |
| 0.0265 | 16.0 | 1600 | 0.4343 | 67.8178 | 46.7084 | 61.8173 | 65.4375 | 50.85 |
| 0.0158 | 17.0 | 1700 | 0.4577 | 67.9846 | 45.9562 | 61.6353 | 65.7207 | 50.81 |
| 0.0166 | 18.0 | 1800 | 0.4731 | 69.0971 | 47.7001 | 62.856 | 66.7796 | 50.01 |
| 0.0121 | 19.0 | 1900 | 0.4657 | 68.1397 | 46.4258 | 62.2696 | 65.9332 | 49.15 |
| 0.0095 | 20.0 | 2000 | 0.4793 | 68.6497 | 47.9446 | 63.0466 | 66.5409 | 50.96 |
| 0.0086 | 21.0 | 2100 | 0.4780 | 68.4363 | 46.7296 | 62.359 | 66.2626 | 50.02 |
| 0.0068 | 22.0 | 2200 | 0.4863 | 67.5415 | 46.0821 | 61.57 | 65.4613 | 49.5 |
| 0.0061 | 23.0 | 2300 | 0.4892 | 68.1283 | 46.5802 | 62.0832 | 66.0203 | 50.21 |
| 0.006 | 24.0 | 2400 | 0.4912 | 68.1723 | 46.3239 | 62.2007 | 65.6725 | 49.89 |
| 0.0057 | 25.0 | 2500 | 0.4930 | 68.3498 | 46.7534 | 62.3763 | 65.9575 | 49.82 |

### Framework versions

- Transformers 4.43.4
- Pytorch 2.4.1+cu124
- Datasets 2.20.0
- Tokenizers 0.19.1
null
mo-thecreator/ViT-GPT2-Image_Captioning_model
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViT-GPT2 This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4134 - Rouge2 Fmeasure: 0.1166 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Fmeasure | |:-------------:|:------:|:----:|:---------------:|:---------------:| | No log | 0.9987 | 496 | 2.4901 | 0.1077 | | 2.5089 | 1.9995 | 993 | 2.4292 | 0.1141 | | 2.4103 | 2.9962 | 1488 | 2.4134 | 0.1166 | ### Framework versions - Transformers 4.44.2 - Pytorch 2.4.0 - Datasets 3.0.0 - Tokenizers 0.19.1
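For reference, the hyperparameters listed above correspond roughly to the following `Seq2SeqTrainingArguments` sketch. This is an illustration reconstructed from the list; the `output_dir` and the exact trainer setup are assumptions, not taken from the original training script.

```python
from transformers import Seq2SeqTrainingArguments

# Sketch reconstructed from the hyperparameter list above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="vit-gpt2-image-captioning",
    learning_rate=5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    gradient_accumulation_steps=8,   # total train batch size of 256
    num_train_epochs=3.0,
    lr_scheduler_type="linear",
    seed=42,
    fp16=True,                       # "Native AMP" mixed precision
)
```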
null
Srivardhan369/attention_mechanism_369
null
vidi-deshp/clip-gpt2-finetuned
# Fine-Tuned CLIP-GPT2 Model for Image Captioning

This is a fine-tuned version of CLIP-GPT2 for real-time image captioning to aid the visually impaired.

## Model Details:

- **Base Model:** CLIP ViT-B/32
- **Fine-Tuned On:** VizWiz dataset
- **Format:** SafeTensors
- **Usage:**

```python
from transformers import CLIPProcessor, CLIPModel
from PIL import Image

model = CLIPModel.from_pretrained("vidi-deshp/clip-gpt2-finetuned")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("sample.jpg")
inputs = processor(images=image, return_tensors="pt")

# Extract image embeddings from the CLIP vision tower.
# The GPT-2 decoding step that turns these features into a caption is not documented in this card.
image_features = model.get_image_features(**inputs)
```
null
omarsabri8756/blip-Arabic-flickr-8k
# BLIP Image Captioning - Arabic (Flickr8k Arabic) This model is a fine-tuned version of [`Salesforce/blip-image-captioning-large`](https://huggingface.co/Salesforce/blip-image-captioning-large), adapted for **image captioning in Arabic** using the **Flickr8K Arabic dataset**. It takes an input image and generates a relevant caption in Arabic, describing the image content. ### Model Sources - **Paper:** Based on ["BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation"](https://arxiv.org/abs/2201.12086) ## How to Get Started with the Model ```python from transformers import BlipProcessor, BlipForConditionalGeneration from PIL import Image import torch import matplotlib.pyplot as plt # Load model and processor processor = BlipProcessor.from_pretrained("omarsabri8756/blip-Arabic-flickr-8k") model = BlipForConditionalGeneration.from_pretrained("omarsabri8756/blip-Arabic-flickr-8k") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) # Load an image from local path image_path = "path/to/your/image.jpg" image = Image.open(image_path).convert("RGB") # Show image plt.imshow(image) plt.axis('off') plt.title("Input Image") plt.show() # Generate enhanced Arabic caption with better parameters model.eval() with torch.no_grad(): pixel_values = processor(images=image, return_tensors="pt").pixel_values.to(device) generated_output = model.generate( pixel_values=pixel_values, max_length=75, min_length=20, num_beams=5, repetition_penalty=1.5, length_penalty=1.0, no_repeat_ngram_size=3, early_stopping=True ) caption = processor.batch_decode(generated_output, skip_special_tokens=True)[0] print(caption) # Prints Arabic caption ``` ## Training Details ### Training Data This model was fine-tuned on the Flickr8k Arabic dataset, which consists of 8,000 images, each with 4 reference Arabic captions. The dataset provides a diverse collection of everyday scenes and activities described in Modern Standard Arabic. - **Dataset:** Flickr8k Arabic - **Size:** 8,000 images with 32,000 captions ### Training Procedure The model was fine-tuned from the original BLIP model by adapting its language generation capabilities to Arabic text. #### Training Hyperparameters - **Training regime:** fp16 mixed precision - **Optimizer:** AdamW - **Learning rate:** 5e-5 - **per_device_train_batch_size:** 2 - **per_device_eval_batch_size:** 16 - **gradient_accumulation_steps:** 14 - **Total training batch size:** 28 - **Epochs:** 5 - **LR scheduler:** Cosine with warmup - **Weight decay:** 0.01 ## Evaluation ### Testing Data, & Metrics #### Testing Data The model was evaluated on the Flickr8k Arabic test split, which contains 1,000 images with 4 reference captions each. #### Metrics - **BLEU-1:** 65.80 - **BLEU-2:** 51.33 - **BLEU-3:** 38.72 - **BLEU-4:** 28.75 - **METEOR:** 46.29 ### Results The model performs well on common scenes and activities, generating grammatically correct and contextually appropriate Arabic captions. Performance decreases slightly for unusual scenes or culturally specific contexts not well-represented in the training data. 
## Bias, Risks, and Limitations - The model was trained on Flickr8k Arabic, which may not represent the full diversity of images and linguistic expressions in Arabic-speaking regions - May produce stereotypical or culturally insensitive descriptions - Performance may vary across different Arabic dialects and regional expressions - Limited ability to correctly describe culturally specific items, events, or contexts - May struggle with complex scenes or unusual visual elements ## Recommendations - Users should review generated captions before using them in sensitive contexts - Consider post-processing or human review for public-facing applications - Test across diverse image types relevant to your use case - Be aware that the model may reflect biases present in the training data - Consider regional and dialectal differences when evaluating caption quality
null
gw099/art-describer-5k
# Art Describer 5K This model is a fine-tuned version of the BLIP image captioning model, specifically trained to describe artworks. It was trained on 5,000 examples of public domain artwork with their corresponding text descriptions. ## Model Details - **Base Model**: BLIP (Salesforce/blip-image-captioning-base) - **Training Data**: 5,000 public domain artwork images with text descriptions - **Training Method**: Fine-tuned using DirectML - **Purpose**: Specialized in describing artwork, paintings, and visual art pieces ## Usage ### Using Pipeline (Recommended) ```python from transformers import pipeline from PIL import Image # Load the image captioning pipeline captioner = pipeline("image-to-text", model="gw099/art-describer-5k") # Load an image image = Image.open("path/to/artwork.jpg") # Generate caption caption = captioner(image)[0]['generated_text'] print(caption) ``` ## Training Details This model was fine-tuned on a curated dataset of 5,000 public domain artwork images, each paired with descriptive text. The training data includes various styles of artwork, from classical paintings to modern sculptures. The model was specifically trained to: - Provide detailed descriptions of artwork - Identify artistic styles and techniques - Describe colors, composition, and visual elements - Generate natural, art-focused captions
null
hibikigf88/blip-caption-newyorker
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# blip-caption-newyorker

This model is a fine-tuned version of [Salesforce/blip-image-captioning-base](https://huggingface.co/Salesforce/blip-image-captioning-base) on an unknown dataset.

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: AdamW (torch) with betas=(0.9,0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- num_epochs: 1

### Framework versions

- Transformers 4.52.3
- Pytorch 2.7.0
- Datasets 3.6.0
- Tokenizers 0.21.1
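Since the card does not yet include usage code, here is a minimal inference sketch. It assumes the fine-tuned checkpoint keeps the standard BLIP conditional-generation interface of its base model; the processor is loaded from the base repository in case this repo does not ship one, and the image path is a placeholder.

```python
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

# Assumption: the checkpoint follows the standard BLIP captioning interface.
processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("hibikigf88/blip-caption-newyorker")

image = Image.open("cartoon.jpg").convert("RGB")  # placeholder path to any local image
inputs = processor(image, return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=40)
print(processor.decode(out[0], skip_special_tokens=True))
```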
null