Columns: model_id (string, 8–65 chars) · model_card (string, 0–15.7k chars) · model_labels (list)
Salesforce/blip-image-captioning-large
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP. ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
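In addition to the snippets above, the checkpoint can also be run through the `transformers` `image-to-text` pipeline. The following is a minimal sketch, not part of the original card; the caption it prints will vary:

```python
# Minimal pipeline sketch for the same checkpoint; the generated caption will vary.
from transformers import pipeline

captioner = pipeline("image-to-text", model="Salesforce/blip-image-captioning-large")
result = captioner("https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg")
print(result[0]["generated_text"])
```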
null
Salesforce/blip-image-captioning-base
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # 
unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP. ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
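The processor also accepts a list of images, so several images can be captioned in a single batch. The short sketch below is an addition to the original card, and the two file names are placeholders:

```python
# Batched captioning sketch; photo1.jpg and photo2.jpg are placeholder paths.
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base")

images = [Image.open(p).convert("RGB") for p in ["photo1.jpg", "photo2.jpg"]]
inputs = processor(images=images, return_tensors="pt")

out = model.generate(**inputs, max_new_tokens=30)
for caption in processor.batch_decode(out, skip_special_tokens=True):
    print(caption)
```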
null
Salesforce/blip2-opt-2.7b
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. 
We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). ### Memory requirements The memory requirements differ based on the precision one uses. One can use 4-bit inference using [Bitsandbytes](https://huggingface.co/blog/4bit-transformers-bitsandbytes), which greatly reduce the memory requirements. | dtype | Largest Layer or Residual Group | Total Size | Training using Adam | |-------------------|---------------------------------|------------|----------------------| | float32 | 490.94 MB | 14.43 GB | 57.72 GB | | float16/bfloat16 | 245.47 MB | 7.21 GB | 28.86 GB | | int8 | 122.73 MB | 3.61 GB | 14.43 GB | | int4 | 61.37 MB | 1.8 GB | 7.21 GB | #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" 
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details>
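The memory table above also lists an int4 row, and the linked bitsandbytes blog post covers 4-bit loading. The sketch below shows what that could look like with the standard `transformers` quantization API; it is an assumption added here, not from the original card, so verify the arguments against your installed `transformers`/`bitsandbytes` versions:

```python
# pip install accelerate bitsandbytes
# 4-bit loading sketch using the standard transformers quantization config.
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration, BitsAndBytesConfig

quant_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.float16)

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b", quantization_config=quant_config, device_map="auto"
)

img_url = "https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg"
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to(model.device, torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```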
null
microsoft/git-base
# GIT (GenerativeImage2Text), base-sized GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained with "teacher forcing" on a large number of (image, text) pairs. The goal for the model is simply to predict the next text token, given the image tokens and the previous text tokens. The model has full access to the image patch tokens (i.e. a bidirectional attention mask is used for them), but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example); a short captioning sketch is also included at the end of this card. ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). Note, however, that this description applies to the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", a smaller variant of GIT trained on 10 million image-text pairs. See Table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo for details on preprocessing during training. During validation, the shorter edge of each image is resized, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
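For convenience, here is a minimal captioning sketch based on the standard `transformers` GIT API. It is an addition to the original card; the demo image URL is just an example and any RGB image will work:

```python
# Minimal GIT captioning sketch using the standard transformers API.
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base")

# Any RGB image works; this demo URL is only an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
with torch.no_grad():
    generated_ids = model.generate(pixel_values=pixel_values, max_length=50)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```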
null
Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-DSD-FineTune
# LLaVA-OneVision-Qwen2-0.5b Fine-tuned on DataSeeds.AI Dataset This model is a LoRA (Low-Rank Adaptation) fine-tuned version of [lmms-lab/llava-onevision-qwen2-0.5b-ov](https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-ov) specialized for photography scene analysis and description generation. The model was presented in the paper [Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery](https://huggingface.co/papers/2506.05673). The model was fine-tuned on the [DataSeeds.AI Sample Dataset (DSD)](https://huggingface.co/datasets/Dataseeds/DataSeeds.AI-Sample-Dataset-DSD) to enhance its capabilities in generating detailed, accurate descriptions of photographic content. ## Model Description - **Base Model**: [LLaVA-OneVision-Qwen2-0.5b](https://huggingface.co/lmms-lab/llava-onevision-qwen2-0.5b-ov) - **Vision Encoder**: [SigLIP-SO400M-patch14-384](https://huggingface.co/google/siglip-so400m-patch14-384) - **Language Model**: Qwen2-0.5B (896M parameters) - **Fine-tuning Method**: LoRA (Low-Rank Adaptation) with PEFT - **Total Parameters**: ~917M (513M trainable during fine-tuning, 56% of total) - **Multimodal Projector**: 1.84M parameters (100% trainable) - **Precision**: BFloat16 - **Task**: Photography scene analysis and detailed image description ### LoRA Configuration - **LoRA Rank (r)**: 32 - **LoRA Alpha**: 32 - **LoRA Dropout**: 0.1 - **Target Modules**: `v_proj`, `k_proj`, `q_proj`, `up_proj`, `gate_proj`, `down_proj`, `o_proj` - **Tunable Components**: `mm_mlp_adapter`, `mm_language_model` ## Training Details ### Dataset The model was fine-tuned on the DataSeeds.AI Sample Dataset, a curated collection of photography images with detailed scene descriptions focusing on: - Compositional elements and camera perspectives - Lighting conditions and visual ambiance - Product identification and technical details - Photographic style and mood analysis ### Training Configuration | Parameter | Value | |-----------|-------| | **Learning Rate** | 1e-5 | | **Optimizer** | AdamW | | **Learning Rate Schedule** | Cosine decay | | **Warmup Ratio** | 0.03 | | **Weight Decay** | 0.01 | | **Batch Size** | 2 | | **Gradient Accumulation Steps** | 8 (effective batch size: 16) | | **Training Epochs** | 3 | | **Max Sequence Length** | 8192 | | **Max Gradient Norm** | 0.5 | | **Precision** | BFloat16 | | **Hardware** | Single NVIDIA A100 40GB | | **Training Time** | 30 hours | ### Training Strategy - **Validation Frequency**: Every 50 steps for precise checkpoint selection - **Best Checkpoint**: Step 1,750 (epoch 2.9) with validation loss of 1.83 - **Mixed Precision**: BFloat16 with gradient checkpointing for memory efficiency - **System Prompt**: Consistent template requesting scene descriptions across all samples ## Performance ### Quantitative Results The fine-tuned model shows significant improvements across all evaluation metrics compared to the base model: | Metric | Base Model | Fine-tuned | Absolute Δ | Relative Δ | |--------|------------|------------|------------|------------| | **BLEU-4** | 0.0199 | **0.0246** | +0.0048 | **+24.09%** | | **ROUGE-L** | 0.2089 | **0.2140** | +0.0051 | **+2.44%** | | **BERTScore F1** | 0.2751 | **0.2789** | +0.0039 | **+1.40%** | | **CLIPScore** | 0.3247 | **0.3260** | +0.0013 | **+0.41%** | ### Key Improvements - **Enhanced N-gram Precision**: 24% improvement in BLEU-4 indicates significantly better word sequence 
accuracy - **Better Sequential Information**: ROUGE-L improvement shows enhanced capture of longer matching sequences - **Improved Semantic Understanding**: BERTScore gains demonstrate better contextual relationships - **Maintained Visual-Semantic Alignment**: CLIPScore preservation with slight improvement ### Inference Performance - **Processing Speed**: 2.30 seconds per image (NVIDIA A100 40GB) - **Memory Requirements**: Optimized for single GPU inference ## Usage ### Installation ```bash pip install transformers torch peft pillow ``` ### Basic Usage ```python from peft import PeftModel from transformers import AutoModelForCausalLM, AutoTokenizer, AutoProcessor import torch from PIL import Image # Load base model and processor base_model = AutoModelForCausalLM.from_pretrained( "lmms-lab/llava-onevision-qwen2-0.5b-ov", torch_dtype=torch.bfloat16, device_map="auto", trust_remote_code=True ) processor = AutoProcessor.from_pretrained("lmms-lab/llava-onevision-qwen2-0.5b-ov") # Load LoRA adapter model = PeftModel.from_pretrained( base_model, "Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-DSD-FineTune" ) # Load and process image image = Image.open("your_image.jpg") prompt = "Describe this image in detail, focusing on the composition, lighting, and visual elements." inputs = processor(prompt, image, return_tensors="pt").to(model.device) # Generate description with torch.no_grad(): outputs = model.generate( **inputs, max_new_tokens=512, do_sample=True, temperature=0.7, top_p=0.9 ) description = processor.decode(outputs[0], skip_special_tokens=True) print(description) ``` ### Advanced Usage with Custom Prompts ```python # Photography-specific prompts that work well with this model prompts = [ "Analyze the photographic composition and lighting in this image.", "Describe the technical aspects and visual mood of this photograph.", "Provide a detailed scene description focusing on the subject and environment." 
] for prompt in prompts: inputs = processor(prompt, image, return_tensors="pt").to(model.device) outputs = model.generate(**inputs, max_new_tokens=256, temperature=0.7) description = processor.decode(outputs[0], skip_special_tokens=True) print(f"Prompt: {prompt}") print(f"Description: {description} ") ``` ## Model Architecture The model maintains the LLaVA-OneVision architecture with the following components: - **Vision Encoder**: SigLIP-SO400M with hierarchical feature extraction - **Language Model**: Qwen2-0.5B with 24 layers, 14 attention heads - **Multimodal Projector**: 2-layer MLP with GELU activation (mlp2x_gelu) - **Image Processing**: Supports "anyres_max_9" aspect ratio with dynamic grid pinpoints - **Context Length**: 32,768 tokens with sliding window attention ### Technical Specifications - **Hidden Size**: 896 - **Intermediate Size**: 4,864 - **Attention Heads**: 14 (2 key-value heads) - **RMS Norm Epsilon**: 1e-6 - **RoPE Theta**: 1,000,000 - **Image Token Index**: 151646 - **Max Image Grid**: Up to 2304×2304 pixels with dynamic tiling ## Training Data The DataSeeds.AI Sample Dataset contains curated photography images with comprehensive annotations including: - **Scene Descriptions**: Detailed textual descriptions of visual content - **Technical Metadata**: Camera settings, composition details - **Style Analysis**: Photographic techniques and artistic elements - **Quality Annotations**: Professional photography standards The dataset focuses on enhancing the model's ability to: - Identify specific products and technical details accurately - Describe lighting conditions and photographic ambiance - Analyze compositional elements and camera perspectives - Generate contextually aware scene descriptions ## Limitations and Considerations ### Model Limitations - **Domain Specialization**: Optimized for photography; may have reduced performance on general vision-language tasks - **Base Model Inheritance**: Inherits limitations from LLaVA-OneVision base model - **Training Data Bias**: May reflect biases present in the DataSeeds.AI dataset - **Language Support**: Primarily trained and evaluated on English descriptions ### Recommended Use Cases - ✅ Photography scene analysis and description - ✅ Product photography captioning - ✅ Technical photography analysis - ✅ Visual content generation for photography applications - ⚠️ General-purpose vision-language tasks (may have reduced performance) - ❌ Non-photographic image analysis (not optimized for this use case) ### Ethical Considerations - The model may perpetuate biases present in photography datasets - Generated descriptions should be reviewed for accuracy in critical applications - Consider potential cultural biases in photographic style interpretation ## Citation If you use this model in your research or applications, please cite: ```bibtex @article{abdoli2025peerranked, title={Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery}, author={Sajjad Abdoli and Freeman Lewin and Gediminas Vasiliauskas and Fabian Schonholz}, journal={arXiv preprint arXiv:2506.05673}, year={2025}, } @misc{llava-onevision-dsd-finetune-2024, title={LLaVA-OneVision Fine-tuned on DataSeeds.AI Dataset for Photography Scene Analysis}, author={DataSeeds.AI}, year={2024}, publisher={Hugging Face}, url={https://huggingface.co/Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-DSD-FineTune}, note={LoRA fine-tuned model for enhanced photography description generation} } 
@article{li2024llavaonevision, title={LLaVA-OneVision: Easy Visual Task Transfer}, author={Li, Bo and Zhang, Yuanhan and Guo, Dong and Zhang, Renrui and Li, Feng and Zhang, Hao and Zhang, Kaichen and Liu, Yanwei and Wang, Ziwei and Gao, Peng}, journal={arXiv preprint arXiv:2408.03326}, year={2024} } @article{hu2022lora, title={LoRA: Low-Rank Adaptation of Large Language Models}, author={Hu, Edward J and Shen, Yelong and Wallis, Phillip and Allen-Zhu, Zeyuan and Li, Yuanzhi and Wang, Shean and Wang, Lu and Chen, Weizhu}, journal={arXiv preprint arXiv:2106.09685}, year={2021} } ``` ## License This model is released under the Apache 2.0 license, consistent with the base LLaVA-OneVision model licensing terms. ## Acknowledgments - **Base Model**: Thanks to LMMS Lab for the LLaVA-OneVision model - **Vision Encoder**: Thanks to Google Research for the SigLIP model - **Dataset**: GuruShots photography community for the source imagery - **Framework**: Hugging Face PEFT library for efficient fine-tuning capabilities --- *For questions, issues, or collaboration opportunities, please visit the [model repository](https://huggingface.co/Dataseeds/LLaVA-OneVision-Qwen2-0.5b-ov-DSD-FineTune) or contact the DataSeeds.AI team.*
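As a supplement to the LoRA Configuration section earlier in this card, the listed adapter settings correspond roughly to the following PEFT configuration. This is an illustrative sketch only; the target module names assume a Qwen2-style decoder and may not match the exact training script:

```python
# Illustrative PEFT LoraConfig mirroring the adapter settings listed in this card.
# Target module names assume a Qwen2-style decoder; the actual training script may differ.
from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=32,
    lora_dropout=0.1,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj", "up_proj", "gate_proj", "down_proj"],
    task_type="CAUSAL_LM",
)
```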
null
Dataseeds/BLIP2-opt-2.7b-DSD-FineTune
# BLIP2-OPT-2.7B Fine-tuned on DataSeeds.AI Dataset Code: https://github.com/DataSeeds-ai/DSD-finetune-blip-llava This model is a fine-tuned version of [Salesforce/blip2-opt-2.7b](https://huggingface.co/Salesforce/blip2-opt-2.7b) specialized for photography scene analysis and technical description generation. The model was fine-tuned on the [DataSeeds.AI Sample Dataset (DSD)](https://huggingface.co/datasets/Dataseeds/DataSeeds-Sample-Dataset-DSD) to enhance its capabilities in generating detailed photographic descriptions with focus on composition, lighting, and technical aspects. The model was presented in the paper [Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery](https://huggingface.co/papers/2506.05673). ## Model Description - **Base Model**: [BLIP2-OPT-2.7B](https://huggingface.co/Salesforce/blip2-opt-2.7b) - **Vision Encoder**: EVA-CLIP ViT-g/14 - **Language Model**: OPT-2.7B (2.7 billion parameters) - **Architecture**: BLIP-2 with Q-Former bridging vision and language - **Fine-tuning Approach**: Full model fine-tuning (ViT unfrozen) - **Task**: Photography scene analysis and detailed image captioning - **Precision**: Mixed Precision (AMP enabled) - **Image Resolution**: 364×364 pixels ## Training Details ### Dataset The model was fine-tuned on the DataSeeds.AI Sample Dataset (DSD), containing 10,610 curated photography images with comprehensive annotations: - **Training Set**: 9,549 image-text pairs (90%) - **Validation Set**: 1,061 image-text pairs (10%) - **Content Focus**: Technical scene descriptions, compositional analysis, lighting conditions - **Annotation Quality**: Professional photography standards with 15+ word descriptions ### Training Configuration | Parameter | Value | |-----------|-------| | **Learning Rate** | 1e-5 | | **Optimizer** | AdamW | | **LR Schedule** | Linear warmup + Cosine decay | | **Warmup Steps** | 17 | | **Weight Decay** | 0.01 | | **Batch Size** | 8 | | **Gradient Accumulation** | 1 | | **Training Epochs** | 10 | | **Mixed Precision** | AMP enabled | | **Hardware** | Single NVIDIA A100 80GB | | **Vision Encoder** | Unfrozen (trainable) | | **Gradient Checkpointing** | Enabled | ### Generation Configuration | Parameter | Value | |-----------|-------| | **Max Length** | 100 tokens | | **Min Length** | 8 tokens | | **Num Beams** | 5 | | **Beam Search** | Enabled | | **Task** | Captioning | ### Checkpoint Selection - **Selection Metric**: Aggregate validation score (CIDEr + BLEU-4) - **Best Epoch**: Epoch 1 (aggregate score: 0.2626) - **Training Loss**: Decreased from 2.780 (epoch 1) to 1.692 (epoch 10) - **Final Model**: `checkpoint_best.pth` (selected based on validation performance) ## Performance ### Quantitative Results The fine-tuned model shows significant improvements in lexical overlap metrics, with notable trade-offs in semantic understanding: | Metric | Base Model | Fine-tuned | Absolute Δ | Relative Δ | |--------|------------|------------|------------|------------| | **BLEU-4** | 0.001 | **0.047** | +0.046 | **+4600%*** | | **ROUGE-L** | 0.126 | **0.242** | +0.116 | **+92.06%** | | **BERTScore F1** | 0.0545 | **-0.0537** | -0.1082 | **-198.53%** | | **CLIPScore** | 0.2854 | **0.2583** | -0.0271 | **-9.49%** | *_Note: The extreme BLEU-4 improvement is due to the very low baseline (0.001), making relative improvements appear dramatic._ ### Key Observations **Strengths:** - **Enhanced Lexical Matching**: 
Substantial improvements in BLEU-4 and ROUGE-L indicate better n-gram alignment with reference descriptions - **Photography Terminology**: Model learned to incorporate photographic vocabulary and technical terms - **Compositional Awareness**: Improved description of camera angles, lighting conditions, and visual elements **Trade-offs:** - **Semantic Coherence**: Negative BERTScore suggests divergence from reference semantic patterns - **Visual-Text Alignment**: Moderate decrease in CLIPScore indicates reduced image-text semantic alignment - **Repetitive Patterns**: Tendency toward formulaic descriptions with some repetition ### Performance Characteristics The fine-tuning results reveal a model that has specialized in lexical pattern matching for photography descriptions but with trade-offs in semantic understanding. This suggests the model is particularly suited for applications requiring technical photography terminology rather than general-purpose image captioning. ## Usage ### Installation ```bash pip install transformers torch pillow ``` ### Basic Usage ```python from transformers import Blip2Processor, Blip2ForConditionalGeneration import torch from PIL import Image # Load model and processor processor = Blip2Processor.from_pretrained("Dataseeds/BLIP2-opt-2.7b-DSD-FineTune") model = Blip2ForConditionalGeneration.from_pretrained( "Dataseeds/BLIP2-opt-2.7b-DSD-FineTune", torch_dtype=torch.float16, device_map="auto" ) # Load and process image image = Image.open("your_image.jpg") inputs = processor(image, return_tensors="pt").to(model.device, torch.float16) # Generate caption generated_ids = model.generate( **inputs, max_length=100, min_length=8, num_beams=5, do_sample=False ) caption = processor.decode(generated_ids[0], skip_special_tokens=True) print(f"Generated caption: {caption}") ``` ## Model Architecture The model maintains the BLIP-2 architecture with the following components: ### Core Architecture - **Vision Encoder**: EVA-CLIP ViT-g/14 (unfrozen during fine-tuning) - **Q-Former**: 32-layer transformer bridging vision and language modalities - **Language Model**: OPT-2.7B (2.7 billion parameters) - **Architecture**: BLIP-2 with Q-Former bridging vision and language - **Bootstrapping**: Two-stage pre-training methodology preserved ### Technical Specifications - **Vision Resolution**: 364×364 pixels - **Vision Patch Size**: 14×14 - **Q-Former Queries**: 32 learnable queries - **Language Model Layers**: 32 - **Total Parameters**: ~2.7B (language model) + vision components - **Precision**: Mixed precision (FP16/FP32) ### Fine-tuning Approach - **Vision Encoder**: Trainable (freeze_vit: False) - **Q-Former**: Trainable - **Language Model**: Trainable - **Full Model**: End-to-end fine-tuning enabled - **Gradient Checkpointing**: Memory optimization enabled ## Training Data & Methodology ### Dataset Characteristics The DataSeeds.AI Sample Dataset (DSD) focuses on: - **Technical Photography**: Camera settings, composition analysis - **Lighting Descriptions**: Ambient, directional, studio lighting analysis - **Subject Matter**: Diverse photographic subjects and styles - **Annotation Style**: Technical scene descriptions (20-30 words typical) ### Data Processing - **Image Preprocessing**: `blip2_image_train` (364×364 resolution) - **Text Processing**: `blip_caption` processor - **Evaluation**: `blip_image_eval` for consistent validation - **Format**: Input-output pairs with scene analysis prompts ### Recommended Use Cases - ✅ Photography scene analysis and technical descriptions - ✅ Camera 
composition and lighting analysis - ✅ Product photography captioning - ✅ Photography education and training applications - ⚠️ General-purpose image captioning (may show repetitive patterns) - ❌ Non-photographic content analysis (not optimized) ## Citation If you use this model in your research or applications, please cite: ```bibtex @article{abdoli2025peerranked, title={Peer-Ranked Precision: Creating a Foundational Dataset for Fine-Tuning Vision Models from DataSeeds' Annotated Imagery}, author={Sajjad Abdoli and Freeman Lewin and Gediminas Vasiliauskas and Fabian Schonholz}, journal={arXiv preprint arXiv:2506.05673}, year={2025}, } @misc{blip2-opt-dsd-finetune-2024, title={BLIP2-OPT-2.7B Fine-tuned on DataSeeds.AI Dataset for Photography Analysis}, author={Dataseeds}, year={2024}, publisher={Hugging Face}, url={https://huggingface.co/Dataseeds/BLIP2-opt-2.7b-DSD-FineTune}, note={Fine-tuned model for photography scene analysis and technical description} } @inproceedings{li2023blip2, title={BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models}, author={Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, booktitle={International Conference on Machine Learning}, pages={19730--19742}, year={2023}, organization={PMLR} } @article{zhang2022opt, title={OPT: Open Pre-trained Transformer Language Models}, author={Zhang, Susan and Roller, Stephen and Goyal, Naman and Artetxe, Mikel and Chen, Moya and Chen, Shuohui and Dewan, Christopher and Diab, Mona and Li, Xian and Lin, Xi Victoria and others}, journal={arXiv preprint arXiv:2205.01068}, year={2022} } ``` ## License This model is released under the MIT license, consistent with the base BLIP2 model licensing terms. ## Acknowledgments - **Base Model**: Salesforce Research for the BLIP-2 architecture and pre-training - **Language Model**: Meta AI for the OPT-2.7B foundation - **Vision Encoder**: BAAI for the EVA-CLIP vision components - **Dataset**: GuruShots photography community for the source imagery - **Framework**: Hugging Face Transformers for model infrastructure ## Training Artifacts This repository includes comprehensive training artifacts: - **Checkpoints**: All epoch checkpoints (0-9) plus the best-performing checkpoint - **Evaluation Results**: Validation metrics for each epoch - **Training Logs**: Complete training and evaluation logs - **Configuration**: Original training configuration and hyperparameters --- *For questions, issues, or collaboration opportunities, please visit the [model repository](https://huggingface.co/Dataseeds/BLIP2-opt-2.7b-DSD-FineTune) or contact the DataSeeds.AI team.*
null
nlpconnect/vit-gpt2-image-captioning
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [Flax](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning); it is the PyTorch version of [ydshieh/vit-gpt2-coco-en-ckpts](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning using transformers ![](https://ankur3107.github.io/assets/images/vision-encoder-decoder.png) * https://ankur3107.github.io/blogs/the-illustrated-image-captioning-using-transformers/ # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") feature_extractor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['doctor.e16ba4e4.jpg']) # ['a woman in a hospital bed with a woman in a hospital bed'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning") image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png") # [{'generated_text': 'a soccer game with a player jumping to catch the ball '}] ``` # Contact for any help * https://huggingface.co/ankur310794 * https://twitter.com/ankur310794 * http://github.com/ankur3107 * https://www.linkedin.com/in/ankur310794
null
Salesforce/instructblip-vicuna-7b
# InstructBLIP model InstructBLIP model using [Vicuna-7b](https://github.com/lm-sys/FastChat#model-weights) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-7b") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
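Since Vicuna-7B is large for full-precision inference on a single consumer GPU, a half-precision variant of the snippet above may be useful. The following is a sketch added to the original card; it assumes a CUDA-capable GPU with enough memory and requires `accelerate` for `device_map="auto"`:

```python
# pip install accelerate
# Half-precision sketch of the usage example above; assumes a CUDA GPU is available.
import torch
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-7b")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-vicuna-7b", torch_dtype=torch.float16, device_map="auto"
)

url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt, return_tensors="pt").to(model.device, torch.float16)

outputs = model.generate(**inputs, do_sample=False, num_beams=5, max_length=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())
```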
null
ogulcanakca/blip-itu-turkish-captions-finetuned
# Turkish Image Captioning: A Starting Point with BLIP ## Project Overview and Contribution This project focuses on fine-tuning the `Salesforce/blip-image-captioning-base` model to **generate Turkish image captions**, using a subset drawn from the "long_captions" split of the `ituperceptron/image-captioning-turkish` dataset. The goal is to build a model that produces automatic, fluent, and meaningful Turkish descriptions of images, and to provide a modest but valuable starting point in this area. The work aims to explore and improve VLM (vision-language model) capabilities for languages such as Turkish, which have fewer resources and pre-trained models than English. ## Dataset * **Name:** Turkish Image Captioning Dataset * **Source:** `ituperceptron/image-captioning-turkish` * **Configuration/Split:** `long_captions` * **Content:** A variety of images paired with long, descriptive, human-written Turkish captions. * **Subset Used:** A subset of 60,000 examples was used for training and evaluation. ## Model * **Base Model:** `Salesforce/blip-image-captioning-base` * **Task:** Image-to-Text / Image Captioning BLIP (Bootstrapping Language-Image Pre-training) is a strong VLM architecture designed to understand the relationship between images and text and to generate text from that relationship. ## Key Features and Approach * The `Salesforce/blip-image-captioning-base` model was fine-tuned on the `ituperceptron/image-captioning-turkish` dataset to adapt it to Turkish captioning. * Memory-optimization techniques (e.g. `per_device_eval_batch_size=1`, FP16 training, the `PYTORCH_CUDA_ALLOC_CONF` environment variable, and limiting the maximum number of generated tokens) were used so training could run on constrained hardware such as a P100 GPU. * The Hugging Face `Transformers`, `Datasets`, and `Evaluate` libraries were the main tools for data processing, model training, and metric computation. * Model performance was evaluated with standard image-captioning metrics such as SacreBLEU, ROUGE, and METEOR. ## Performance (After Fine-tuning) The model achieved the following results on the **test set** of the `ituperceptron/image-captioning-turkish` dataset: * **SacreBLEU:** `0.1129` * **ROUGE-L:** `0.2852` * **ROUGE-1:** `0.3486` * **METEOR:** `0.2978` These metrics show that the model has gained a substantial ability to generate Turkish captions and improves markedly over its zero-shot performance. Given the limited data and hardware, the results offer a promising starting point for Turkish VLMs. ## Usage (Example) The fine-tuned model can be loaded directly from the Hugging Face Hub and used to generate Turkish image captions: ```python from transformers import BlipProcessor, BlipForConditionalGeneration from PIL import Image import torch # 1. 
Load the model and processor from the Hugging Face Hub MODEL_ID = "ogulcanakca/blip-base-turkish-image-captioning" # our model's Hub ID try: processor = BlipProcessor.from_pretrained(MODEL_ID) model = BlipForConditionalGeneration.from_pretrained(MODEL_ID) print(f"'{MODEL_ID}' loaded successfully.") except Exception as e: print(f"Error: could not load the model or processor: {e}") raise model.eval() device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) print(f"Model moved to '{device}' and set to evaluation mode.") # 2. Load the image you want to caption image_path = "path/to/your/image.jpg" # <<< ENTER THE PATH TO YOUR OWN IMAGE try: image_pil = Image.open(image_path).convert("RGB") print(f"'{image_path}' loaded successfully.") except FileNotFoundError: print(f"ERROR: image '{image_path}' not found.") exit() # 3. Process the image and generate a caption # BLIP captioning models generally do not need a text prompt. inputs = processor(images=image_pil, return_tensors="pt").to(device, model.dtype) # same precision as model.dtype print("\nGenerating a caption with the model...") with torch.no_grad(): generated_ids = model.generate( **inputs, max_length=70, # matches TEXT_MAX_LENGTH used in training, or any value you prefer # num_beams=3, # optional: higher quality but slower generation # early_stopping=True # if num_beams > 1 ) generated_caption = processor.decode(generated_ids[0], skip_special_tokens=True) print("\nGenerated Turkish caption:") print(generated_caption) # To display the image (optional): # import matplotlib.pyplot as plt # plt.imshow(image_pil) # plt.title(generated_caption) # plt.axis('off') # plt.show() ``` ## Training Hyperparameters * Base Model: Salesforce/blip-image-captioning-base * Learning Rate: 2e-5 * Training Epochs: 3 * Training Batch Size per Device: 4 * Evaluation Batch Size per Device: 1 * Gradient Accumulation Steps: 4 (effective batch size: 16) * Warmup Ratio: 0.1 * Weight Decay: 0.01 * Optimizer: AdamW (adamw_torch) * Mixed Precision: FP16 * Maximum Text Length (Training): 70, chosen because longer generations tended to drift away from the image content. A sketch of these settings as `Seq2SeqTrainingArguments` is shown below. 
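The hyperparameters listed above map fairly directly onto a `Seq2SeqTrainingArguments` object. The sketch below is illustrative only: it assumes a standard `Seq2SeqTrainer` setup as used in the linked fine-tune notebook, and the `output_dir` name is a placeholder.

```python
# Illustrative sketch of the reported hyperparameters as Seq2SeqTrainingArguments.
# Assumes a standard transformers Seq2SeqTrainer setup; the original notebook may differ.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="blip-base-turkish-image-captioning",  # placeholder output path
    learning_rate=2e-5,
    num_train_epochs=3,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=1,
    gradient_accumulation_steps=4,   # effective batch size 16
    warmup_ratio=0.1,
    weight_decay=0.01,
    optim="adamw_torch",
    fp16=True,                       # mixed-precision training on the P100
    predict_with_generate=True,      # needed to compute SacreBLEU/ROUGE/METEOR during eval
    generation_max_length=70,        # matches the training max text length
)
```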
## Environment and Timings * Model size: Salesforce/blip-image-captioning-base (~990 MB as pytorch_model.bin) * Dataset: ituperceptron/image-captioning-turkish (60,000 examples from the long_captions split) * GPU: NVIDIA P100 (on Kaggle) * Total training time (3 epochs): 6:57:27.93 * Training samples per second: 7.186 * Training steps per second: 0.449 * Inference speed: roughly 5 examples/second (Trainer.predict() on the test set, NVIDIA P100) ### Citation ``` @misc{ogulcanakca_blip_turkish_captioning_2025, author = {Oğulcan Akca}, title = {Fine-tuned BLIP-Base for Turkish Image Captioning}, year = {2025}, publisher = {Hugging Face}, journal = {Hugging Face Model Hub}, howpublished = {https://huggingface.co/ogulcanakca/blip-base-turkish-image-captioning} } ``` * BLIP: ``` @inproceedings{li2022blip, title={BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, author={Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, booktitle={International Conference on Machine Learning (ICML)}, year={2022} } ``` ## Notebooks * [blip-turkish-image-captioning-0shot notebook](https://www.kaggle.com/code/oulcanakca/blip-turkish-image-captioning-0shot) * [blip-turkish-image-captioning-fine-tune notebook](https://www.kaggle.com/code/oulcanakca/blip-turkish-image-captioning-fine-tune) * [blip-turkish-image-captioning-test notebook](https://www.kaggle.com/code/oulcanakca/blip-turkish-image-captioning-test) * [blip-turkish-image-captioning-inference notebook](https://www.kaggle.com/code/oulcanakca/blip-turkish-image-captioning-inference)
null
adalbertojunior/image_captioning_portuguese
# Image Captioning in Portuguese An image captioning model for Portuguese, trained with a ViT encoder and a GPT-2 decoder. Try it in the [DEMO](https://huggingface.co/spaces/adalbertojunior/image_captioning_portuguese). Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC).
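The card does not include example code. Since the checkpoint is a ViT + GPT-2 `VisionEncoderDecoderModel`, a minimal inference sketch would look roughly as follows; this is an assumption added here, and it presumes the repository ships the usual image-processor and tokenizer files:

```python
# Minimal inference sketch, assuming a standard VisionEncoderDecoder checkpoint layout.
import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer

model_id = "adalbertojunior/image_captioning_portuguese"
model = VisionEncoderDecoderModel.from_pretrained(model_id)
image_processor = ViTImageProcessor.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device).eval()

image = Image.open("example.jpg").convert("RGB")  # placeholder path
pixel_values = image_processor(images=image, return_tensors="pt").pixel_values.to(device)

with torch.no_grad():
    output_ids = model.generate(pixel_values, max_length=64, num_beams=4)

print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```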
null
deepklarity/poster2plot
# Poster2Plot An image captioning model that generates a movie or TV show plot from its poster. It generates decent plots but is by no means perfect, and we are still working on improving the model. ## Live demo on Hugging Face Spaces: https://huggingface.co/spaces/deepklarity/poster2plot # Model Details The base model uses a Vision Transformer (ViT) model as an image encoder and GPT-2 as a decoder. We used the following models: * Encoder: [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) * Decoder: [gpt2](https://huggingface.co/gpt2) # Datasets Publicly available IMDb datasets were used to train the model. # How to use ## In PyTorch ```python import torch import re import requests from PIL import Image from transformers import AutoTokenizer, AutoFeatureExtractor, VisionEncoderDecoderModel # Pattern to ignore all the text after 2 or more full stops regex_pattern = "[.]{2,}" def post_process(text): try: text = text.strip() text = re.split(regex_pattern, text)[0] except Exception as e: print(e) pass return text def predict(image, max_length=64, num_beams=4): pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) with torch.no_grad(): output_ids = model.generate( pixel_values, max_length=max_length, num_beams=num_beams, return_dict_in_generate=True, ).sequences preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) pred = post_process(preds[0]) return pred model_name_or_path = "deepklarity/poster2plot" device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu") # Load model. model = VisionEncoderDecoderModel.from_pretrained(model_name_or_path) model.to(device) print("Loaded model") feature_extractor = AutoFeatureExtractor.from_pretrained(model.encoder.name_or_path) print("Loaded feature_extractor") tokenizer = AutoTokenizer.from_pretrained(model.decoder.name_or_path, use_fast=True) if model.decoder.name_or_path == "gpt2": tokenizer.pad_token = tokenizer.eos_token print("Loaded tokenizer") url = "https://upload.wikimedia.org/wikipedia/en/2/26/Moana_Teaser_Poster.jpg" with Image.open(requests.get(url, stream=True).raw) as image: pred = predict(image) print(pred) ```
null
gagan3012/ViTGPT2I2A
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ViTGPT2I2A This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the vizwiz dataset. It achieves the following results on the evaluation set: - Loss: 0.0708 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 4 - total_eval_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 0.1528 | 0.17 | 1000 | 0.0869 | | 0.0899 | 0.34 | 2000 | 0.0817 | | 0.084 | 0.51 | 3000 | 0.0790 | | 0.0814 | 0.68 | 4000 | 0.0773 | | 0.0803 | 0.85 | 5000 | 0.0757 | | 0.077 | 1.02 | 6000 | 0.0745 | | 0.0739 | 1.19 | 7000 | 0.0740 | | 0.0719 | 1.37 | 8000 | 0.0737 | | 0.0717 | 1.54 | 9000 | 0.0730 | | 0.0731 | 1.71 | 10000 | 0.0727 | | 0.0708 | 1.88 | 11000 | 0.0720 | | 0.0697 | 2.05 | 12000 | 0.0717 | | 0.0655 | 2.22 | 13000 | 0.0719 | | 0.0653 | 2.39 | 14000 | 0.0719 | | 0.0657 | 2.56 | 15000 | 0.0712 | | 0.0663 | 2.73 | 16000 | 0.0710 | | 0.0654 | 2.9 | 17000 | 0.0708 | | 0.0645 | 3.07 | 18000 | 0.0716 | | 0.0616 | 3.24 | 19000 | 0.0712 | | 0.0607 | 3.41 | 20000 | 0.0712 | | 0.0611 | 3.58 | 21000 | 0.0711 | | 0.0615 | 3.76 | 22000 | 0.0711 | | 0.0614 | 3.93 | 23000 | 0.0710 | | 0.0594 | 4.1 | 24000 | 0.0716 | | 0.0587 | 4.27 | 25000 | 0.0715 | | 0.0574 | 4.44 | 26000 | 0.0715 | | 0.0579 | 4.61 | 27000 | 0.0715 | | 0.0581 | 4.78 | 28000 | 0.0715 | | 0.0579 | 4.95 | 29000 | 0.0715 | ### Framework versions - Transformers 4.16.2 - Pytorch 1.10.2+cu113 - Datasets 1.18.3 - Tokenizers 0.11.0
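The auto-generated card omits inference code; a short sketch, assuming the `gagan3012/ViTGPT2I2A` repo stores its image processor and tokenizer so that the generic `image-to-text` pipeline can resolve them:

```python
from transformers import pipeline

# Assumes the fine-tuned repo bundles both the ViT image processor and the GPT-2 tokenizer.
captioner = pipeline("image-to-text", model="gagan3012/ViTGPT2I2A")

# Placeholder image URL; a local path also works.
result = captioner("http://images.cocodataset.org/val2017/000000039769.jpg")
print(result[0]["generated_text"])
```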
null
bipin/image-caption-generator
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Image-caption-generator This model is trained on [Flickr8k](https://www.kaggle.com/datasets/nunenuh/flickr8k) dataset to generate captions given an image. It achieves the following results on the evaluation set: - eval_loss: 0.2536 - eval_runtime: 25.369 - eval_samples_per_second: 63.818 - eval_steps_per_second: 8.002 - epoch: 4.0 - step: 3236 # Running the model using transformers library 1. Load the pre-trained model from the model hub ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer import torch from PIL import Image model_name = "bipin/image-caption-generator" # load model model = VisionEncoderDecoderModel.from_pretrained(model_name) feature_extractor = ViTFeatureExtractor.from_pretrained(model_name) tokenizer = AutoTokenizer.from_pretrained("gpt2") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) ``` 2. Load the image for which the caption is to be generated(note: replace the value of `img_name` with image of your choice) ```python ### replace the value with your image img_name = "flickr_data.jpg" img = Image.open(img_name) if img.mode != 'RGB': img = img.convert(mode="RGB") ``` 3. Pre-process the image ```python pixel_values = feature_extractor(images=[img], return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) ``` 4. Generate the caption ```python max_length = 128 num_beams = 4 # get model prediction output_ids = model.generate(pixel_values, num_beams=num_beams, max_length=max_length) # decode the generated prediction preds = tokenizer.decode(output_ids[0], skip_special_tokens=True) print(preds) ``` ## Training procedure The procedure used to train this model can be found [here](https://bipinkrishnan.github.io/ml-recipe-book/image_captioning.html). ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Framework versions - Transformers 4.16.2 - Pytorch 1.9.1 - Datasets 1.18.4 - Tokenizers 0.11.6
null
yuewu/toc_titler
A model that takes chemistry journal article table-of-contents (ToC) images as input and generates appropriate titles. Trained on all JACS ToCs and titles.
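The card gives no architecture or usage details; purely as an illustration, and assuming the checkpoint is compatible with the transformers `image-to-text` pipeline (this is not stated on the card), inference might look like:

```python
from transformers import pipeline

# Illustrative only: assumes yuewu/toc_titler can be loaded by the generic
# image-to-text pipeline (the card does not document the architecture).
titler = pipeline("image-to-text", model="yuewu/toc_titler")

# Path to a table-of-contents graphic (placeholder file name).
print(titler("toc_graphic.png")[0]["generated_text"])
```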
null
dhansmair/flamingo-tiny
Flamingo model (tiny version), pretrained for image captioning on the Conceptual Captions (3M) dataset. Source code: https://github.com/dhansmair/flamingo-mini Demo Space: https://huggingface.co/spaces/dhansmair/flamingo-tiny-cap Flamingo-mini: https://huggingface.co/spaces/dhansmair/flamingo-mini-cap
null
dhansmair/flamingo-mini
Flamingo model, pretrained for image captioning on the Conceptual Captions (3M) dataset. Source code: https://github.com/dhansmair/flamingo-mini Demo Space: https://huggingface.co/spaces/dhansmair/flamingo-mini-cap Flamingo-tiny: https://huggingface.co/spaces/dhansmair/flamingo-tiny-cap
null
Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically
This is an image captioning model trained by Zayn ```python import torch from PIL import Image from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer model = VisionEncoderDecoderModel.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") feature_extractor = ViTFeatureExtractor.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") tokenizer = AutoTokenizer.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 20 num_beams = 8 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['Image URL.jpg']) ```
null
microsoft/git-base-coco
# GIT (GenerativeImage2Text), base-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", which is a smaller variant of GIT trained on 10 million image-text pairs. Next, the model was fine-tuned on COCO. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
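The card defers to the documentation for code; a short captioning sketch in the style of the usual transformers GIT example (the COCO image URL is only a placeholder):

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("microsoft/git-base-coco")
model = AutoModelForCausalLM.from_pretrained("microsoft/git-base-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# GIT only needs pixel values for unconditional captioning.
pixel_values = processor(images=image, return_tensors="pt").pixel_values

with torch.no_grad():
    generated_ids = model.generate(pixel_values=pixel_values, max_length=50)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```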
null
microsoft/git-base-textcaps
# GIT (GenerativeImage2Text), base-sized, fine-tuned on TextCaps GIT (short for GenerativeImage2Text) model, base-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", which is a smaller variant of GIT trained on 10 million image-text pairs. Next, the model was fine-tuned on TextCaps. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
null
microsoft/git-large
# GIT (GenerativeImage2Text), large-sized GIT (short for GenerativeImage2Text) model, large-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
null
microsoft/git-large-coco
# GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs. Next, the model was fine-tuned on COCO. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
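As with the other GIT cards, code examples are deferred to the documentation; for quick experiments the generic `image-to-text` pipeline can also drive this checkpoint (a sketch with a placeholder image URL):

```python
from transformers import pipeline

captioner = pipeline("image-to-text", model="microsoft/git-large-coco")

# Placeholder image URL; any PIL-loadable image or local path works.
caption = captioner(
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    max_new_tokens=50,
)
print(caption[0]["generated_text"])
```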
null
microsoft/git-large-textcaps
# GIT (GenerativeImage2Text), large-sized, fine-tuned on TextCaps GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs. Next, the model was fine-tuned on TextCaps. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
null
ybelkada/blip-image-captioning-base-football-finetuned
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone) - and fine-tuned on [football dataset](https://huggingface.co/datasets/ybelkada/football-dataset). Google Colab notebook for fine-tuning: https://colab.research.google.com/drive/1lbqiSiA0sDF7JDWPeS0tccrM85LloVha?usp=sharing | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. 
Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("ybelkada/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("ybelkada/blip-image-captioning-base") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesfoce/blip-image-captioning-base").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu 
and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
tuman/vit-rugpt2-image-captioning
# First image captioning model for the Russian language: vit-rugpt2-image-captioning This is an image captioning model trained on a translated (en-ru) version of the COCO2014 dataset. # Model Details The model was initialized from `google/vit-base-patch16-224-in21k` for the encoder and `sberbank-ai/rugpt3large_based_on_gpt2` for the decoder. # Metrics on test data * Bleu: 8.672 * Bleu precision 1: 30.567 * Bleu precision 2: 7.895 * Bleu precision 3: 3.261 # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("tuman/vit-rugpt2-image-captioning") feature_extractor = ViTFeatureExtractor.from_pretrained("tuman/vit-rugpt2-image-captioning") tokenizer = AutoTokenizer.from_pretrained("tuman/vit-rugpt2-image-captioning") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_caption(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_caption(['train2014/COCO_train2014_000000295442.jpg']) # ['Самолет на взлетно-посадочной полосе аэропорта.'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="tuman/vit-rugpt2-image-captioning") image_to_text("train2014/COCO_train2014_000000296754.jpg") # [{'generated_text': 'Человек идет по улице с зонтом.'}] ``` # Contact for any help * https://huggingface.co/tuman * https://github.com/tumanov-a * https://t.me/tumanov_av
null
microsoft/git-large-r
# GIT (GenerativeImage2Text), large-sized, R* *R means "re-trained by removing some offensive captions in cc12m dataset". GIT (short for GenerativeImage2Text) model, large-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
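A sketch of batched captioning with this pre-trained-only checkpoint (the two COCO URLs are placeholders); without task-specific fine-tuning the captions may stay fairly generic:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "microsoft/git-large-r"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

urls = [
    "http://images.cocodataset.org/val2017/000000039769.jpg",
    "http://images.cocodataset.org/val2017/000000397133.jpg",
]
images = [Image.open(requests.get(u, stream=True).raw).convert("RGB") for u in urls]

# Batched preprocessing and generation.
pixel_values = processor(images=images, return_tensors="pt").pixel_values
with torch.no_grad():
    generated_ids = model.generate(pixel_values=pixel_values, max_length=50)

for caption in processor.batch_decode(generated_ids, skip_special_tokens=True):
    print(caption)
```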
null
microsoft/git-large-r-coco
# GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO, R* R = re-trained by removing some offensive captions in cc12m dataset GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs. Next, the model was fine-tuned on COCO. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
null
microsoft/git-large-r-textcaps
# GIT (GenerativeImage2Text), large-sized, fine-tuned on TextCaps, R* R = re-trained by removing some offensive captions in cc12m dataset GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on TextCaps. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/transformers/main/model_doc/git.html). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs. Next, the model was fine-tuned on TextCaps. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
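A GPU inference sketch for this checkpoint; half precision is an optional memory saving chosen here for illustration, not something the card prescribes:

```python
import requests
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "microsoft/git-large-r-textcaps"
processor = AutoProcessor.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).to("cuda")

# Placeholder image URL.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values.to("cuda", torch.float16)

with torch.no_grad():
    generated_ids = model.generate(pixel_values=pixel_values, max_length=50)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```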
null
tifa-benchmark/promptcap-coco-vqa
This is the repo for the paper [PromptCap: Prompt-Guided Task-Aware Image Captioning](https://arxiv.org/abs/2211.09699). This paper was accepted to ICCV 2023 as [PromptCap: Prompt-Guided Image Captioning for VQA with GPT-3](https://openaccess.thecvf.com/content/ICCV2023/html/Hu_PromptCap_Prompt-Guided_Image_Captioning_for_VQA_with_GPT-3_ICCV_2023_paper.html). We introduce PromptCap, a captioning model that can be controlled by a natural language instruction. The instruction may contain a question that the user is interested in. For example, "what is the boy putting on?". PromptCap also supports generic captions, using the question "what does the image describe?" PromptCap can serve as a lightweight visual plug-in (much faster than BLIP-2) for LLMs like GPT-3, ChatGPT, and other foundation models like Segment Anything and DINO. It achieves SOTA performance on COCO captioning (150 CIDEr). When paired with GPT-3 and conditioned on the user question, PromptCap gets SOTA performance on knowledge-based VQA tasks (60.4% on OK-VQA and 59.6% on A-OKVQA). # QuickStart ## Installation ``` pip install promptcap ``` Two pipelines are included. One is for image captioning, and the other is for visual question answering. ## Captioning Pipeline Please follow the prompt format, which will give the best performance. Generate a prompt-guided caption as follows: ```python import torch from promptcap import PromptCap model = PromptCap("tifa-benchmark/promptcap-coco-vqa") # also supports OFA checkpoints, e.g. "OFA-Sys/ofa-large" if torch.cuda.is_available(): model.cuda() prompt = "please describe this image according to the given question: what piece of clothing is this boy putting on?" image = "glove_boy.jpeg" print(model.caption(prompt, image)) ``` To try generic captioning, just use "what does the image describe?" ```python prompt = "what does the image describe?" image = "glove_boy.jpeg" print(model.caption(prompt, image)) ``` PromptCap also supports taking OCR inputs: ```python prompt = "please describe this image according to the given question: what year was this taken?" image = "dvds.jpg" ocr = "yip AE Mht juor 02/14/2012" print(model.caption(prompt, image, ocr)) ``` ## Visual Question Answering Pipeline Unlike typical VQA models, which do classification on VQAv2, PromptCap is open-domain and can be paired with arbitrary text-QA models. Here we provide a pipeline for combining PromptCap with UnifiedQA. ```python import torch from promptcap import PromptCap_VQA # the QA model supports all UnifiedQA variants, e.g. "allenai/unifiedqa-v2-t5-large-1251000" vqa_model = PromptCap_VQA(promptcap_model="tifa-benchmark/promptcap-coco-vqa", qa_model="allenai/unifiedqa-t5-base") if torch.cuda.is_available(): vqa_model.cuda() question = "what piece of clothing is this boy putting on?" image = "glove_boy.jpeg" print(vqa_model.vqa(question, image)) ``` Similarly, PromptCap supports OCR inputs: ```python question = "what year was this taken?" image = "dvds.jpg" ocr = "yip AE Mht juor 02/14/2012" print(vqa_model.vqa(question, image, ocr=ocr)) ``` Because of the flexibility of UnifiedQA, PromptCap also supports multiple-choice VQA: ```python question = "what piece of clothing is this boy putting on?" 
image = "glove_boy.jpeg" choices = ["gloves", "socks", "shoes", "coats"] print(vqa_model.vqa_multiple_choice(question, image, choices)) ``` ## Bibtex ``` @article{hu2022promptcap, title={PromptCap: Prompt-Guided Task-Aware Image Captioning}, author={Hu, Yushi and Hua, Hang and Yang, Zhengyuan and Shi, Weijia and Smith, Noah A and Luo, Jiebo}, journal={arXiv preprint arXiv:2211.09699}, year={2022} } ```
null
Salesforce/blip2-flan-t5-xl
# BLIP-2, Flan T5-xl, pre-trained only BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. 
For further guidance on use cases, refer to our AUP and AI AUP. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
Salesforce/blip2-opt-6.7b
# BLIP-2, OPT-6.7b, pre-trained only BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. 
We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
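Beyond the documentation link, a minimal CPU sketch mirroring the BLIP-2 usage shown elsewhere in this collection (unprompted captioning plus a "Question: ... Answer:" style prompt, which the OPT-based checkpoints expect); note that the 6.7B language model is slow and memory-hungry on CPU:

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-6.7b")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

# Unprompted: plain image captioning.
inputs = processor(raw_image, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=30)
print(processor.decode(out[0], skip_special_tokens=True).strip())

# Prompted: question-answer style conditioning.
prompt = "Question: how many dogs are in the picture? Answer:"
inputs = processor(raw_image, prompt, return_tensors="pt")
out = model.generate(**inputs, max_new_tokens=20)
print(processor.decode(out[0], skip_special_tokens=True).strip())
```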
null
Salesforce/blip2-opt-2.7b-coco
# BLIP-2, OPT-2.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. 
We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). A minimal sketch is shown below.
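As a starting point, here is a minimal half-precision GPU sketch in the style of the other BLIP-2 cards in this collection; the question and generation settings are only illustrative:

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-2.7b-coco", torch_dtype=torch.float16, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```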
null
Salesforce/blip2-opt-6.7b-coco
# BLIP-2, OPT-6.7b, fine-tuned on COCO

BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model.

The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/>

This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as a prompt to the model

## Direct Use and Downstream Use

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you.

## Bias, Risks, Limitations, and Ethical Considerations

BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.

## Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model.
We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). A minimal sketch is shown below.
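Given the size of the 6.7b variant, here is a minimal 8-bit loading sketch mirroring the int8 examples on the other BLIP-2 cards in this collection; it assumes `accelerate` and `bitsandbytes` are installed, and the question is only illustrative:

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-6.7b-coco", load_in_8bit=True, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```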
null
Salesforce/blip2-flan-t5-xl-coco
# BLIP-2, Flan T5-xl, fine-tuned on COCO

BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model.

The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/>

This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as a prompt to the model

## Direct Use and Downstream Use

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you.

## Bias, Risks, Limitations, and Ethical Considerations

BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf):

> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.

BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.

## Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety.
For further guidance on use cases, refer to our AUP and AI AUP.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). A minimal sketch is shown below.
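For completeness, here is a minimal default-precision sketch following the same pattern as the other BLIP-2 cards in this collection; the question is only illustrative:

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl-coco")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl-coco")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```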
null
Salesforce/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only

BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model.

The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/>

This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as a prompt to the model

## Direct Use and Downstream Use

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you.

## Bias, Risks, Limitations, and Ethical Considerations

BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf):

> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.

BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.

## Ethical Considerations

This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety.
For further guidance on use cases, refer to our AUP and AI AUP. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your usecase: #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" 
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
jaimin/image_caption
# Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("jaimin/image_caption") feature_extractor = ViTFeatureExtractor.from_pretrained("jaimin/image_caption") tokenizer = AutoTokenizer.from_pretrained("jaimin/image_caption") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="jaimin/image_caption") ```
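As a short, self-contained usage sketch of the pipeline approach above (the image path is a hypothetical placeholder — substitute your own file or URL):

```python
from transformers import pipeline

# Load the captioning pipeline for this checkpoint.
image_to_text = pipeline("image-to-text", model="jaimin/image_caption")

# "example.jpg" is a placeholder; a local path or an image URL both work.
print(image_to_text("example.jpg"))
# e.g. [{'generated_text': '...'}]
```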
null
Tomatolovve/DemoTest
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning using transformers ![](https://ankur3107.github.io/assets/images/vision-encoder-decoder.png) * https://ankur3107.github.io/blogs/the-illustrated-image-captioning-using-transformers/ # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") feature_extractor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['doctor.e16ba4e4.jpg']) # ['a woman in a hospital bed with a woman in a hospital bed'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning") image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png") # [{'generated_text': 'a soccer game with a player jumping to catch the ball '}] ``` # Contact for any help * https://huggingface.co/ankur310794 * https://twitter.com/ankur310794 * http://github.com/ankur3107 * https://www.linkedin.com/in/ankur310794
null
Maciel/Muge-Image-Caption
### Overview

This model generates text descriptions (captions) for images. It uses an encoder-decoder architecture, with a BEiT model as the encoder and a GPT model as the decoder.

It was trained on the Chinese MUGE dataset for 5k steps, reaching a final validation loss of 0.3737, with ROUGE-1 of 20.419, ROUGE-2 of 7.3553, ROUGE-L of 17.3753, and ROUGE-Lsum of 17.376.

[GitHub project page](https://github.com/Macielyoung/Chinese-Image-Caption)

### How to use

```python
import requests
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

pretrained = "Maciel/Muge-Image-Caption"
model = VisionEncoderDecoderModel.from_pretrained(pretrained)
feature_extractor = ViTFeatureExtractor.from_pretrained(pretrained)
tokenizer = AutoTokenizer.from_pretrained(pretrained)

# Generation settings (values are illustrative).
gen_kwargs = {"max_length": 32, "num_beams": 4}

# Fetch the raw example image from the model repository (PIL cannot open a URL directly).
image_url = "https://huggingface.co/Maciel/Muge-Image-Caption/resolve/main/%E9%AB%98%E8%B7%9F%E9%9E%8B.jpg"
image = Image.open(requests.get(image_url, stream=True).raw)
if image.mode != "RGB":
    image = image.convert("RGB")

pixel_values = feature_extractor(images=image, return_tensors="pt").pixel_values
output_ids = model.generate(pixel_values, **gen_kwargs)

preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
preds = [pred.strip() for pred in preds]
print(preds)
```
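Untested assumption: since this is a standard VisionEncoderDecoder repository, the `image-to-text` pipeline should also be able to load it directly, as a shorter alternative to the snippet above:

```python
from transformers import pipeline

# Assumes the checkpoint's processor and tokenizer resolve automatically.
captioner = pipeline("image-to-text", model="Maciel/Muge-Image-Caption")
print(captioner("example.jpg"))  # "example.jpg" is a hypothetical placeholder path
```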
null
baseplate/vit-gpt2-image-captioning
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning using transformers ![](https://ankur3107.github.io/assets/images/vision-encoder-decoder.png) * https://ankur3107.github.io/blogs/the-illustrated-image-captioning-using-transformers/ # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") feature_extractor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['doctor.e16ba4e4.jpg']) # ['a woman in a hospital bed with a woman in a hospital bed'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning") image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png") # [{'generated_text': 'a soccer game with a player jumping to catch the ball '}] ``` # Contact for any help * https://huggingface.co/ankur310794 * https://twitter.com/ankur310794 * http://github.com/ankur3107 * https://www.linkedin.com/in/ankur310794
null
memegpt/blip2_endpoint
# BLIP-2, OPT-2.7b, pre-trained only

BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2).

Disclaimer: The team releasing BLIP-2 did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model.

The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model.

The goal for the model is simply to predict the next text token, given the query embeddings and the previous text.

<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/>

This allows the model to be used for tasks like:

- image captioning
- visual question answering (VQA)
- chat-like conversations by feeding the image and the previous conversation as a prompt to the model

## Direct Use and Downstream Use

You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you.

## Bias, Risks, Limitations, and Ethical Considerations

BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/)) collected from the internet. As a result, the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data.

BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
kpyu/video-blip-opt-2.7b-ego4d
# VideoBLIP, OPT-2.7b, fine-tuned on Ego4D

VideoBLIP model, leveraging [BLIP-2](https://arxiv.org/abs/2301.12597) with [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters) as its LLM backbone.

## Model description

VideoBLIP is an augmented BLIP-2 that can handle videos.

## Bias, Risks, Limitations, and Ethical Considerations

VideoBLIP-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card.

> Like other large language models for which the diversity (or lack thereof) of training
> data induces downstream impact on the quality of our model, OPT-175B has limitations in terms
> of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and
> hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern
> large language models.

VideoBLIP has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.

### How to use

For code examples, please refer to the [official repository](https://github.com/yukw777/VideoBLIP).
null
kpyu/video-blip-flan-t5-xl-ego4d
# VideoBLIP, Flan T5-xl, fine-tuned on Ego4D

VideoBLIP model, leveraging [BLIP-2](https://arxiv.org/abs/2301.12597) with [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model) as its LLM backbone.

## Model description

VideoBLIP is an augmented BLIP-2 that can handle videos.

## Bias, Risks, Limitations, and Ethical Considerations

VideoBLIP-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf):

> Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application.

VideoBLIP has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it is being deployed.

### How to use

For code examples, please refer to the [official repository](https://github.com/yukw777/VideoBLIP).
null
wangjin2000/git-base-finetune
# GIT (GenerativeImage2Text), base-sized

GIT (short for GenerativeImage2Text) model, base-sized version. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text).

Disclaimer: The team releasing GIT did not write a model card for this model, so this model card has been written by the Hugging Face team.

## Model description

GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs.

The goal for the model is simply to predict the next text token, given the image tokens and previous text tokens.

The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token.

![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg)

This allows the model to be used for tasks like:

- image and video captioning
- visual question answering (VQA) on images and videos
- even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text)

## Intended uses & limitations

You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you.

### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example). A minimal captioning sketch is also shown after this card.

## Training data

From the paper:

> We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a).

However, this describes the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-base", a smaller variant of GIT trained on 10 million image-text pairs. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details.

### Preprocessing

We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation.

## Evaluation results

For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
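For the "How to use" section above, here is a minimal captioning sketch based on the generic GIT usage pattern in the transformers documentation. It assumes this fine-tuned repository ships the processor files; if it does not, load the processor from `microsoft/git-base` instead.

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

# Assumption: the fine-tuned repo includes a processor config; otherwise use "microsoft/git-base".
processor = AutoProcessor.from_pretrained("wangjin2000/git-base-finetune")
model = AutoModelForCausalLM.from_pretrained("wangjin2000/git-base-finetune")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)

print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```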
null
Salesforce/instructblip-flan-t5-xl
# InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xl") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip). ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
null
noamrot/FuseCap_Image_Captioning
# FuseCap: Leveraging Large Language Models for Enriched Fused Image Captions

A framework designed to generate semantically rich image captions.

## Resources

- 💻 **Project Page**: For more details, visit the official [project page](https://rotsteinnoam.github.io/FuseCap/).
- 📝 **Read the Paper**: You can find the paper [here](https://arxiv.org/abs/2305.17718).
- 🚀 **Demo**: Try out our BLIP-based model [demo](https://huggingface.co/spaces/noamrot/FuseCap) trained using FuseCap.
- 📂 **Code Repository**: The code for FuseCap can be found in the [GitHub repository](https://github.com/RotsteinNoam/FuseCap).
- 🗃️ **Datasets**: The fused captions datasets can be accessed from [here](https://github.com/RotsteinNoam/FuseCap#datasets).

#### Running the model

Our BLIP-based model can be run using the following code:

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
import torch

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
processor = BlipProcessor.from_pretrained("noamrot/FuseCap")
model = BlipForConditionalGeneration.from_pretrained("noamrot/FuseCap").to(device)

img_url = 'https://huggingface.co/spaces/noamrot/FuseCap/resolve/main/bike.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

text = "a picture of "
inputs = processor(raw_image, text, return_tensors="pt").to(device)

out = model.generate(**inputs, num_beams=3)
print(processor.decode(out[0], skip_special_tokens=True))
```

## Upcoming Updates

The official codebase, datasets and trained models for this project will be released soon.

## BibTeX

```
@inproceedings{rotstein2024fusecap,
  title={Fusecap: Leveraging large language models for enriched fused image captions},
  author={Rotstein, Noam and Bensa{\"\i}d, David and Brody, Shaked and Ganz, Roy and Kimmel, Ron},
  booktitle={Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision},
  pages={5689--5700},
  year={2024}
}
```
null
Salesforce/instructblip-flan-t5-xxl
# InstructBLIP model InstructBLIP model using [Flan-T5-xxl](https://huggingface.co/google/flan-t5-xxl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xxl") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xxl") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
null
Salesforce/instructblip-vicuna-13b
# InstructBLIP model InstructBLIP model using [Vicuna-13b](https://github.com/lm-sys/FastChat#model-weights) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-vicuna-13b") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-vicuna-13b") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip). ## Ethical Considerations This release is for research purposes only in support of an academic paper. Our models, datasets, and code are not specifically designed or evaluated for all downstream purposes. We strongly recommend users evaluate and address potential concerns related to accuracy, safety, and fairness before deploying this model. We encourage users to consider the common limitations of AI, comply with applicable laws, and leverage best practices when selecting use cases, particularly for high-risk scenarios where errors or misuse could significantly impact people’s lives, rights, or safety. For further guidance on use cases, refer to our AUP and AI AUP.
null
muhualing/vit
# nlpconnect/vit-gpt2-image-captioning This is an image captioning model trained by @ydshieh in [flax ](https://github.com/huggingface/transformers/tree/main/examples/flax/image-captioning) this is pytorch version of [this](https://huggingface.co/ydshieh/vit-gpt2-coco-en-ckpts). # The Illustrated Image Captioning using transformers ![](https://ankur3107.github.io/assets/images/vision-encoder-decoder.png) * https://ankur3107.github.io/blogs/the-illustrated-image-captioning-using-transformers/ # Sample running code ```python from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer import torch from PIL import Image model = VisionEncoderDecoderModel.from_pretrained("nlpconnect/vit-gpt2-image-captioning") feature_extractor = ViTImageProcessor.from_pretrained("nlpconnect/vit-gpt2-image-captioning") tokenizer = AutoTokenizer.from_pretrained("nlpconnect/vit-gpt2-image-captioning") device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model.to(device) max_length = 16 num_beams = 4 gen_kwargs = {"max_length": max_length, "num_beams": num_beams} def predict_step(image_paths): images = [] for image_path in image_paths: i_image = Image.open(image_path) if i_image.mode != "RGB": i_image = i_image.convert(mode="RGB") images.append(i_image) pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values pixel_values = pixel_values.to(device) output_ids = model.generate(pixel_values, **gen_kwargs) preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True) preds = [pred.strip() for pred in preds] return preds predict_step(['doctor.e16ba4e4.jpg']) # ['a woman in a hospital bed with a woman in a hospital bed'] ``` # Sample running code using transformers pipeline ```python from transformers import pipeline image_to_text = pipeline("image-to-text", model="nlpconnect/vit-gpt2-image-captioning") image_to_text("https://ankur3107.github.io/assets/images/image-captioning-example.png") # [{'generated_text': 'a soccer game with a player jumping to catch the ball '}] ``` # Contact for any help * https://huggingface.co/ankur310794 * https://twitter.com/ankur310794 * http://github.com/ankur3107 * https://www.linkedin.com/in/ankur310794
null
paragon-AI/blip2-image-to-text
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
captioner/caption-gen
null
movementso/blip-image-captioning-large
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a woman sitting on the beach with her dog ``` </details> ## BibTeX and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
trojblue/blip2-opt-6.7b-coco-fp16
# BLIP-2, OPT-6.7b, Fine-tuned on COCO - Unofficial FP16 Version This repository contains an unofficial version of the BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b), which has been fine-tuned on COCO and converted to FP16 for reduced model size and memory footprint. The original model, BLIP-2, was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). For a comprehensive understanding of the model, its description, intended uses, limitations, and instructions on usage with different hardware and precision settings, please refer to the [official model card](https://huggingface.co/Salesforce/blip2-opt-6.7b-coco). ## Unofficial FP16 Version This version of the BLIP-2 model has been converted to use FP16 precision, which effectively reduces the model size and memory requirements. The conversion to FP16 can potentially accelerate the model's computation time on hardware with FP16 support, although it might slightly affect the model's performance due to reduced numerical precision. This unofficial FP16 version is ideal for situations where storage, memory, or computational resources are limited. Please note, this is an **unofficial** repository and not maintained or endorsed by the original authors of the model. The FP16 conversion was conducted independently and any potential issues, limitations or discrepancies with the original model are not the responsibility of the original authors. ### How to use The usage of this FP16 version of the model is similar to the original model. For specific code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). Please ensure to test the performance and accuracy of this FP16 model thoroughly in your specific use-case to confirm it meets your needs. This version can be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as a prompt to the model *Disclaimer: This is an unofficial version of the model and any potential issues or discrepancies from the official model are not the responsibility of the original authors.*
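As a quick reference, here is a minimal VQA/captioning sketch for this FP16 checkpoint, written in the same style as the official BLIP-2 snippets. It assumes a CUDA GPU, the `accelerate` package, and that this repository ships the processor files alongside the FP16 weights (if not, load the processor from `Salesforce/blip2-opt-6.7b-coco` instead):

```python
# pip install accelerate
# Minimal sketch (assumes a CUDA GPU and that this repo includes the processor files).
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("trojblue/blip2-opt-6.7b-coco-fp16")
model = Blip2ForConditionalGeneration.from_pretrained(
    "trojblue/blip2-opt-6.7b-coco-fp16", torch_dtype=torch.float16, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```

Running in FP16 roughly halves the memory footprint compared with the FP32 checkpoint; verify output quality on your own data, since reduced precision can occasionally change generations.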
null
LanguageMachines/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. 
### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your use case: #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
LanguageMachines/blip2-opt-2.7b
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
LanguageMachines/blip2-flan-t5-xl
# BLIP-2, Flan T5-xl, pre-trained only BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
Mediocreatmybest/blip2-opt-2.7b-fp16-sharded
__Quality of Life duplication:__ _duplicated_from: ybelkada/blip2-opt-2.7b-fp16-sharded_ _Tokenizer duplicated from: Salesforce/blip2-opt-2.7b_ _Safetensors Added_ 🥱- _Mediocre_ # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
sheraz179/blip2-flan-t5-xl
# BLIP-2, Flan T5-xl, pre-trained only BLIP-2 model, leveraging [Flan T5-xl](https://huggingface.co/google/flan-t5-xl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
jeff-RQ/new-test-model
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
PD0AUTOMATIONAL/blip2-endpoint
# BLIP-2, OPT-6.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
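As a quick reference, the snippet below is a minimal half-precision VQA sketch in the same style as the other BLIP-2 cards in this collection. It assumes a CUDA GPU with `accelerate` installed and loads the underlying `Salesforce/blip2-opt-6.7b-coco` checkpoint that this card describes (an assumption; substitute this repository's id if it hosts its own weights):

```python
# pip install accelerate
# Minimal sketch (assumes a CUDA GPU; checkpoint id is the official Salesforce repo
# described by this card, not necessarily this endpoint repository itself).
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Salesforce/blip2-opt-6.7b-coco", torch_dtype=torch.float16, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```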
null
PD0AUTOMATIONAL/blip-large-endpoint
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://s3.amazonaws.com/moonup/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a woman sitting on the beach with her dog ``` </details> ## BibTeX and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Mediocreatmybest/blip2-opt-2.7b_8bit
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / fp4 / float16 / Safetensors_ -🥱 _Mediocre_ # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
Mediocreatmybest/blip2-opt-6.7b_8bit
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / fp4 / float16 / Safetensors_ -🥱 _Mediocre_ # BLIP-2, OPT-6.7b, pre-trained only BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
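#### Running this checkpoint in 8-bit (`int8`)

Since this card points only to the generic documentation, the snippet below is a minimal sketch of loading this quantized repository with bitsandbytes, following the same pattern as the other BLIP-2 examples in this collection. Whether the weights still require `load_in_8bit=True` at load time (rather than already being stored quantized) is an assumption.

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

# Assumption: this repository keeps the standard BLIP-2 layout and is loaded in 8-bit here.
processor = Blip2Processor.from_pretrained("Mediocreatmybest/blip2-opt-6.7b_8bit")
model = Blip2ForConditionalGeneration.from_pretrained(
    "Mediocreatmybest/blip2-opt-6.7b_8bit", load_in_8bit=True, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```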
null
jeff-RQ/blip2-opt-6.7b-coco
# BLIP-2, OPT-6.7b, fine-tuned on COCO BLIP-2 model, leveraging [OPT-6.7b](https://huggingface.co/facebook/opt-6.7b) (a large language model with 6.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example).
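#### Running the model on GPU, in half precision (`float16`)

As a minimal sketch (assuming this repository mirrors the standard BLIP-2 layout of the Salesforce checkpoints), half-precision usage might look like:

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("jeff-RQ/blip2-opt-6.7b-coco")
model = Blip2ForConditionalGeneration.from_pretrained(
    "jeff-RQ/blip2-opt-6.7b-coco", torch_dtype=torch.float16, device_map="auto"
)

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```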
null
sheraz179/blip2-flan-t5-xxl
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. 
### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your usecase: #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
Gregor/mblip-mt0-xl
# mBLIP mT0-XL This is the model checkpoint for our work [mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs](https://arxiv.org/abs/2307.06930). ## Model description mBLIP is a [BLIP-2](https://arxiv.org/abs/2301.12597) model which consists of 3 sub-models: a Vision Transformer (ViT), a Query-Transformer (Q-Former) and a large language model (LLM). The Q-Former and ViT have both been initialized by an English BLIP-2 checkpoint ([blip2-flan-t5-xl](https://huggingface.co/Gregor/mblip-mt0-xl)) and then re-aligned to the multilingual LLM ([mt0-xl](https://huggingface.co/bigscience/mt0-xl)) using a [multilingual task mixture](https://huggingface.co/datasets/Gregor/mblip-train). <img src="https://github.com/gregor-ge/mBLIP/blob/main/architecture.png" alt="The mBLIP architecture" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) in 96 languages. #### Languages mBLIP was trained on the following 96 languages: ` af, am, ar, az, be, bg, bn, ca, ceb, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, ga, gd, gl, gu, ha, hi, ht, hu, hy, id, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, ur, uz, vi, xh, yi, yo, zh, zu ` ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and prompt text in a zero-shot setup or alternatively finetune it for downstream applications. We strongly recommend LoRA applied to the LLM when finetuning and to use bf16 as data type - standard fp16 can cause NaN loss. See [our repository](https://github.com/gregor-ge/mBLIP) for the code used to train and finetune this model. ## Bias, Risks, Limitations, and Ethical Considerations While mBLIP can work in theory with up to 100 languages, in practice, we expect best results when prompted in high-resource languages like English, German, Spanish, etc. mBLIP inherits the risk, limitations, and biases from the models used to initialize it. mBLIP has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the BLIP-2 [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, Blip2ForConditionalGeneration processor = BlipProcessor.from_pretrained("Gregor/mblip-mt0-xl") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." 
inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`bfloat16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", torch_dtype=torch.bfloat16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) >**Important:** Paper results only use int8 for the LLM weights while this loads all weights in int8. > We see that this gives slightly worse results but currently int8 for some model parts is not supported by HuggingFace. <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-mt0-xl") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-mt0-xl", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ## Citation If you use our model, please cite the following: ``` @article{geigle2023mblip, author = {Gregor Geigle and Abhay Jain and Radu Timofte and Goran Glava\v{s}}, title = {mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs}, journal = {arXiv}, volume = {abs/2307.06930}, year = {2023}, url = {https://arxiv.org/abs/2307.06930}, eprinttype = {arXiv}, eprint = {2307.06930}, } ```
null
Fireworks12/git-base-pokemon
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.1817 - Wer Score: 9.0938 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 2 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 7.3974 | 0.7 | 50 | 4.5248 | 4.5234 | | 2.2794 | 1.4 | 100 | 0.4021 | 5.1680 | | 0.1697 | 2.1 | 150 | 0.1398 | 1.5039 | | 0.0816 | 2.8 | 200 | 0.1458 | 9.9570 | | 0.0556 | 3.5 | 250 | 0.1417 | 2.5234 | | 0.043 | 4.2 | 300 | 0.1448 | 12.8086 | | 0.0285 | 4.9 | 350 | 0.1469 | 7.3867 | | 0.021 | 5.59 | 400 | 0.1505 | 13.0312 | | 0.0205 | 6.29 | 450 | 0.1499 | 6.3281 | | 0.0179 | 6.99 | 500 | 0.1527 | 13.0234 | | 0.0157 | 7.69 | 550 | 0.1552 | 6.3047 | | 0.015 | 8.39 | 600 | 0.1571 | 6.7656 | | 0.015 | 9.09 | 650 | 0.1579 | 10.2305 | | 0.0137 | 9.79 | 700 | 0.1585 | 11.4219 | | 0.0132 | 10.49 | 750 | 0.1598 | 5.8320 | | 0.0132 | 11.19 | 800 | 0.1591 | 12.0508 | | 0.013 | 11.89 | 850 | 0.1612 | 7.9492 | | 0.0117 | 12.59 | 900 | 0.1621 | 8.1758 | | 0.0123 | 13.29 | 950 | 0.1632 | 12.9961 | | 0.0125 | 13.99 | 1000 | 0.1613 | 10.2031 | | 0.0116 | 14.69 | 1050 | 0.1642 | 5.7930 | | 0.0112 | 15.38 | 1100 | 0.1636 | 6.1719 | | 0.0112 | 16.08 | 1150 | 0.1652 | 7.2422 | | 0.0107 | 16.78 | 1200 | 0.1644 | 12.9961 | | 0.0108 | 17.48 | 1250 | 0.1661 | 5.0117 | | 0.0109 | 18.18 | 1300 | 0.1658 | 7.3242 | | 0.0108 | 18.88 | 1350 | 0.1691 | 6.0547 | | 0.0101 | 19.58 | 1400 | 0.1690 | 6.9141 | | 0.0103 | 20.28 | 1450 | 0.1692 | 7.1680 | | 0.0107 | 20.98 | 1500 | 0.1702 | 12.3281 | | 0.0099 | 21.68 | 1550 | 0.1708 | 10.75 | | 0.0103 | 22.38 | 1600 | 0.1714 | 9.5586 | | 0.0101 | 23.08 | 1650 | 0.1713 | 12.9805 | | 0.0098 | 23.78 | 1700 | 0.1712 | 11.4883 | | 0.0095 | 24.48 | 1750 | 0.1711 | 9.3320 | | 0.0096 | 25.17 | 1800 | 0.1738 | 8.6523 | | 0.0097 | 25.87 | 1850 | 0.1717 | 11.5078 | | 0.0091 | 26.57 | 1900 | 0.1735 | 7.9570 | | 0.0092 | 27.27 | 1950 | 0.1729 | 9.8242 | | 0.0093 | 27.97 | 2000 | 0.1721 | 10.5078 | | 0.0087 | 28.67 | 2050 | 0.1732 | 9.3906 | | 0.009 | 29.37 | 2100 | 0.1760 | 8.0664 | | 0.009 | 30.07 | 2150 | 0.1769 | 10.5312 | | 0.0086 | 30.77 | 2200 | 0.1743 | 10.8555 | | 0.0087 | 31.47 | 2250 | 0.1772 | 10.2188 | | 0.0089 | 32.17 | 2300 | 0.1757 | 11.6016 | | 0.0088 | 32.87 | 2350 | 0.1765 | 8.9297 | | 0.0082 | 33.57 | 2400 | 0.1754 | 9.6484 | | 0.0082 | 34.27 | 2450 | 0.1770 | 12.3711 | | 0.0084 | 34.97 | 2500 | 0.1761 | 10.1523 | | 0.0076 | 35.66 | 2550 | 0.1774 | 9.1055 | | 0.0077 | 36.36 | 2600 | 0.1788 | 8.7852 | | 0.0079 | 37.06 | 2650 | 0.1782 | 11.8086 | | 0.0071 | 37.76 | 2700 | 0.1784 | 10.5234 | | 0.0075 | 38.46 | 2750 | 0.1789 | 8.8828 | | 0.0072 | 39.16 | 2800 | 0.1796 | 8.5664 | | 0.0071 | 39.86 | 
2850 | 0.1804 | 9.5391 | | 0.0069 | 40.56 | 2900 | 0.1796 | 9.4062 | | 0.0068 | 41.26 | 2950 | 0.1797 | 8.9883 | | 0.0067 | 41.96 | 3000 | 0.1809 | 10.5273 | | 0.0062 | 42.66 | 3050 | 0.1801 | 10.4531 | | 0.0062 | 43.36 | 3100 | 0.1803 | 7.2188 | | 0.0063 | 44.06 | 3150 | 0.1808 | 8.7930 | | 0.0058 | 44.76 | 3200 | 0.1804 | 10.5156 | | 0.0057 | 45.45 | 3250 | 0.1807 | 11.1328 | | 0.0059 | 46.15 | 3300 | 0.1812 | 8.6875 | | 0.0055 | 46.85 | 3350 | 0.1811 | 10.2773 | | 0.0053 | 47.55 | 3400 | 0.1814 | 10.0391 | | 0.0054 | 48.25 | 3450 | 0.1817 | 8.5391 | | 0.0053 | 48.95 | 3500 | 0.1818 | 8.9688 | | 0.005 | 49.65 | 3550 | 0.1817 | 9.0938 | ### Framework versions - Transformers 4.30.2 - Pytorch 2.0.1+cu118 - Datasets 2.13.1 - Tokenizers 0.13.3
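## Inference example (sketch)

Because the intended-use sections above are still marked "More information needed", the following is only a minimal captioning sketch, assuming the checkpoint loads with the standard GIT processor and causal-LM classes; the image path is a placeholder.

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("Fireworks12/git-base-pokemon")
model = AutoModelForCausalLM.from_pretrained("Fireworks12/git-base-pokemon")

# Placeholder path -- replace with an image similar to the fine-tuning data.
image = Image.open("example.jpg").convert("RGB")

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```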
null
michelecafagna26/blip-base-captioning-ft-hl-actions
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Actions [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **action generation of images** ## Model fine-tuning 🏋️‍ - Trained for 6 epochs - lr: 5e−5, - Adam optimizer, - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |--------|------------|--------| | 123.07 | 17.16 | 32.16 | ## Model in Action 🚀 ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-actions") model = BlipForConditionalGeneration.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-actions").to("cuda") img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values generated_ids = model.generate(pixel_values=pixel_values, max_length=50, do_sample=True, top_k=120, top_p=0.9, early_stopping=True, num_return_sequences=1) processor.batch_decode(generated_ids, skip_special_tokens=True) >>> "she is holding an umbrella." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
null
michelecafagna26/blip-base-captioning-ft-hl-scenes
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Scenes [BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **scene generation of images** ## Model fine-tuning 🏋️‍ - Trained for 10 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L | |--------|------------|---------| | 116.70 | 26.46 | 35.30 | ## Model in Action 🚀 ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-scenes") model = BlipForConditionalGeneration.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-scenes").to("cuda") img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') inputs = processor(raw_image, return_tensors="pt").to("cuda") pixel_values = inputs.pixel_values generated_ids = model.generate(pixel_values=pixel_values, max_length=50, do_sample=True, top_k=120, top_p=0.9, early_stopping=True, num_return_sequences=1) processor.batch_decode(generated_ids, skip_special_tokens=True) >>> "the picture is taken near a lake." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
null
michelecafagna26/blip-base-captioning-ft-hl-rationales
## BLIP-base fine-tuned for Image Captioning on High-Level descriptions of Rationales

[BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **rationale generation of images**

## Model fine-tuning 🏋️‍

- Trained for 6 epochs
- lr: 5e−5
- Adam optimizer
- half-precision (fp16)

## Test set metrics 🧾

| Cider  | SacreBLEU  | Rouge-L |
|--------|------------|---------|
| 46.11  | 6.21       | 19.74   |

## Model in Action 🚀

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-rationales")
model = BlipForConditionalGeneration.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-rationales").to("cuda")

img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

processor.batch_decode(generated_ids, skip_special_tokens=True)

>>> "she is on vacation."
```

## BibTex and citation info

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address = {Prague, Czech Republic},
  year={2023}
}
```
null
michelecafagna26/git-base-captioning-ft-hl-actions
## GIT-base fine-tuned for Image Captioning on High-Level descriptions of Actions

[GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **action generation of images**

## Model fine-tuning 🏋️‍

- Trained for 10 epochs
- lr: 5e−5
- Adam optimizer
- half-precision (fp16)

## Test set metrics 🧾

| Cider  | SacreBLEU  | Rouge-L|
|--------|------------|--------|
| 110.63 | 15.21      | 30.45  |

## Model in Action 🚀

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-actions")
model = AutoModelForCausalLM.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-actions").to("cuda")

img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

processor.batch_decode(generated_ids, skip_special_tokens=True)

>>> "she is holding an umbrella."
```

## BibTex and citation info

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address = {Prague, Czech Republic},
  year={2023}
}
```
null
michelecafagna26/git-base-captioning-ft-hl-scenes
## GIT-base fine-tuned for Image Captioning on High-Level descriptions of Scenes

[GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **scene generation of images**

## Model fine-tuning 🏋️‍

- Trained for 10 epochs
- lr: 5e−5
- Adam optimizer
- half-precision (fp16)

## Test set metrics 🧾

| Cider  | SacreBLEU  | Rouge-L|
|--------|------------|--------|
| 103.00 | 24.67      | 33.90  |

## Model in Action 🚀

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-scenes")
model = AutoModelForCausalLM.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-scenes").to("cuda")

img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

processor.batch_decode(generated_ids, skip_special_tokens=True)

>>> "in a beach"
```

## BibTex and citation info

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address = {Prague, Czech Republic},
  year={2023}
}
```
null
michelecafagna26/git-base-captioning-ft-hl-rationales
## GIT-base fine-tuned for Image Captioning on High-Level descriptions of Rationales

[GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **rationale generation of images**

## Model fine-tuning 🏋️‍

- Trained for 10 epochs
- lr: 5e−5
- Adam optimizer
- half-precision (fp16)

## Test set metrics 🧾

| Cider  | SacreBLEU  | Rouge-L|
|--------|------------|--------|
| 42.58  | 5.9        | 18.55  |

## Model in Action 🚀

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-rationales")
model = AutoModelForCausalLM.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-rationales").to("cuda")

img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

processor.batch_decode(generated_ids, skip_special_tokens=True)

>>> "she is enjoying the sunny day."
```

## BibTex and citation info

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address = {Prague, Czech Republic},
  year={2023}
}
```
null
michelecafagna26/blip-base-captioning-ft-hl-narratives
## BLIP-base fine-tuned for Narrative Image Captioning

[BLIP](https://arxiv.org/abs/2201.12086) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation**

## Model fine-tuning 🏋️‍

- Trained for 3 epochs
- lr: 5e−5
- Adam optimizer
- half-precision (fp16)

## Test set metrics 🧾

| Cider  | SacreBLEU  | Rouge-L|
|--------|------------|--------|
| 79.39  | 11.70      | 26.17  |

## Model in Action 🚀

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-narratives")
model = BlipForConditionalGeneration.from_pretrained("michelecafagna26/blip-base-captioning-ft-hl-narratives").to("cuda")

img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

processor.batch_decode(generated_ids, skip_special_tokens=True)

>>> "she is holding an umbrella near a lake and is on vacation."
```

## BibTex and citation info

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address = {Prague, Czech Republic},
  year={2023}
}
```
null
michelecafagna26/git-base-captioning-ft-hl-narratives
## GIT-base fine-tuned for Narrative Image Captioning

[GIT](https://arxiv.org/abs/2205.14100) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation**

## Model fine-tuning 🏋️‍

- Trained for 3 epochs
- lr: 5e−5
- Adam optimizer
- half-precision (fp16)

## Test set metrics 🧾

| Cider  | SacreBLEU  | Rouge-L|
|--------|------------|--------|
| 75.78  | 11.11      | 27.61  |

## Model in Action 🚀

```python
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-narratives")
model = AutoModelForCausalLM.from_pretrained("michelecafagna26/git-base-captioning-ft-hl-narratives").to("cuda")

img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt").to("cuda")
pixel_values = inputs.pixel_values

generated_ids = model.generate(pixel_values=pixel_values, max_length=50,
                               do_sample=True,
                               top_k=120,
                               top_p=0.9,
                               early_stopping=True,
                               num_return_sequences=1)

processor.batch_decode(generated_ids, skip_special_tokens=True)

>>> "she is posing for a photo on the beach, she wants to post on her social media."
```

## BibTex and citation info

```BibTeX
@inproceedings{cafagna2023hl,
  title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales},
  author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert},
  booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)},
  address = {Prague, Czech Republic},
  year={2023}
}
```
null
michelecafagna26/clipcap-base-captioning-ft-hl-narratives
## ClipCap fine-tuned for Narrative Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Narratives](https://huggingface.co/datasets/michelecafagna26/hl-narratives) for **high-level narrative descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretrained on COCO - Trained for 3 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |--------|------------|--------| | 63.91 | 8.15 | 24.53 | ## Demo [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1xcaJOxaAp8TRd8a6x1XnAptVjHQRv3Zj?usp=sharing) ## Installation ```bash pip install git+https://github.com/michelecafagna26/CLIPCap.git ``` ## Download the model ```bash git lfs install # if not installed git clone https://huggingface.co/michelecafagna26/clipcap-base-captioning-ft-hl-narratives ``` ## Model in Action 🚀 ```python from clipcap import ClipCaptionModel from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, ) import torch import clip import requests from PIL import Image model_path = "clipcap-base-captioning-ft-hl-narratives/pytorch_model.pt" # change accordingly # load clip device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") prefix_length = 10 # load ClipCap model = ClipCaptionModel(prefix_length, tokenizer=tokenizer) model.from_pretrained(model_path) model = model.eval() model = model.to(device) # load the image img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl-narratives/--/default/train/3/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # extract the prefix image = preprocess(raw_image).unsqueeze(0).to(device) with torch.no_grad(): prefix = clip_model.encode_image(image).to( device, dtype=torch.float32 ) prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) # generate the caption model.generate_beam(embed=prefix_embed)[0] # >> "He is riding a skateboard in a skate park, he wants to skate." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
null
michelecafagna26/clipcap-base-captioning-ft-hl-actions
## ClipCap fine-tuned for Action Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level action descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretrained on COCO - Trained for 10 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |---------|------------|--------| | 176.54 | 27.37 | 39.15 | ## Demo [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1Rw9_oNNfP2QsIpekmJhRHAXv_6MX-0ur?usp=sharing) ## Installation ```bash pip install git+https://github.com/michelecafagna26/CLIPCap.git ``` ## Download the model ```bash git lfs install # if not installed git clone https://huggingface.co/michelecafagna26/clipcap-base-captioning-ft-hl-actions ``` ## Model in Action 🚀 ```python from clipcap import ClipCaptionModel from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, ) import torch import clip import requests from PIL import Image model_path = "clipcap-base-captioning-ft-hl-actions/pytorch_model.pt" # change accordingly # load clip device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") prefix_length = 10 # load ClipCap model = ClipCaptionModel(prefix_length, tokenizer=tokenizer) model.from_pretrained(model_path) model = model.eval() model = model.to(device) # load the image img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # extract the prefix image = preprocess(raw_image).unsqueeze(0).to(device) with torch.no_grad(): prefix = clip_model.encode_image(image).to( device, dtype=torch.float32 ) prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) # generate the caption model.generate_beam(embed=prefix_embed)[0] # >> "she is posing for a photo." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
null
michelecafagna26/clipcap-base-captioning-ft-hl-scenes
## ClipCap fine-tuned for Scene Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level scene descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretrained on COCO - Trained for 9 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |---------|------------|--------| | 145.93 | 36.73 | 42.83 | ## Demo [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1redkE4W2GusU_GVDuY_R2PXbh9ZDuTqf?usp=sharing) ## Installation ```bash pip install git+https://github.com/michelecafagna26/CLIPCap.git ``` ## Download the model ```bash git lfs install # if not installed git clone https://huggingface.co/michelecafagna26/clipcap-base-captioning-ft-hl-scenes ``` ## Model in Action 🚀 ```python from clipcap import ClipCaptionModel from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, ) import torch import clip import requests from PIL import Image model_path = "clipcap-base-captioning-ft-hl-scenes/pytorch_model.pt" # change accordingly # load clip device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") prefix_length = 10 # load ClipCap model = ClipCaptionModel(prefix_length, tokenizer=tokenizer) model.from_pretrained(model_path) model = model.eval() model = model.to(device) # load the image img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # extract the prefix image = preprocess(raw_image).unsqueeze(0).to(device) with torch.no_grad(): prefix = clip_model.encode_image(image).to( device, dtype=torch.float32 ) prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) # generate the caption model.generate_beam(embed=prefix_embed)[0] # >> "the picture is taken on the beach." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
null
michelecafagna26/clipcap-base-captioning-ft-hl-rationales
## ClipCap fine-tuned for Rationale Image Captioning [ClipCap](https://arxiv.org/abs/2111.09734) base trained on the [HL Dataset](https://huggingface.co/datasets/michelecafagna26/hl) for **high-level rationale descriptions generation** ## Model fine-tuning 🏋️‍ We fine-tune LM + Mapping Network starting from the model pretrained on COCO - Trained for 8 epochs - lr: 5e−5 - Adam optimizer - half-precision (fp16) ## Test set metrics 🧾 | Cider | SacreBLEU | Rouge-L| |---------|------------|--------| | 78.04 | 11.71 | 25.76 | ## Demo [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/drive/1191fsBtOW1p_Qy17lVpr-2TaFqL24acE?usp=sharing) ## Installation ```bash pip install git+https://github.com/michelecafagna26/CLIPCap.git ``` ## Download the model ```bash git lfs install # if not installed git clone https://huggingface.co/michelecafagna26/clipcap-base-captioning-ft-hl-rationales ``` ## Model in Action 🚀 ```python from clipcap import ClipCaptionModel from transformers import ( GPT2Tokenizer, GPT2LMHeadModel, ) import torch import clip import requests from PIL import Image model_path = "clipcap-base-captioning-ft-hl-rationales/pytorch_model.pt" # change accordingly # load clip device = "cuda" if torch.cuda.is_available() else "cpu" clip_model, preprocess = clip.load("ViT-B/32", device=device, jit=False) tokenizer = GPT2Tokenizer.from_pretrained("gpt2") prefix_length = 10 # load ClipCap model = ClipCaptionModel(prefix_length, tokenizer=tokenizer) model.from_pretrained(model_path) model = model.eval() model = model.to(device) # load the image img_url = 'https://datasets-server.huggingface.co/assets/michelecafagna26/hl/--/default/train/0/image/image.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # extract the prefix image = preprocess(raw_image).unsqueeze(0).to(device) with torch.no_grad(): prefix = clip_model.encode_image(image).to( device, dtype=torch.float32 ) prefix_embed = model.clip_project(prefix).reshape(1, prefix_length, -1) # generate the caption model.generate_beam(embed=prefix_embed)[0] # >> "she is posing for a photo." ``` ## BibTex and citation info ```BibTeX @inproceedings{cafagna2023hl, title={{HL} {D}ataset: {V}isually-grounded {D}escription of {S}cenes, {A}ctions and {R}ationales}, author={Cafagna, Michele and van Deemter, Kees and Gatt, Albert}, booktitle={Proceedings of the 16th International Natural Language Generation Conference (INLG'23)}, address = {Prague, Czech Republic}, year={2023} } ```
null
getZuma/image-captioning
# Update to existing Salesforce model card: **Added handler to run the model on hugging face inference pipeline.** Input: ``` { "inputs": "<Base64 Image>", "prompts": "<Prompt Text here>" } ``` Output: ``` { "captions":"<Generated Image caption>" } ``` # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
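#### Calling the hosted inference handler

Given the handler input/output format described above, a request to a deployed Inference Endpoint might look like the following sketch; the endpoint URL, token, and local image file are placeholders, not values shipped with this repository.

```python
import base64
import requests

# Placeholders -- replace with your own Inference Endpoint URL and access token.
API_URL = "https://<your-endpoint>.endpoints.huggingface.cloud"
HEADERS = {"Authorization": "Bearer <hf_token>", "Content-Type": "application/json"}

# Encode the image as base64, matching the "inputs" field of the handler.
with open("demo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

payload = {"inputs": image_b64, "prompts": "a photography of"}
response = requests.post(API_URL, headers=HEADERS, json=payload)
print(response.json())  # expected shape: {"captions": "<generated image caption>"}
```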
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details>
null
daniyal214/finetuned-blip-chest-xrays
null
daniyal214/finetuned-git-large-chest-xrays
null
Mediocreatmybest/instructblip-flan-t5-xl_8bit
# InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xl") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
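#### Loading this repository in 8-bit (sketch)

Because this repository is published as an 8-bit variant, the snippet below sketches loading it with bitsandbytes instead of the full-precision Salesforce checkpoint shown above; the `load_in_8bit=True` flag and the use of this repo id at load time are assumptions about how the weights are meant to be consumed.

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration

# Assumption: this repo keeps the standard InstructBLIP layout and is loaded in 8-bit here.
processor = InstructBlipProcessor.from_pretrained("Mediocreatmybest/instructblip-flan-t5-xl_8bit")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Mediocreatmybest/instructblip-flan-t5-xl_8bit", load_in_8bit=True, device_map="auto"
)

url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw).convert("RGB")

prompt = "What is unusual about this image?"
inputs = processor(images=image, text=prompt, return_tensors="pt").to("cuda", torch.float16)

outputs = model.generate(**inputs, num_beams=5, max_length=256)
print(processor.batch_decode(outputs, skip_special_tokens=True)[0].strip())
```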
null
Mediocreatmybest/instructblip-flan-t5-xxl_8bit
# BLIP-2, Flan T5-xxl, pre-trained only BLIP-2 model, leveraging [Flan T5-xxl](https://huggingface.co/google/flan-t5-xxl) (a large language model). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-FlanT5 uses off-the-shelf Flan-T5 as the language model. It inherits the same risks and limitations from [Flan-T5](https://arxiv.org/pdf/2210.11416.pdf): > Language models, including Flan-T5, can potentially be used for language generation in a harmful way, according to Rae et al. (2021). Flan-T5 should not be used directly in any application, without a prior assessment of safety and fairness concerns specific to the application. BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. 
### How to use

For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example), or refer to the snippets below depending on your use case:

#### Running the model on CPU

<details>
<summary> Click to expand </summary>

```python
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

#### Running the model on GPU

##### In full precision

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda")

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In half precision (`float16`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", torch_dtype=torch.float16, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>

##### In 8-bit precision (`int8`)

<details>
<summary> Click to expand </summary>

```python
# pip install accelerate bitsandbytes
import torch
import requests
from PIL import Image
from transformers import Blip2Processor, Blip2ForConditionalGeneration

processor = Blip2Processor.from_pretrained("Salesforce/blip2-flan-t5-xxl")
model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-flan-t5-xxl", load_in_8bit=True, device_map="auto")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

question = "how many dogs are in the picture?"
inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16)

out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True))
```
</details>
null
stabilityai/japanese-instructblip-alpha
# Japanese InstructBLIP Alpha ![japanese-instructblip-icon](./japanese-instructblip-parrot.png) ## Model Details Japanese InstructBLIP Alpha is a vision-language instruction-following model that enables to generate Japanese descriptions for input images and optionally input texts such as questions. ## Usage First install additional dependencies in [requirements.txt](./requirements.txt): ```sh pip install sentencepiece einops ``` ```python import torch from transformers import LlamaTokenizer, AutoModelForVision2Seq, BlipImageProcessor from PIL import Image import requests # helper function to format input prompts def build_prompt(prompt="", sep="\n\n### "): sys_msg = "以下は、タスクを説明する指示と、文脈のある入力の組み合わせです。要求を適切に満たす応答を書きなさい。" p = sys_msg roles = ["指示", "応答"] user_query = "与えられた画像について、詳細に述べてください。" msgs = [": \n" + user_query, ": "] if prompt: roles.insert(1, "入力") msgs.insert(1, ": \n" + prompt) for role, msg in zip(roles, msgs): p += sep + role + msg return p # load model model = AutoModelForVision2Seq.from_pretrained("stabilityai/japanese-instructblip-alpha", trust_remote_code=True) processor = BlipImageProcessor.from_pretrained("stabilityai/japanese-instructblip-alpha") tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁']) device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) # prepare inputs url = "https://images.unsplash.com/photo-1582538885592-e70a5d7ab3d3?ixlib=rb-4.0.3&ixid=M3wxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8fA%3D%3D&auto=format&fit=crop&w=1770&q=80" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "" # input empty string for image captioning. You can also input questions as prompts prompt = build_prompt(prompt) inputs = processor(images=image, return_tensors="pt") text_encoding = tokenizer(prompt, add_special_tokens=False, return_tensors="pt") text_encoding["qformer_input_ids"] = text_encoding["input_ids"].clone() text_encoding["qformer_attention_mask"] = text_encoding["attention_mask"].clone() inputs.update(text_encoding) # generate outputs = model.generate( **inputs.to(device, dtype=model.dtype), num_beams=5, max_new_tokens=32, min_length=1, ) generated_text = tokenizer.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) # 桜と東京スカイツリー ``` ## Model Details * **Developed by**: [Stability AI](https://stability.ai/) * **Model type**: [InstructBLIP](https://arxiv.org/abs/2305.06500) * **Language(s)**: Japanese * **License**: [JAPANESE STABLELM RESEARCH LICENSE AGREEMENT](./LICENSE). ### Training Japanese InstructBLIP Alpha leverages the [InstructBLIP](https://arxiv.org/abs/2305.06500) architecture. It consists of 3 components: a frozen vision image encoder, a Q-Former, and a frozen LLM. The vision encoder and the Q-Former were initialized with [Salesforce/instructblip-vicuna-7b](https://huggingface.co/Salesforce/instructblip-vicuna-7b). For the frozen LLM, [Japanese-StableLM-Instruct-Alpha-7B](https://huggingface.co/stabilityai/japanese-stablelm-instruct-alpha-7b) model was used. During training, only Q-Former was trained. 
### Training Dataset The training dataset includes the following public datasets: - [CC12M](https://github.com/google-research-datasets/conceptual-12m) with captions translated into Japanese - [MS-COCO](https://cocodataset.org/#home) with [STAIR Captions](http://captions.stair.center/) - [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa) ## Use and Limitations ### Intended Use This model is intended to be used by the open-source community in chat-like applications in adherence with the research license. ### Limitations and bias Although the aforementioned datasets help to steer the base language models into "safer" distributions of text, not all biases and toxicity can be mitigated through fine-tuning. We ask that users be mindful of such potential issues that can arise in generated responses. Do not treat model outputs as substitutes for human judgment or as sources of truth. Please use responsibly. ## How to cite ```bibtex @misc{JapaneseInstructBLIPAlpha, url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)}, title = {Japanese InstructBLIP Alpha}, author = {Shing, Makoto and Akiba, Takuya} } ``` ## Citations ```bibtex @misc{dai2023instructblip, title = {InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning}, author = {Wenliang Dai and Junnan Li and Dongxu Li and Anthony Meng Huat Tiong and Junqi Zhao and Weisheng Wang and Boyang Li and Pascale Fung and Steven Hoi}, year = {2023}, eprint = {2305.06500}, archivePrefix = {arXiv}, primaryClass = {cs.CV} } ``` ## Contact * For questions and comments about the model, please join [Stable Community Japan](https://discord.com/invite/StableJP). * For future announcements / information about Stability AI models, research, and events, please follow https://twitter.com/StabilityAI_JP. * For business and partnership inquiries, please contact [email protected]. ビジネスや協業に関するお問い合わせは[email protected]にご連絡ください。
null
AIris-Channel/vit-gpt2-verifycode-caption
A CAPTCHA recognition model for 世萌 verification codes, trained on 60,000 images and fine-tuned from vit-gpt2.

## Use in Transformers

```python
from transformers import VisionEncoderDecoderModel, ViTImageProcessor, AutoTokenizer
import torch
from PIL import Image

model = VisionEncoderDecoderModel.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption")
feature_extractor = ViTImageProcessor.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption")
tokenizer = AutoTokenizer.from_pretrained("AIris-Channel/vit-gpt2-verifycode-caption")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

max_length = 16
num_beams = 4
gen_kwargs = {"max_length": max_length, "num_beams": num_beams}


def predict_step(image_paths):
    images = []
    for image_path in image_paths:
        i_image = Image.open(image_path)
        if i_image.mode != "RGB":
            i_image = i_image.convert(mode="RGB")
        images.append(i_image)

    pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)

    output_ids = model.generate(pixel_values, **gen_kwargs)

    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]
    return preds


pred = predict_step(['ZZZTVESE.jpg'])
print(pred)  # zzztvese
```
null
Mediocreatmybest/instructblip-flan-t5-xxl_8bit_nf4
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / nf4 / Safetensors_ -_Mediocre_ 🥱 # InstructBLIP model InstructBLIP model using [Flan-T5-xxl](https://huggingface.co/google/flan-t5-xxl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xxl") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xxl") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
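For reference, here is a minimal sketch of the NF4 (4-bit) bitsandbytes loading mentioned at the top of this card, applied to the upstream checkpoint; the exact settings used to produce this repository's weights are an assumption.

```python
# pip install accelerate bitsandbytes
# Hedged sketch of NF4 (4-bit) loading via BitsAndBytesConfig; the settings below are
# illustrative and may not match how this repository's weights were actually quantized.
import torch
from transformers import BitsAndBytesConfig, InstructBlipProcessor, InstructBlipForConditionalGeneration

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",            # normal-float 4-bit
    bnb_4bit_compute_dtype=torch.float16,
)

processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xxl")
model = InstructBlipForConditionalGeneration.from_pretrained(
    "Salesforce/instructblip-flan-t5-xxl",
    quantization_config=bnb_config,
    device_map="auto",
)
```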
null
Mediocreatmybest/instructblip-flan-t5-xl_8bit_nf4
Quantization with [bitsandbytes](https://github.com/TimDettmers/bitsandbytes) _8-bit / nf4 / Safetensors_ -_Mediocre_ 🥱 # InstructBLIP model InstructBLIP model using [Flan-T5-xl](https://huggingface.co/google/flan-t5-xl) as language model. InstructBLIP was introduced in the paper [InstructBLIP: Towards General-purpose Vision-Language Models with Instruction Tuning](https://arxiv.org/abs/2305.06500) by Dai et al. Disclaimer: The team releasing InstructBLIP did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description InstructBLIP is a visual instruction tuned version of [BLIP-2](https://huggingface.co/docs/transformers/main/model_doc/blip-2). Refer to the paper for details. ![InstructBLIP architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/instructblip_architecture.jpg) ## Intended uses & limitations Usage is as follows: ``` from transformers import InstructBlipProcessor, InstructBlipForConditionalGeneration import torch from PIL import Image import requests model = InstructBlipForConditionalGeneration.from_pretrained("Salesforce/instructblip-flan-t5-xl") processor = InstructBlipProcessor.from_pretrained("Salesforce/instructblip-flan-t5-xl") device = "cuda" if torch.cuda.is_available() else "cpu" model.to(device) url = "https://raw.githubusercontent.com/salesforce/LAVIS/main/docs/_static/Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw).convert("RGB") prompt = "What is unusual about this image?" inputs = processor(images=image, text=prompt, return_tensors="pt").to(device) outputs = model.generate( **inputs, do_sample=False, num_beams=5, max_length=256, min_length=1, top_p=0.9, repetition_penalty=1.5, length_penalty=1.0, temperature=1, ) generated_text = processor.batch_decode(outputs, skip_special_tokens=True)[0].strip() print(generated_text) ``` ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/instructblip).
null
alexgk/git-large-coco
# GIT (GenerativeImage2Text), large-sized, fine-tuned on COCO GIT (short for GenerativeImage2Text) model, large-sized version, fine-tuned on COCO. It was introduced in the paper [GIT: A Generative Image-to-text Transformer for Vision and Language](https://arxiv.org/abs/2205.14100) by Wang et al. and first released in [this repository](https://github.com/microsoft/GenerativeImage2Text). Disclaimer: The team releasing GIT did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description GIT is a Transformer decoder conditioned on both CLIP image tokens and text tokens. The model is trained using "teacher forcing" on a lot of (image, text) pairs. The goal for the model is simply to predict the next text token, giving the image tokens and previous text tokens. The model has full access to (i.e. a bidirectional attention mask is used for) the image patch tokens, but only has access to the previous text tokens (i.e. a causal attention mask is used for the text tokens) when predicting the next text token. ![GIT architecture](https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/git_architecture.jpg) This allows the model to be used for tasks like: - image and video captioning - visual question answering (VQA) on images and videos - even image classification (by simply conditioning the model on the image and asking it to generate a class for it in text). ## Intended uses & limitations You can use the raw model for image captioning. See the [model hub](https://huggingface.co/models?search=microsoft/git) to look for fine-tuned versions on a task that interests you. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/model_doc/git#transformers.GitForCausalLM.forward.example). ## Training data From the paper: > We collect 0.8B image-text pairs for pre-training, which include COCO (Lin et al., 2014), Conceptual Captions (CC3M) (Sharma et al., 2018), SBU (Ordonez et al., 2011), Visual Genome (VG) (Krishna et al., 2016), Conceptual Captions (CC12M) (Changpinyo et al., 2021), ALT200M (Hu et al., 2021a), and an extra 0.6B data following a similar collection procedure in Hu et al. (2021a). => however this is for the model referred to as "GIT" in the paper, which is not open-sourced. This checkpoint is "GIT-large", which is a smaller variant of GIT trained on 20 million image-text pairs. Next, the model was fine-tuned on COCO. See table 11 in the [paper](https://arxiv.org/abs/2205.14100) for more details. ### Preprocessing We refer to the original repo regarding details for preprocessing during training. During validation, one resizes the shorter edge of each image, after which center cropping is performed to a fixed-size resolution. Next, frames are normalized across the RGB channels with the ImageNet mean and standard deviation. ## Evaluation results For evaluation results, we refer readers to the [paper](https://arxiv.org/abs/2205.14100).
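As a quick alternative to the linked documentation, here is a minimal captioning sketch following the standard Transformers GIT usage; loading this repository's id in place of the upstream `microsoft/git-large-coco` checkpoint is an assumption.

```python
# Hedged sketch of image captioning with GIT, following the standard Transformers usage.
# Loading from this repo id (instead of microsoft/git-large-coco) is an assumption.
import requests
from PIL import Image
from transformers import AutoProcessor, AutoModelForCausalLM

processor = AutoProcessor.from_pretrained("alexgk/git-large-coco")
model = AutoModelForCausalLM.from_pretrained("alexgk/git-large-coco")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values=pixel_values, max_length=50)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```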
null
turing-motors/heron-chat-git-ELYZA-fast-7b-v0
# Heron GIT Japanese ELYZA Llama 2 Fast 7B

![heron](./heron_image.png)

## Model Details

Heron GIT Japanese ELYZA Llama 2 Fast 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.

## Usage

Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository).

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_gpt_neox import GitGPTNeoXForCausalLM

device_id = 0

# prepare a pretrained model
model = GitGPTNeoXForCausalLM.from_pretrained(
    'turing-motors/heron-chat-git-ELYZA-fast-7b-v0', torch_dtype=torch.float16
)
model.eval()
model.to(f"cuda:{device_id}")

# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-ELYZA-fast-7b-v0')

# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

text = f"##human: これは何の写真ですか?\n##gpt: "

# do preprocessing
inputs = processor(
    text,
    image,
    return_tensors="pt",
    truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}

# set eos token
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
]

# do inference
with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list)

# print result
print(processor.tokenizer.batch_decode(out)[0])
```

## Model Details

* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Language Model**: [ELYZA Japanese Llama-2 7B fast instruct](https://huggingface.co/elyza/ELYZA-japanese-Llama-2-7b-fast-instruct)
* **Language(s)**: Japanese

### Training

This model was initially trained with the Adaptor using STAIR Captions. In the second phase, it was fine-tuned with LLaVA-Instruct-150K-JA and Japanese Visual Genome using LoRA.

### Training Dataset

- [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA)
- [Japanese STAIR Captions](http://captions.stair.center/)
- [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa)

## Use and Limitations

### Intended Use

This model is intended for use in chat-like applications and for research purposes.

### Limitations

The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite ```bibtex @misc{GitElyzaFast, url = {[https://huggingface.co/turing-motors/heron-chat-git-ELYZA-fast-7b-v0](https://huggingface.co/turing-motors/heron-chat-git-ELYZA-fast-7b-v0)}, title = {Heron GIT Japanese ELYZA Llama 2 Fast 7B}, author = {Yuichi Inoue, Kotaro Tanahashi, and Yu Yamaguchi} } ``` ## Citations ```bibtex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- license: cc-by-nc-4.0 ---
null
turing-motors/heron-chat-git-ja-stablelm-base-7b-v0
# Heron GIT Japanese StableLM Base 7B ![heron](./heron_image.png) ## Model Details Heron GIT Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. ## Usage Follow [the installation guide](https://github.com/turingmotors/heron/tree/dev-0.0.1#1-clone-this-repository). ```python import requests from PIL import Image import torch from transformers import AutoProcessor from heron.models.git_llm.git_japanese_stablelm_alpha import GitJapaneseStableLMAlphaForCausalLM device_id = 0 # prepare a pretrained model model = GitJapaneseStableLMAlphaForCausalLM.from_pretrained( 'turing-motors/heron-chat-git-ja-stablelm-base-7b-v0', torch_dtype=torch.float16 ) model.eval() model.to(f"cuda:{device_id}") # prepare a processor processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-ja-stablelm-base-7b-v0') # prepare inputs url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw) text = f"##human: これは何の写真ですか?\n##gpt: " # do preprocessing inputs = processor( text, image, return_tensors="pt", truncation=True, ) inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()} # set eos token eos_token_id_list = [ processor.tokenizer.pad_token_id, processor.tokenizer.eos_token_id, ] # do inference with torch.no_grad(): out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list) # print result print(processor.tokenizer.batch_decode(out)[0]) ``` ## Model Details * **Developed by**: [Turing Inc.](https://www.turing-motors.com/) * **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100) * **Lamguage Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b) * **Language(s)**: Japanese ### Training This model was initially trained with the Adaptor using STAIR Captions. In the second phase, it was fine-tuned with LLaVA-Instruct-150K-JA and Japanese Visual Genome using LoRA. ### Training Dataset - [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) - [Japanese STAIR Captions](http://captions.stair.center/) - [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa) ## Use and Limitations ### Intended Use This model is intended for use in chat-like applications and for research purposes. ### Limitations The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage. ## How to cite ```bibtex @misc{GitJapaneseStableLM, url = {[https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v0](https://huggingface.co/turing-motors/heron-chat-git-ja-stablelm-base-7b-v0)}, title = {Heron GIT Japanese StableLM Base 7B}, author = {Yuichi Inoue, Kotaro Tanahashi, and Yu Yamaguchi} } ``` ## Citations ```bibtex @misc{JapaneseInstructBLIPAlpha, url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)}, title = {Japanese InstructBLIP Alpha}, author = {Shing, Makoto and Akiba, Takuya} } ``` --- license: cc-by-nc-4.0 ---
null
turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0
# Heron BLIP Japanese StableLM Base 7B ![heron](./heron_image.png) ## DEMO You can play the demo of this model [here](https://huggingface.co/spaces/turing-motors/heron_chat_blip). ## Model Details Heron BLIP Japanese StableLM Base 7B is a vision-language model that can converse about input images.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. ## Usage Follow [the installation guide](https://github.com/turingmotors/heron/tree/dev-0.0.1#1-clone-this-repository). ```python import torch from heron.models.video_blip import VideoBlipForConditionalGeneration, VideoBlipProcessor from transformers import LlamaTokenizer device_id = 0 device = f"cuda:{device_id}" max_length = 512 MODEL_NAME = "turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0" model = VideoBlipForConditionalGeneration.from_pretrained( MODEL_NAME, torch_dtype=torch.float16, ignore_mismatched_sizes=True ) model = model.half() model.eval() model.to(device) # prepare a processor processor = VideoBlipProcessor.from_pretrained("Salesforce/blip2-opt-2.7b") tokenizer = LlamaTokenizer.from_pretrained("novelai/nerdstash-tokenizer-v1", additional_special_tokens=['▁▁']) processor.tokenizer = tokenizer import requests from PIL import Image # prepare inputs url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg" image = Image.open(requests.get(url, stream=True).raw) text = f"##human: この画像の面白い点は何ですか?\n##gpt: " # do preprocessing inputs = processor( text=text, images=image, return_tensors="pt", truncation=True, ) inputs = {k: v.to(device) for k, v in inputs.items()} inputs["pixel_values"] = inputs["pixel_values"].to(device, torch.float16) # set eos token eos_token_id_list = [ processor.tokenizer.pad_token_id, processor.tokenizer.eos_token_id, int(tokenizer.convert_tokens_to_ids("##")) ] # do inference with torch.no_grad(): out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list, no_repeat_ngram_size=2) # print result print(processor.tokenizer.batch_decode(out)) ``` ## Model Details * **Developed by**: [Turing Inc.](https://www.turing-motors.com/) * **Adaptor type**: [BLIP2](https://arxiv.org/abs/2301.12597) * **Lamguage Model**: [Japanese StableLM Base Alpha](https://huggingface.co/stabilityai/japanese-stablelm-base-alpha-7b) * **Language(s)**: Japanese ### Training This model was initially trained with the Adaptor using STAIR Captions. In the second phase, it was fine-tuned with [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) and Japanese Visual Genome using LoRA. ### Training Dataset - [LLaVA-Instruct-150K-JA](https://huggingface.co/datasets/turing-motors/LLaVA-Instruct-150K-JA) - [Japanese STAIR Captions](http://captions.stair.center/) - [Japanese Visual Genome VQA dataset](https://github.com/yahoojapan/ja-vg-vqa) ## Use and Limitations ### Intended Use This model is intended for use in chat-like applications and for research purposes. ### Limitations The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage. 
## How to cite ```bibtex @misc{BlipJapaneseStableLM, url = {[https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0](https://huggingface.co/turing-motors/heron-chat-blip-ja-stablelm-base-7b-v0)}, title = {Heron BLIP Japanese StableLM Base 7B}, author = {Kotaro Tanahashi, Yuichi Inoue, and Yu Yamaguchi} } ``` ## Citations ```bibtex @misc{JapaneseInstructBLIPAlpha, url = {[https://huggingface.co/stabilityai/japanese-instructblip-alpha](https://huggingface.co/stabilityai/japanese-instructblip-alpha)}, title = {Japanese InstructBLIP Alpha}, author = {Shing, Makoto and Akiba, Takuya} } ``` --- license: cc-by-nc-4.0 ---
null
turing-motors/heron-preliminary-git-Llama-2-70b-v0
# Heron GIT Llama 2 70B Preliminary ![heron](./heron_image.png) ## Model Details Heron GIT Llama 2 70B Preliminary is a vision-language model that was pretrained with image-text pairs.<br> This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details. <b>*Note: This model is a preliminary trained version. Its accuracy and performance are under verification, and we do not provide any guarantees. We plan to update it with a further trained version in the future.*</b> ## Usage Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository). ## Model Details * **Developed by**: [Turing Inc.](https://www.turing-motors.com/) * **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100) * **Lamguage Model**: [Llama-2 70B chat hf](https://huggingface.co/meta-llama/Llama-2-70b-chat-hf) * **Language(s)**: English * **License**: This model is licensed under [the LLAMA 2 Community License](https://github.com/facebookresearch/llama/blob/main/LICENSE). ### Training This model was trained with the Adaptor using M3IT Coco Captions. ### Training Dataset - [MMInstruction M3IT](https://huggingface.co/datasets/MMInstruction/M3IT) ## Use and Limitations ### Intended Use This model is intended for use in chat-like applications and for research purposes. ### Limitations The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage. ## How to cite ```bibtex @misc{GitElyzaFast, url = {[https://huggingface.co/turing-motors/heron-preliminary-git-Llama-2-70b-v0](https://huggingface.co/turing-motors/heron-preliminary-git-Llama-2-70b-v0)}, title = {Heron GIT Llama 2 70B Preliminary}, author = {Yuichi Inoue, Kotaro Tanahashi, and Yu Yamaguchi} } ``` ## Citations ```bibtex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- license: llama2 ---
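This card does not ship a usage snippet; the sketch below mirrors the interface of the other Heron GIT checkpoints (`GitLlamaForCausalLM` plus `AutoProcessor`). Whether that interface and the chat-style prompt apply unchanged to this preliminary, caption-pretrained 70B checkpoint is an assumption, and a model of this size will generally need to be sharded across several GPUs.

```python
# Hedged sketch only: mirrors the other Heron GIT checkpoints; whether this exact
# interface and prompt format apply to the preliminary 70B weights is an assumption.
import requests
from PIL import Image

import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_llama import GitLlamaForCausalLM

# a 70B model generally needs to be sharded across GPUs
model = GitLlamaForCausalLM.from_pretrained(
    'turing-motors/heron-preliminary-git-Llama-2-70b-v0', torch_dtype=torch.float16, device_map="auto"
)
model.eval()

processor = AutoProcessor.from_pretrained('turing-motors/heron-preliminary-git-Llama-2-70b-v0')

url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)
text = "##human: What is this picture?\n##gpt: "

inputs = processor(text, image, return_tensors="pt", truncation=True)
inputs = {k: v.to(model.device) for k, v in inputs.items()}

with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False)

print(processor.tokenizer.batch_decode(out)[0])
```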
null
turing-motors/heron-chat-git-Llama-2-7b-v0
# Heron GIT Llama 2 Fast 7B

![heron](./heron_image.png)

## Model Details

Heron GIT Llama 2 7B is a vision-language model that can converse about input images.<br>
This model was trained using [the heron library](https://github.com/turingmotors/heron). Please refer to the code for details.

## Usage

Follow [the installation guide](https://github.com/turingmotors/heron/#1-clone-this-repository).

```python
import requests
from PIL import Image

import torch
from transformers import AutoProcessor
from heron.models.git_llm.git_llama import GitLlamaConfig, GitLlamaForCausalLM

device_id = 0

# prepare a pretrained model
model = GitLlamaForCausalLM.from_pretrained(
    'turing-motors/heron-chat-git-Llama-2-7b-v0', torch_dtype=torch.float16
)
model.eval()
model.to(f"cuda:{device_id}")

# prepare a processor
processor = AutoProcessor.from_pretrained('turing-motors/heron-chat-git-Llama-2-7b-v0')

# prepare inputs
url = "https://www.barnorama.com/wp-content/uploads/2016/12/03-Confusing-Pictures.jpg"
image = Image.open(requests.get(url, stream=True).raw)

text = f"##human: What is this picture?\n##gpt: "

# do preprocessing
inputs = processor(
    text,
    image,
    return_tensors="pt",
    truncation=True,
)
inputs = {k: v.to(f"cuda:{device_id}") for k, v in inputs.items()}

# set eos token
eos_token_id_list = [
    processor.tokenizer.pad_token_id,
    processor.tokenizer.eos_token_id,
]

# do inference
with torch.no_grad():
    out = model.generate(**inputs, max_length=256, do_sample=False, temperature=0., eos_token_id=eos_token_id_list)

# print result
print(processor.tokenizer.batch_decode(out)[0])
```

## Model Details

* **Developed by**: [Turing Inc.](https://www.turing-motors.com/)
* **Adaptor type**: [GIT](https://arxiv.org/abs/2205.14100)
* **Language Model**: [meta-llama/Llama-2-7b-chat-hf](https://huggingface.co/meta-llama/Llama-2-7b-chat-hf)
* **Language(s)**: English

### Training

This model was initially trained with the Adaptor using Coco Captions in M3IT. In the second phase, it was fine-tuned with M3IT. Finally, it was trained by instruction tuning with LLaVA-Instruct-150K.

### Training Dataset

- [LLaVA-Instruct-150K](https://huggingface.co/datasets/liuhaotian/LLaVA-Instruct-150K)
- [M3IT](https://huggingface.co/datasets/MMInstruction/M3IT)

## Use and Limitations

### Intended Use

This model is intended for use in chat-like applications and for research purposes.

### Limitations

The model may produce inaccurate or false information, and its accuracy is not guaranteed. It is still in the research and development stage.
## How to cite ```bibtex @misc{GitLlama2, url = {[https://huggingface.co/turing-motors/heron-chat-git-Llama-2-7b-v0](https://huggingface.co/turing-motors/heron-chat-git-Llama-2-7b-v0)}, title = {Heron GIT Llama 2 7B}, author = {Yuichi Inoue, Kotaro Tanahashi, and Yu Yamaguchi} } ``` ## Citations ```bibtex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint={2307.09288}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` --- license: cc-by-nc-4.0 ---
null
Haimath/BLIP-Math
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # 
unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
uf-aice-lab/BLIP-Math
# BLIP - Math

Our model is fine-tuned on a mathematical multi-modal dataset and has two output heads: text generation and scoring. We provide the weight file for the text-generation part of the model, `pytorch_model.bin`. You will need four input sources, two text inputs and two image inputs: `problem_body`, `student_response`, `question_image`, and `student_image`.

To perform conditional text generation:

- Concatenate the text in the following manner: text = 'problem:' + ' ' + [problem_body] + ' ' + 'student:' + [student_response] + ' ' + 'response:'
- Concatenate [question_image] and [student_image] vertically, keeping [question_image] on top and selecting the larger of the two image sizes (a minimal preprocessing sketch is provided at the end of this card).

For all other usage, follow the same procedures as with the BLIP model. If you have any further questions or need assistance with specific code or implementation details, please feel free to ask.

# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation

Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone).

| ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) |
|:--:|
| <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>|

## TL;DR

Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract:

*Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner.
Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, 
Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
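To make the input preparation described at the top of this card concrete, here is a minimal sketch assuming PIL images and the field names used above; interpreting "selecting the larger of the two image sizes" as using the larger of the two widths for the stacked canvas is an assumption. The resulting text and stacked image can then be passed to the `BlipProcessor` calls shown in the snippets above.

```python
# Minimal sketch of the BLIP - Math input preparation described at the top of this card.
# Field names (problem_body, student_response, question_image, student_image) follow the card;
# using the larger of the two widths for the stacked canvas is an assumption.
from PIL import Image


def build_text(problem_body: str, student_response: str) -> str:
    # 'problem:' + ' ' + problem_body + ' ' + 'student:' + student_response + ' ' + 'response:'
    return "problem:" + " " + problem_body + " " + "student:" + student_response + " " + "response:"


def stack_images(question_image: Image.Image, student_image: Image.Image) -> Image.Image:
    # question_image on top, student_image below
    width = max(question_image.width, student_image.width)
    canvas = Image.new("RGB", (width, question_image.height + student_image.height), "white")
    canvas.paste(question_image, (0, 0))
    canvas.paste(student_image, (0, question_image.height))
    return canvas
```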
null
advaitadasein/blip2_test
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv.org/abs/2301.12597) by Li et al. and first released in [this repository](https://github.com/salesforce/LAVIS/tree/main/projects/blip2). Disclaimer: The team releasing BLIP-2 did not write a model card for this model so this model card has been written by the Hugging Face team. ## Model description BLIP-2 consists of 3 models: a CLIP-like image encoder, a Querying Transformer (Q-Former) and a large language model. The authors initialize the weights of the image encoder and large language model from pre-trained checkpoints and keep them frozen while training the Querying Transformer, which is a BERT-like Transformer encoder that maps a set of "query tokens" to query embeddings, which bridge the gap between the embedding space of the image encoder and the large language model. The goal for the model is simply to predict the next text token, giving the query embeddings and the previous text. <img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/transformers/model_doc/blip2_architecture.jpg" alt="drawing" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) - chat-like conversations by feeding the image and the previous conversation as prompt to the model ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and optional text. See the [model hub](https://huggingface.co/models?search=Salesforce/blip) to look for fine-tuned versions on a task that interests you. ## Bias, Risks, Limitations, and Ethical Considerations BLIP2-OPT uses off-the-shelf OPT as the language model. It inherits the same risks and limitations as mentioned in Meta's model card. > Like other large language models for which the diversity (or lack thereof) of training > data induces downstream impact on the quality of our model, OPT-175B has limitations in terms > of bias and safety. OPT-175B can also have quality issues in terms of generation diversity and > hallucination. In general, OPT-175B is not immune from the plethora of issues that plague modern > large language models. > BLIP2 is fine-tuned on image-text datasets (e.g. [LAION](https://laion.ai/blog/laion-400-open-dataset/) ) collected from the internet. As a result the model itself is potentially vulnerable to generating equivalently inappropriate content or replicating inherent biases in the underlying data. BLIP2 has not been tested in real world applications. It should not be directly deployed in any applications. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context they’re being deployed within. ### How to use For code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). 
#### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", torch_dtype=torch.float16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details> ##### In 8-bit precision (`int8`) <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b") model = Blip2ForConditionalGeneration.from_pretrained("Salesforce/blip2-opt-2.7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "how many dogs are in the picture?" inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True).strip()) ``` </details>
null
upro/blip
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) 
print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null
Sof22/image-caption-large-copy
This is the Salesforce BLIP large image captioning model with small adjustments to the parameters on the back end for testing; note in particular that the reply length has been increased. # BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT large backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs)
print(processor.decode(out[0], skip_special_tokens=True)) # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-large") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-large", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
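The note at the top of this card mentions that the reply length was increased on the back end, but it does not list the exact parameter values. When calling the model directly, caption length can also be controlled explicitly through generation arguments; the sketch below uses illustrative values (`max_new_tokens=60`, `num_beams=4`), which are assumptions and not the settings used by the card author.

```python
import requests
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration

processor = BlipProcessor.from_pretrained("Sof22/image-caption-large-copy")
model = BlipForConditionalGeneration.from_pretrained("Sof22/image-caption-large-copy")

img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg'
raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB')

inputs = processor(raw_image, return_tensors="pt")

# Request a longer, beam-searched caption; these values are illustrative only.
out = model.generate(**inputs, max_new_tokens=60, num_beams=4)
print(processor.decode(out[0], skip_special_tokens=True))
```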
null
Gregor/mblip-bloomz-7b
# mBLIP BLOOMZ-7B This is the model checkpoint for our work [mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs](https://arxiv.org/abs/2307.06930). ## Model description mBLIP is a [BLIP-2](https://arxiv.org/abs/2301.12597) model which consists of 3 sub-models: a Vision Transformer (ViT), a Query-Transformer (Q-Former) and a large language model (LLM). The Q-Former and ViT have both been initialized by an English BLIP-2 checkpoint ([blip2-flan-t5-xl](https://huggingface.co/Salesforce/blip2-flan-t5-xl)) and then re-aligned to the multilingual LLM ([bloomz-7b1](https://huggingface.co/bigscience/bloomz-7b1)) using a [multilingual task mixture](https://huggingface.co/datasets/Gregor/mblip-train). <img src="https://github.com/gregor-ge/mBLIP/blob/main/architecture.png" alt="The mBLIP architecture" width="600"/> This allows the model to be used for tasks like: - image captioning - visual question answering (VQA) in 96 languages. #### Languages mBLIP was trained on the following 96 languages: ` af, am, ar, az, be, bg, bn, ca, ceb, cs, cy, da, de, el, en, eo, es, et, eu, fa, fi, fil, fr, ga, gd, gl, gu, ha, hi, ht, hu, hy, id, ig, is, it, iw, ja, jv, ka, kk, km, kn, ko, ku, ky, lb, lo, lt, lv, mg, mi, mk, ml, mn, mr, ms, mt, my, ne, nl, no, ny, pa, pl, ps, pt, ro, ru, sd, si, sk, sl, sm, sn, so, sq, sr, st, su, sv, sw, ta, te, tg, th, tr, uk, ur, uz, vi, xh, yi, yo, zh, zu ` ## Direct Use and Downstream Use You can use the raw model for conditional text generation given an image and prompt text in a zero-shot setup, or alternatively fine-tune it for downstream applications. We strongly recommend applying LoRA to the LLM when fine-tuning and using bf16 as the data type; standard fp16 can cause NaN losses. See [our repository](https://github.com/gregor-ge/mBLIP) for the code used to train and fine-tune this model. When using batched input, use left padding! ## Bias, Risks, Limitations, and Ethical Considerations While mBLIP can in theory work with up to 100 languages, in practice we expect the best results when it is prompted in high-resource languages like English, German, or Spanish. mBLIP inherits the risks, limitations, and biases of the models used to initialize it. mBLIP has not been tested in real-world applications. It should not be directly deployed in any application. Researchers should first carefully assess the safety and fairness of the model in relation to the specific context in which it will be deployed. ### How to use For code examples, we refer to the BLIP-2 [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2#transformers.Blip2ForConditionalGeneration.forward.example). #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German."
inputs = processor(raw_image, question, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python # pip install accelerate import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." inputs = processor(raw_image, question, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In half precision (`bfloat16`) <details> <summary> Click to expand </summary> ```python # pip install accelerate import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", torch_dtype=torch.bfloat16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 8-bit precision (`int8`) >**Important:** Paper results only use int8 for the LLM weights, while this loads all weights in int8. > We see that this gives slightly worse results, but int8 for only some model parts is currently not supported by HuggingFace. <details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", load_in_8bit=True, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ##### In 4-bit precision (`int4`) >**Important:** Paper results only use int4 for the LLM weights, while this loads all weights in int4. > We see that this gives slightly worse results, but int4 for only some model parts is currently not supported by HuggingFace.
<details> <summary> Click to expand </summary> ```python # pip install accelerate bitsandbytes import torch import requests from PIL import Image from transformers import Blip2Processor, Blip2ForConditionalGeneration processor = Blip2Processor.from_pretrained("Gregor/mblip-bloomz-7b") model = Blip2ForConditionalGeneration.from_pretrained("Gregor/mblip-bloomz-7b", load_in_4bit=True, bnb_4bit_quant_type="nf4", bnb_4bit_use_double_quant=False, bnb_4bit_compute_dtype=torch.bfloat16, device_map="auto") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') question = "Describe the image in German." inputs = processor(raw_image, question, return_tensors="pt").to("cuda", torch.bfloat16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) ``` </details> ## Citation If you use our model, please cite the following: ``` @article{geigle2023mblip, author = {Gregor Geigle and Abhay Jain and Radu Timofte and Goran Glava\v{s}}, title = {mBLIP: Efficient Bootstrapping of Multilingual Vision-LLMs}, journal = {arXiv}, volume = {abs/2307.06930}, year = {2023}, url = {https://arxiv.org/abs/2307.06930}, eprinttype = {arXiv}, eprint = {2307.06930}, } ```
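The card above recommends applying LoRA to the LLM and using bf16 when fine-tuning. Below is a minimal sketch of one way to set that up with the `peft` library; the rank, alpha, dropout, and the restriction to the `query_key_value` modules are assumptions for illustration, not the configuration used in the paper (see the mBLIP repository for the actual training code).

```python
# pip install accelerate peft
import torch
from peft import LoraConfig, get_peft_model
from transformers import Blip2ForConditionalGeneration

model = Blip2ForConditionalGeneration.from_pretrained(
    "Gregor/mblip-bloomz-7b", torch_dtype=torch.bfloat16
)

# "query_key_value" is the attention projection name used by BLOOM-family LLMs, so this
# targets only the language model and leaves the ViT and Q-Former untouched.
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    target_modules=["query_key_value"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```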
null
ayoubkirouane/git-base-One-Piece
# Model Details + **Model Name**: Git-base-One-Piece + **Base Model**: Microsoft's "git-base" model + **Model Type**: Generative Image-to-Text (GIT) + **Fine-Tuned On**: the 'One-Piece-anime-captions' dataset + **Fine-Tuning Purpose**: To generate text captions for images related to the anime series "One Piece." ## Model Description **Git-base-One-Piece** is a fine-tuned variant of Microsoft's **git-base** model, specifically trained to generate descriptive text captions for images from the **One-Piece-anime-captions** dataset. The dataset consists of **856 {image: caption}** pairs, providing a focused, domain-specific training corpus for the model. The model is conditioned on both CLIP image tokens and text tokens and is trained with **teacher forcing**: it predicts the next text token given the image and the previous text tokens. ![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/6338c06c107c4835a05699f9/N_yNK2tLabtwmSYAqpTEp.jpeg) ## Limitations + The quality of generated captions may vary depending on the complexity and diversity of images from the **One-Piece-anime-captions** dataset. + The model's output is based on the data it was fine-tuned on, so it may not generalize well to images outside the dataset's domain; generating highly detailed or contextually accurate captions may still be a challenge. ## Usage ```python # Use a pipeline as a high-level helper from transformers import pipeline pipe = pipeline("image-to-text", model="ayoubkirouane/git-base-One-Piece") ``` **or** ```python # Load model directly from transformers import AutoProcessor, AutoModelForCausalLM processor = AutoProcessor.from_pretrained("ayoubkirouane/git-base-One-Piece") model = AutoModelForCausalLM.from_pretrained("ayoubkirouane/git-base-One-Piece") ```
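As a usage follow-up, the pipeline created above can be called directly on an image. The sketch below assumes a downloadable One Piece frame; the URL is a placeholder and not part of the original card.

```python
import requests
from PIL import Image
from transformers import pipeline

pipe = pipeline("image-to-text", model="ayoubkirouane/git-base-One-Piece")

img_url = "https://example.com/one_piece_frame.jpg"  # placeholder URL; replace with a real image
image = Image.open(requests.get(img_url, stream=True).raw).convert("RGB")

print(pipe(image))  # e.g. [{'generated_text': '...'}]
```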
null
s3-tresio/blip-image-captioning-base
# BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation Model card for image captioning pretrained on COCO dataset - base architecture (with ViT base backbone). | ![BLIP.gif](https://cdn-uploads.huggingface.co/production/uploads/1670928184033-62441d1d9fdefb55a0b7d12c.gif) | |:--:| | <b> Pull figure from BLIP official repo | Image source: https://github.com/salesforce/BLIP </b>| ## TL;DR Authors from the [paper](https://arxiv.org/abs/2201.12086) write in the abstract: *Vision-Language Pre-training (VLP) has advanced the performance for many vision-language tasks. However, most existing pre-trained models only excel in either understanding-based tasks or generation-based tasks. Furthermore, performance improvement has been largely achieved by scaling up the dataset with noisy image-text pairs collected from the web, which is a suboptimal source of supervision. In this paper, we propose BLIP, a new VLP framework which transfers flexibly to both vision-language understanding and generation tasks. BLIP effectively utilizes the noisy web data by bootstrapping the captions, where a captioner generates synthetic captions and a filter removes the noisy ones. We achieve state-of-the-art results on a wide range of vision-language tasks, such as image-text retrieval (+2.7% in average recall@1), image captioning (+2.8% in CIDEr), and VQA (+1.6% in VQA score). BLIP also demonstrates strong generalization ability when directly transferred to videolanguage tasks in a zero-shot manner. Code, models, and datasets are released.* ## Usage You can use this model for conditional and un-conditional image captioning ### Using the Pytorch model #### Running the model on CPU <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> #### Running the model on GPU ##### In full precision <details> <summary> Click to expand </summary> ```python import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base").to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # 
unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda") out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ##### In half precision (`float16`) <details> <summary> Click to expand </summary> ```python import torch import requests from PIL import Image from transformers import BlipProcessor, BlipForConditionalGeneration processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base") model = BlipForConditionalGeneration.from_pretrained("Salesforce/blip-image-captioning-base", torch_dtype=torch.float16).to("cuda") img_url = 'https://storage.googleapis.com/sfr-vision-language-research/BLIP/demo.jpg' raw_image = Image.open(requests.get(img_url, stream=True).raw).convert('RGB') # conditional image captioning text = "a photography of" inputs = processor(raw_image, text, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) # >>> a photography of a woman and her dog # unconditional image captioning inputs = processor(raw_image, return_tensors="pt").to("cuda", torch.float16) out = model.generate(**inputs) print(processor.decode(out[0], skip_special_tokens=True)) >>> a woman sitting on the beach with her dog ``` </details> ## BibTex and citation info ``` @misc{https://doi.org/10.48550/arxiv.2201.12086, doi = {10.48550/ARXIV.2201.12086}, url = {https://arxiv.org/abs/2201.12086}, author = {Li, Junnan and Li, Dongxu and Xiong, Caiming and Hoi, Steven}, keywords = {Computer Vision and Pattern Recognition (cs.CV), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
null