---
license: apache-2.0
library_name: transformers
pipeline_tag: video-text-to-text
---

# Video-LLaVA-Seg

Project | arXiv

This is the official baseline implementation for the ViCaS dataset, presented in the paper *ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation*.

For details about setting up the model, refer to the Video-LLaVA-Seg GitHub repo.
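This card does not pin down a loading recipe, so the snippet below is only a minimal sketch. It assumes the checkpoint is published under the repo id `Ali2500/Video-LLaVA-Seg` (inferred from this card's namespace) and that it loads through the standard `transformers` Auto classes with `trust_remote_code=True`; the GitHub repo above is the authoritative setup guide.

```python
# Hedged sketch: load the checkpoint via the generic transformers Auto classes.
# The repo id below is an assumption inferred from this model card's namespace,
# and trust_remote_code=True assumes the repo ships its custom architecture code.
import torch
from transformers import AutoModel, AutoProcessor

model_id = "Ali2500/Video-LLaVA-Seg"  # assumed repo id

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision to fit typical single-GPU memory
    device_map="auto",          # place weights across available devices
    trust_remote_code=True,
)
```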

For details about downloading the dataset and evaluating on the benchmark, refer to the ViCaS GitHub repo.