---
license: apache-2.0
library_name: transformers
pipeline_tag: video-text-to-text
---

# Video-LLaVA-Seg

[Project](https://ali2500.github.io/vicas-project/) | [Arxiv](https://arxiv.org/abs/2412.09754)

This is the official baseline implementation for the ViCaS dataset, presented in the paper [ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation](https://huggingface.co/papers/2412.09754).

For details on setting up the model, refer to the [Video-LLaVA-Seg GitHub repo](https://github.com/Ali2500/Video-LLaVA-Seg/tree/main).

For details on downloading and evaluating the dataset benchmark, refer to the [ViCaS GitHub repo](https://github.com/Ali2500/ViCaS/tree/main).
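
As a rough illustration only, since the card declares `library_name: transformers`, the checkpoint may be loadable along the lines of the sketch below. The repo id and the need for `trust_remote_code` are assumptions, not details confirmed by this card; the Video-LLaVA-Seg GitHub repo linked above remains the authoritative setup guide.

```python
# Minimal loading sketch -- not the verified setup; see the Video-LLaVA-Seg GitHub repo.
# The repo id and trust_remote_code usage are assumptions for illustration only.
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "Ali2500/Video-LLaVA-Seg"  # hypothetical repo id; replace with the actual one

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)
```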