Ali2500, nielsr (HF Staff) committed
Commit d58f25b · verified · 1 Parent(s): 0e6ee0f

Add pipeline tag, library name, and link to paper (#1)

- Add pipeline tag, library name, and link to paper (dcefa65cedd2e5ca12fa3d72609176d85de2c801)


Co-authored-by: Niels Rogge <[email protected]>

Files changed (1)
  1. README.md +3 -1
README.md CHANGED
@@ -1,12 +1,14 @@
 ---
 license: apache-2.0
+library_name: transformers
+pipeline_tag: video-text-to-text
 ---
 
 # Video-LLaVA-Seg
 
 [Project](https://ali2500.github.io/vicas-project/) | [Arxiv](https://arxiv.org/abs/2412.09754)
 
-This is the official baseline implementation for the ViCas dataset.
+This is the official baseline implementation for the ViCas dataset, presented in the paper [ViCaS: A Dataset for Combining Holistic and Pixel-level Video Understanding using Captions with Grounded Segmentation](https://huggingface.co/papers/2412.09754).
 
 For details about setting up the model, refer to the [Video-LLaVA-Seg GitHub repo](https://github.com/Ali2500/Video-LLaVA-Seg/tree/main)
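The `library_name` and `pipeline_tag` keys added by this commit live in the README's YAML front matter, which the Hugging Face Hub reads to categorize the model. A minimal, stdlib-only sketch of extracting those keys (the `parse_front_matter` helper is hypothetical, and assumes flat `key: value` pairs between `---` delimiters, which is all this model card uses):

```python
# Sketch: pull flat key/value metadata out of a model card's YAML front matter.
# Assumes no nested YAML; a real tool would use a YAML parser or
# huggingface_hub's ModelCard utilities instead.

README = """---
license: apache-2.0
library_name: transformers
pipeline_tag: video-text-to-text
---

# Video-LLaVA-Seg
"""

def parse_front_matter(text):
    """Return a dict of top-level key: value pairs from the leading --- block."""
    lines = text.splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no front matter present
    meta = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing delimiter ends the metadata block
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

meta = parse_front_matter(README)
print(meta["pipeline_tag"])  # video-text-to-text
```

In practice the Hub parses this block itself; the sketch just illustrates what the two added lines contribute to the card's metadata.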