prithivMLmods committed
Commit 6710b44 · verified · 1 Parent(s): f443c5c

Update README.md

Files changed (1):
  1. README.md +104 -3
README.md CHANGED
@@ -1,3 +1,104 @@

Removed (previous contents):

- ---
- license: apache-2.0
- ---

Added (new contents):

---
license: apache-2.0
task_categories:
- video-text-to-text
- image-to-text
language:
- en
tags:
- colab
- notebook
- demo
- vlm
- models
- hf
- ocr
- reasoning
- code
size_categories:
- n<1K
---

# **VLM-Video-Understanding**

> A minimal demo of image inference and video understanding with OpenCV, built on top of several popular open-source Vision-Language Models (VLMs). This repository provides Colab notebooks demonstrating how to apply these VLMs to video and image tasks using Python and Gradio.

## Overview

This project showcases lightweight inference pipelines for:
- Video frame extraction and preprocessing (see the sketch after this list)
- Image-level inference with VLMs
- Real-time or pre-recorded video understanding
- OCR-based text extraction from video frames
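
The exact preprocessing differs from notebook to notebook, but frame extraction with OpenCV generally follows the pattern below. This is a minimal sketch, not the repository's exact code: the `sample_frames` helper, the default frame count, and the Pillow conversion are illustrative choices.

```python
import cv2
from PIL import Image


def sample_frames(video_path: str, num_frames: int = 8) -> list[Image.Image]:
    """Grab `num_frames` evenly spaced frames from a video as RGB PIL images."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    # Evenly spaced frame indices across the clip (guarded for very short videos).
    step = max(total - 1, 0) / max(num_frames - 1, 1)
    frames = []
    for i in range(num_frames):
        cap.set(cv2.CAP_PROP_POS_FRAMES, round(i * step))
        ok, frame = cap.read()
        if not ok:
            continue
        # OpenCV decodes frames as BGR; VLM processors expect RGB.
        frames.append(Image.fromarray(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)))
    cap.release()
    return frames
```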

## Models Included

The repository supports a variety of open-source models and configurations, including:

- Aya-Vision-8B
- Florence-2-Base
- Gemma3-VL
- MiMo-VL-7B-RL
- MiMo-VL-7B-SFT
- Qwen2-VL
- Qwen2.5-VL
- Qwen-2VL-MessyOCR
- RolmOCR-Qwen2.5-VL
- olmOCR-Qwen2-VL
- typhoon-ocr-7b-Qwen2.5VL

Each model has a dedicated Colab notebook showing how to use it with video inputs.
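
In practice, video understanding in these notebooks comes down to sampling frames and passing them to a VLM as a set of images plus a text prompt. The sketch below shows one possible pattern with Qwen2-VL via Hugging Face Transformers; the checkpoint name, the `describe_frames` helper, and the dtype/device settings are illustrative assumptions, and the other models listed above use their own model classes and prompt formats (see their notebooks).

```python
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

MODEL_ID = "Qwen/Qwen2-VL-2B-Instruct"  # example checkpoint; swap in the model you want
processor = AutoProcessor.from_pretrained(MODEL_ID)
model = Qwen2VLForConditionalGeneration.from_pretrained(
    MODEL_ID, torch_dtype="auto", device_map="auto"  # device_map="auto" assumes `accelerate`
)


def describe_frames(frames, question: str, max_new_tokens: int = 128) -> str:
    """Ask `question` about a list of PIL frames sampled from a video."""
    # One image placeholder per frame, followed by the text prompt.
    content = [{"type": "image"} for _ in frames] + [{"type": "text", "text": question}]
    messages = [{"role": "user", "content": content}]
    prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
    inputs = processor(text=[prompt], images=frames, padding=True, return_tensors="pt")
    inputs = inputs.to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Drop the prompt tokens so only the newly generated answer is decoded.
    answer_ids = output_ids[:, inputs["input_ids"].shape[1]:]
    return processor.batch_decode(answer_ids, skip_special_tokens=True)[0]


# Example: describe_frames(sample_frames("clip.mp4"), "What happens in this video?")
```

Image-level inference is simply the single-frame special case of the same call.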

## Technologies Used

- **Python**
- **OpenCV** – for video and image processing
- **Gradio** – for interactive UI (see the sketch after this list)
- **Jupyter Notebooks** – for easy experimentation
- **Hugging Face Transformers** – for loading VLMs
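
A small Gradio app can then wire the two illustrative helpers above into an interactive demo. Again a sketch under the same assumptions (`sample_frames` and `describe_frames` are the hypothetical helpers from the earlier snippets), not the notebooks' exact UI.

```python
import gradio as gr


def analyze(video_path: str, question: str) -> str:
    # Gradio's Video component hands the function a local file path.
    frames = sample_frames(video_path, num_frames=8)
    return describe_frames(frames, question)


demo = gr.Interface(
    fn=analyze,
    inputs=[gr.Video(label="Video"), gr.Textbox(label="Question")],
    outputs=gr.Textbox(label="Model response"),
    title="VLM Video Understanding",
)

if __name__ == "__main__":
    demo.launch()  # in Colab, demo.launch(share=True) prints a public link
```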

## Folder Structure

```
├── Aya-Vision-8B/
├── Florence-2-Base/
├── Gemma3-VL/
├── MiMo-VL-7B-RL/
├── MiMo-VL-7B-SFT/
├── Qwen2-VL/
├── Qwen2.5-VL/
├── Qwen-2VL-MessyOCR/
├── RolmOCR-Qwen2.5-VL/
├── olmOCR-Qwen2-VL/
├── typhoon-ocr-7b-Qwen2.5VL/
├── LICENSE
└── README.md
```

## Getting Started

1. Clone the repository:

```bash
git clone https://github.com/PRITHIVSAKTHIUR/VLM-Video-Understanding.git
cd VLM-Video-Understanding
```

2. Open any of the Colab notebooks and follow the instructions to run image or video inference.

3. Optionally, install dependencies locally:

```bash
pip install opencv-python gradio transformers
```

## Hugging Face Dataset

The models and examples are supported by a dataset on Hugging Face:

[VLM-Video-Understanding](https://huggingface.co/datasets/prithivMLmods/VLM-Video-Understanding)

## License

This project is licensed under the Apache-2.0 License.