---
dataset_info:
  features:
  - name: image
    dtype: image
  - name: labels
    dtype:
      class_label:
        names:
          '0': charts
          '1': diagram
          '2': geometry
          '3': medical
          '4': ocr
          '5': random
          '6': table
  splits:
  - name: train
    num_bytes: 160813723527.0
    num_examples: 700768
  - name: test
    num_bytes: 8506367769.25
    num_examples: 36886
  download_size: 169224452489
  dataset_size: 169320091296.25
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# πŸ“Š Vision Filtering Dataset

A high-quality, labeled image dataset designed to benchmark computer vision models for filtering noisy image dataβ€”especially relevant for pretraining and curating datasets for vision-language models (VLMs).

---

## πŸ“Œ Overview

This dataset contains **7 image categories** curated from online and public datasets:

- πŸ“ˆ `charts`: Graphs, bar charts, line charts, pie charts
- 🧠 `diagram`: Schematics, flowcharts, technical illustrations
- πŸ“ `geometry`: Geometric shapes, figures, and math visuals
- πŸ₯ `medical`: Annotated scans, X-rays, and medical diagrams
- πŸ”€ `ocr`: Images containing printed text or handwriting
- πŸŒ€ `random`: Miscellaneous, non-relevant/noisy images
- πŸ“‹ `table`: Tables and structured tabular data

The dataset is intended for training and evaluating classification models that **automatically filter relevant images** from large-scale scraped datasets.

---

## 🧩 Datasets by Category (with Hugging Face Links)

| Category | Datasets |
|--------------|---------|
| πŸ“Š **Charts** | [nimapourjafar/mm_chart2text](https://huggingface.co/datasets/nimapourjafar/mm_chart2text)<br>[nimapourjafar/mm_chartqa](https://huggingface.co/datasets/nimapourjafar/mm_chartqa)<br>[nimapourjafar/mm_dvqa](https://huggingface.co/datasets/nimapourjafar/mm_dvqa)<br>[nimapourjafar/mm_figureqa](https://huggingface.co/datasets/nimapourjafar/mm_figureqa)<br>[nimapourjafar/mm_plotqa](https://huggingface.co/datasets/nimapourjafar/mm_plotqa)<br>[nimapourjafar/mm_vistext](https://huggingface.co/datasets/nimapourjafar/mm_vistext) |
| πŸ”„ **Diagram** | [lmms-lab/ai2d](https://huggingface.co/datasets/lmms-lab/ai2d)<br>[nimapourjafar/mm_tqa](https://huggingface.co/datasets/nimapourjafar/mm_tqa)<br>[shreyanshu09/Block_Diagram](https://huggingface.co/datasets/shreyanshu09/Block_Diagram)<br>[yyyyifan/TQA](https://huggingface.co/datasets/yyyyifan/TQA) |
| πŸ“ **Geometry** | [5CD-AI/Viet-Geometry-VQA](https://huggingface.co/datasets/5CD-AI/Viet-Geometry-VQA)<br>[AI4Math/MathVerse](https://huggingface.co/datasets/AI4Math/MathVerse)<br>[HuggingFaceM4/datikz](https://huggingface.co/datasets/HuggingFaceM4/datikz)<br>[MathLLMs/MathVision](https://huggingface.co/datasets/MathLLMs/MathVision)<br>[nimapourjafar/mm_geomverse](https://huggingface.co/datasets/nimapourjafar/mm_geomverse)<br>[nimapourjafar/mm_intergps](https://huggingface.co/datasets/nimapourjafar/mm_intergps)<br>[PeijieWang/MV-MATH](https://huggingface.co/datasets/PeijieWang/MV-MATH)<br>[THU-KEG/MM_Math](https://huggingface.co/datasets/THU-KEG/MM_Math)<br>[VIM-Bench/VIM-MathVista](https://huggingface.co/datasets/VIM-Bench/VIM-MathVista)<br>[We-Math/We-Math](https://huggingface.co/datasets/We-Math/We-Math) |
| 🧬 **Medical** | [foreverbeliever/OmniMedVQA](https://huggingface.co/datasets/foreverbeliever/OmniMedVQA)<br>[rbojia/medical-vqa](https://huggingface.co/datasets/rbojia/medical-vqa) |
| 🧾 **OCR** | [5CD-AI/Viet-Geometry-VQA](https://huggingface.co/datasets/5CD-AI/Viet-Geometry-VQA)<br>[mathieu1256/FATURA2-invoices](https://huggingface.co/datasets/mathieu1256/FATURA2-invoices)<br>[nimapourjafar/mm_docvqa](https://huggingface.co/datasets/nimapourjafar/mm_docvqa)<br>[nimapourjafar/mm_iam](https://huggingface.co/datasets/nimapourjafar/mm_iam)<br>[nimapourjafar/mm_ocrvqa](https://huggingface.co/datasets/nimapourjafar/mm_ocrvqa)<br>[nimapourjafar/mm_rendered_text](https://huggingface.co/datasets/nimapourjafar/mm_rendered_text)<br>[nimapourjafar/mm_visualmrc](https://huggingface.co/datasets/nimapourjafar/mm_visualmrc)<br>[nimapourjafar/mm_websight](https://huggingface.co/datasets/nimapourjafar/mm_websight)<br>[vikp/doclaynet_math](https://huggingface.co/datasets/vikp/doclaynet_math)<br>[JayRay5/Image_Infographvqa](https://huggingface.co/datasets/JayRay5/Image_Infographvqa)<br>[nimapourjafar/mm_infographic_vqa](https://huggingface.co/datasets/nimapourjafar/mm_infographic_vqa) |
| πŸ“‹ **Table** | [nimapourjafar/mm_finqa](https://huggingface.co/datasets/nimapourjafar/mm_finqa)<br>[nimapourjafar/mm_multihierrt](https://huggingface.co/datasets/nimapourjafar/mm_multihierrt)<br>[nimapourjafar/mm_robust_sqa](https://huggingface.co/datasets/nimapourjafar/mm_robust_sqa)<br>[nimapourjafar/mm_robust_wikisql](https://huggingface.co/datasets/nimapourjafar/mm_robust_wikisql)<br>[nimapourjafar/mm_robust_wtq](https://huggingface.co/datasets/nimapourjafar/mm_robust_wtq)<br>[nimapourjafar/mm_tabmwp](https://huggingface.co/datasets/nimapourjafar/mm_tabmwp)<br>[nimapourjafar/mm_tat_qa](https://huggingface.co/datasets/nimapourjafar/mm_tat_qa) |
| πŸŒ„ **Random** | [COCO Dataset](https://cocodataset.org/) |

---

## πŸ“Š Dataset Distribution

The dataset is well balanced across the seven image categories, with slightly more samples in the `ocr` and `random` classes.
*(Figures: pie chart of the percentage distribution and bar chart of per-class file counts.)*
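The per-class counts behind these figures can be recomputed from the `labels` column. A minimal stdlib sketch, where the integer-to-name mapping follows the `dataset_info` header and `sample_labels` is a toy stand-in for the real `dataset["train"]["labels"]` column:

```python
from collections import Counter

# Class index -> name mapping, as declared in the dataset_info header.
ID2LABEL = {0: "charts", 1: "diagram", 2: "geometry", 3: "medical",
            4: "ocr", 5: "random", 6: "table"}

# Stand-in labels; in practice, use dataset["train"]["labels"].
sample_labels = [0, 4, 4, 5, 6, 2, 4, 5]
counts = Counter(ID2LABEL[i] for i in sample_labels)
print(counts.most_common())
```

The same pattern works unchanged on the full split once the dataset is downloaded.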
## 🧾 Dataset Structure

The dataset is organized in the standard image classification folder format:

```
vision-filtering-dataset/
β”œβ”€β”€ train/
β”‚   β”œβ”€β”€ charts/
β”‚   β”œβ”€β”€ diagram/
β”‚   β”œβ”€β”€ geometry/
β”‚   β”œβ”€β”€ medical/
β”‚   β”œβ”€β”€ ocr/
β”‚   β”œβ”€β”€ random/
β”‚   └── table/
└── test/
    β”œβ”€β”€ charts/
    β”œβ”€β”€ diagram/
    β”œβ”€β”€ geometry/
    β”œβ”€β”€ medical/
    β”œβ”€β”€ ocr/
    β”œβ”€β”€ random/
    └── table/
```

Each subfolder contains `.jpg` or `.png` image files.

---

## πŸ§ͺ Use Cases

- Vision model training (CNNs, Transformers, ViTs)
- Image filtering for web-scraped datasets
- Preprocessing for multimodal or OCR-based tasks
- Benchmarking classification models on mixed visual domains

---

## 🧠 Loading with πŸ€— Datasets

```python
from datasets import load_dataset

dataset = load_dataset("AbdulazizAlshamsi/VLM_Dataset_classification")

train = dataset["train"]
test = dataset["test"]
```

Each sample contains:

- `image`: the image data (a PIL object)
- `labels`: the class label (`charts`, `diagram`, `geometry`, `medical`, `ocr`, `random`, or `table`)

---

## πŸ“š Citation

If you use this dataset, please cite it as follows:

```bibtex
@misc{visionfiltering2025,
  title={Vision Filtering Dataset},
  author={Abdulaziz Alshamsi},
  year={2025},
  howpublished={\url{https://huggingface.co/datasets/AbdulazizAlshamsi/VLM_Dataset_classification}},
  note={Image classification dataset for visual filtering}
}
```

---

## πŸ™‹β€β™‚οΈ Author

Abdulaziz Alshamsi
AI Researcher β€” The University of Manchester
πŸ“§ abdulaziz.alshamsi@postgrad.manchester.ac.uk
πŸ”— LinkedIn

---

## ❀️ Contributions

Feel free to open issues or submit pull requests to improve the dataset!
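As a closing sketch of the filtering workflow this card targets: once a classifier is trained on these labels, filtering a scraped collection reduces to keeping images whose predicted class is informative. `predict_label` below is a hypothetical callable (any model trained on this dataset) mapping an image to one of the seven class names; the toy strings merely stand in for real images.

```python
# Keep an image only if its predicted class is informative (i.e., not `random`).
KEEP = {"charts", "diagram", "geometry", "medical", "ocr", "table"}

def filter_images(images, predict_label):
    """Return only the images whose predicted class is in KEEP.

    `predict_label` is a hypothetical classifier callable mapping an
    image to one of the seven class names defined by this dataset.
    """
    return [img for img in images if predict_label(img) in KEEP]

# Toy demonstration with string stand-ins for images.
toy = ["cat_photo", "bar_chart", "xray"]
toy_predict = {"cat_photo": "random", "bar_chart": "charts", "xray": "medical"}.get
print(filter_images(toy, toy_predict))  # ['bar_chart', 'xray']
```

Swapping `toy_predict` for a real model's inference function is the only change needed to run this over a scraped corpus.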