# M-Hood Dataset: Out-of-Distribution Evaluation Collection

This dataset collection contains out-of-distribution (OOD) image datasets curated for evaluating the robustness of object detection models, particularly models trained to mitigate hallucination on out-of-distribution data.
## Purpose
These datasets are designed to test how well object detection models perform when encountering images that differ from their training distribution. They are particularly useful for:
- Evaluating model robustness on out-of-distribution data
- Testing hallucination mitigation techniques
- Benchmarking domain adaptation capabilities
- Research on robust object detection
## Dataset Overview
| Dataset | Images | Size | Description | Domain |
|---|---|---|---|---|
| far-ood | 1,000 | 278 MB | Far out-of-distribution images significantly different from the training domains | General OOD |
| near-ood-bdd | 1,010 | 337 MB | Near-OOD images related to the BDD 100K driving domain | Autonomous driving |
| near-ood-voc | 1,020 | 318 MB | Near-OOD images related to the Pascal VOC object classes | General objects |
## Dataset Structure

```
m-hood-dataset/
├── far-ood/
│   ├── 8a2b026a6c3d5ee2.jpg
│   ├── 5ec941c27b5a6c2f.jpg
│   └── ... (1,000 images)
├── near-ood-bdd/
│   ├── [image files]
│   └── ... (1,010 images)
└── near-ood-voc/
    ├── [image files]
    └── ... (1,020 images)
```
## Dataset Details
### Far-OOD Dataset
- Images: 1,000 high-quality images
- Size: 278MB
- Characteristics: Images significantly different from typical object detection training domains
- Use Case: Testing extreme out-of-distribution robustness
### Near-OOD-BDD Dataset
- Images: 1,010 high-quality images
- Size: 337MB
- Domain: Related to autonomous driving (BDD 100K-adjacent)
- Characteristics: Images similar to but distinct from BDD 100K training distribution
- Use Case: Testing domain shift robustness in autonomous driving scenarios
### Near-OOD-VOC Dataset
- Images: 1,020 high-quality images
- Size: 318MB
- Domain: Related to Pascal VOC object classes
- Characteristics: Images similar to but distinct from Pascal VOC training distribution
- Use Case: Testing domain shift robustness for general object detection
## Usage

### Loading with Hugging Face Datasets

```python
from datasets import load_dataset

# Load the entire dataset collection
dataset = load_dataset("HugoHE/m-hood-dataset")

# Access individual subsets
far_ood = dataset["far-ood"]
near_ood_bdd = dataset["near-ood-bdd"]
near_ood_voc = dataset["near-ood-voc"]
```
### Direct Download

You can also download specific subsets directly:

```python
from huggingface_hub import snapshot_download

# Download a specific subset
snapshot_download(
    repo_id="HugoHE/m-hood-dataset",
    repo_type="dataset",
    local_dir="./datasets",
    allow_patterns="far-ood/*",  # or "near-ood-bdd/*" or "near-ood-voc/*"
)
```
### Evaluation Example

```python
import os

from ultralytics import YOLO

# Load your trained model
model = YOLO("path/to/your/model.pt")

# Run inference on the far-ood dataset
far_ood_dir = "path/to/far-ood"
results = []
for img_file in os.listdir(far_ood_dir):
    if img_file.lower().endswith(".jpg"):
        img_path = os.path.join(far_ood_dir, img_file)
        result = model(img_path)
        results.append(result)

# Analyze results for hallucinated/false-positive detections
```
## Research Applications
This dataset collection is particularly valuable for research in:
- Out-of-distribution detection
- Hallucination mitigation in object detection
- Domain adaptation and transfer learning
- Robust computer vision systems
- Autonomous driving perception robustness
- General object detection robustness
## Evaluation Metrics
When using these datasets for evaluation, consider these metrics:
- False Positive Rate (FPR): Rate of hallucinated detections
- Confidence Calibration: How well confidence scores reflect actual accuracy
- Detection Consistency: Consistency of detections across similar OOD images
- Domain Shift Sensitivity: Performance degradation compared to in-distribution data
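Since purely out-of-distribution images contain no in-distribution objects, any detection on them can be treated as a hallucinated false positive, and the FPR above reduces to the fraction of images with at least one detection above a confidence threshold. A minimal sketch of that computation (the per-image confidence scores below are illustrative placeholders, not real model output):

```python
def ood_false_positive_rate(per_image_scores, conf_threshold=0.5):
    """Fraction of OOD images with at least one detection above the threshold.

    per_image_scores: one list of detection confidences per image.
    On OOD images, every detection above the threshold counts as a hallucination.
    """
    if not per_image_scores:
        return 0.0
    flagged = sum(
        1 for scores in per_image_scores
        if any(s >= conf_threshold for s in scores)
    )
    return flagged / len(per_image_scores)

# Illustrative scores for four OOD images; only the first crosses the threshold:
scores = [[0.91, 0.42], [0.30], [], []]
print(ood_false_positive_rate(scores, conf_threshold=0.5))  # 0.25
```

Sweeping `conf_threshold` over a range of values also gives a simple view of confidence calibration on OOD data.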
## Related Models
This dataset collection is designed to work with the M-Hood model collection available at:
- Repository: HugoHE/m-hood
- Models: YOLOv10 and Faster R-CNN variants trained on BDD 100K, Pascal VOC, and KITTI
- Fine-tuned variants: Specifically trained to mitigate hallucination on OOD data
## Citation

If you use this dataset collection in your research, please cite:

```bibtex
@dataset{mhood_ood_dataset,
  title        = {M-Hood Dataset: Out-of-Distribution Evaluation Collection for Object Detection},
  author       = {[Your Name]},
  year         = {2025},
  howpublished = {\url{https://huggingface.co/datasets/HugoHE/m-hood-dataset}}
}
```
## License
This dataset collection is released under the MIT License.
## Keywords
Out-of-Distribution, OOD, Object Detection, Computer Vision, Robustness Evaluation, Hallucination Mitigation, BDD 100K, Pascal VOC, Domain Adaptation, Model Evaluation.