Construction-Hazard-Detection-YOLO11

YOLO11-based models for construction-site hazard detection. These models detect:

  • Workers without helmets and/or safety vests
  • Workers near machinery or vehicles
  • Workers in restricted areas (derived from safety cone clustering)
  • Machinery/vehicles near utility poles

This repository provides ready-to-use weights in PyTorch (.pt) and ONNX (.onnx) formats, a demo image, and the class label mapping for easy integration.

πŸ‘‰ For the full end-to-end system (APIs, web UI, training, evaluation, data tools), see the main project: https://github.com/yihong1120/Construction-Hazard-Detection

Demo image: data/examples/demo.jpg

Labels

Index-to-name mapping used across all provided models (also in class_names.txt):

0: Hardhat
1: Mask
2: NO-Hardhat
3: NO-Mask
4: NO-Safety Vest
5: Person
6: Safety Cone
7: Safety Vest
8: Machinery
9: Utility Pole
10: Vehicle
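
A minimal loading sketch (assuming class_names.txt stores one "index: name" pair per line, matching the list above):

# Hedged sketch: parse class_names.txt into an index -> name dict.
# Assumes one "index: name" pair per line, as in the mapping above.
class_names = {}
with open("class_names.txt", encoding="utf-8") as f:
    for line in f:
        line = line.strip()
        if line:
            idx, name = line.split(":", 1)
            class_names[int(idx)] = name.strip()

print(class_names[2])  # "NO-Hardhat"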

Available models

  • PyTorch (Ultralytics):
    • models/pt/best_yolo11n.pt
    • models/pt/best_yolo11s.pt
    • models/pt/best_yolo11m.pt
    • models/pt/best_yolo11l.pt
    • models/pt/best_yolo11x.pt
  • ONNX:
    • models/onnx/best_yolo11n.onnx
    • models/onnx/best_yolo11s.onnx
    • models/onnx/best_yolo11m.onnx
    • models/onnx/best_yolo11l.onnx
    • models/onnx/best_yolo11x.onnx

Large binaries are tracked with Git LFS.

Quick start

A) Ultralytics (PyTorch)

from ultralytics import YOLO

# Load a model (choose the variant that fits your needs)
model = YOLO("models/pt/best_yolo11x.pt")

# Inference on the demo image
results = model("data/examples/demo.jpg", imgsz=640, conf=0.25)

# Parse results (first image)
res = results[0]
boxes = res.boxes  # xyxy, confidence, class
for xyxy, conf, cls_id in zip(boxes.xyxy.tolist(), boxes.conf.tolist(), boxes.cls.tolist()):
    print(xyxy, conf, int(cls_id))

CLI option:

yolo predict model=models/pt/best_yolo11x.pt source=data/examples/demo.jpg imgsz=640 conf=0.25

B) ONNX Runtime

import cv2
import numpy as np
import onnxruntime as ort

# Load and preprocess the image to 640x640 (plain resize here, no letterboxing, so aspect ratio is not preserved)
img = cv2.imread("data/examples/demo.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
size = 640
inp = cv2.resize(img_rgb, (size, size)).astype(np.float32) / 255.0
inp = np.ascontiguousarray(np.transpose(inp, (2, 0, 1))[None, ...])  # 1x3x640x640, C-contiguous for ONNX Runtime

# Run ONNX model
session = ort.InferenceSession("models/onnx/best_yolo11x.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name
outputs = session.run(None, {input_name: inp})

pred = outputs[0]  # Typically (1, 4 + num_classes, N) for Ultralytics exports, e.g. (1, 15, 8400) at 640
print(pred.shape)

Post-processing (NMS, scaling back to original image) follows standard Ultralytics/YOLO routines.
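
The snippet below is a minimal decoding sketch, not the project's official post-processing. It assumes the common Ultralytics export layout of (1, 4 + num_classes, N) with (cx, cy, w, h) boxes in input-pixel coordinates, plus the plain-resize preprocessing used above; the postprocess helper and its thresholds are illustrative only.

import cv2
import numpy as np

def postprocess(pred, orig_shape, size=640, conf_thres=0.25, iou_thres=0.45):
    """Decode a (1, 4 + num_classes, N) output into xyxy boxes on the original image."""
    p = np.squeeze(pred, axis=0)       # (4 + num_classes, N)
    boxes_cxcywh = p[:4].T             # (N, 4): cx, cy, w, h in 640x640 input space
    scores = p[4:].T                   # (N, num_classes)
    cls_ids = scores.argmax(axis=1)
    confs = scores.max(axis=1)

    keep = confs >= conf_thres
    boxes_cxcywh, confs, cls_ids = boxes_cxcywh[keep], confs[keep], cls_ids[keep]

    # cx, cy, w, h -> x1, y1, x2, y2
    xyxy = np.empty_like(boxes_cxcywh)
    xyxy[:, 0] = boxes_cxcywh[:, 0] - boxes_cxcywh[:, 2] / 2
    xyxy[:, 1] = boxes_cxcywh[:, 1] - boxes_cxcywh[:, 3] / 2
    xyxy[:, 2] = boxes_cxcywh[:, 0] + boxes_cxcywh[:, 2] / 2
    xyxy[:, 3] = boxes_cxcywh[:, 1] + boxes_cxcywh[:, 3] / 2

    # Undo the plain resize (no letterbox) back to the original image size
    h0, w0 = orig_shape[:2]
    xyxy[:, [0, 2]] *= w0 / size
    xyxy[:, [1, 3]] *= h0 / size

    # Class-agnostic NMS via OpenCV (expects x, y, w, h boxes); per-class NMS is also common
    xywh = xyxy.copy()
    xywh[:, 2:] -= xywh[:, :2]
    idx = cv2.dnn.NMSBoxes(xywh.tolist(), confs.tolist(), conf_thres, iou_thres)
    idx = np.array(idx).flatten() if len(idx) else np.array([], dtype=int)
    return xyxy[idx], confs[idx], cls_ids[idx]

# Continuing from the ONNX snippet above (pred, img):
boxes, confs, cls_ids = postprocess(pred, img.shape)
for b, c, k in zip(boxes, confs, cls_ids):
    print(b.round(1).tolist(), round(float(c), 3), int(k))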

File structure

.
β”œβ”€ README.md
β”œβ”€ LICENSE
β”œβ”€ models/
β”‚  β”œβ”€ pt/
β”‚  β”‚  β”œβ”€ best_yolo11n.pt
β”‚  β”‚  β”œβ”€ best_yolo11s.pt
β”‚  β”‚  β”œβ”€ best_yolo11m.pt
β”‚  β”‚  β”œβ”€ best_yolo11l.pt
β”‚  β”‚  └─ best_yolo11x.pt
β”‚  └─ onnx/
β”‚     β”œβ”€ best_yolo11n.onnx
β”‚     β”œβ”€ best_yolo11s.onnx
β”‚     β”œβ”€ best_yolo11m.onnx
β”‚     β”œβ”€ best_yolo11l.onnx
β”‚     └─ best_yolo11x.onnx
β”œβ”€ data/
β”‚  └─ examples/
β”‚     └─ demo.jpg
└─ class_names.txt

Intended use and limitations

  • Intended for research and prototyping in construction safety monitoring.
  • Performance depends on camera viewpoint, lighting, occlusion, and domain gap.
  • For production, evaluate thoroughly on your target environment and consider rule-based filters and tracking.
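
As a minimal illustration of the last point, here is a hypothetical rule-based filter (not the main project's logic) that flags missing protective equipment from the class IDs above:

# Hypothetical rule: flag a frame if any "violation" class is detected with
# sufficient confidence. Class IDs follow the mapping above; the threshold is
# illustrative, not a tuned project value.
VIOLATION_CLASSES = {2: "NO-Hardhat", 3: "NO-Mask", 4: "NO-Safety Vest"}

def flag_violations(cls_ids, confs, min_conf=0.5):
    """Return the violation labels present among the detections."""
    return sorted({
        VIOLATION_CLASSES[int(c)]
        for c, s in zip(cls_ids, confs)
        if int(c) in VIOLATION_CLASSES and s >= min_conf
    })

# With the Ultralytics results from the quick start:
# print(flag_violations(res.boxes.cls.tolist(), res.boxes.conf.tolist()))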

Acknowledgements and sources

These weights come from the main Construction-Hazard-Detection project linked above; see that repository for the training, evaluation, and data tools.

License

This repository is distributed under the AGPL-3.0 license. See LICENSE for details and ensure compliance, especially for networked deployments.
