arxiv:2507.16782

Task-Specific Zero-shot Quantization-Aware Training for Object Detection

Published on Jul 22
· Submitted by lichangh20 on Jul 23
Abstract

Quantization is a key technique to reduce network size and computational complexity by representing the network parameters with a lower precision. Traditional quantization methods rely on access to original training data, which is often restricted due to privacy concerns or security challenges. Zero-shot Quantization (ZSQ) addresses this by using synthetic data generated from pre-trained models, eliminating the need for real training data. Recently, ZSQ has been extended to object detection. However, existing methods use unlabeled task-agnostic synthetic images that lack the specific information required for object detection, leading to suboptimal performance. In this paper, we propose a novel task-specific ZSQ framework for object detection networks, which consists of two main stages. First, we introduce a bounding box and category sampling strategy to synthesize a task-specific calibration set from the pre-trained network, reconstructing object locations, sizes, and category distributions without any prior knowledge. Second, we integrate task-specific training into the knowledge distillation process to restore the performance of quantized detection networks. Extensive experiments conducted on the MS-COCO and Pascal VOC datasets demonstrate the efficiency and state-of-the-art performance of our method. Our code is publicly available at: https://github.com/DFQ-Dojo/dfq-toolkit .
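The first stage described above (sampling bounding boxes and categories to build a task-specific calibration set) can be illustrated with a minimal sketch. Everything here is an illustrative assumption, not the paper's actual implementation: the category prior, box-size ranges, and function names are hypothetical, standing in for statistics the method would reconstruct from the pre-trained detector.

```python
import random

def sample_calibration_targets(num_boxes, category_probs, img_size=640, seed=0):
    """Sketch: sample (category, bbox) targets for one synthetic image.

    category_probs: assumed per-class prior (hypothetical here; in the
    paper this information is reconstructed from the pre-trained network).
    Boxes are (x1, y1, x2, y2) in pixels, clipped to the image bounds.
    """
    rng = random.Random(seed)
    classes = list(category_probs)
    weights = [category_probs[c] for c in classes]
    targets = []
    for _ in range(num_boxes):
        # Draw a category according to the assumed class distribution.
        cls = rng.choices(classes, weights=weights, k=1)[0]
        # Sample a box center and size, then convert to corner coordinates.
        cx, cy = rng.uniform(0, img_size), rng.uniform(0, img_size)
        w, h = rng.uniform(16, img_size / 2), rng.uniform(16, img_size / 2)
        x1, y1 = max(0.0, cx - w / 2), max(0.0, cy - h / 2)
        x2 = min(float(img_size), cx + w / 2)
        y2 = min(float(img_size), cy + h / 2)
        targets.append((cls, (x1, y1, x2, y2)))
    return targets
```

In a full pipeline, synthetic images would then be optimized so the pre-trained detector predicts these sampled targets, and the resulting calibration set would drive the distillation-based QAT of the second stage.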

Community

Paper author Paper submitter

🚀 Unlock Zero-Shot QAT Like Never Before!
We present a powerful plug-and-play Zero-Shot QAT framework that seamlessly supports four leading object detection backbones: YOLOv5, YOLO11, Mask R-CNN, and ViT. Whether you're a researcher pushing the boundaries or a practitioner building real-world applications, this toolkit is your launchpad to rapid, calibration-free quantization!

🔗 Code: https://github.com/DFQ-Dojo/dfq-toolkit
🌐 Project Page: https://dfq-dojo.github.io/dfq-toolkit-webpage

