fushh7/llmdet_swin_tiny_hf (Quantized)

Description

This model is a quantized version of the original model fushh7/llmdet_swin_tiny_hf.

It was quantized to 4-bit with the BitsAndBytes library via the bnb-my-repo space.

Quantization Details

  • Quantization Type: int4
  • bnb_4bit_quant_type: nf4
  • bnb_4bit_use_double_quant: True
  • bnb_4bit_compute_dtype: bfloat16
  • bnb_4bit_quant_storage: uint8

📄 Original Model Information

This is the huggingface version of LLMDet (CVPR2025 Highlight).

Please refer to our GitHub repository for more details.

If you find our work helpful for your research, please consider citing our paper.

@article{fu2025llmdet,
  title={LLMDet: Learning Strong Open-Vocabulary Object Detectors under the Supervision of Large Language Models},
  author={Fu, Shenghao and Yang, Qize and Mo, Qijie and Yan, Junkai and Wei, Xihan and Meng, Jingke and Xie, Xiaohua and Zheng, Wei-Shi},
  journal={arXiv preprint arXiv:2501.18954},
  year={2025}
}
Safetensors

  • Model size: 102M params
  • Tensor types: I64, F32, U8
