prudant/Qwen3-Reranker-0.6B-seq-cls-W8A8
This is a compressed version of tomaarsen/Qwen3-Reranker-0.6B-seq-cls, quantized with llm-compressor using the W8A8 scheme (int8 weights and int8 activations).
Serving
```bash
python3 -m vllm.entrypoints.openai.api_server \
    --model 'prudant/Qwen3-Reranker-0.6B-seq-cls-W8A8' \
    --task classify
```
Important: you must read the linked Guide for correct usage of this model.
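Once the server is running (vLLM listens on port 8000 by default), relevance scores can be requested over HTTP. The sketch below is a minimal example assuming vLLM's /score endpoint and its text_1/text_2 payload for sequence-classification rerankers; the exact endpoint, payload fields, and any required prompt template depend on the vLLM version and are covered in the Guide.

```python
# Minimal client sketch (assumption: vLLM's /score endpoint is available for
# this sequence-classification reranker; field names may differ by version).
import requests

BASE_URL = "http://localhost:8000"  # default vLLM server address
MODEL = "prudant/Qwen3-Reranker-0.6B-seq-cls-W8A8"

query = "What is the capital of France?"
documents = [
    "Paris is the capital and most populous city of France.",
    "The Great Wall of China is thousands of kilometres long.",
]

# One text_1 (the query) is scored against each text_2 (the documents).
response = requests.post(
    f"{BASE_URL}/score",
    json={"model": MODEL, "text_1": query, "text_2": documents},
    timeout=30,
)
response.raise_for_status()

# Each entry in "data" carries a relevance score for the corresponding document.
for doc, item in zip(documents, response.json()["data"]):
    print(f"{item['score']:.4f}  {doc}")
```

Higher scores indicate documents judged more relevant to the query.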
Model Details
- Original Model: tomaarsen/Qwen3-Reranker-0.6B-seq-cls
- Quantization Method: GPTQ
- Compression Libraries: llm-compressor (see the reproduction sketch below)
- Calibration Dataset: ultrachat_200k (2048 samples)
- Optimized For: Inference with vLLM
- License: same as original model
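The compression recipe itself is not included on this card. As a rough, hypothetical sketch only (not the exact recipe used here), a W8A8 GPTQ quantization of the original checkpoint with llm-compressor could look like the following; the import paths, the calibration preprocessing, the sequence length, and the ignore list for the classification head are assumptions that vary by llm-compressor version.

```python
# Hypothetical reproduction sketch -- NOT the exact recipe used for this model.
# Import paths and argument names vary across llm-compressor versions.
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

from llmcompressor import oneshot  # older versions: from llmcompressor.transformers import oneshot
from llmcompressor.modifiers.quantization import GPTQModifier

MODEL_ID = "tomaarsen/Qwen3-Reranker-0.6B-seq-cls"
NUM_CALIBRATION_SAMPLES = 2048  # matches the sample count stated above
MAX_SEQUENCE_LENGTH = 2048      # assumption; not stated on the card

model = AutoModelForSequenceClassification.from_pretrained(MODEL_ID, torch_dtype="auto")
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)

# Calibration data: a small slice of ultrachat_200k rendered to plain text, then tokenized.
ds = load_dataset("HuggingFaceH4/ultrachat_200k", split=f"train_sft[:{NUM_CALIBRATION_SAMPLES}]")
ds = ds.map(lambda ex: {"text": tokenizer.apply_chat_template(ex["messages"], tokenize=False)})

def tokenize(sample):
    return tokenizer(
        sample["text"],
        padding=False,
        max_length=MAX_SEQUENCE_LENGTH,
        truncation=True,
        add_special_tokens=False,
    )

ds = ds.map(tokenize, remove_columns=ds.column_names)

# W8A8: int8 weights quantized with GPTQ plus int8 activations.
# "score" (the classification head) is kept in full precision -- assumed layer name.
recipe = GPTQModifier(targets="Linear", scheme="W8A8", ignore=["score"])

oneshot(
    model=model,
    dataset=ds,
    recipe=recipe,
    max_seq_length=MAX_SEQUENCE_LENGTH,
    num_calibration_samples=NUM_CALIBRATION_SAMPLES,
)

# Save in compressed form so vLLM can load the quantized weights.
model.save_pretrained("Qwen3-Reranker-0.6B-seq-cls-W8A8", save_compressed=True)
tokenizer.save_pretrained("Qwen3-Reranker-0.6B-seq-cls-W8A8")
```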