# indic-mALBERT-static-smooth-INT8-squad-v2

This model is a static SmoothQuant INT8-quantized version of indic-mALBERT-squad-v2 on the squad_v2 dataset. Note that Intel® Neural Compressor was used for the INT8 quantization.
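
Below is a minimal usage sketch, assuming the quantized model is hosted on the Hugging Face Hub and loaded through optimum-intel; the repository id uses a placeholder organization, which is an assumption not confirmed by this card.

```python
# Minimal sketch: load the INT8-quantized QA model via optimum-intel and run extractive QA.
# The organization in the repository id below is a placeholder -- replace it with the actual one.
from transformers import AutoTokenizer, pipeline
from optimum.intel import INCModelForQuestionAnswering

model_id = "<org>/indic-mALBERT-static-smooth-INT8-squad-v2"  # placeholder org

model = INCModelForQuestionAnswering.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Standard transformers question-answering pipeline over the quantized model.
qa = pipeline("question-answering", model=model, tokenizer=tokenizer)
result = qa(
    question="Which toolkit was used for quantization?",
    context="Intel Neural Compressor was used for INT8 quantization of the model.",
)
print(result)
```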
