Quantization: ExLlamaV2 (ExL2) at 6 bits per weight.

Overview

This is an ExLlamaV2 (ExL2) 6 bpw quantized version of microsoft/phi-4.
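One way to run this quant locally is with the exllamav2 library itself. Below is a minimal sketch, assuming you have downloaded this repo's files to a local directory; the model_dir path and the prompt are placeholders.

```python
# Minimal exllamav2 load/generate sketch (paths are placeholders).
from exllamav2 import ExLlamaV2, ExLlamaV2Config, ExLlamaV2Cache, ExLlamaV2Tokenizer
from exllamav2.generator import ExLlamaV2DynamicGenerator

model_dir = "/path/to/phi-4-exl2-6.0bpw"  # local copy of this repo's files
config = ExLlamaV2Config(model_dir)
model = ExLlamaV2(config)

# A lazy cache lets load_autosplit size the KV cache while splitting across GPUs.
cache = ExLlamaV2Cache(model, lazy=True)
model.load_autosplit(cache)

tokenizer = ExLlamaV2Tokenizer(config)
generator = ExLlamaV2DynamicGenerator(model=model, cache=cache, tokenizer=tokenizer)

print(generator.generate(prompt="Explain quantization in one sentence.", max_new_tokens=64))
```

ExL2 quants also load in ExLlamaV2-based frontends such as TabbyAPI and text-generation-webui.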

Quantization By

I often have idle A100 GPUs while building and testing the RolePlai app, so I put them to use quantizing models.
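For anyone curious how these are made: a 6.0 bpw ExL2 quant is typically produced with ExLlamaV2's convert.py script, as in the sketch below. The paths are placeholders, not the exact command used for this repo.

```bash
# Sketch only: all paths are placeholders.
# -i: original FP16 model, -o: scratch dir for the measurement pass,
# -cf: output dir for the compiled quant, -b: target average bits per weight.
python convert.py \
    -i /path/to/phi-4 \
    -o /path/to/workdir \
    -cf /path/to/phi-4-exl2-6.0bpw \
    -b 6.0
```

The -b value is an average: the converter first runs a measurement pass, then mixes per-layer precisions to hit the requested bits per weight.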

I hope the community finds these quantizations useful.

Andrew Webby @ RolePlai
