Model Summary

This repository hosts quantized versions of the Phi-3.5-mini-instruct model.

Format: GGUF
Converter: llama.cpp 2f3c1466ff46a2413b0e363a5005c46538186ee6
Quantizer: LM-Kit.NET 2024.8.2

For more detailed information, please refer to the base model, Phi-3.5-mini-instruct.
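Since the files are in GGUF format, they can be loaded with any llama.cpp-compatible runtime. Below is a minimal sketch using the llama-cpp-python bindings; these bindings and the file name shown are assumptions for illustration (substitute the actual GGUF file you download from this repository).

```python
# Minimal sketch: load a GGUF quant with llama-cpp-python and run one chat turn.
# The filename below is a hypothetical placeholder, not a file guaranteed by this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Phi-3.5-mini-instruct-Q4_K_M.gguf",  # placeholder 4-bit quant filename
    n_ctx=4096,        # context window to allocate
    n_gpu_layers=-1,   # offload all layers to GPU if a GPU-enabled build is installed
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize what GGUF quantization does."}],
    max_tokens=128,
)
print(response["choices"][0]["message"]["content"])
```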

Model size: 3.82B params
Architecture: phi3

Quantization levels: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
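As a rough guide to choosing among these levels, the weight memory of an N-bit quantization of a 3.82B-parameter model can be approximated as params × bits / 8 bytes. The sketch below is only an illustration of that arithmetic; it ignores GGUF metadata and the mixed tensor precisions used by k-quants, so actual file sizes will differ somewhat.

```python
# Rough weight-memory estimate per quantization level for a 3.82B-parameter model.
# Illustrative only: real GGUF files mix tensor precisions and carry metadata.
PARAMS = 3.82e9

for bits in (2, 3, 4, 5, 6, 8, 16):
    gib = PARAMS * bits / 8 / 1024**3  # bytes -> GiB
    print(f"{bits:>2}-bit: ~{gib:.2f} GiB")
```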
