CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF

This model was converted to GGUF format from microsoft/Phi-3-mini-128k-instruct using llama.cpp via ggml.ai's GGUF-my-repo space. Refer to the original model card for more details on the model.
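A common way to run a GGUF quantization like this one is with llama.cpp's CLI tools. The sketch below is a minimal example, assuming the quantized file in the repo is named phi-3-mini-128k-instruct-q6_k.gguf (inferred from the repo name and quant type, not confirmed):

```shell
# Install llama.cpp (Homebrew shown; building from source also works)
brew install llama.cpp

# One-off generation: downloads the GGUF from the Hub on first use.
# NOTE: the --hf-file name below is an assumption based on the repo name.
llama-cli \
  --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF \
  --hf-file phi-3-mini-128k-instruct-q6_k.gguf \
  -p "Explain the difference between a process and a thread."

# Or serve an OpenAI-compatible endpoint; -c sets the context length.
llama-server \
  --hf-repo CHE-72/Phi-3-mini-128k-instruct-Q6_K-GGUF \
  --hf-file phi-3-mini-128k-instruct-q6_k.gguf \
  -c 4096
```

The 128k context of the base model is not used by default; raise `-c` (at a memory cost) if you need long-context inference.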

Format: GGUF
Model size: 14B params
Architecture: phi3


Model tree for CHE-72-ZLab/Microsoft-Phi3-14B-Instruct128K-GGUF: this model is one of 55 quantized versions of the base model.