yasserrmd/phi-4-gguf
This model was converted to GGUF format from microsoft/phi-4 using llama.cpp via Convert Model to GGUF.
Key Features:
- Quantized for reduced file size (GGUF format)
- Optimized for use with llama.cpp
- Compatible with llama-server for efficient serving
Refer to the original model card for more details on the base model.
Usage with llama.cpp
1. Install llama.cpp:
brew install llama.cpp # For macOS/Linux
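If Homebrew is not available, llama.cpp can also be built from source. A minimal sketch using the standard CMake build (see the llama.cpp repository for platform-specific flags such as GPU backends):
git clone https://github.com/ggerganov/llama.cpp  # clone the repository
cd llama.cpp
cmake -B build                                    # configure
cmake --build build --config Release              # binaries land in build/bin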
2. Run Inference:
CLI:
llama-cli --hf-repo yasserrmd/phi-4-gguf --hf-file phi-4.q2_k.gguf -p "Your prompt here"
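Additional generation options can be appended; the flag values below are illustrative, not tuned recommendations:
llama-cli --hf-repo yasserrmd/phi-4-gguf --hf-file phi-4.q2_k.gguf -p "Your prompt here" -c 4096 -n 256  # -c sets context size, -n caps generated tokens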
Server:
llama-server --hf-repo yasserrmd/phi-4-gguf --hf-file phi-4.q2_k.gguf -c 2048
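Once the server is running (it listens on localhost:8080 by default), it exposes an OpenAI-compatible chat completions endpoint. A minimal sketch of a request, with the prompt and max_tokens as placeholders:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Your prompt here"}], "max_tokens": 128}'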
For more advanced usage, refer to the llama.cpp repository.