Quantized version of: EpistemeAI/DeepPhi-3.5-mini-instruct
'Make knowledge free for everyone'
Available quantizations:
- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
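
The GGUF files can be run with any llama.cpp-compatible runtime. Below is a minimal sketch using llama-cpp-python; the repo ID comes from this card, but the exact `.gguf` filename (here a 4-bit `Q4_K_M` glob) is an assumption, so check the repository's file list and adjust the pattern to the quantization level you want.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed.
# The filename glob below is an assumption; pick any quantization level listed above.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="DevQuasar/EpistemeAI.DeepPhi-3.5-mini-instruct-GGUF",
    filename="*Q4_K_M.gguf",  # assumed 4-bit file name pattern in the repo
    n_ctx=4096,               # context window; increase if you have the memory
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain quantization in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```

Lower-bit quantizations trade some accuracy for a smaller memory footprint; the 8-bit and 16-bit files stay closest to the original model.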
Model tree for DevQuasar/EpistemeAI.DeepPhi-3.5-mini-instruct-GGUF
- Base model: EpistemeAI/DeepPhi-3.5-mini-instruct