PREVIEW RELEASE

This is a bfloat16 model, useful for fine-tuning and merging. For inference, use the 8-bit quantized version. For CPU-only testing, use the 4-bit GGUF version.
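
A minimal sketch of loading one of these GGUF builds locally with llama-cpp-python. The GGUF filename pattern below is an assumption, not confirmed by this card; check the repository's file listing for the actual name, and point `repo_id` at the 4-bit GGUF repository for CPU-only testing as recommended above.

```python
# Minimal sketch, assuming llama-cpp-python and huggingface_hub are installed.
# The filename glob is hypothetical -- verify it against the repo's file list.
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="CycloneDX/cdx1-gguf-BF16-GGUF",  # this (bfloat16) repo; use the 4-bit GGUF repo for CPU-only testing
    filename="*bf16*.gguf",                   # hypothetical filename pattern
    n_ctx=4096,                               # context window; lower it to reduce RAM use
)

result = llm("List the top-level fields of a CycloneDX BOM document.", max_tokens=256)
print(result["choices"][0]["text"])
```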

Model size: 14.7B params
Architecture: llama
Format: GGUF, 16-bit

Model tree for CycloneDX/cdx1-gguf-BF16-GGUF

Base model: microsoft/phi-4
Finetuned: unsloth/phi-4
Quantized: 27 versions, including this model
