# 📦 T-Rex-mini – GGUF IQ4_XS (Quantized)

This is a quantized GGUF version of saturated-labs/T-Rex-mini, converted with llama.cpp to the IQ4_XS format.
## 🔧 Quantization Details
- Original Model: saturated-labs/T-Rex-mini
- Format: GGUF (`.gguf`)
- Quantization Type: IQ4_XS
- Tool Used: llama.cpp
- Command: `./llama-quantize.exe trex-mini-f16.gguf trex-mini-iq4_xs.gguf iq4_xs`
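For reference, the f16 input file used in the command above is typically produced with llama.cpp's `convert_hf_to_gguf.py` script. The sketch below shows the assumed end-to-end pipeline; the local directory name `./T-Rex-mini` and the output filenames are illustrative, and only the final quantize command is taken from this card.

```bash
# Assumed pipeline (sketch): convert the Hugging Face checkpoint to f16 GGUF, then quantize.

# 1. Convert the downloaded Hugging Face model directory to an f16 GGUF file.
python convert_hf_to_gguf.py ./T-Rex-mini --outfile trex-mini-f16.gguf --outtype f16

# 2. Quantize the f16 GGUF to IQ4_XS (command from this card).
./llama-quantize.exe trex-mini-f16.gguf trex-mini-iq4_xs.gguf iq4_xs
```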
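To try the quantized file, it can be loaded with llama.cpp's command-line tool. A minimal sketch, assuming the `.gguf` file has been downloaded locally and a recent llama.cpp build is available; the prompt and sampling settings are illustrative only:

```bash
# Minimal llama.cpp inference sketch (assumed local path and example settings).
./llama-cli -m ./trex-mini-iq4_xs.gguf \
  -p "Write a short story about a tiny T-Rex." \
  -n 256 \
  --temp 0.8
```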