# mamba-2.8b-GGUF

Quantized mamba-2.8b models, produced with recent versions of llama.cpp.

- **Format:** GGUF
- **Model size:** 2.77B params
- **Architecture:** mamba

Available quantizations:

- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
- 16-bit
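
To run one of these files locally, a minimal sketch using `huggingface_hub` and the `llama-cpp-python` bindings might look like the following. The filename below is an assumption for illustration; check the repository's file listing for the actual quant names.

```python
# Minimal usage sketch: download one quant from this repo and run it with
# llama-cpp-python (Python bindings over llama.cpp, which supports the
# mamba architecture in recent builds).
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch a single quantized file; the filename here is hypothetical.
model_path = hf_hub_download(
    repo_id="jpodivin/mamba-2.8b-hf-GGUF",
    filename="mamba-2.8b-hf.Q4_K_M.gguf",  # hypothetical filename
)

# Load the GGUF file and generate a short completion.
llm = Llama(model_path=model_path, n_ctx=2048)
out = llm("Mamba is a state-space model that", max_tokens=64)
print(out["choices"][0]["text"])
```

Lower-bit quants trade accuracy for a smaller footprint; the 4-bit and 5-bit files are a common middle ground, while 8-bit and 16-bit stay closest to the original weights.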


This repository (jpodivin/mamba-2.8b-hf-GGUF) contains 11 quantized variants of mamba-2.8b-hf.