LFM2-350M • Quantized Version (GGUF)

Quantized GGUF version of the LiquidAI/LFM2-350M model.

  • ✅ Format: GGUF
  • ✅ Use with: liquid_llama.cpp
  • ✅ Supported precisions: Q4_0, Q4_K, etc.

Download

wget https://huggingface.co/yasserrmd/LFM2-350M-gguf/resolve/main/lfm2-350m.Q4_K.gguf

(Adjust the filename for other quant formats such as Q4_0, if available.)
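
If other precisions are published in this repo, the same URL pattern should apply; the Q4_0 filename below is an assumption, not a confirmed upload:

# hypothetical Q4_0 file; swap the suffix for whichever quant is actually published
wget https://huggingface.co/yasserrmd/LFM2-350M-gguf/resolve/main/lfm2-350m.Q4_0.gguf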

Notes

  • Only compatible with liquid_llama.cpp (not llama.cpp); see the run sketch after this list.
  • Replace Q4_K with your chosen quant version.
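
To run the downloaded file, the sketch below assumes liquid_llama.cpp mirrors upstream llama.cpp's llama-cli interface (-m for the model path, -p for a prompt, -n for the number of tokens to generate); the binary name and flags are assumptions, since this card does not document them:

# assumed llama.cpp-style invocation; binary name and flags are unverified
./llama-cli -m lfm2-350m.Q4_K.gguf -p "Hello, LFM2!" -n 64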

Model details

  • Model size: 354M params
  • Architecture: lfm2
  • Base model: LiquidAI/LFM2-350M