gguf version please?
#1 by Narutoouz - opened
Great model for edge applications!
Thanks for your interest. It's coming!
Also interested in the GGUF files.
I tried to convert it with Liquid's fork of llama.cpp (from https://github.com/Liquid4All/liquid_llama.cpp):
python liquid_llama.cpp/convert_hf_to_gguf.py LFM2-350M --outfile LFM2-350M.gguf
but I'm getting this error:
INFO:hf-to-gguf:Loading model: LFM2-350M
INFO:hf-to-gguf:Model architecture: LFM2ForCausalLM
ERROR:hf-to-gguf:Model LFM2ForCausalLM is not supported
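In case it helps while waiting for the official release: a minimal sketch of retrying the conversion against upstream llama.cpp instead of the fork, on the assumption that LFM2 support has since been merged there. The paths and the Q4_K_M quantization type are just placeholders.

# clone and set up upstream llama.cpp (assumes LFM2ForCausalLM support has landed there)
git clone https://github.com/ggml-org/llama.cpp
pip install -r llama.cpp/requirements.txt

# convert the HF checkpoint to GGUF; paths are placeholders for your local layout
python llama.cpp/convert_hf_to_gguf.py LFM2-350M --outfile LFM2-350M.gguf

# optionally quantize for edge deployment (requires the llama-quantize binary built via cmake)
./llama.cpp/build/bin/llama-quantize LFM2-350M.gguf LFM2-350M-Q4_K_M.gguf Q4_K_M

If upstream still reports "Model LFM2ForCausalLM is not supported", the architecture likely hasn't been added to convert_hf_to_gguf.py yet and we'll need to wait for the official GGUF files.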