AI & ML interests

Medical LLM | MLX | MLX-LM | Self-hosting | Finetuning | Conversational Agents

Recent Activity


div0-space posted an update 8 days ago
Hello Community!
I am experimenting with q5 quantization on mlx==0.26.2 and it is very promising! Our beloved CohereLabs/c4ai-command-a-03-2025, the model we use for our medical NLP work, goes from 200 GB+ in bfloat16 to ~70 GB as a q5 MLX quant with essentially no quality loss (LibraxisAI/c4ai-command-a-03-2025-q5-mlx).
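The size drop above follows directly from the bits-per-parameter math. A minimal sketch, assuming a ~111B parameter count for Command A (my assumption, not stated in the post) and the default mlx_lm group size of 64 with an fp16 scale and bias per group:

```python
# Rough quantized-size estimate (sketch). The 111B parameter count
# for Command A is an assumption, not taken from the post.
PARAMS = 111e9


def model_size_gb(bits_per_param: float) -> float:
    """Approximate on-disk size in GB for a given bit width."""
    return PARAMS * bits_per_param / 8 / 1e9


# bfloat16: 16 bits per parameter -> ~222 GB, matching "200 GB+"
bf16_gb = model_size_gb(16)

# q5 with group size 64: each group of 64 weights carries an extra
# fp16 scale and fp16 bias, i.e. 32 / 64 = 0.5 extra bits per weight
q5_gb = model_size_gb(5 + 32 / 64)  # ~76 GB, in line with "~70 GB"
```

The small gap between the estimate and the observed ~70 GB comes from non-quantized layers (embeddings, norms) and packing details, so treat this as a ballpark, not an exact accounting.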

The q5 and mixed quants arrived with the release of mlx>=0.26.0. This quant is not yet supported by LM Studio, and we haven't checked Ollama compatibility. For inference we currently use the native mlx_lm.chat or mlx_lm.server, which work perfectly with the Python mlx_lm generate pipeline. The repository contains the complete safetensors and config files, **plus** our everyday command-generation scripts: convert-to-mlx.sh, which wraps mlx_lm.convert with customizable parameters, and mlx-serve.sh for easy server setup. Enjoy!
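The convert-and-serve workflow described above can be sketched with the stock mlx_lm CLI. The model paths and port are illustrative; the wrapper scripts mentioned in the post (convert-to-mlx.sh, mlx-serve.sh) presumably parameterize something like this:

```shell
# Convert a Hugging Face checkpoint to a 5-bit MLX quant
# (requires mlx>=0.26.0 for q5 support).
mlx_lm.convert \
    --hf-path CohereLabs/c4ai-command-a-03-2025 \
    --mlx-path ./c4ai-command-a-03-2025-q5-mlx \
    -q --q-bits 5 --q-group-size 64

# Serve the quantized model over an OpenAI-compatible HTTP API
mlx_lm.server --model ./c4ai-command-a-03-2025-q5-mlx --port 8080

# Or chat with it interactively from the terminal
mlx_lm.chat --model ./c4ai-command-a-03-2025-q5-mlx
```

This is a sketch under the assumption that you have mlx-lm installed and enough unified memory for the ~70 GB quant; the scripts shipped in the repository are the authoritative version of these commands.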

@lmstudio-community @mlx-community
@bartowski