I've been running it locally using Ollama and Open WebUI, and thought people might be interested in .gguf quantized versions that can run on consumer GPUs.

I'm not sure where best to include GPU requirements like these:

## GGUF quantized versions (2025-06-07)

| Quant | File | Size | Min VRAM (4K context) |
|-------|------|------|-----------------------|
| Q4_K_M | `qwen25.q4_k_m.gguf` | 9 GB | 12 GB |
| Q6_K   | `qwen25.q6_k.gguf`    | 12 GB | 16 GB |
| Q8_0   | `qwen25.q8_0.gguf`   | 15 GB | 17 GB |

*Quantized with `llama.cpp` (`llama-quantize`); inherits the Apache-2.0 license.*
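
In case anyone wants to reproduce the files, this is roughly the process; a minimal sketch, assuming `llama-quantize` from a local `llama.cpp` build is on your PATH and an f16 GGUF already exists (the filename `qwen25.f16.gguf` below is just a placeholder):

```python
# Sketch: re-create the quantized GGUFs from an f16 GGUF using llama.cpp's
# llama-quantize. Paths and filenames are placeholders; adjust to your setup.
import subprocess

F16_MODEL = "qwen25.f16.gguf"  # f16 GGUF produced beforehand with llama.cpp's convert_hf_to_gguf.py

# Quant type -> output filename (matching the table above)
quants = {
    "Q4_K_M": "qwen25.q4_k_m.gguf",
    "Q6_K": "qwen25.q6_k.gguf",
    "Q8_0": "qwen25.q8_0.gguf",
}

for quant_type, out_file in quants.items():
    # llama-quantize usage: llama-quantize <input.gguf> <output.gguf> <type>
    subprocess.run(["llama-quantize", F16_MODEL, out_file, quant_type], check=True)
```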

Feel free to disregard the PR if you think it's not needed. I wasn't sure where else I could upload this, and creating another repo just for the quantizations seemed redundant. I could also PR the .gguf of the full f16 model if you would like.

These quantized models, as well as the .gguf version of the f16 model, are available in my Ollama repo if people want to pull them from there straight into Open WebUI: https://ollama.com/MarcoBarroca/qwen25-qiskit/tags
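
If you'd rather load one of the files directly instead of pulling through Ollama, something along these lines works with `llama-cpp-python` (a sketch, not part of this PR; the filename, context size, and prompt are just examples):

```python
# Sketch: run one of the quantized GGUFs directly with llama-cpp-python.
# Filename, context size, and prompt are examples; adjust to your hardware.
from llama_cpp import Llama

llm = Llama(
    model_path="qwen25.q4_k_m.gguf",  # any of the quants from the table above
    n_ctx=4096,        # matches the 4K-context VRAM figures in the table
    n_gpu_layers=-1,   # offload all layers to the GPU if they fit
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Write a Qiskit circuit that prepares a Bell state."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```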

MarcoBarroca changed pull request status to open
