Quantized version of: Menlo/Jan-nano

'Make knowledge free for everyone'

Made with ❤️ · Buy Me a Coffee at ko-fi.com

Format: GGUF
Model size: 4.02B params
Architecture: qwen3
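To fetch a single quantization from this repo, the sketch below uses huggingface_hub. The exact GGUF filenames are assumptions (this card does not list them), so the snippet first prints the repo's files; pick the quantization you want from that output.

```python
# A minimal sketch for fetching one GGUF file from this repo.
from huggingface_hub import hf_hub_download, list_repo_files

repo_id = "DevQuasar/Menlo.Jan-nano-GGUF"

# Inspect the available files first; quant names like Q4_K_M are
# typical for GGUF repos but are not confirmed by this card.
for f in list_repo_files(repo_id):
    print(f)

# Hypothetical filename; replace with one printed above.
path = hf_hub_download(repo_id=repo_id, filename="Menlo.Jan-nano.Q4_K_M.gguf")
print(path)
```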

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit, 16-bit
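Any of these files can be run with llama.cpp or its Python bindings. Below is a minimal sketch using llama-cpp-python; the filename, context size, and GPU-offload settings are assumptions to adapt to your hardware. Since the architecture is qwen3, the chat template stored in the GGUF metadata should be picked up automatically.

```python
# A minimal sketch of local inference with llama-cpp-python
# (pip install llama-cpp-python). The model filename is hypothetical;
# use whichever quantization you downloaded from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Menlo.Jan-nano.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=4096,       # context window; raise or lower to fit your RAM
    n_gpu_layers=-1,  # offload all layers to GPU; set 0 for CPU-only
)

# The chat template is read from the GGUF metadata, so a plain
# chat-completion call is usually enough.
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "In one sentence, what is GGUF?"}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```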


Model tree for DevQuasar/Menlo.Jan-nano-GGUF

- Base model: Qwen/Qwen3-4B-Base
- Finetuned: Qwen/Qwen3-4B
- Finetuned: Menlo/Jan-nano
- Quantized: this model (one of 12 quantizations)