Model Card for Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v1-gguf

Model Description

This is the GGUF quantisation of Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v1.

Ollama

ollama run goekdenizguelmez/JOSIEFIED-Qwen3:0.6b
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:0.6b-q4_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:0.6b-q5_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:0.6b-q6_k
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:0.6b-q8_0
ollama run goekdenizguelmez/JOSIEFIED-Qwen3:0.6b-fp16
  • Developed by: Gökdeniz Gülmez
  • Funded by: Gökdeniz Gülmez
  • Shared by: Gökdeniz Gülmez
  • Original model: Goekdeniz-Guelmez/Josiefied-Qwen3-0.6B-abliterated-v1
GGUF

  • Model size: 596M params
  • Architecture: qwen3

  • Quantisation variants: 4-bit, 5-bit, 6-bit, 8-bit

