Triangle104/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-Q4_K_M-GGUF
This model was converted to GGUF format from DavidAU/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
This is a 4X8B Mixture of Experts model with all 4 experts (4 Llama fine-tunes) activated, each with DeepSeek reasoning tech installed, giving you a 32B (4X8B) parameter model in only a 24.9B model size.
This model has DeepSeek's "distilled" thinking/reasoning components fused into it.
This model can be used for creative use cases, non-creative use cases, and general usage.
This is a very stable model: it can operate at temperatures of 1, 2, and higher while still generating coherent thoughts, and it exceeds the original DeepSeek distill model in performance, coherence, and depth of thought.
The actual "DeepSeek" thinking / reasoning tech built (grafted in directly, by DavidAU) into it. The "thinking/reasoning" tech (for the model at this repo) is from the original Llama 3.1 "Distill" model from Deepseek:
https://huggingface.co/deepseek-ai/DeepSeek-R1-Distill-Llama-8B
This model suits all use cases and has a slightly more creative slant than a standard model.
Thanks to DeepSeek's enhanced "thinking" systems, this model can also be used for solving logic puzzles, riddles, and other problems normally beyond the abilities of a Llama 3.1 model.
This model MAY produce NSFW / uncensored content.
Use with llama.cpp
Install llama.cpp through brew (works on Mac and Linux)
brew install llama.cpp
Invoke the llama.cpp server or the CLI.
CLI:
llama-cli --hf-repo Triangle104/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-Q4_K_M-GGUF --hf-file deepseek-moe-4x8b-r1-distill-llama-3.1-deep-thinker-uncensored-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
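Because the model is described as stable at temperatures of 1 and above, you can experiment with the sampling temperature via llama-cli's standard --temp flag; the value below is only an illustration, not a setting from the original card:

llama-cli --hf-repo Triangle104/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-Q4_K_M-GGUF --hf-file deepseek-moe-4x8b-r1-distill-llama-3.1-deep-thinker-uncensored-24b-q4_k_m.gguf --temp 1.5 -p "The meaning to life and the universe is"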
Server:
llama-server --hf-repo Triangle104/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-Q4_K_M-GGUF --hf-file deepseek-moe-4x8b-r1-distill-llama-3.1-deep-thinker-uncensored-24b-q4_k_m.gguf -c 2048
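Once the server is running, it exposes llama.cpp's OpenAI-compatible HTTP API (assuming the default port 8080), so you can query it with curl; the riddle prompt below is just an example:

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Solve this riddle: what has keys but opens no locks?"}]}'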
Note: You can also use this checkpoint directly through the usage steps listed in the llama.cpp repo.
Step 1: Clone llama.cpp from GitHub.
git clone https://github.com/ggerganov/llama.cpp
Step 2: Move into the llama.cpp folder and build it with the LLAMA_CURL=1 flag, along with any hardware-specific flags (for example, LLAMA_CUDA=1 for Nvidia GPUs on Linux).
cd llama.cpp && LLAMA_CURL=1 make
Step 3: Run inference through the main binary.
./llama-cli --hf-repo Triangle104/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-Q4_K_M-GGUF --hf-file deepseek-moe-4x8b-r1-distill-llama-3.1-deep-thinker-uncensored-24b-q4_k_m.gguf -p "The meaning to life and the universe is"
or
./llama-server --hf-repo Triangle104/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-Q4_K_M-GGUF --hf-file deepseek-moe-4x8b-r1-distill-llama-3.1-deep-thinker-uncensored-24b-q4_k_m.gguf -c 2048
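DeepSeek-style reasoning output can run long, so the 2048-token context (-c 2048) in these examples may truncate a full chain of thought; a larger value (the 8192 below is only a suggestion) gives the model more room:

./llama-server --hf-repo Triangle104/DeepSeek-MOE-4X8B-R1-Distill-Llama-3.1-Deep-Thinker-Uncensored-24B-Q4_K_M-GGUF --hf-file deepseek-moe-4x8b-r1-distill-llama-3.1-deep-thinker-uncensored-24b-q4_k_m.gguf -c 8192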