danielus/MermaidMixtral-3x7b-Q8_0-GGUF

This model was converted to GGUF format from TroyDoesAI/MermaidMixtral-3x7b using llama.cpp. Refer to the original model card for more details on the model.

Use with llama.cpp

Install llama.cpp through brew (works on macOS and Linux):

brew install ggerganov/ggerganov/llama.cpp

Run inference with the CLI:

llama-cli --hf-repo danielus/MermaidMixtral-3x7b-Q8_0-GGUF --model mermaidmixtral-3x7b.Q8_0.gguf -p "The meaning to life and the universe is "

Or start a local llama.cpp server:

llama-server --hf-repo danielus/MermaidMixtral-3x7b-Q8_0-GGUF --model mermaidmixtral-3x7b.Q8_0.gguf -c 2048
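If you prefer to keep a local copy of the weights rather than having llama.cpp fetch them via --hf-repo, the sketch below downloads the GGUF file once and points llama-cli at the local path. It assumes the huggingface-cli tool (from the huggingface_hub Python package) is installed; the target directory and the -n token limit are illustrative choices, not part of the original card.

# Download the quantized weights into the current directory (one-time step)
huggingface-cli download danielus/MermaidMixtral-3x7b-Q8_0-GGUF mermaidmixtral-3x7b.Q8_0.gguf --local-dir .

# Run the CLI against the local file instead of the remote repo
llama-cli -m ./mermaidmixtral-3x7b.Q8_0.gguf -p "The meaning to life and the universe is " -n 128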
Model details

Format: GGUF
Quantization: 8-bit (Q8_0)
Model size: 18.5B params
Architecture: llama