---
language:
- fr
- it
- de
- es
- en
license: apache-2.0
tags:
- moe
- llama-cpp
- gguf
extra_gated_description: If you want to learn more about how we process your personal
  data, please read our <a href="https://mistral.ai/terms/">Privacy Policy</a>.
base_model: mistralai/Mixtral-8x22B-v0.1
---
# Supa-AI/Mixtral-8x22B-v0.1-gguf
This model was converted to GGUF format from [`mistralai/Mixtral-8x22B-v0.1`](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) using llama.cpp.
Refer to the [original model card](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1) for more details on the model.
## Available Versions
See the file listing of this repository for the available GGUF files.
## Use with llama.cpp
Replace `FILENAME` with the name of the GGUF file you want to use.
### CLI:
```bash
llama-cli --hf-repo Supa-AI/Mixtral-8x22B-v0.1-gguf --hf-file FILENAME -p "Your prompt here"
```
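If you prefer to download a file first and run it from a local path, `huggingface-cli` (from the `huggingface_hub` package) can fetch a single file from the repo; a minimal sketch, with `FILENAME` again standing in for the GGUF file you picked:

```bash
# Download one GGUF file from the repo into the current directory,
# then point llama-cli at the local path instead of streaming from the Hub.
huggingface-cli download Supa-AI/Mixtral-8x22B-v0.1-gguf FILENAME --local-dir .
llama-cli -m ./FILENAME -p "Your prompt here"
```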
### Server:
```bash
llama-server --hf-repo Supa-AI/Mixtral-8x22B-v0.1-gguf --hf-file FILENAME -c 2048
```
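Once `llama-server` is running, you can query its HTTP API directly; a minimal sketch using `curl` against the `/completion` endpoint, assuming the server's default host and port (`127.0.0.1:8080` — adjust if you passed `--host`/`--port`):

```bash
# Send a completion request to a running llama-server instance.
# n_predict limits how many tokens are generated.
curl http://127.0.0.1:8080/completion \
  -H "Content-Type: application/json" \
  -d '{"prompt": "Your prompt here", "n_predict": 128}'
```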
## Model Details
- **Original Model:** [mistralai/Mixtral-8x22B-v0.1](https://huggingface.co/mistralai/Mixtral-8x22B-v0.1)
- **Format:** GGUF