Active filters: mixtral

Matching models (task · downloads · likes, where shown):
model-hub/Mixtral-8x7B-v0.1 · Text Generation · 24 downloads · 1 like
TheBloke/Mixtral-8x7B-MoE-RP-Story-GGUF · 1.69k downloads · 48 likes
TheBloke/Mixtral-8x7B-MoE-RP-Story-GPTQ · Text Generation · 20 downloads · 9 likes
osanseviero/qtest · Text Generation · 14 downloads
marclove/marclove-mixtral-8x7B · Text Generation · 9 downloads
Undi95/Toppy-Mix-4x7B · Text Generation · 10 downloads · 6 likes
ewof/koishi-8x7b-qlora · Text Generation · 20 downloads
chargoddard/demixtral
ewof/koishi-8x7b-qlora-gguf · 15 downloads
ludis/tsukasa-8x7b-qlora · Text Generation · 12 downloads
ludis/tsukasa-8x7b-qlora-gptq · Text Generation · 12 downloads
ludis/tsukasa-8x7b-qlora-gguf
Undi95/Llamix2-MLewd-4x13B · Text Generation · 115 downloads · 65 likes
crumb/e4-k2-d1024-hm2-1gt · Text Generation · 15 downloads
chargoddard/SmolLlamix-8x101M · Text Generation · 84 downloads · 12 likes
TheBloke/Llamix2-MLewd-4x13B-GPTQ · Text Generation · 21 downloads · 8 likes
TheBloke/Llamix2-MLewd-4x13B-GGUF · 45 downloads · 10 likes
ewof/koishi-8x7b-qlora-gptq · Text Generation · 14 downloads
VAGOsolutions/SauerkrautLM-Mixtral-8x7B · Text Generation · 104 downloads · 12 likes
VAGOsolutions/SauerkrautLM-Mixtral-8x7B-Instruct · Text Generation · 107 downloads · 22 likes
mobiuslabsgmbh/Mixtral-8x7B-v0.1-hf-attn-4bit-moe-2bit-HQQ · Text Generation · 23 downloads · 7 likes
mobiuslabsgmbh/Mixtral-8x7B-Instruct-v0.1-hf-attn-4bit-moe-2bit-HQQ · Text Generation · 22 downloads · 38 likes
ChenMnZ/Mixtral-8x7B-v0.1-OmniQuantv2-w4a16g128 · Text Generation · 8 downloads · 1 like
janhq/Phoenix-v1-8x7B · Text Generation · 15 downloads · 3 likes
TheBloke/Chupacabra-8x7B-MoE-GGUF · 46 downloads · 3 likes
jan-hq/Phoenix-v1-8x7B · Text Generation · 99 downloads
MergeFuel/BigPlap-8x20B · Text Generation · 42 downloads · 8 likes
ura-hcmut/MixSUra · Text Generation · 9 downloads
mmnga/Mixtral-Fusion-4x7B-Instruct-v0.1 · Text Generation · 19 downloads · 18 likes
TheBloke/Falkor-8x7B-MoE-GGUF · 16 downloads · 4 likes
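The listing above is the result of a Hub model search with the active filter "mixtral". As a rough sketch, a similar listing can be pulled programmatically with the huggingface_hub client; the search term, sort order, and result limit below are assumptions chosen for illustration, not parameters taken from this page, and the download/like counts returned by the API will not match the snapshot above.

```python
# Minimal sketch: reproduce a "mixtral" model search against the Hugging Face Hub.
# Assumptions: search term "mixtral", sorted by downloads, limited to 30 results.
# Some entries may lack a pipeline tag, downloads, or likes, so missing values
# are printed as "-".
from huggingface_hub import HfApi

api = HfApi()

for model in api.list_models(search="mixtral", sort="downloads", direction=-1, limit=30):
    task = model.pipeline_tag or "-"
    downloads = model.downloads if model.downloads is not None else "-"
    likes = model.likes if model.likes is not None else "-"
    print(f"{model.id} · {task} · {downloads} downloads · {likes} likes")
```

Each printed line mirrors the row layout of the listing: repository id, task, downloads, likes.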