Active filters: Q8

| Model | Task | Downloads | Likes |
|---|---|---|---|
| prithivMLmods/Qwen2.5-Coder-7B-Instruct-GGUF | Text Generation | 107 | 2 |
| prithivMLmods/Qwen2.5-Coder-7B-GGUF | Text Generation | 196 | 3 |
| prithivMLmods/Qwen2.5-Coder-3B-GGUF | Text Generation | 524 | 2 |
| prithivMLmods/Qwen2.5-Coder-1.5B-GGUF | Text Generation | 159 | 3 |
| prithivMLmods/Qwen2.5-Coder-1.5B-Instruct-GGUF | Text Generation | 58 | 3 |
| prithivMLmods/Qwen2.5-Coder-3B-Instruct-GGUF | Text Generation | 58 | 4 |
| prithivMLmods/Llama-3.2-3B-GGUF | Text Generation | 86 | 2 |
| cibernicola/FLOR-6.3B-xat-Q8_0 | Text Generation | 33 | |
| cibernicola/FLOR-1.3B-xat-Q8 | Text Generation | 18 | |
| cibernicola/FLOR-6.3B-xat-Q5_K | Text Generation | 11 | |
| harisnaeem/Phi-4-mini-instruct-GGUF-Q8 | Text Generation | 21 | |
| ykarout/llama3-deepseek_Q8 | Text Generation | 5 | |