Active filters: int4
RedHatAI/Meta-Llama-3.1-8B-Instruct-quantized.w4a16 • Text Generation • 2B • Updated • 41.8k • 29
angeloc1/llama3dot1SimilarProcesses4 • Text Generation • 8B • Updated • 3
angeloc1/llama3dot1DifferentProcesses4 • Text Generation • 8B • Updated • 5
ModelCloud/Meta-Llama-3.1-405B-Instruct-gptq-4bit • Text Generation • 59B • Updated • 8 • 2
RedHatAI/Meta-Llama-3.1-70B-Instruct-quantized.w4a16 • Text Generation • 11B • Updated • 11.7k • 32
ModelCloud/EXAONE-3.0-7.8B-Instruct-gptq-4bit • 2B • Updated • 5 • 3
RedHatAI/Meta-Llama-3.1-405B-Instruct-quantized.w4a16 • Text Generation • 58B • Updated • 128 • 12
angeloc1/llama3dot1FoodDel4v05 • Text Generation • 8B • Updated • 3
zzzmahesh/Meta-Llama-3-8B-Instruct-quantized.w4a4 • Text Generation • 2B • Updated • 10 • 1
ModelCloud/GRIN-MoE-gptq-4bit • 6B • Updated • 1 • 6
joshmiller656/Llama3.2-1B-AWQ-INT4 • 0.7B • Updated • 5
Advantech-EIOT/intel_llama-3.1-8b-instruct
RedHatAI/Qwen2.5-7B-quantized.w4a16 • Text Generation • 2B • Updated • 21
joshmiller656/Llama-3.1-Nemotron-70B-Instruct-AWQ-INT4 • Text Generation • 11B • Updated • 1.84k • 3
ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 0.7B • Updated • 1.13k • 2
jojo1899/llama-3_1-8b-instruct-ov-int4
ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit-vortex-v2 • Text Generation • 0.7B • Updated • 5 • 3
ModelCloud/Llama-3.2-3B-Instruct-gptqmodel-4bit-vortex-v3 • Text Generation • 1B • Updated • 1.02k • 5
tclf90/qwen2.5-72b-instruct-gptq-int4 • Text Generation • 12B • Updated • 9
ModelCloud/Llama-3.2-1B-Instruct-gptqmodel-4bit-vortex-v2.5 • Text Generation • 0.7B • Updated • 1.12k • 5
jojo1899/Phi-3.5-mini-instruct-ov-int4
ModelCloud/Qwen2.5-Coder-32B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 7B • Updated • 19 • 15
RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-FP8-dynamic • Text Generation • 8B • Updated • 5
RedHatAI/Sparse-Llama-3.1-8B-evolcodealpaca-2of4-quantized.w4a16 • Text Generation • 2B • Updated • 17
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v1 • Text Generation • 7B • Updated • 9 • 51
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v2 • Text Generation • 7B • Updated • 7 • 16
ModelCloud/QwQ-32B-Preview-gptqmodel-4bit-vortex-v3 • Text Generation • 7B • Updated • 7 • 14
ModelCloud/Falcon3-10B-Instruct-gptqmodel-4bit-vortex-v1 • Text Generation • 2B • Updated • 7 • 3
RedHatAI/Llama-3.3-70B-Instruct-quantized.w4a16 • Text Generation • 11B • Updated • 2.36k • 2
RedHatAI/Mixtral-8x22B-v0.1-quantized.w4a16
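A note on the parameter badges above: for int4 checkpoints the listed size appears to reflect the packed on-disk tensor size rather than the logical parameter count (e.g. an 8B-parameter Llama 3.1 model quantized to w4a16 is shown as "2B"). A minimal sketch of the weight-storage arithmetic, assuming 4-bit weights against an fp16 baseline and ignoring quantization metadata such as scales and zero points:

```python
def weight_storage_gb(n_params: float, bits_per_weight: int) -> float:
    """Approximate weight-only storage in GB for a model with
    n_params parameters stored at bits_per_weight bits each.
    Ignores quantization metadata (scales, zero points, group indices)."""
    return n_params * bits_per_weight / 8 / 1e9

# An 8B-parameter model: fp16 vs int4 (w4a16 = 4-bit weights, 16-bit activations)
fp16_gb = weight_storage_gb(8e9, 16)  # 16.0 GB
int4_gb = weight_storage_gb(8e9, 4)   # 4.0 GB
print(f"fp16: {fp16_gb} GB, int4: {int4_gb} GB, ratio: {fp16_gb / int4_gb}x")
```

The 4x ratio is why a w4a16 checkpoint of an 8B model occupies roughly a quarter of the fp16 footprint, which matches the "2B"-style badges in the listing.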