LlamaEdge compatible quants for SmolVLM2 models.
AI & ML interests

Run open-source LLMs locally across CPU and GPU, in Rust and Wasm, without changing the binary!

Recent Activity
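For context, a typical way to run one of the quants listed below with LlamaEdge is sketched here. It assumes WasmEdge with the GGML plugin is already installed; the model file name, quant level (Q5_K_M), and prompt-template value are illustrative assumptions based on common LlamaEdge usage, not taken from this page:

```shell
# Fetch a quantized model from this organization (quant level Q5_K_M is an assumption)
curl -LO https://huggingface.co/second-state/Qwen2.5-7B-Instruct-GGUF/resolve/main/Qwen2.5-7B-Instruct-Q5_K_M.gguf

# Fetch the LlamaEdge chat app; the same .wasm binary runs on CPU or GPU backends
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm

# Start an interactive chat; --nn-preload binds the GGUF file to the GGML backend
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Qwen2.5-7B-Instruct-Q5_K_M.gguf \
  llama-chat.wasm --prompt-template chatml
```

Each model repository's card lists the quant levels actually published and the prompt template that matches the model.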
LlamaEdge compatible quants for Qwen3 models.
LlamaEdge compatible quants for EXAONE-3.5 models.
LlamaEdge compatible quants for Gemma-3-it models.
- second-state/gemma-3-27b-it-GGUF • Image-Text-to-Text • 27B • Updated • 1.29k
- second-state/gemma-3-12b-it-GGUF • Image-Text-to-Text • 12B • Updated • 362 • 1
- second-state/gemma-3-4b-it-GGUF • Image-Text-to-Text • 4B • Updated • 679
- second-state/gemma-3-1b-it-GGUF • Text Generation • 1.0B • Updated • 272
- second-state/stable-diffusion-v1-5-GGUF • Text-to-Image • 1B • Updated • 12.9k • 12
- second-state/stable-diffusion-v-1-4-GGUF • Text-to-Image • 1B • Updated • 301 • 3
- second-state/stable-diffusion-3.5-medium-GGUF • Text-to-Image • 0.7B • Updated • 3.69k • 9
- second-state/stable-diffusion-3.5-large-GGUF • Text-to-Image • 0.7B • Updated • 5.91k • 9
LlamaEdge compatible quants for Qwen2-VL models.
LlamaEdge compatible quants for tool-use models.
- second-state/Llama-3-Groq-8B-Tool-Use-GGUF • Text Generation • 8B • Updated • 286 • 2
- second-state/Llama-3-Groq-70B-Tool-Use-GGUF • Text Generation • 71B • Updated • 140 • 2
- second-state/Hermes-2-Pro-Llama-3-8B-GGUF • Text Generation • 8B • Updated • 299 • 2
- second-state/Nemotron-Mini-4B-Instruct-GGUF • 4B • Updated • 270
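A sketch of serving one of these tool-use quants behind LlamaEdge's OpenAI-compatible API server (llama-api-server.wasm). The port, quant file name, and prompt-template value are assumptions; check each model card for the template that matches that model:

```shell
# Serve the model over an OpenAI-compatible HTTP API (file name and port are assumptions)
wasmedge --dir .:. \
  --nn-preload default:GGML:AUTO:Llama-3-Groq-8B-Tool-Use-Q5_K_M.gguf \
  llama-api-server.wasm --prompt-template groq-llama3-tool --port 8080

# Call it like any OpenAI-style chat endpoint
curl http://localhost:8080/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"messages":[{"role":"user","content":"What is 3 * 7?"}]}'
```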
LlamaEdge compatible quants for Llama 3.2 3B and 1B Instruct models.
LlamaEdge compatible quants for Yi-1.5 chat models.
- second-state/Yi-1.5-9B-Chat-16K-GGUF • Text Generation • 9B • Updated • 275 • 5
- second-state/Yi-1.5-34B-Chat-16K-GGUF • Text Generation • 34B • Updated • 100 • 4
- second-state/Yi-1.5-9B-Chat-GGUF • Text Generation • 9B • Updated • 391 • 8
- second-state/Yi-1.5-6B-Chat-GGUF • Text Generation • 6B • Updated • 322 • 4
LlamaEdge compatible quants for Qwen2.5-VL models.
LlamaEdge compatible quants for Tessa-T1 models.
LlamaEdge compatible quants for EXAONE-Deep models.
LlamaEdge compatible quants for DeepSeek-R1 distilled models.
- second-state/DeepSeek-R1-Distill-Qwen-1.5B-GGUF • Text Generation • 2B • Updated • 320
- second-state/DeepSeek-R1-Distill-Qwen-7B-GGUF • Text Generation • 8B • Updated • 252 • 1
- second-state/DeepSeek-R1-Distill-Qwen-14B-GGUF • Text Generation • 15B • Updated • 254
- second-state/DeepSeek-R1-Distill-Qwen-32B-GGUF • Text Generation • 33B • Updated • 104
LlamaEdge compatible quants for Falcon3-Instruct models.
- second-state/Falcon3-10B-Instruct-GGUF • Text Generation • 10B • Updated • 127 • 1
- second-state/Falcon3-7B-Instruct-GGUF • Text Generation • 7B • Updated • 109 • 2
- second-state/Falcon3-3B-Instruct-GGUF • Text Generation • 3B • Updated • 118
- second-state/Falcon3-1B-Instruct-GGUF • Text Generation • 2B • Updated • 288
LlamaEdge compatible quants for Qwen2.5-Coder models.
- second-state/Qwen2.5-Coder-0.5B-Instruct-GGUF • Text Generation • 0.5B • Updated • 187
- second-state/Qwen2.5-Coder-3B-Instruct-GGUF • Text Generation • 3B • Updated • 251
- second-state/Qwen2.5-Coder-14B-Instruct-GGUF • Text Generation • 15B • Updated • 162
- second-state/Qwen2.5-Coder-32B-Instruct-GGUF • Text Generation • 33B • Updated • 353
LlamaEdge compatible quants for InternLM-2.5 models.
LlamaEdge compatible quants for Qwen 2.5 instruct and coder models.
- second-state/Qwen2.5-72B-Instruct-GGUF • Text Generation • 73B • Updated • 308 • 2
- second-state/Qwen2.5-32B-Instruct-GGUF • Text Generation • 33B • Updated • 186 • 1
- second-state/Qwen2.5-14B-Instruct-GGUF • Text Generation • 15B • Updated • 365 • 1
- second-state/Qwen2.5-7B-Instruct-GGUF • Text Generation • 8B • Updated • 232
LlamaEdge compatible quants for FLUX.1 models.
- second-state/FLUX.1-schnell-GGUF • Text-to-Image • 83.8M • Updated • 832 • 12
- second-state/FLUX.1-dev-GGUF • Text-to-Image • 0.1B • Updated • 1.67k • 11
- second-state/FLUX.1-Redux-dev-GGUF • Text-to-Image • 64.5M • Updated • 248 • 12
- second-state/FLUX.1-Canny-dev-GGUF • Text-to-Image • 12B • Updated • 238 • 13