Best-Models-For-ComfyUI

A curated vault of the most essential, powerful, and optimized models for ComfyUI users. Flux1, SDXL, ControlNets, CLIP encoders, GGUFs: all in one place. Carefully organized. Pre-tested. One-click ready.


🪜 What's Inside

This repo is not a chaotic dumping ground. It's a purposeful collection of the most important ComfyUI-compatible models:

Flux1

  • UNet models: Dev, Schnell, Depth, Canny, Fill
  • GGUF versions: Q3, Q5, Q6 for each major branch
  • CLIP + T5XXL encoders (standard + GGUF versions)
  • LoRAs: only when they add real value (nothing bloated or redundant)

SDXL

  • Top Models from Civitai (Realism, Stylized, Experimental)
  • Official Base + Refiner models
  • ControlNets: Depth, Canny, OpenPose, Normal, etc.

Extra

  • VAE, upscalers, and anything required to support workflows

πŸ‹οΈ Unet Recommendations (Based on VRAM)

| VRAM  | Use Case             | Model Type                 |
|-------|----------------------|----------------------------|
| 16GB+ | Full-quality FP8     | flux1-dev-fp8.safetensors  |
| 12GB  | Balanced Q5_K_S GGUF | flux1-dev-Q5_K_S.gguf      |
| 8GB   | Light Q3_K_S GGUF    | flux1-dev-Q3_K_S.gguf      |

GGUF models are significantly lighter and designed for low-VRAM systems.
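
Not sure which tier applies to you? Below is a minimal sketch (assuming a working PyTorch + CUDA install, which any ComfyUI setup already has) that reads the GPU's total VRAM and maps it to the tiers above; the threshold values simply mirror the table.

```python
# Sketch: suggest a Flux1 UNet variant from total GPU VRAM.
# Assumes PyTorch with CUDA; the thresholds mirror the table above.
import torch

def suggest_flux_unet() -> str:
    if not torch.cuda.is_available():
        return "flux1-dev-Q3_K_S.gguf"  # no CUDA GPU detected: stay as light as possible
    vram_gb = torch.cuda.get_device_properties(0).total_memory / 1024**3
    if vram_gb >= 16:
        return "flux1-dev-fp8.safetensors"   # full-quality FP8
    if vram_gb >= 12:
        return "flux1-dev-Q5_K_S.gguf"       # balanced GGUF
    return "flux1-dev-Q3_K_S.gguf"           # light GGUF

if __name__ == "__main__":
    print("Suggested UNet:", suggest_flux_unet())
```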

🧠 T5XXL Recommendations (Based on RAM)

| System RAM | Use Case                                 | Model Type                   |
|------------|------------------------------------------|------------------------------|
| 64GB       | Max quality                              | t5xxl_fp16.safetensors       |
| 32GB       | High quality (can crash if multitasking) | t5xxl_fp16.safetensors       |
| 16GB       | Balanced                                 | t5xxl_fp8_scaled.safetensors |
| <16GB      | Low-memory / safe mode                   | GGUF Q5_K_S or Q3_K_S        |

⚠️ These are recommended tiers, not hard rules. RAM usage depends on your active processes, ComfyUI extensions, batch sizes, and other factors.
If you're getting random crashes, try scaling down one tier.
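
The same idea works for system RAM. A minimal sketch, assuming the cross-platform psutil package is installed (pip install psutil), that maps total RAM to the T5XXL tiers above:

```python
# Sketch: suggest a T5XXL text-encoder variant from total system RAM.
# Assumes the psutil package; the thresholds mirror the table above.
import psutil

def suggest_t5xxl() -> str:
    ram_gb = psutil.virtual_memory().total / 1024**3
    if ram_gb >= 64:
        return "t5xxl_fp16.safetensors"        # max quality
    if ram_gb >= 32:
        return "t5xxl_fp16.safetensors"        # high quality, watch multitasking
    if ram_gb >= 16:
        return "t5xxl_fp8_scaled.safetensors"  # balanced
    return "t5xxl GGUF Q5_K_S or Q3_K_S"       # low-memory / safe mode

if __name__ == "__main__":
    print("Suggested T5XXL encoder:", suggest_t5xxl())
```

If the suggestion sits right on a boundary and you still see crashes, drop one tier as noted above.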


πŸ› Folder Structure (Flux1 Only)

```
Flux1/
├─ unet/
│   ├─ Dev/
│   │   ├─ flux1-dev-fp8.safetensors
│   │   └─ GGUF/
│   ├─ Schnell/
│   ├─ Depth/
│   ├─ Canny/
│   └─ Fill/
├─ clip/
│   ├─ t5xxl_fp16.safetensors
│   ├─ GGUF/
│   └─ ...
└─ loras/
```

📈 Model Previews (Coming Soon)

We will add a single grid-style graphic showing example outputs:

  • Dev vs Schnell: Quality vs Speed
  • Depth / Canny / Fill: Source image → processed map → output
  • SDXL examples: Realism, Stylized, etc.

All preview images will be grouped into a single, compact visual block per category.


📢 Want It Even Easier?

Skip the manual downloads.

🎁 Patreon.com/MaxedOut gets you:

  • One-click installers for all major Flux & SDXL workflows
  • Organized ComfyUI folders built for beginners and pros
  • Specialized templates (e.g. Mega Flux, Tiled Composites, Realistic Portraits)
  • Behind-the-scenes model picks and tips

❓ FAQ

Q: Why not every GGUF?
A: Because Q3, Q5, and Q6 cover the most meaningful range. No bloat.

Q: Are these the official models?
A: Yes. Most are sourced directly from the original creators; the rest come from validated mirrors.

Q: Will this grow?
A: Yes. But only with purpose.


✨ Final Thoughts

You shouldn't need to hunt through 12 Discord servers and 6 Civitai pages just to build your ComfyUI folder.

This repo fixes that.

The Best Models. For ComfyUI. In One Place.
