
SVDQuant INT4 Model (Nunchaku-Compatible)

This repository provides an INT4-quantized version of nyanko7/flux-dev-de-distill, optimized for use with the nunchaku framework. This model delivers a smaller memory footprint and faster inference, making it ideal for resource-constrained environments.

Note: This model requires specific versions of the nunchaku Python package and the ComfyUI-nunchaku custom nodes. Follow the installation steps carefully.


Prerequisites

Before beginning, ensure you have the following:

  1. ComfyUI – A working ComfyUI installation.

  2. Activated Python Environment – Use ComfyUI’s Python virtual environment.

  3. System Build Tools:

    • Linux: gcc and g++
      # Debian/Ubuntu
      sudo apt update && sudo apt install build-essential
      
      # Fedora
      sudo dnf groupinstall "Development Tools"
      
    • Windows: Visual Studio with the Desktop development with C++ workload.
  4. Required Python Packages: Run this in your activated ComfyUI environment:

    pip install ninja wheel diffusers transformers accelerate sentencepiece protobuf huggingface_hub build
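Before building, it can help to confirm that the prerequisite packages are importable in the active environment. This is a minimal sketch, not part of the official setup; the names follow the pip command above (note that the protobuf pip package imports as google.protobuf):

```python
import importlib.util

def missing_packages(module_names):
    """Return the module names that cannot be found in this environment."""
    missing = []
    for name in module_names:
        try:
            if importlib.util.find_spec(name) is None:
                missing.append(name)
        except ModuleNotFoundError:
            # find_spec on a dotted name raises if the parent package is absent
            missing.append(name)
    return missing

# Import names for the pip packages listed above
REQUIRED = ["ninja", "wheel", "diffusers", "transformers", "accelerate",
            "sentencepiece", "google.protobuf", "huggingface_hub", "build"]

if missing := missing_packages(REQUIRED):
    print("Missing packages:", ", ".join(missing))
else:
    print("All prerequisite packages found.")
```

If anything is reported missing, rerun the pip command above before proceeding.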
    

Installation Instructions

Perform all steps in your activated ComfyUI Python environment.

1. Install the nunchaku Python Package (v0.3.0dev1+)

# Activate your ComfyUI Python environment:
# Linux/macOS:
source /path/to/your/ComfyUI/venv/bin/activate
# Windows:
.\ComfyUI\venv\Scripts\activate

# Clone and build:
git clone --recursive https://github.com/mit-han-lab/nunchaku.git
cd nunchaku
git checkout tags/v0.3.0dev1  # Or a newer v0.3.xdev version if available

# Build the wheel:
# Linux/macOS:
NUNCHAKU_BUILD_WHEELS=1 python -m build --wheel --no-isolation

# Windows (PowerShell):
$env:NUNCHAKU_BUILD_WHEELS="1"
python -m build --wheel --no-isolation

# Windows (CMD):
set NUNCHAKU_BUILD_WHEELS=1
python -m build --wheel --no-isolation

# Install the wheel:
pip install dist/nunchaku-*.whl

2. Install ComfyUI-nunchaku Custom Nodes (v0.3.0dev1+)

cd /path/to/your/ComfyUI/custom_nodes/
git clone https://github.com/mit-han-lab/ComfyUI-nunchaku.git
cd ComfyUI-nunchaku
git checkout tags/v0.3.0dev1

# Install requirements:
pip install -r requirements.txt

3. Install Additional Dependencies

Still within your ComfyUI environment:

pip install insightface facexlib onnxruntime timm

4. Download the Model

Download the model files:

huggingface-cli download theunlikely/svdq-int4-flux-dev-de-distill --local-dir /path/to/your/ComfyUI/models/diffusion_models/flux-de-distill
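Equivalently, the download can be scripted with huggingface_hub's snapshot_download (a sketch; the target folder mirrors the CLI command above, and nothing is downloaded until fetch_model is actually called):

```python
from pathlib import Path

REPO_ID = "theunlikely/svdq-int4-flux-dev-de-distill"

def model_target_dir(comfyui_root: str) -> Path:
    """Folder layout used by the huggingface-cli command above."""
    return Path(comfyui_root) / "models" / "diffusion_models" / "flux-de-distill"

def fetch_model(comfyui_root: str) -> Path:
    """Download the INT4 weights into ComfyUI's diffusion_models folder."""
    from huggingface_hub import snapshot_download  # installed in the prerequisites
    target = model_target_dir(comfyui_root)
    snapshot_download(repo_id=REPO_ID, local_dir=str(target))
    return target
```

Calling fetch_model("/path/to/your/ComfyUI") reproduces the CLI download.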

5. Restart ComfyUI

After all installation steps, restart your ComfyUI instance.


6. Use the Provided Workflow

Drag and drop workflow.png into ComfyUI to load the provided workflow, then confirm that the correct models are selected in each loader node.

Important Notes on Usage and Output

No Custom Pipeline (Yet)

This model does not yet have a specialized pipeline tailored to the de-distilled base model, so output may differ from the original full-precision model. Expect some variation, and experiment with settings to achieve the best results.

CFG (Classifier-Free Guidance) Scale

Unlike standard nunchaku-compatible models, **this model requires CFG > 1**, even when no negative prompt is used.

  • Try values like 3.5, 5.0, or 8.0.
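Outside ComfyUI, the same idea can be sketched with diffusers and nunchaku. This is an untested sketch under several assumptions: that this repo loads through nunchaku's NunchakuFluxTransformer2dModel, and that diffusers' guidance_scale maps onto the CFG value discussed here. In ComfyUI itself, CFG is simply set on the sampler node:

```python
def generate(prompt: str, cfg_scale: float = 3.5):
    # Lazy imports: torch, diffusers, and nunchaku are only present in the
    # activated ComfyUI environment set up above
    import torch
    from diffusers import FluxPipeline
    from nunchaku import NunchakuFluxTransformer2dModel

    # Assumption: the INT4 weights in this repo load via nunchaku's Flux loader
    transformer = NunchakuFluxTransformer2dModel.from_pretrained(
        "theunlikely/svdq-int4-flux-dev-de-distill"
    )
    pipe = FluxPipeline.from_pretrained(
        "black-forest-labs/FLUX.1-dev",
        transformer=transformer,
        torch_dtype=torch.bfloat16,
    ).to("cuda")
    # Keep guidance above 1, as noted; 3.5 to 8.0 is the suggested range
    return pipe(prompt, guidance_scale=cfg_scale).images[0]
```

Generation requires a CUDA-capable GPU; nothing runs until generate is called.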

Troubleshooting

Build Issues

  • Confirm build tools are installed (gcc/g++, Visual Studio).
  • Ensure the Python packages ninja and build are installed.
  • Verify that the nunchaku repo was cloned with --recursive.

Missing Nodes in ComfyUI

  • Ensure ComfyUI-nunchaku is in ComfyUI/custom_nodes/.
  • Confirm all dependencies (including manual installs) are satisfied.
  • Restart ComfyUI.

Version Mismatches

  • Use nunchaku v0.3.0dev1+ and ComfyUI-nunchaku v0.3.0dev1+.
  • Use git checkout tags/v0.3.0dev1 on both repositories for consistency.

Poor Output / No Image

  • Set CFG > 1. This is a common issue.
  • Remember: no custom pipeline = different behavior.
  • Adjust generation parameters (steps, CFG, etc.).