---
license: apache-2.0
language:
- en
base_model:
- OnomaAIResearch/Illustrious-xl-early-release-v0
pipeline_tag: text-to-image
tags:
- gguf-node
widget:
- text: "masterpiece, best quality, vibrant, very aesthetic, high contrast, semirealistic, highly detailed, absurdres, masterful composition, cinematic lighting, score_9, score_8_up, score_7_up, score_6_up, score_5_up, rating_questionable, source_anime, 1girl, portrait, multicolored hair, fringe, bare shoulders, upper body, cosmic"
  parameters:
    negative_prompt: "femboy, low quality, 2koma, 4koma, bad anatomy, jpeg artifacts, signature, watermark, lowres, bad hands"
  output:
    url: samples/ComfyUI_00001_.png
- text: drag it to the browser; same prompt as the 1st sample, with gguf q4_0
  output:
    url: samples/ComfyUI_00002_.png
- text: drag it to the browser; same prompt as the 1st sample, with gguf q4_0
  output:
    url: samples/ComfyUI_00003_.png
- text: drag it to the browser; same prompt as the 1st sample, with gguf q4_0 (new v90 model)
  output:
    url: samples/ComfyUI_00011_.png
- text: drag it to the browser; same prompt as the 1st sample, with gguf q4_0 (new v90 model)
  output:
    url: samples/ComfyUI_00007_.png
- text: drag it to the browser; same prompt as the 1st sample, with gguf q5_0 (new v90 model)
  output:
    url: samples/ComfyUI_00008_.png
---
# **gguf quantized and fp8 scaled versions of illustrious (test pack)**

### **setup (in general)**
- drag the gguf file(s) to the diffusion_models folder (./ComfyUI/models/diffusion_models)
- drag the clip/text encoder(s), e.g., illustrious_g_clip and illustrious_l_clip, to the text_encoders folder (./ComfyUI/models/text_encoders)
- drag the vae decoder(s), e.g., illustrious_vae, to the vae folder (./ComfyUI/models/vae)

### **run it straight (no installation needed)**
- get the comfy pack with the new gguf-node ([pack](https://github.com/calcuis/gguf/releases))
- run the .bat file in the main directory

### **workflow**
- drag any workflow json file into the activated browser; or
- drag any generated output file (i.e., picture, video, etc., which contains the workflow metadata) into the activated browser

### **review**
- these legacy models respond best to tag/word(s) as input for more accurate results; not very convenient compared to the recent models at first
- credit goes to the contributors on the civitai platform
- **fast-illustrious gguf** was quantized from the **fp8** scaled safetensors, while **illustrious gguf** was quantized from the original **bf16** (this is just a test: does the trimmed model, with roughly 50% fewer tensors, really load faster? please test it yourself; btw, some models have a unique structure/feature that affects loader performance, so one size never fits all)
- the fp8 scaled files work fine with this model, including the vae and clips
- good to run on old machines, e.g., 9xx series or earlier (legacy mode [--disable-cuda-malloc --lowvram] supported); compatible with the new gguf-node
- **disclaimer**: some models (original files) were provided by someone else, and we may not be able to identify the creator/contributor(s) behind them unless they are specified in the source; we would rather leave the credit blank than write anonymous/unnamed/unknown; if it is your work, do let us know, and we will credit it properly; thanks for everything

### **reference**
- wai [creator](https://civitai.com/user/WAI0731)
- comfyui [comfyanonymous](https://github.com/comfyanonymous/ComfyUI)
- gguf-node ([pypi](https://pypi.org/project/gguf-node)|[repo](https://github.com/calcuis/gguf)|[pack](https://github.com/calcuis/gguf/releases))
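
For reference, the file placement described in the setup section boils down to a short shell sketch like the one below. The model filenames here are placeholders (the actual names depend on which gguf/safetensors files you downloaded); only the ComfyUI folder paths come from the notes above.

```shell
# sketch of the expected ComfyUI folder layout from the setup section;
# filenames are placeholders -- substitute the files you actually downloaded
COMFY=./ComfyUI
mkdir -p "$COMFY/models/diffusion_models" \
         "$COMFY/models/text_encoders" \
         "$COMFY/models/vae"

# gguf diffusion model(s) -> diffusion_models
cp ./*illustrious*.gguf "$COMFY/models/diffusion_models/" 2>/dev/null || true
# clip/text encoder(s) -> text_encoders
cp ./illustrious_g_clip* ./illustrious_l_clip* "$COMFY/models/text_encoders/" 2>/dev/null || true
# vae decoder(s) -> vae
cp ./illustrious_vae* "$COMFY/models/vae/" 2>/dev/null || true
```

Once the files are in place, starting ComfyUI (e.g., via the .bat file from the pack) should list them in the corresponding loader nodes.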