phenomenalai/sd3-token-mod-1024

A token-modulation adapter for stabilityai/stable-diffusion-3.5-medium, trained with Higgsfield.

Usage

import torch
from diffusers import StableDiffusion3Pipeline
from higgsfield.adapters.token_mod import GlobalTokenModulator
from huggingface_hub import hf_hub_download

base_model = "stabilityai/stable-diffusion-3.5-medium"
repo_id = "phenomenalai/sd3-token-mod-1024"

pipe = StableDiffusion3Pipeline.from_pretrained(base_model, torch_dtype=torch.bfloat16).to("cuda")
# Download the adapter weights from the Hub
adapter_path = hf_hub_download(repo_id, filename="token_mod.pt")

# Instantiate the modulator with the dimensions used during training and load the weights
mod = GlobalTokenModulator(num_tokens=333, embed_dim=4096, out_channels=pipe.transformer.config.in_channels)
mod.load_state_dict(torch.load(adapter_path, map_location="cpu"))
mod.to("cuda").eval()

# The adapter is now ready; add its output as a bias inside your own inference loop,
# as in your codebase (a hedged sketch follows below).
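
The sketch below is only an illustration of where such a bias could be injected. It assumes, without confirmation from this repo, that GlobalTokenModulator can be called as mod(prompt_embeds) on the SD3.5 joint prompt embeddings and returns a per-channel bias of shape (batch, in_channels) that is added to the transformer's latent input at every denoising step; the monkey-patched forward, the dtype handling, and the broadcasting are likewise assumptions to adapt to the actual training code.

# Hedged sketch only; see the assumptions stated above.
prompt = "a photo of a red fox standing in fresh snow"

with torch.no_grad():
    prompt_embeds, _, pooled_embeds, _ = pipe.encode_prompt(prompt=prompt, prompt_2=None, prompt_3=None)
    bias = mod(prompt_embeds.float()).to(torch.bfloat16)  # assumed call signature and output shape (batch, in_channels)

# Wrap the transformer forward so the bias is added to the latent input at every step.
original_forward = pipe.transformer.forward

def biased_forward(hidden_states, *args, **kwargs):
    hidden_states = hidden_states + bias[..., None, None]  # broadcast over height/width
    return original_forward(hidden_states, *args, **kwargs)

pipe.transformer.forward = biased_forward

image = pipe(prompt, num_inference_steps=28, guidance_scale=4.5).images[0]
image.save("fox.png")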

Files

  • token_mod.pt: adapter weights
  • run.json: training-run metadata (see the snippet after this list)
  • manifest.md: human-readable summary
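
To inspect the run metadata without loading the model, the file can be fetched directly. This assumes run.json is plain JSON; its exact keys are not documented here and depend on the Higgsfield training run.

import json
from huggingface_hub import hf_hub_download

meta_path = hf_hub_download("phenomenalai/sd3-token-mod-1024", filename="run.json")
with open(meta_path) as f:
    run_meta = json.load(f)
print(run_meta)  # keys depend on the training run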