
Finegrain Product Placement LoRA

A lightweight LoRA (rank=8) for Flux Kontext, enabling realistic product placement in photos.

  • Built on EditNet, our pixel-perfect image editing dataset for supervised fine-tuning.
  • Designed for product photography.
  • Trained to support bounding-box control for predictable placement (position and scale).
  • Distributed under the same license as Flux Kontext.

Examples

πŸ‘‰ Try it on Hugging Face

Features

  • Products are relit to match the scene.
  • Perspective is adjusted while staying as close as possible to the reference image, minimizing hallucinations.
  • Shadows and reflections are added for realism.
  • Can generate at any resolution, with a maximum short side of 1024px.

Limitations

  • Occlusion βœ… β€” as long as the product stays the main focus.
  • Virtual try-on ⚠️ β€” only for hats, watches, and glasses.
  • Scene leakage ⚠️ β€” the product may occasionally be transformed into another element of the scene.
  • Small text ⚠️ β€” fine details, like small text on products, may not be fully preserved.
  • Textures ⚠️ — may look washed out at times.
  • Occasional failures ⚠️ — sometimes the model does not generate anything.
  • Products in hands ❌ β€” not supported.

Usage

This is a Flux Kontext LoRA, but it uses a non-standard formulation and requires specific input preparation.

For that reason, it cannot be used with the official Diffusers pipeline. Instead, use it through the finegrain-toolbox, for example with uv:

HF_TOKEN=hf_aBcD1234 \
    uv run --with git+https://github.com/finegrain-ai/finegrain-toolbox \
    python foo.py

where foo.py contains the following script:
from pathlib import Path

import torch
from finegrain_toolbox.flux import Model, TextEncoder
from finegrain_toolbox.processors import product_placement
from huggingface_hub import hf_hub_download
from PIL import Image

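# Run inference on the GPU in bfloat16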
device = torch.device("cuda")
dtype = torch.bfloat16

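# Load the Flux Kontext base model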
model = Model.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    device=device,
    dtype=dtype,
)

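# Load the matching text encoder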
text_encoder = TextEncoder.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev",
    device=device,
    dtype=dtype,
)

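# Download the LoRA weights from the Hugging Face Hub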
lora_path = Path(
    hf_hub_download(
        repo_id="finegrain/finegrain-product-placement-lora",
        filename="finegrain-placement-v1-rank8.safetensors",
    )
)

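# Encode the edit prompt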
prompt = text_encoder.encode("Add this in the box")

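# Load the product placement LoRA into the transformer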
model.transformer.load_lora_adapter(lora_path, adapter_name="inserter")

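# Scene to edit, product reference image, and the box where the product should be placed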
scene_image = Image.open("scene.webp")
reference = Image.open("reference.webp")
bbox = (1085, 1337, 1737, 3077)

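# Run the product placement processor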
result = product_placement.process(
    model=model,
    scene=scene_image,
    reference=reference,
    bbox=bbox,
    prompt=prompt,
)

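# Save the edited scene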
result.output.save("output.webp")
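
The bounding box controls where and at what scale the product is placed. To sanity-check a box before running the model, you can draw it on a copy of the scene with PIL; a minimal sketch (the preview file name is illustrative, not part of the toolbox):

from PIL import Image, ImageDraw

scene_image = Image.open("scene.webp")
bbox = (1085, 1337, 1737, 3077)

# Draw the placement box on a copy of the scene for a quick visual check.
preview = scene_image.copy()
ImageDraw.Draw(preview).rectangle(bbox, outline="red", width=8)
preview.save("bbox_preview.webp")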

Citation

@misc{fg2025pplace,
  author = {The Finegrain Team},
  title = {Finegrain Product Placement LoRA},
  year = {2025},
  publisher = {HuggingFace},
}