Read in Chinese

HunyuanImage-2.1: An Efficient Diffusion Model for High-Resolution (2K) Text-to-Image Generation

πŸ‘‹ Join our WeChat


This repo contains PyTorch model definitions, pretrained weights, and inference/sampling code for HunyuanImage-2.1. You can find more visualizations on our project page.

πŸ”₯πŸ”₯πŸ”₯ Latest Updates

  • September 8, 2025: πŸš€ Released inference code and model weights for HunyuanImage-2.1.

πŸŽ₯ Demo

HunyuanImage 2.1 Demo



Abstract

We present HunyuanImage-2.1, a highly efficient text-to-image model that is capable of generating 2K (2048 Γ— 2048) resolution images. Leveraging an extensive dataset and structured captions involving multiple expert models, we significantly enhance text-image alignment capabilities. The model employs a highly expressive VAE with a (32 Γ— 32) spatial compression ratio, substantially reducing computational costs.

Our architecture consists of two stages:

  1. Base Text-to-Image Model: The first stage is a text-to-image model that utilizes two text encoders: a multimodal large language model (MLLM) to improve image-text alignment, and a multi-language, character-aware encoder to enhance text rendering across various languages. This stage features a single- and dual-stream diffusion transformer with 17 billion parameters. To optimize aesthetics and structural coherence, we apply reinforcement learning from human feedback (RLHF).
  2. Refiner Model: The second stage introduces a refiner model that further enhances image quality and clarity, while minimizing artifacts.

Additionally, we developed the PromptEnhancer module to further boost model performance, and employed meanflow distillation for efficient inference. HunyuanImage-2.1 demonstrates robust semantic alignment and cross-scenario generalization, leading to improved consistency between text and image, enhanced control of scene details, character poses, and expressions, and the ability to generate multiple objects with distinct descriptions.

HunyuanImage-2.1 Overall Pipeline

Training Data and Caption

Structured captions provide hierarchical semantic information at short, medium, long, and extra-long levels, significantly enhancing the model’s responsiveness to complex semantics. Innovatively, an OCR agent and IP RAG are introduced to address the shortcomings of general VLM captioners in dense text and world knowledge descriptions, while a bidirectional verification strategy ensures caption accuracy.
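
To make the hierarchical caption structure concrete, below is a minimal sketch of what a structured-caption training record might look like. The field names and schema are illustrative assumptions, not the actual training format.

# Hypothetical structured-caption record (field names are illustrative
# assumptions, not the actual training schema).
caption_record = {
    "image_id": "example_000001",
    # Hierarchical captions at four granularity levels, as described above.
    "caption_short": "A penguin plush toy painting at an easel.",
    "caption_medium": "A cartoon penguin plush toy in a red scarf and beret "
                      "paints an oil painting in a studio.",
    "caption_long": "...",        # full scene, attributes, and actions
    "caption_extra_long": "...",  # exhaustive description incl. composition
    # Output of the OCR agent for dense in-image text.
    "ocr_text": ["Tencent"],
    # Entities retrieved by IP RAG to supply world knowledge / named IPs.
    "ip_entities": ["Mona Lisa"],
    # Result of the bidirectional verification pass.
    "verified": True,
}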

Text-to-Image Model Architecture

HunyuanImage 2.1 Architecture

Core Components:

  • High-Compression VAE with REPA Training Acceleration:
    • A VAE with a 32Γ— compression rate drastically reduces the number of input tokens for the DiT model. By aligning its feature space with DINOv2 features, we facilitate the training of high-compression VAEs. As a result, our model generates 2K images with the same token length (and thus similar inference time) as other models require for 1K images, achieving superior inference efficiency.
    • Multi-bucket, multi-resolution REPA loss aligns DiT features with a high-dimensional semantic feature space, accelerating model convergence.
  • Dual Text Encoder:
    • A vision-language multimodal encoder is employed to better understand scene descriptions, character actions, and detailed requirements.
    • A multilingual ByT5 text encoder is introduced to specialize in text generation and multilingual expression.
  • Network: A single- and dual-stream diffusion transformer with 17 billion parameters.
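
Two points in the list above can be made concrete with a short sketch: the token-count arithmetic behind the 32Γ— VAE efficiency claim, and a minimal REPA-style alignment loss that matches projected DiT features to frozen DINOv2 features. The projector and feature shapes are assumptions for illustration, not the repository's actual implementation.

import torch.nn.functional as F

# --- Token-count arithmetic behind the 32x VAE claim ---------------------
# 2K generation with a 32x-compressing VAE uses the same latent grid (and
# hence a similar DiT cost) as 1K generation with the common 16x effective
# downsampling (e.g. 8x VAE plus 2x2 patch embedding).
tokens_2k_32x = (2048 // 32) ** 2   # 64 x 64 = 4096 tokens
tokens_1k_16x = (1024 // 16) ** 2   # 64 x 64 = 4096 tokens
assert tokens_2k_32x == tokens_1k_16x

# --- Minimal REPA-style alignment loss (illustrative sketch) -------------
def repa_alignment_loss(dit_features, dino_features, projector):
    """dit_features: (B, N, C_dit); dino_features: (B, N, C_dino) from a frozen DINOv2."""
    projected = F.normalize(projector(dit_features), dim=-1)  # project into DINOv2 space
    target = F.normalize(dino_features, dim=-1)
    # Negative cosine similarity, averaged over tokens and batch.
    return -(projected * target).sum(dim=-1).mean()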

Reinforcement Learning from Human Feedback

Two-Stage Post-Training with Reinforcement Learning: Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) are applied sequentially in two post-training stages. We introduce a Reward Distribution Alignment algorithm, which innovatively incorporates high-quality images as selected samples to ensure stable and improved reinforcement learning outcomes.

Rewriting Model

HunyuanImage 2.1 Architecture

  • The first systematic industrial-level rewriting model. SFT training structurally rewrites user text instructions to enrich visual expression, while GRPO training employs a fine-grained semantic AlignEvaluator reward model to substantially improve the semantics of images generated from rewritten text. The AlignEvaluator covers 6 major categories and 24 fine-grained assessment points. PromptEnhancer supports both Chinese and English rewriting and demonstrates general applicability in enhancing semantics for both open-source and proprietary text-to-image models.

Model Distillation

We propose a novel distillation method based on meanflow that addresses the key challenges of instability and inefficiency inherent in standard meanflow training. This approach enables high-quality image generation with only a few sampling steps. To our knowledge, this is the first successful application of meanflow to an industrial-scale model.
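
For intuition, here is a minimal, hypothetical sketch of few-step sampling with a mean-velocity ("meanflow") model: the network predicts the average velocity over an interval, so one large step per interval replaces many small ODE steps. Function and argument names are illustrative assumptions, not the repository's actual API.

import torch

def meanflow_sample(mean_velocity_model, z, num_steps=8):
    # Partition the time interval [1, 0] and take one large step per sub-interval.
    times = torch.linspace(1.0, 0.0, num_steps + 1)
    for t, r in zip(times[:-1], times[1:]):
        u = mean_velocity_model(z, r, t)  # predicted average velocity over [r, t]
        z = z - (t - r) * u               # single step from time t down to time r
    return z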

πŸŽ‰ HunyuanImage-2.1 Key Features

  • High-Quality Generation: Efficiently produces ultra-high-definition (2K) images with cinematic composition.
  • Multilingual Support: Provides native support for both Chinese and English prompts.
  • Advanced Architecture: Built on a multi-modal, single- and dual-stream combined DiT (Diffusion Transformer) backbone.
  • Glyph-Aware Processing: Utilizes ByT5's text rendering capabilities for improved text generation accuracy.
  • Flexible Aspect Ratios: Supports a variety of image aspect ratios (1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3).
  • Prompt Enhancement: Automatically rewrites prompts to improve descriptive accuracy and visual quality.

Prompt Enhancement Demo

To improve the quality and detail of generated images, we use a prompt rewriting model. This model automatically enhances user-provided text prompts by adding detailed and descriptive information.


πŸ“ˆ Comparisons

SSAE Evaluation

SSAE (Structured Semantic Alignment Evaluation) is an intelligent evaluation metric for image-text alignment based on advanced multimodal large language models (MLLMs). We extracted 3500 key points across 12 categories, then used MLLMs to automatically evaluate and score the generated images against these key points based on their visual content. Mean Image Accuracy averages the key-point scores within each image and then across images, while Global Accuracy averages the scores directly across all key points from all images.
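
As a minimal sketch of the two aggregation schemes, assuming scores[i] holds the per-key-point scores assigned by the MLLM evaluator for image i:

def mean_image_accuracy(scores):
    # Average the key-point scores within each image, then average across images.
    per_image = [sum(s) / len(s) for s in scores if s]
    return sum(per_image) / len(per_image)

def global_accuracy(scores):
    # Pool every key point from every image and average once.
    flat = [x for s in scores for x in s]
    return sum(flat) / len(flat)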

| Model | Open Source | Mean Image Accuracy | Global Accuracy | Primary Subject (Noun) | Primary Subject (Key Attributes) | Primary Subject (Other Attributes) | Primary Subject (Action) | Secondary Subject (Noun) | Secondary Subject (Attributes) | Secondary Subject (Action) | Scene (Noun) | Scene (Attributes) | Scene (Shot) | Other (Style) | Other (Composition) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| FLUX-dev | βœ… | 0.7122 | 0.6995 | 0.7965 | 0.7824 | 0.5993 | 0.5777 | 0.7950 | 0.6826 | 0.6923 | 0.8453 | 0.8094 | 0.6452 | 0.7096 | 0.6190 |
| Seedream-3.0 | ❌ | 0.8827 | 0.8792 | 0.9490 | 0.9311 | 0.8242 | 0.8177 | 0.9747 | 0.9103 | 0.8400 | 0.9489 | 0.8848 | 0.7582 | 0.8726 | 0.7619 |
| Qwen-Image | βœ… | 0.8854 | 0.8828 | 0.9502 | 0.9231 | 0.8351 | 0.8161 | 0.9938 | 0.9043 | 0.8846 | 0.9613 | 0.8978 | 0.7634 | 0.8548 | 0.8095 |
| GPT-Image | ❌ | 0.8952 | 0.8929 | 0.9448 | 0.9289 | 0.8655 | 0.8445 | 0.9494 | 0.9283 | 0.8800 | 0.9432 | 0.9017 | 0.7253 | 0.8582 | 0.7143 |
| HunyuanImage 2.1 | βœ… | 0.8888 | 0.8832 | 0.9339 | 0.9341 | 0.8363 | 0.8342 | 0.9627 | 0.8870 | 0.9615 | 0.9448 | 0.9254 | 0.7527 | 0.8689 | 0.7619 |

The SSAE results show that our model currently achieves the best semantic alignment among open-source models and comes very close to closed-source commercial models (GPT-Image).

GSB Evaluation

Human Evaluation with Other Models

We adopted the GSB (Good/Same/Bad) evaluation method, commonly used to assess the relative performance of two models from an overall image-perception perspective. In total, we used 1000 text prompts and generated an equal number of image samples for all compared models in a single run. For a fair comparison, we ran inference only once per prompt, avoiding any cherry-picking of results, and kept the default settings for all baseline models. The evaluation was performed by more than 100 professional evaluators.

HunyuanImage 2.1 achieved a relative win rate of -1.36% against Seedream 3.0 (closed-source) and +2.89% against Qwen-Image (open-source). These results demonstrate that HunyuanImage 2.1, as an open-source model, has reached an image generation quality comparable to closed-source commercial models (Seedream 3.0), while showing a clear advantage over comparable open-source models (Qwen-Image). This validates the technical advancement and practical value of HunyuanImage 2.1 in text-to-image generation tasks.
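
For reference, a relative win rate in GSB evaluations is commonly computed as (Good - Bad) divided by the total number of votes; the exact formula used here is not specified in this document, so the sketch below is an assumption for illustration.

def relative_win_rate(good, same, bad):
    # Common convention: (wins - losses) / total votes; ties ("Same") only enlarge the denominator.
    return (good - bad) / (good + same + bad)

# Hypothetical vote counts for illustration only (not the actual evaluation data).
print(f"{relative_win_rate(300, 400, 310):+.2%}")  # -> -0.99%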

πŸ“œ System Requirements

Hardware and OS Requirements:

  • NVIDIA GPU with CUDA support.

    Minimum requirement for now: 36 GB of GPU memory for 2048x2048 image generation.

    ✨ An FP8-quantized model is coming soon, enabling even lower GPU memory requirements for inference. Stay tuned πŸ‘€!

    Note: The memory requirements above are measured with model CPU offloading enabled. If your GPU has sufficient memory, you may disable offloading for improved inference speed.

  • Supported operating system: Linux.

πŸ› οΈ Dependencies and Installation

  1. Clone the repository:
git clone https://github.com/Tencent-Hunyuan/HunyuanImage-2.1.git
cd HunyuanImage-2.1
  2. Install dependencies:
pip install -r requirements.txt
pip install flash-attn==2.7.3 --no-build-isolation
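
Optionally, a quick sanity check (not part of the official installation steps) can confirm that PyTorch sees a CUDA device and that flash-attn was installed successfully:

# Optional post-install sanity check.
import torch
import flash_attn

assert torch.cuda.is_available(), "A CUDA-capable GPU is required."
print("GPU:", torch.cuda.get_device_name(0))
print("flash-attn version:", flash_attn.__version__)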

🧱 Download Pretrained Models

Details on downloading the pretrained models are provided here.

πŸ”‘ Usage

HunyuanImage-2.1 only supports 2K image generation (e.g., 2048x2048 for 1:1 images, 2560x1536 for 16:9 images, etc.). Generating images at 1K resolution will result in artifacts. Additionally, we recommend using the full generation pipeline for better quality (i.e., enabling both prompt enhancement and the refiner).

import torch
from hyimage.diffusion.pipelines.hunyuanimage_pipeline import HunyuanImagePipeline

# Supported model_name: hunyuanimage-v2.1, hunyuanimage-v2.1-distilled
model_name = "hunyuanimage-v2.1"
pipe = HunyuanImagePipeline.from_pretrained(model_name=model_name, torch_dtype='bf16')
pipe = pipe.to("cuda")

prompt = "A cute, cartoon-style anthropomorphic penguin plush toy with fluffy fur, standing in a painting studio, wearing a red knitted scarf and a red beret with the word β€œTencent” on it, holding a paintbrush with a focused expression as it paints an oil painting of the Mona Lisa, rendered in a photorealistic photographic style."
image = pipe(
    prompt=prompt,
    # Examples of supported resolutions and aspect ratios for HunyuanImage-2.1:
    # 16:9  -> width=2560, height=1536
    # 4:3   -> width=2304, height=1792
    # 1:1   -> width=2048, height=2048
    # 3:4   -> width=1792, height=2304
    # 9:16  -> width=1536, height=2560
    # Please use one of the above width/height pairs for best results.
    width=2048,
    height=2048,
    use_reprompt=True,  # Enable prompt enhancement
    use_refiner=True,   # Enable refiner model
    # For the distilled model, use 8 steps for faster inference.
    # For the non-distilled model, use 50 steps for better quality.
    num_inference_steps=8 if "distilled" in model_name else 50, 
    guidance_scale=3.5,
    shift=5,
    seed=649151,
)

image.save("generated_image.png")
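
For few-step inference with the meanflow-distilled checkpoint, the same pipeline API can be used with the model name and step count noted in the comments above; the remaining settings are kept unchanged from the example and may need tuning for the distilled model.

from hyimage.diffusion.pipelines.hunyuanimage_pipeline import HunyuanImagePipeline

# Few-step inference with the distilled model (name and step count taken from
# the comments in the example above).
pipe = HunyuanImagePipeline.from_pretrained(model_name="hunyuanimage-v2.1-distilled", torch_dtype='bf16')
pipe = pipe.to("cuda")

image = pipe(
    prompt="A cute penguin plush toy painting in a studio.",
    width=2048,
    height=2048,
    use_reprompt=True,
    use_refiner=True,
    num_inference_steps=8,  # distilled model: 8 steps for faster inference
    guidance_scale=3.5,
    shift=5,
    seed=649151,
)
image.save("generated_image_distilled.png")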

πŸ”— BibTeX

If you find this project useful for your research and applications, please cite as:

@misc{HunyuanImage-2.1,
  title={HunyuanImage 2.1: An Efficient Diffusion Model for High-Resolution (2K) Text-to-Image Generation},
  author={Tencent Hunyuan Team},
  year={2025},
  howpublished={\url{https://github.com/Tencent-Hunyuan/HunyuanImage-2.1}},
}

Acknowledgements

We would like to thank the following open-source projects and communities for their contributions to open research and exploration: Qwen, FLUX, diffusers and HuggingFace.

GitHub Star History

Star History Chart