- SwapAnything: Enabling Arbitrary Object Swapping in Personalized Visual Editing
  Paper • 2404.05717 • Published • 24
- ByteEdit: Boost, Comply and Accelerate Generative Image Editing
  Paper • 2404.04860 • Published • 24
- SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions
  Paper • 2403.16627 • Published • 20
- Personalized Face Inpainting with Diffusion Models by Parallel Visual Attention
  Paper • 2312.03556 • Published • 1
Collections including paper arxiv:2301.00553
- StreamMultiDiffusion: Real-Time Interactive Generation with Region-Based Semantic Control
  Paper • 2403.09055 • Published • 24
- GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models
  Paper • 2112.10741 • Published • 3
- Lightweight Image Inpainting by Stripe Window Transformer with Joint Attention to CNN
  Paper • 2301.00553 • Published • 2
- ObjectDrop: Bootstrapping Counterfactuals for Photorealistic Object Removal and Insertion
  Paper • 2403.18818 • Published • 25
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows
  Paper • 2107.00652 • Published • 2
- Cross-Shaped Windows Transformer with Self-supervised Pretraining for Clinically Significant Prostate Cancer Detection in Bi-parametric MRI
  Paper • 2305.00385 • Published • 2
- 2nd Place Solution to Google Landmark Recognition Competition 2021
  Paper • 2110.02638 • Published • 2
- BOAT: Bilateral Local Attention Vision Transformer
  Paper • 2201.13027 • Published • 2
- FaceChain-SuDe: Building Derived Class to Inherit Category Attributes for One-shot Subject-Driven Generation
  Paper • 2403.06775 • Published • 3
- An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
  Paper • 2010.11929 • Published • 6
- Data Incubation -- Synthesizing Missing Data for Handwriting Recognition
  Paper • 2110.07040 • Published • 2
- A Mixture of Expert Approach for Low-Cost Customization of Deep Neural Networks
  Paper • 1811.00056 • Published • 2
- Linear Transformers with Learnable Kernel Functions are Better In-Context Models
  Paper • 2402.10644 • Published • 78
- GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
  Paper • 2305.13245 • Published • 5
- ChunkAttention: Efficient Self-Attention with Prefix-Aware KV Cache and Two-Phase Partition
  Paper • 2402.15220 • Published • 19
- Sequence Parallelism: Long Sequence Training from System Perspective
  Paper • 2105.13120 • Published • 5