
SDXL: Improving Latent Diffusion Models for High-Resolution Image Synthesis

by Synthesizing · October 3rd, 2024

Too Long; Didn't Read

SDXL is a latent diffusion model for text-to-image synthesis that significantly improves upon previous Stable Diffusion versions via a larger UNet backbone and novel conditioning techniques. It also introduces a refinement model for enhanced visual fidelity. Released in the spirit of open research and transparency in AI model development, SDXL achieves results competitive with state-of-the-art black-box generators while promoting responsible and ethical deployment.

Authors:

(1) Dustin Podell, Stability AI, Applied Research;

(2) Zion English, Stability AI, Applied Research;

(3) Kyle Lacey, Stability AI, Applied Research;

(4) Andreas Blattmann, Stability AI, Applied Research;

(5) Tim Dockhorn, Stability AI, Applied Research;

(6) Jonas Müller, Stability AI, Applied Research;

(7) Joe Penna, Stability AI, Applied Research;

(8) Robin Rombach, Stability AI, Applied Research.

Abstract and 1 Introduction

2 Improving Stable Diffusion

2.1 Architecture & Scale

2.2 Micro-Conditioning

2.3 Multi-Aspect Training

2.4 Improved Autoencoder and 2.5 Putting Everything Together

3 Future Work


Appendix

A Acknowledgements

B Limitations

C Diffusion Models

D Comparison to the State of the Art

E Comparison to Midjourney v5.1

F On FID Assessment of Generative Text-Image Foundation Models

G Additional Comparison between Single- and Two-Stage SDXL pipeline

References

Abstract

We present SDXL, a latent diffusion model for text-to-image synthesis. Compared to previous versions of Stable Diffusion, SDXL leverages a three-times-larger UNet backbone: the increase in model parameters is mainly due to more attention blocks and a larger cross-attention context, as SDXL uses a second text encoder. We design multiple novel conditioning schemes and train SDXL on multiple aspect ratios. We also introduce a refinement model that improves the visual fidelity of samples generated by SDXL via a post-hoc image-to-image technique. We demonstrate that SDXL shows drastically improved performance compared to previous versions of Stable Diffusion and achieves results competitive with those of state-of-the-art black-box image generators. In the spirit of promoting open research and fostering transparency in large model training and evaluation, we provide access to code and model weights.
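To ground the abstract's mention of a larger cross-attention context: with two text encoders, the per-token embeddings can be concatenated along the channel axis before conditioning the UNet's cross-attention layers. The following is a minimal sketch with placeholder tensors; the 768- and 1280-dimensional widths correspond to the CLIP ViT-L and OpenCLIP ViT-bigG encoders discussed in Sec. 2.1, and the tensors themselves are random stand-ins for illustration only.

```python
# Sketch: combining two text encoders into one cross-attention context.
# Embedding widths follow the encoders named in Sec. 2.1 (CLIP ViT-L: 768-d,
# OpenCLIP ViT-bigG: 1280-d); the tensors here are random placeholders.
import torch

batch, seq_len = 1, 77
emb_vit_l = torch.randn(batch, seq_len, 768)      # first text encoder output
emb_vit_bigg = torch.randn(batch, seq_len, 1280)  # second text encoder output

# Concatenating along the channel axis yields a 2048-d context per token,
# enlarging the conditioning signal fed to the UNet's cross-attention.
context = torch.cat([emb_vit_l, emb_vit_bigg], dim=-1)
print(context.shape)  # torch.Size([1, 77, 2048])
```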

1 Introduction

The last year has brought enormous leaps in deep generative modeling across various data domains, such as natural language [50], audio [17], and visual media [38, 37, 40, 44, 15, 3, 7]. In this report, we focus on the latter and unveil SDXL, a drastically improved version of Stable Diffusion. Stable Diffusion is a latent text-to-image diffusion model (DM) which serves as the foundation for an array of recent advancements in, e.g., 3D classification [43], controllable image editing [54], image personalization [10], synthetic data augmentation [48], graphical user interface prototyping [51], etc. Remarkably, the scope of applications has been extraordinarily extensive, encompassing fields as diverse as music generation [9] and reconstructing images from fMRI brain scans [49].


User studies demonstrate that SDXL consistently surpasses all previous versions of Stable Diffusion by a significant margin (see Fig. 1). In this report, we present the design choices that lead to this boost in performance, encompassing i) a 3× larger UNet backbone compared to previous Stable Diffusion models (Sec. 2.1), ii) two simple yet effective additional conditioning techniques (Sec. 2.2) which do not require any form of additional supervision, and iii) a separate diffusion-based refinement model which applies a noising-denoising process [28] to the latents produced by SDXL to improve the visual quality of its samples (Sec. 2.5).
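To make the two-stage setup concrete, here is a minimal sketch of a base-plus-refiner pipeline using the Hugging Face diffusers library. The model identifiers, prompt, and inference settings are illustrative assumptions, not the paper's exact configuration; the key idea is that the base model hands off latents, on which the refiner performs a noising-denoising pass before decoding.

```python
# Minimal sketch of the two-stage SDXL pipeline (base model + latent refiner).
# Model IDs and parameters below are illustrative assumptions, not the
# paper's exact training/inference configuration.
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0", torch_dtype=torch.float16
).to("cuda")

prompt = "a photograph of an astronaut riding a horse"

# Stage 1: the base model returns latents instead of a decoded image.
latents = base(prompt=prompt, output_type="latent").images

# Stage 2: the refiner applies a noising-denoising (SDEdit-style) pass
# on those latents to sharpen high-frequency detail, then decodes.
image = refiner(prompt=prompt, image=latents).images[0]
image.save("astronaut.png")
```

Because the base stage hands off latents directly, the VAE decode happens only once, after refinement.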


A major concern in the field of visual media creation is that, while black-box models are often recognized as state-of-the-art, the opacity of their architectures prevents their performance from being faithfully assessed and validated. This lack of transparency hampers reproducibility, stifles innovation, and prevents the community from building upon these models to further the progress of science and art. Moreover, these closed-source strategies make it challenging to assess the biases and limitations of these models in an impartial and objective way, which is crucial for their responsible and ethical deployment. With SDXL we are releasing an open model that achieves competitive performance with black-box image generation models (see Fig. 10 & Fig. 11).


This paper is available on arxiv under CC BY 4.0 DEED license.