In machine learning, diffusion models, also known as diffusion probabilistic models or score-based generative models, are a class of latent variable generative models. A diffusion model consists of three major components: the forward process, the reverse process, and the sampling procedure.[1] The goal of diffusion models is to learn a diffusion process that generates a probability distribution for a given dataset from which we can then sample new elements. They learn the latent structure of a dataset by modeling the way in which data points diffuse through their latent space.[2]

In the case of computer vision, diffusion models can be applied to a variety of tasks, including image denoising, inpainting, super-resolution, and image generation. This typically involves training a neural network to sequentially denoise images blurred with Gaussian noise.[2][3] The model is trained to reverse the process of adding noise to an image. After training to convergence, it can be used for image generation by starting with an image composed of random noise for the network to iteratively denoise. Announced on 13 April 2022, OpenAI's text-to-image model DALL-E 2 is an example that uses diffusion models for both the model's prior (which produces an image embedding given a text caption) and the decoder that generates the final image.[4] Diffusion models have recently found applications in natural language processing (NLP),[5] particularly in areas like text generation[6][7] and summarization.[8]

Diffusion models are typically formulated as Markov chains and trained using variational inference.[9] Examples of generic diffusion modeling frameworks used in computer vision are denoising diffusion probabilistic models, noise conditioned score networks, and stochastic differential equations.[10]

## Denoising diffusion model

### Non-equilibrium thermodynamics

Diffusion models were introduced in 2015 as a method to learn a model that can sample from a highly complex probability distribution. They used techniques from non-equilibrium thermodynamics, especially diffusion.[11]

Consider, for example, how one might model the distribution of all naturally-occurring photos. Each image is a point in the space of all images, and the distribution of naturally-occurring photos is a "cloud" in space, which, by repeatedly adding noise to the images, diffuses out to the rest of the image space, until the cloud becomes all but indistinguishable from a Gaussian distribution ${\displaystyle N(0,I)}$. A model that can approximately undo the diffusion can then be used to sample from the original distribution. This is studied in "non-equilibrium" thermodynamics, as the starting distribution is not in equilibrium, unlike the final distribution.

The equilibrium distribution is the Gaussian distribution ${\displaystyle N(0,I)}$, with pdf ${\displaystyle \rho (x)\propto e^{-{\frac {1}{2}}\|x\|^{2}}}$. This is just the Maxwell–Boltzmann distribution of particles in a potential well ${\displaystyle V(x)={\frac {1}{2}}\|x\|^{2}}$ at temperature 1. The initial distribution, being very much out of equilibrium, diffuses towards the equilibrium distribution, making biased random steps that are a sum of pure randomness (like a Brownian walker) and gradient descent down the potential well. The randomness is necessary: if the particles underwent only gradient descent, they would all fall to the origin, collapsing the distribution.

### Denoising Diffusion Probabilistic Model (DDPM)

A 2020 paper proposed the Denoising Diffusion Probabilistic Model (DDPM), which improves upon the previous method by using variational inference.[9]

#### Forward diffusion

To present the model, we need some notation.

• ${\displaystyle \beta _{1},...,\beta _{T}\in (0,1)}$ are fixed constants.
• ${\displaystyle \alpha _{t}:=1-\beta _{t}}$
• ${\displaystyle {\bar {\alpha }}_{t}:=\alpha _{1}\cdots \alpha _{t}}$
• ${\displaystyle {\tilde {\beta }}_{t}:={\frac {1-{\bar {\alpha }}_{t-1}}{1-{\bar {\alpha }}_{t}}}\beta _{t}}$
• ${\displaystyle {\tilde {\mu }}_{t}(x_{t},x_{0}):={\frac {{\sqrt {\alpha _{t}}}(1-{\bar {\alpha }}_{t-1})x_{t}+{\sqrt {{\bar {\alpha }}_{t-1}}}(1-\alpha _{t})x_{0}}{1-{\bar {\alpha }}_{t}}}}$
• ${\displaystyle N(\mu ,\Sigma )}$ is the normal distribution with mean ${\displaystyle \mu }$ and variance ${\displaystyle \Sigma }$, and ${\displaystyle N(x|\mu ,\Sigma )}$ is the probability density at ${\displaystyle x}$.
• A vertical bar denotes conditioning.
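The schedule quantities above can be tabulated directly; a minimal numpy sketch (the linear schedule and its endpoint values are illustrative assumptions, not prescribed by the text):

```python
import numpy as np

# Illustrative linear noise schedule; the specific endpoints are an assumption.
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # beta_1, ..., beta_T
alphas = 1.0 - betas                    # alpha_t = 1 - beta_t
alpha_bars = np.cumprod(alphas)         # alpha-bar_t = alpha_1 * ... * alpha_t

# tilde-beta_t = (1 - alpha-bar_{t-1}) / (1 - alpha-bar_t) * beta_t, with alpha-bar_0 = 1.
alpha_bars_prev = np.concatenate([[1.0], alpha_bars[:-1]])
tilde_betas = (1.0 - alpha_bars_prev) / (1.0 - alpha_bars) * betas

print(alpha_bars[-1])   # near zero: x_T retains almost nothing of x_0
```

Note that ${\displaystyle {\tilde {\beta }}_{t}\leq \beta _{t}}$ always holds, since ${\displaystyle 1-{\bar {\alpha }}_{t-1}\leq 1-{\bar {\alpha }}_{t}}$.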

A forward diffusion process starts at some starting point ${\displaystyle x_{0}\sim q}$, where ${\displaystyle q}$ is the probability distribution to be learned, then repeatedly adds noise to it by ${\displaystyle x_{t}={\sqrt {1-\beta _{t}}}x_{t-1}+{\sqrt {\beta _{t}}}z_{t}}$ where ${\displaystyle z_{1},...,z_{T}}$ are IID samples from ${\displaystyle N(0,I)}$. This is designed so that for any starting distribution of ${\displaystyle x_{0}}$, the distribution of ${\displaystyle x_{t}|x_{0}}$ converges to ${\displaystyle N(0,I)}$ as ${\displaystyle t\to \infty }$.

The entire diffusion process then satisfies ${\displaystyle q(x_{0:T})=q(x_{0})q(x_{1}|x_{0})\cdots q(x_{T}|x_{T-1})=q(x_{0})N(x_{1}|{\sqrt {\alpha _{1}}}x_{0},\beta _{1}I)\cdots N(x_{T}|{\sqrt {\alpha _{T}}}x_{T-1},\beta _{T}I)}$ or ${\displaystyle \ln q(x_{0:T})=\ln q(x_{0})-\sum _{t=1}^{T}{\frac {1}{2\beta _{t}}}\|x_{t}-{\sqrt {1-\beta _{t}}}x_{t-1}\|^{2}+C}$ where ${\displaystyle C}$ is a normalization constant and often omitted. In particular, we note that ${\displaystyle x_{1:T}|x_{0}}$ is a Gaussian process, which affords us considerable freedom in reparameterization. For example, by standard manipulations with the Gaussian process, ${\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},(1-{\bar {\alpha }}_{t})I\right)}$ and ${\displaystyle x_{t-1}|x_{t},x_{0}\sim N({\tilde {\mu }}_{t}(x_{t},x_{0}),{\tilde {\beta }}_{t}I)}$. In particular, notice that for large ${\displaystyle t}$, the variable ${\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},(1-{\bar {\alpha }}_{t})I\right)}$ converges to ${\displaystyle N(0,I)}$. That is, after a long enough diffusion process, we end up with some ${\displaystyle x_{T}}$ that is very close to ${\displaystyle N(0,I)}$, with all traces of the original ${\displaystyle x_{0}\sim q}$ gone.

For example, since ${\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},(1-{\bar {\alpha }}_{t})I\right)}$, we can sample ${\displaystyle x_{t}|x_{0}}$ directly "in one step", instead of going through all the intermediate steps ${\displaystyle x_{1},x_{2},...,x_{t-1}}$.
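The one-step property can be checked numerically. The following sketch compares the iterative noising recursion with the closed-form marginal on a toy 1-D "dataset" (the schedule and data values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 100
betas = np.linspace(1e-4, 0.02, T)    # illustrative schedule
alpha_bars = np.cumprod(1 - betas)

x0 = rng.normal(3.0, 0.5, size=50_000)   # toy 1-D "data" distribution

# Iterative forward process: x_t = sqrt(1 - beta_t) x_{t-1} + sqrt(beta_t) z_t
x = x0.copy()
for t in range(T):
    x = np.sqrt(1 - betas[t]) * x + np.sqrt(betas[t]) * rng.normal(size=x.shape)

# One-step sampling from the closed-form marginal x_t | x_0
x_direct = np.sqrt(alpha_bars[-1]) * x0 \
    + np.sqrt(1 - alpha_bars[-1]) * rng.normal(size=x0.shape)

print(x.mean(), x_direct.mean())   # the two sampling routes agree in distribution
```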

**Derivation by reparameterization**

We know ${\textstyle x_{t-1}|x_{0}}$ is a Gaussian, and ${\textstyle x_{t}|x_{t-1}}$ is another Gaussian. We also know that these are independent. Thus we can perform a reparameterization: ${\displaystyle x_{t-1}={\sqrt {{\bar {\alpha }}_{t-1}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t-1}}}z}$ and ${\displaystyle x_{t}={\sqrt {\alpha _{t}}}x_{t-1}+{\sqrt {1-\alpha _{t}}}z'}$ where ${\textstyle z,z'}$ are IID Gaussians.

There are 5 variables ${\textstyle x_{0},x_{t-1},x_{t},z,z'}$ and two linear equations. The two sources of randomness are ${\textstyle z,z'}$, which can be reparameterized by rotation, since the IID gaussian distribution is rotationally symmetric.

By plugging in the equations, we can solve for the first reparameterization: ${\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+\underbrace {{\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}z+{\sqrt {1-\alpha _{t}}}z'} _{={\sqrt {1-{\bar {\alpha }}_{t}}}z''}}$ where ${\textstyle z''}$ is a Gaussian with mean zero and variance one.

To find the second one, we complete the rotation matrix: ${\displaystyle {\begin{bmatrix}z''\\z'''\end{bmatrix}}={\begin{bmatrix}{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}&{\frac {\sqrt {\beta _{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}\\?&?\end{bmatrix}}{\begin{bmatrix}z\\z'\end{bmatrix}}}$

Since rotation matrices are all of the form ${\textstyle {\begin{bmatrix}\cos \theta &\sin \theta \\-\sin \theta &\cos \theta \end{bmatrix}}}$, we know the matrix must be ${\displaystyle {\begin{bmatrix}z''\\z'''\end{bmatrix}}={\begin{bmatrix}{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}&{\frac {\sqrt {\beta _{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}\\-{\frac {\sqrt {\beta _{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}&{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}\end{bmatrix}}{\begin{bmatrix}z\\z'\end{bmatrix}}}$ and since the inverse of a rotation matrix is its transpose,
${\displaystyle {\begin{bmatrix}z\\z'\end{bmatrix}}={\begin{bmatrix}{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}&-{\frac {\sqrt {\beta _{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}\\{\frac {\sqrt {\beta _{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}&{\frac {\sqrt {\alpha _{t}-{\bar {\alpha }}_{t}}}{\sqrt {1-{\bar {\alpha }}_{t}}}}\end{bmatrix}}{\begin{bmatrix}z''\\z'''\end{bmatrix}}}$

Plugging back, and simplifying, we have ${\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t}}}z''}$ and ${\displaystyle x_{t-1}={\tilde {\mu }}_{t}(x_{t},x_{0})-{\sqrt {{\tilde {\beta }}_{t}}}z'''}$

#### Backward diffusion

The key idea of DDPM is to use a neural network parametrized by ${\displaystyle \theta }$. The network takes in two arguments ${\displaystyle x_{t},t}$, and outputs a vector ${\displaystyle \mu _{\theta }(x_{t},t)}$ and a matrix ${\displaystyle \Sigma _{\theta }(x_{t},t)}$, such that each step in the forward diffusion process can be approximately undone by ${\displaystyle x_{t-1}\sim N(\mu _{\theta }(x_{t},t),\Sigma _{\theta }(x_{t},t))}$. This gives us a backward diffusion process ${\displaystyle p_{\theta }}$ defined by ${\displaystyle p_{\theta }(x_{T})=N(x_{T}|0,I)}$ and ${\displaystyle p_{\theta }(x_{t-1}|x_{t})=N(x_{t-1}|\mu _{\theta }(x_{t},t),\Sigma _{\theta }(x_{t},t))}$. The goal now is to learn the parameters such that ${\displaystyle p_{\theta }(x_{0})}$ is as close to ${\displaystyle q(x_{0})}$ as possible. To do that, we use maximum likelihood estimation with variational inference.

#### Variational inference

The ELBO inequality states that ${\displaystyle \ln p_{\theta }(x_{0})\geq E_{x_{1:T}\sim q(\cdot |x_{0})}[\ln p_{\theta }(x_{0:T})-\ln q(x_{1:T}|x_{0})]}$, and taking one more expectation, we get ${\displaystyle E_{x_{0}\sim q}[\ln p_{\theta }(x_{0})]\geq E_{x_{0:T}\sim q}[\ln p_{\theta }(x_{0:T})-\ln q(x_{1:T}|x_{0})]}$ We see that maximizing the quantity on the right would give us a lower bound on the likelihood of observed data. This allows us to perform variational inference.

Define the loss function ${\displaystyle L(\theta ):=-E_{x_{0:T}\sim q}[\ln p_{\theta }(x_{0:T})-\ln q(x_{1:T}|x_{0})]}$ and now the goal is to minimize the loss by stochastic gradient descent. The expression may be simplified to[12] ${\displaystyle L(\theta )=\sum _{t=1}^{T}E_{x_{t-1},x_{t}\sim q}[-\ln p_{\theta }(x_{t-1}|x_{t})]+E_{x_{0}\sim q}[D_{KL}(q(x_{T}|x_{0})\|p_{\theta }(x_{T}))]+C}$ where ${\displaystyle C}$ does not depend on the parameter, and thus can be ignored. Since ${\displaystyle p_{\theta }(x_{T})=N(x_{T}|0,I)}$ also does not depend on the parameter, the term ${\displaystyle E_{x_{0}\sim q}[D_{KL}(q(x_{T}|x_{0})\|p_{\theta }(x_{T}))]}$ can also be ignored. This leaves just ${\displaystyle L(\theta )=\sum _{t=1}^{T}L_{t}}$ with ${\displaystyle L_{t}=E_{x_{t-1},x_{t}\sim q}[-\ln p_{\theta }(x_{t-1}|x_{t})]}$ to be minimized.

#### Noise prediction network

Since ${\displaystyle x_{t-1}|x_{t},x_{0}\sim N({\tilde {\mu }}_{t}(x_{t},x_{0}),{\tilde {\beta }}_{t}I)}$, this suggests that we should use ${\displaystyle \mu _{\theta }(x_{t},t)={\tilde {\mu }}_{t}(x_{t},x_{0})}$; however, the network does not have access to ${\displaystyle x_{0}}$, and so it has to estimate it instead. Now, since ${\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},(1-{\bar {\alpha }}_{t})I\right)}$, we may write ${\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t}}}z}$, where ${\displaystyle z}$ is some unknown Gaussian noise. Now we see that estimating ${\displaystyle x_{0}}$ is equivalent to estimating ${\displaystyle z}$.

Therefore, let the network output a noise vector ${\displaystyle \epsilon _{\theta }(x_{t},t)}$, and let it predict ${\displaystyle \mu _{\theta }(x_{t},t)={\tilde {\mu }}_{t}\left(x_{t},{\frac {x_{t}-{\sqrt {1-{\bar {\alpha }}_{t}}}\epsilon _{\theta }(x_{t},t)}{\sqrt {{\bar {\alpha }}_{t}}}}\right)={\frac {x_{t}-\epsilon _{\theta }(x_{t},t)\beta _{t}/{\sqrt {1-{\bar {\alpha }}_{t}}}}{\sqrt {\alpha _{t}}}}}$ It remains to design ${\displaystyle \Sigma _{\theta }(x_{t},t)}$. The DDPM paper suggested not learning it (since it resulted in "unstable training and poorer sample quality"), but fixing it at some value ${\displaystyle \Sigma _{\theta }(x_{t},t)=\sigma _{t}^{2}I}$, where either ${\displaystyle \sigma _{t}^{2}=\beta _{t}}$ or ${\displaystyle {\tilde {\beta }}_{t}}$ yielded similar performance.

With this, the loss simplifies to ${\displaystyle L_{t}={\frac {\beta _{t}^{2}}{2\alpha _{t}(1-{\bar {\alpha }}_{t})\sigma _{t}^{2}}}E_{x_{0}\sim q;z\sim N(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]+C}$ which may be minimized by stochastic gradient descent. The paper noted empirically that an even simpler loss function ${\displaystyle L_{simple,t}=E_{x_{0}\sim q;z\sim N(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]}$ resulted in better models.
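The simplified loss can be illustrated on a toy case where the optimum is known in closed form. Here the "network" is a single scalar coefficient fitted by SGD, and for 1-D Gaussian data the least-squares-optimal noise prediction is linear in the noisy input (all numerical values are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
sigma2, a_bar = 4.0, 0.5        # toy data variance and a fixed alpha-bar_t (illustrative)

# "Model": eps_theta(x_t) = c * x_t, a single scalar parameter fit by SGD on L_simple.
c, lr = 0.0, 0.01
for _ in range(5000):
    x0 = rng.normal(0.0, np.sqrt(sigma2), size=256)
    z = rng.normal(size=256)
    xt = np.sqrt(a_bar) * x0 + np.sqrt(1 - a_bar) * z   # one-step forward sample
    grad = 2 * np.mean((c * xt - z) * xt)               # d/dc of the simplified loss
    c -= lr * grad

# Least-squares optimum: E[z | x_t] = sqrt(1 - a_bar) / (a_bar*sigma2 + 1 - a_bar) * x_t
c_opt = np.sqrt(1 - a_bar) / (a_bar * sigma2 + 1 - a_bar)
print(c, c_opt)   # SGD drives c toward the closed-form optimum
```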

## Score-based generative model

Score-based generative models are another formulation of diffusion modelling. They are also called noise conditional score networks (NCSN) or score matching with Langevin dynamics (SMLD).[13][14]

### Score matching

#### The idea of score functions

Consider the problem of image generation. Let ${\displaystyle x}$ represent an image, and let ${\displaystyle q(x)}$ be the probability distribution over all possible images. If we have ${\displaystyle q(x)}$ itself, then we can say for certain how likely a certain image is. However, this is intractable in general.

Most often, we are uninterested in knowing the absolute probability of a certain image. Instead, we are usually only interested in knowing how likely a certain image is compared to its immediate neighbors, e.g. how much more likely is an image of a cat compared to some small variants of it? Is it more likely if the image contains two whiskers, or three, or with some Gaussian noise added?

Consequently, we are actually quite uninterested in ${\displaystyle q(x)}$ itself, but rather, ${\displaystyle \nabla _{x}\ln q(x)}$. This has two major effects:

• One, we no longer need to normalize ${\displaystyle q(x)}$, but can use any ${\displaystyle {\tilde {q}}(x)=Cq(x)}$, where ${\displaystyle C=\int {\tilde {q}}(x)dx>0}$ is any unknown constant that is of no concern to us.
• Two, we are comparing ${\displaystyle q(x)}$ with its neighbors ${\displaystyle q(x+dx)}$, via ${\displaystyle {\frac {q(x)}{q(x+dx)}}=e^{-\langle \nabla _{x}\ln q,dx\rangle }}$

Let the score function be ${\displaystyle s(x):=\nabla _{x}\ln q(x)}$; then consider what we can do with ${\displaystyle s(x)}$.

As it turns out, ${\displaystyle s(x)}$ allows us to sample from ${\displaystyle q(x)}$ using thermodynamics. Specifically, if we have a potential energy function ${\displaystyle U(x)=-\ln q(x)}$, and a lot of particles in the potential well, then the distribution at thermodynamic equilibrium is the Boltzmann distribution ${\displaystyle q_{U}(x)\propto e^{-U(x)/k_{B}T}=q(x)^{1/k_{B}T}}$. At temperature ${\displaystyle k_{B}T=1}$, the Boltzmann distribution is exactly ${\displaystyle q(x)}$.

Therefore, to model ${\displaystyle q(x)}$, we may start with a particle sampled at any convenient distribution (such as the standard Gaussian distribution), then simulate the motion of the particle forwards according to the Langevin equation ${\displaystyle dx_{t}=-\nabla _{x_{t}}U(x_{t})dt+dW_{t}}$ and the Boltzmann distribution is, by the Fokker–Planck equation, the unique thermodynamic equilibrium. So no matter what distribution ${\displaystyle x_{0}}$ has, the distribution of ${\displaystyle x_{t}}$ converges in distribution to ${\displaystyle q}$ as ${\displaystyle t\to \infty }$.
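A sketch of Langevin sampling for the quadratic potential, i.e. a 1-D standard Gaussian target. Note the factor of sqrt(2) in the noise term: this is a common convention that makes the stationary density proportional to exp(-U) at unit temperature (the displayed equation absorbs such constants):

```python
import numpy as np

rng = np.random.default_rng(0)
# Overdamped Langevin dynamics for U(x) = x^2 / 2, whose equilibrium is N(0, 1).
# The sqrt(2) noise factor is the convention giving stationary density exp(-U).
n, dt, steps = 10_000, 0.01, 2000
x = rng.uniform(-5, 5, size=n)          # an arbitrary starting distribution
for _ in range(steps):
    x += -x * dt + np.sqrt(2 * dt) * rng.normal(size=n)

print(x.mean(), x.var())   # the cloud drifts toward the N(0, 1) equilibrium
```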

#### Learning the score function

Given a density ${\displaystyle q}$, we wish to learn a score function approximation ${\displaystyle f_{\theta }\approx \nabla \ln q}$. This is score matching.[15] Typically, score matching is formalized as minimizing the Fisher divergence ${\displaystyle E_{q}[\|f_{\theta }(x)-\nabla \ln q(x)\|^{2}]}$. By expanding the integral, and performing an integration by parts, ${\displaystyle E_{q}[\|f_{\theta }(x)-\nabla \ln q(x)\|^{2}]=E_{q}[\|f_{\theta }\|^{2}+2\nabla \cdot f_{\theta }]+C}$ giving us a loss function, also known as the Hyvärinen scoring rule, that can be minimized by stochastic gradient descent.
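The Hyvärinen objective can be evaluated without any normalizing constant. A toy sketch: fit the score of a 1-D Gaussian family, whose score is linear with constant divergence, to data by grid search (the grid and data values are illustrative); the minimizer should land near the true mean and variance:

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(2.0, 1.5, size=20_000)   # true mean 2.0, true variance 2.25

# Model family: f(x; mu, s2) = -(x - mu)/s2, the score of N(mu, s2).
# Hyvarinen objective E[f^2 + 2 f'], with f' = -1/s2; no normalizer is needed.
def hyvarinen_loss(mu, s2, x):
    f = -(x - mu) / s2
    return np.mean(f**2) + 2 * (-1.0 / s2)

mus = np.linspace(0.0, 4.0, 81)
s2s = np.linspace(0.5, 5.0, 91)
losses = [[hyvarinen_loss(m, v, data) for v in s2s] for m in mus]
i, j = np.unravel_index(np.argmin(losses), (len(mus), len(s2s)))
print(mus[i], s2s[j])   # near the true mean and variance
```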

#### Annealing the score function

Suppose we need to model the distribution of images, and we want ${\displaystyle x_{0}\sim N(0,I)}$, a white-noise image. Now, most white-noise images do not look like real images, so ${\displaystyle q(x_{0})\approx 0}$ for large swaths of ${\displaystyle x_{0}\sim N(0,I)}$. This presents a problem for learning the score function, because if there are no samples around a certain point, then we can't learn the score function at that point. If we do not know the score function ${\displaystyle \nabla _{x_{t}}\ln q(x_{t})}$ at that point, then we cannot impose the time-evolution equation on a particle: ${\displaystyle dx_{t}=\nabla _{x_{t}}\ln q(x_{t})dt+dW_{t}}$ To deal with this problem, we perform annealing. If ${\displaystyle q}$ is too different from a white-noise distribution, then progressively add noise until it is indistinguishable from one. That is, we perform a forward diffusion, then learn the score function, then use the score function to perform a backward diffusion.

### Continuous diffusion processes

#### Forward diffusion process

Consider again the forward diffusion process, but this time in continuous time: ${\displaystyle x_{t}={\sqrt {1-\beta _{t}}}x_{t-1}+{\sqrt {\beta _{t}}}z_{t}}$ By taking the limit ${\displaystyle \beta _{t}\to \beta (t)dt,{\sqrt {dt}}z_{t}\to dW_{t}}$, we obtain a continuous diffusion process, in the form of a stochastic differential equation: ${\displaystyle dx_{t}=-{\frac {1}{2}}\beta (t)x_{t}dt+{\sqrt {\beta (t)}}dW_{t}}$ where ${\displaystyle W_{t}}$ is a Wiener process (multidimensional Brownian motion).

Now, the equation is exactly a special case of the overdamped Langevin equation ${\displaystyle dx_{t}=-{\frac {D}{k_{B}T}}(\nabla _{x}U)dt+{\sqrt {2D}}dW_{t}}$ where ${\displaystyle D}$ is the diffusion tensor, ${\displaystyle T}$ is the temperature, and ${\displaystyle U}$ is the potential energy field. If we substitute in ${\displaystyle D={\frac {1}{2}}\beta (t)I,k_{B}T=1,U={\frac {1}{2}}\|x\|^{2}}$, we recover the above equation. This explains why the phrase "Langevin dynamics" is sometimes used in diffusion models.

Now the above equation is for the stochastic motion of a single particle. Suppose we have a cloud of particles distributed according to ${\displaystyle q}$ at time ${\displaystyle t=0}$; then after a long time, the cloud of particles would settle into the stable distribution ${\displaystyle N(0,I)}$. Let ${\displaystyle \rho _{t}}$ be the density of the cloud of particles at time ${\displaystyle t}$; then we have ${\displaystyle \rho _{0}=q;\quad \rho _{T}\approx N(0,I)}$ and the goal is to somehow reverse the process, so that we can start at the end and diffuse back to the beginning.

By the Fokker–Planck equation, the density of the cloud evolves according to ${\displaystyle \partial _{t}\ln \rho _{t}={\frac {1}{2}}\beta (t)\left(n+(x+\nabla \ln \rho _{t})\cdot \nabla \ln \rho _{t}+\Delta \ln \rho _{t}\right)}$ where ${\displaystyle n}$ is the dimension of space, and ${\displaystyle \Delta }$ is the Laplace operator.

#### Backward diffusion process

If we have solved ${\displaystyle \rho _{t}}$ for time ${\displaystyle t\in [0,T]}$, then we can exactly reverse the evolution of the cloud. Suppose we start with another cloud of particles with density ${\displaystyle \nu _{0}=\rho _{T}}$, and let the particles in the cloud evolve according to ${\displaystyle dy_{t}={\frac {1}{2}}\beta (T-t)y_{t}dt+\beta (T-t)\underbrace {\nabla _{y_{t}}\ln \rho _{T-t}\left(y_{t}\right)} _{\text{score function }}dt+{\sqrt {\beta (T-t)}}dW_{t}}$ then by plugging into the Fokker–Planck equation, we find that ${\displaystyle \partial _{t}\rho _{T-t}=\partial _{t}\nu _{t}}$. Thus this cloud of points is the original cloud, evolving backwards.[16]

### Noise conditional score network (NCSN)

At the continuous limit, ${\displaystyle {\bar {\alpha }}_{t}=(1-\beta _{1})\cdots (1-\beta _{t})=e^{\sum _{i}\ln(1-\beta _{i})}\to e^{-\int _{0}^{t}\beta (s)ds}}$ and so ${\displaystyle x_{t}|x_{0}\sim N\left(e^{-{\frac {1}{2}}\int _{0}^{t}\beta (s)ds}x_{0},\left(1-e^{-\int _{0}^{t}\beta (s)ds}\right)I\right)}$ In particular, we see that we can directly sample from any point in the continuous diffusion process without going through the intermediate steps, by first sampling ${\displaystyle x_{0}\sim q,z\sim N(0,I)}$, then computing ${\displaystyle x_{t}=e^{-{\frac {1}{2}}\int _{0}^{t}\beta (s)ds}x_{0}+{\sqrt {1-e^{-\int _{0}^{t}\beta (s)ds}}}z}$. That is, we can quickly sample ${\displaystyle x_{t}\sim \rho _{t}}$ for any ${\displaystyle t\geq 0}$.

Now, define a certain probability distribution ${\displaystyle \gamma }$ over ${\displaystyle [0,\infty )}$; then the score-matching loss function is defined as the expected Fisher divergence: ${\displaystyle L(\theta )=E_{t\sim \gamma ,x_{t}\sim \rho _{t}}[\|f_{\theta }(x_{t},t)\|^{2}+2\nabla \cdot f_{\theta }(x_{t},t)]}$ After training, ${\displaystyle f_{\theta }(x_{t},t)\approx \nabla \ln \rho _{t}}$, so we can perform the backwards diffusion process by first sampling ${\displaystyle x_{T}\sim N(0,I)}$, then integrating the SDE from ${\displaystyle t=T}$ to ${\displaystyle t=0}$: ${\displaystyle x_{t-dt}=x_{t}+{\frac {1}{2}}\beta (t)x_{t}dt+\beta (t)f_{\theta }(x_{t},t)dt+{\sqrt {\beta (t)}}dW_{t}}$ This may be done by any SDE integration method, such as the Euler–Maruyama method.
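A sketch of this backward integration in one dimension, with the trained network replaced by the analytic score of a Gaussian toy distribution (for Gaussian data the perturbed density stays Gaussian, so the score is available in closed form; the constant beta, the horizon T, and the data parameters are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
beta = 1.0                      # constant beta(t), illustrative
mu0, s0 = 2.0, 0.5              # toy data distribution q = N(mu0, s0^2)

# For Gaussian data, rho_t is Gaussian, so the exact score stands in for f_theta:
def score(x, t):
    m = np.exp(-0.5 * beta * t) * mu0
    v = np.exp(-beta * t) * s0**2 + 1 - np.exp(-beta * t)
    return -(x - m) / v

# Backward SDE, integrated by Euler-Maruyama from t = T down to t = 0.
T, dt = 8.0, 0.001
x = rng.normal(size=20_000)     # x_T ~ N(0, I), close to rho_T
t = T
while t > 0:
    x = x + 0.5 * beta * x * dt + beta * score(x, t) * dt \
          + np.sqrt(beta * dt) * rng.normal(size=x.shape)
    t -= dt

print(x.mean(), x.std())   # close to mu0 = 2.0 and s0 = 0.5
```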

The name "noise conditional score network" is explained thus:

• "network", because ${\displaystyle f_{\theta }}$ is implemented as a neural network.
• "score", because the output of the network is interpreted as approximating the score function ${\displaystyle \nabla \ln \rho _{t}}$.
• "noise conditional", because ${\displaystyle \rho _{t}}$ is equal to ${\displaystyle \rho _{0}}$ blurred by an added Gaussian noise that increases with time, and so the score function depends on the amount of noise added.

## Their equivalence

DDPM and score-based generative models are equivalent.[17] This means that a network trained using DDPM can be used as an NCSN, and vice versa.

We know that ${\displaystyle x_{t}|x_{0}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}x_{0},(1-{\bar {\alpha }}_{t})I\right)}$, so by Tweedie's formula, we have ${\displaystyle \nabla _{x_{t}}\ln q(x_{t})={\frac {1}{1-{\bar {\alpha }}_{t}}}(-x_{t}+{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}])}$ As described previously, the DDPM loss function is ${\displaystyle \sum _{t}L_{simple,t}}$ with ${\displaystyle L_{simple,t}=E_{x_{0}\sim q;z\sim N(0,I)}\left[\left\|\epsilon _{\theta }(x_{t},t)-z\right\|^{2}\right]}$ where ${\displaystyle x_{t}={\sqrt {{\bar {\alpha }}_{t}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t}}}z}$. By a change of variables, ${\displaystyle L_{simple,t}=E_{x_{0},x_{t}\sim q}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sqrt {1-{\bar {\alpha }}_{t}}}}\right\|^{2}\right]=E_{x_{t}\sim q,x_{0}\sim q(\cdot |x_{t})}\left[\left\|\epsilon _{\theta }(x_{t},t)-{\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}x_{0}}{\sqrt {1-{\bar {\alpha }}_{t}}}}\right\|^{2}\right]}$ and the term inside becomes a least squares regression, so if the network actually reaches the global minimum of loss, then we have ${\displaystyle \epsilon _{\theta }(x_{t},t)={\frac {x_{t}-{\sqrt {{\bar {\alpha }}_{t}}}E_{q}[x_{0}|x_{t}]}{\sqrt {1-{\bar {\alpha }}_{t}}}}=-{\sqrt {1-{\bar {\alpha }}_{t}}}\nabla _{x_{t}}\ln q(x_{t})}$.
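The equivalence can be seen concretely in a 1-D Gaussian toy case, where both the least-squares-optimal noise prediction and the score are available in closed form (all numerical values are illustrative):

```python
import numpy as np

# Toy data x_0 ~ N(0, s2); at a fixed alpha-bar, x_t = sqrt(a_bar) x_0 + sqrt(1 - a_bar) z.
# Both the optimal noise predictor and the score are then linear in x_t.
s2, a_bar = 4.0, 0.3                          # illustrative values
x_t = np.linspace(-3.0, 3.0, 7)

var_t = a_bar * s2 + (1.0 - a_bar)            # Var(x_t)
score = -x_t / var_t                          # grad log q(x_t), since q(x_t) = N(0, var_t)
eps_opt = np.sqrt(1.0 - a_bar) * x_t / var_t  # E[z | x_t], the least-squares optimum

# The two differ only by the factor -sqrt(1 - alpha-bar_t):
print(np.max(np.abs(eps_opt + np.sqrt(1.0 - a_bar) * score)))   # numerically zero
```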

Now, the continuous limit ${\displaystyle x_{t-1}=x_{t-dt},\beta _{t}=\beta (t)dt,z_{t}{\sqrt {dt}}=dW_{t}}$ of the backward equation ${\displaystyle x_{t-1}={\frac {x_{t}}{\sqrt {\alpha _{t}}}}-{\frac {\beta _{t}}{\sqrt {\alpha _{t}(1-{\bar {\alpha }}_{t})}}}\epsilon _{\theta }(x_{t},t)+{\sqrt {\beta _{t}}}z_{t};\quad z_{t}\sim N(0,I)}$ gives us precisely the same equation as score-based diffusion: ${\displaystyle x_{t-dt}=x_{t}(1+\beta (t)dt/2)+\beta (t)\nabla _{x_{t}}\ln q(x_{t})dt+{\sqrt {\beta (t)}}dW_{t}}$

## Main variants

### Denoising Diffusion Implicit Model (DDIM)

The original DDPM method for generating images is slow, since the forward diffusion process usually takes ${\displaystyle T\sim 1000}$ steps to make the distribution of ${\displaystyle x_{T}}$ appear close to Gaussian. However, this means the backward diffusion process also takes 1000 steps. Unlike the forward diffusion process, which can skip steps because ${\displaystyle x_{t}|x_{0}}$ is Gaussian for all ${\displaystyle t\geq 1}$, the backward diffusion process does not allow skipping steps. For example, sampling ${\displaystyle x_{t-2}|x_{t-1}\sim N(\mu _{\theta }(x_{t-1},t-1),\Sigma _{\theta }(x_{t-1},t-1))}$ requires the model to first sample ${\displaystyle x_{t-1}}$. Attempting to directly sample ${\displaystyle x_{t-2}|x_{t}}$ would require us to marginalize out ${\displaystyle x_{t-1}}$, which is generally intractable.

DDIM[18] is a method to take any model trained on the DDPM loss and use it to sample with some steps skipped, sacrificing an adjustable amount of quality. If we generalize the Markovian chain in DDPM to the non-Markovian case, DDIM corresponds to the case where the reverse process has variance equal to 0. In other words, the reverse process (and also the forward process) is deterministic. When using fewer sampling steps, DDIM outperforms DDPM.
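A sketch of deterministic DDIM sampling (the zero-noise case) on a 1-D Gaussian toy problem, where the optimal noise predictor is analytic and stands in for a trained network; only 50 of the 1000 timesteps are used (the schedule and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000
betas = np.linspace(1e-4, 0.02, T)       # illustrative schedule
a_bar = np.cumprod(1 - betas)

s2 = 4.0                                 # toy data q = N(0, s2); the optimal eps is analytic
def eps_opt(x, ab):
    return np.sqrt(1 - ab) * x / (ab * s2 + 1 - ab)

# Deterministic DDIM (eta = 0) using only 50 of the 1000 timesteps.
ts = list(range(T - 1, -1, -20))         # 999, 979, ..., 19
x = np.sqrt(a_bar[-1]) * rng.normal(0, np.sqrt(s2), 20_000) \
    + np.sqrt(1 - a_bar[-1]) * rng.normal(size=20_000)       # x_T via one-step forward
for i, t in enumerate(ts):
    ab = a_bar[t]
    ab_prev = a_bar[ts[i + 1]] if i + 1 < len(ts) else 1.0   # alpha-bar_0 := 1
    e = eps_opt(x, ab)
    x0_hat = (x - np.sqrt(1 - ab) * e) / np.sqrt(ab)         # predicted x_0
    x = np.sqrt(ab_prev) * x0_hat + np.sqrt(1 - ab_prev) * e # deterministic update

print(x.mean(), x.std())   # close to the data mean 0 and std 2
```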

### Latent diffusion model (LDM)

Since the diffusion model is a general method for modelling probability distributions, if one wants to model a distribution over images, one can first encode the images into a lower-dimensional space by an encoder, then use a diffusion model to model the distribution over encoded images. Then to generate an image, one can sample from the diffusion model, then use a decoder to decode it into an image.[19]

The encoder-decoder pair is most often a variational autoencoder (VAE).

### Classifier guidance

Suppose we wish to sample not from the entire distribution of images, but conditional on the image description. We don't want to sample a generic image, but an image that fits the description "black cat with red eyes". Generally, we want to sample from the distribution ${\displaystyle p(x|y)}$, where ${\displaystyle x}$ ranges over images, and ${\displaystyle y}$ ranges over classes of images (a description "black cat with red eyes" is just a very detailed class, and a class "cat" is just a very vague description).

Taking the perspective of the noisy channel model, we can understand the process as follows: To generate an image ${\displaystyle x}$ conditional on description ${\displaystyle y}$, we imagine that the requester really had in mind an image ${\displaystyle x}$, but the image is passed through a noisy channel and came out garbled, as ${\displaystyle y}$. Image generation is then nothing but inferring which ${\displaystyle x}$ the requester had in mind.

In other words, conditional image generation is simply "translating from a textual language into a pictorial language". Then, as in the noisy-channel model, we use Bayes' theorem to get ${\displaystyle p(x|y)\propto p(y|x)p(x)}$ That is, if we have a good model of the space of all images, and a good image-to-class translator, we get a class-to-image translator "for free". In the equation for backward diffusion, the score ${\displaystyle \nabla \ln p(x)}$ can be replaced by ${\displaystyle \nabla _{x}\ln p(x|y)=\nabla _{x}\ln p(y|x)+\nabla _{x}\ln p(x)}$ where ${\displaystyle \nabla _{x}\ln p(x)}$ is the score function, trained as previously described, and ${\displaystyle \nabla _{x}\ln p(y|x)}$ is found by using a differentiable image classifier.

### With temperature

The classifier-guided diffusion model samples from ${\displaystyle p(x|y)}$, which is concentrated around the maximum a posteriori estimate ${\displaystyle \arg \max _{x}p(x|y)}$. If we want to force the model to move towards the maximum likelihood estimate ${\displaystyle \arg \max _{x}p(y|x)}$, we can use ${\displaystyle p_{\beta }(x|y)\propto p(y|x)^{\beta }p(x)}$ where ${\displaystyle \beta >0}$ is interpretable as an inverse temperature. In the context of diffusion models, it is usually called the guidance scale. A high ${\displaystyle \beta }$ would force the model to sample from a distribution concentrated around ${\displaystyle \arg \max _{x}p(y|x)}$. This often improves the quality of generated images.[20]

This can be done simply by stochastic gradient Langevin dynamics (SGLD) with ${\displaystyle \nabla _{x}\ln p_{\beta }(x|y)=\beta \nabla _{x}\ln p(y|x)+\nabla _{x}\ln p(x)}$

### Classifier-free guidance (CFG)

If we do not have a classifier ${\displaystyle p(y|x)}$, we can still extract one out of the image model itself:[21] ${\displaystyle \nabla _{x}\ln p_{\beta }(x|y)=(1-\beta )\nabla _{x}\ln p(x)+\beta \nabla _{x}\ln p(x|y)}$ Such a model is usually trained by presenting it with both ${\displaystyle (x,y)}$ and ${\displaystyle (x,{\rm {None}})}$, allowing it to model both ${\displaystyle \nabla _{x}\ln p(x|y)}$ and ${\displaystyle \nabla _{x}\ln p(x)}$.
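A minimal sketch of combining the two predictions at sampling time; `eps_uncond` and `eps_cond` stand in for the network evaluated without and with the conditioning signal (the names and values are hypothetical):

```python
import numpy as np

# Classifier-free guidance: blend the unconditional and conditional predictions.
# guidance_scale plays the role of beta in the displayed equation.
def cfg(eps_uncond, eps_cond, guidance_scale):
    return (1 - guidance_scale) * eps_uncond + guidance_scale * eps_cond

eps_u = np.array([0.1, -0.2])   # hypothetical network outputs
eps_c = np.array([0.5, 0.4])
print(cfg(eps_u, eps_c, 1.0))   # scale 1 recovers the conditional prediction
```

A scale above 1 extrapolates past the conditional prediction, away from the unconditional one, which is what pushes samples toward the conditioning signal.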

### Samplers

Given a diffusion model, one may regard it either as a continuous process, and sample from it by integrating a SDE, or one can regard it as a discrete process, and sample from it by iterating the discrete steps. The choice of the "noise schedule" ${\displaystyle \beta _{t}}$ can also affect the quality of samples. In the DDPM perspective, one can use the DDPM itself (with noise), or DDIM (with adjustable amount of noise). The case where one adds noise is sometimes called ancestral sampling.[22] One can interpolate between noise and no noise. The amount of noise is denoted ${\displaystyle \eta }$ ("eta value") in the DDIM paper, with ${\displaystyle \eta =0}$ denoting no noise (as in deterministic DDIM), and ${\displaystyle \eta =1}$ denoting full noise (as in DDPM).

In the perspective of SDE, one can use any of the numerical integration methods, such as Euler–Maruyama method, Heun's method, linear multistep methods, etc. Just as in the discrete case, one can add an adjustable amount of noise during the integration.

A survey and comparison of samplers in the context of image generation is given in the literature.[23]

## Flow-based diffusion model

Abstractly speaking, the idea of a diffusion model is to take an unknown probability distribution (the distribution of natural-looking images) and progressively convert it to a known probability distribution (the standard Gaussian distribution), by building an absolutely continuous probability path connecting them. The probability path is in fact defined implicitly by the score function ${\displaystyle \nabla \ln p_{t}}$.

In denoising diffusion models, the forward process adds noise, and the backward process removes noise. Both the forward and backward processes are SDEs, though the forward process is integrable in closed form, so it can be done at no computational cost. The backward process is not integrable in closed form, so it must be integrated step-by-step by standard SDE solvers, which can be very expensive. The probability path in diffusion models is defined through an Itô process, and one can retrieve the deterministic process by using the probability flow ODE formulation.[2]

In flow-based diffusion models, the forward process is a deterministic flow along a time-dependent vector field, and the backward process is the same vector field run backwards. Both processes are solutions to ODEs. If the vector field is well-behaved, the ODE will also be well-behaved.

Given two distributions ${\displaystyle \pi _{0}}$ and ${\displaystyle \pi _{1}}$, a flow-based model is a time-dependent velocity field ${\displaystyle v_{t}(x)}$ in ${\displaystyle [0,1]\times \mathbb {R} ^{d}}$, such that if we start by sampling a point ${\displaystyle x\sim \pi _{0}}$ and let it move according to the velocity field:${\displaystyle {\frac {d}{dt}}\phi _{t}(x)=v_{t}(\phi _{t}(x))\quad t\in [0,1],\quad {\text{starting from }}\phi _{0}(x)=x}$we end up with a point ${\displaystyle x_{1}\sim \pi _{1}}$. The solution ${\displaystyle \phi _{t}}$ of the above ODE defines a probability path ${\displaystyle p_{t}=[\phi _{t}]_{\#}\pi _{0}}$ via the pushforward measure. In particular, ${\displaystyle [\phi _{1}]_{\#}\pi _{0}=\pi _{1}}$.

The probability path and the velocity field also satisfy the continuity equation, in the sense of probability distributions:${\displaystyle \partial _{t}p_{t}+\mathrm {div} (v_{t}\cdot p_{t})=0}$To construct a probability path, we start by constructing a conditional probability path ${\displaystyle p_{t}(x\vert z)}$ and the corresponding conditional velocity field ${\displaystyle v_{t}(x\vert z)}$ on some conditioning distribution ${\displaystyle q(z)}$. A natural choice is the Gaussian conditional probability path:${\displaystyle p_{t}(x\vert z)={\mathcal {N}}\left(m_{t}(z),\sigma _{t}^{2}I\right)}$The conditional velocity field corresponding to the geodesic path between the endpoints of the conditional Gaussian path is${\displaystyle v_{t}(x\vert z)={\frac {\sigma _{t}'}{\sigma _{t}}}(x-m_{t}(z))+m_{t}'(z)}$The probability path and velocity field are then computed by marginalizing over ${\displaystyle q(z)}$:

${\displaystyle p_{t}(x)=\int p_{t}(x\vert z)q(z)dz\qquad {\text{ and }}\qquad v_{t}(x)=\mathbb {E} _{q(z)}\left[{\frac {v_{t}(x\vert z)p_{t}(x\vert z)}{p_{t}(x)}}\right]}$
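The conditional velocity field above can be checked numerically: if ${\displaystyle x=m_{t}(z)+\sigma _{t}\epsilon }$ lies on the conditional path, then ${\displaystyle v_{t}(x\vert z)}$ equals the time derivative ${\displaystyle m_{t}'(z)+\sigma _{t}'\epsilon }$. A minimal sketch (the function and argument names are illustrative, not from any particular library):

```python
import numpy as np

def cond_velocity(x, m_t, dm_t, sigma_t, dsigma_t):
    """Conditional velocity of the Gaussian path N(m_t(z), sigma_t^2 I):
    v_t(x|z) = (sigma_t'/sigma_t) * (x - m_t(z)) + m_t'(z)."""
    return (dsigma_t / sigma_t) * (x - m_t) + dm_t
```

Substituting ${\displaystyle x=m_{t}+\sigma _{t}\epsilon }$ makes the first term collapse to ${\displaystyle \sigma _{t}'\epsilon }$, which is exactly how the check below verifies the formula.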

### Optimal Transport Flow

The idea of optimal transport flow[24] is to construct a probability path minimizing the Wasserstein metric. The distribution on which we condition is the optimal transport plan between ${\displaystyle \pi _{0}}$ and ${\displaystyle \pi _{1}}$: ${\displaystyle z=(x_{0},x_{1})}$ and ${\displaystyle q(z)=\Gamma (\pi _{0},\pi _{1})}$, where ${\displaystyle \Gamma }$ is the optimal transport plan, which can be approximated by mini-batch optimal transport.

### Rectified flow

The idea of rectified flow[25][26] is to learn a flow model such that the velocity is nearly constant along each flow path. This is beneficial because we can integrate along such a vector field with very few steps. For example, if an ODE ${\displaystyle {\dot {\phi }}_{t}(x)=v_{t}(\phi _{t}(x))}$ follows perfectly straight paths, it simplifies to ${\displaystyle \phi _{t}(x_{0})=x_{0}+t\cdot v_{0}(x_{0})}$, allowing for exact solutions in one step. In practice, we cannot reach such perfection, but when the flow field is nearly so, we can take a few large steps instead of many small steps.

(Figure: comparison of linear interpolation, rectified flow, and straightened rectified flow.)

The general idea is to start with two distributions ${\displaystyle \pi _{0}}$ and ${\displaystyle \pi _{1}}$, then construct a flow field ${\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}}$ from it, then repeatedly apply a "reflow" operation to obtain successive flow fields ${\displaystyle \phi ^{1},\phi ^{2},\dots }$, each straighter than the previous one. When the flow field is straight enough for the application, we stop.

Generally, for any time-differentiable process ${\displaystyle \phi _{t}}$, ${\displaystyle v_{t}}$ can be estimated by solving:${\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{x\sim p_{t}}\left[\lVert v_{t}(x,\theta )-v_{t}(x)\rVert ^{2}\right]\,\mathrm {d} t.}$

Rectified flow injects the strong prior that intermediate trajectories are straight. This achieves both theoretical relevance for optimal transport and computational efficiency, since ODEs with straight paths can be simulated exactly without time discretization.

Specifically, rectified flow seeks to match an ODE with the marginal distributions of the linear interpolation between points from distributions ${\displaystyle \pi _{0}}$ and ${\displaystyle \pi _{1}}$. Given observations ${\displaystyle x_{0}\sim \pi _{0}}$ and ${\displaystyle x_{1}\sim \pi _{1}}$, the canonical linear interpolation ${\displaystyle x_{t}=tx_{1}+(1-t)x_{0},t\in [0,1]}$ yields a trivial case ${\displaystyle {\dot {x}}_{t}=x_{1}-x_{0}}$, which cannot be causally simulated without ${\displaystyle x_{1}}$. To address this, ${\displaystyle x_{t}}$ is "projected" into a space of causally simulatable ODEs, by minimizing the least squares loss with respect to the direction ${\displaystyle x_{1}-x_{0}}$:${\displaystyle \min _{\theta }\int _{0}^{1}\mathbb {E} _{\pi _{0},\pi _{1},p_{t}}\left[\lVert (x_{1}-x_{0})-v_{t}(x_{t})\rVert ^{2}\right]\,\mathrm {d} t.}$
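The objective above admits a simple Monte Carlo sketch: sample pairs, interpolate linearly at a random time, and regress the velocity network onto the constant direction ${\displaystyle x_{1}-x_{0}}$. A minimal NumPy version, with `v_theta` standing in for the trained velocity network (an assumption of this sketch, not a specific implementation):

```python
import numpy as np

def rectified_flow_loss(v_theta, x0_batch, x1_batch, rng):
    """Monte Carlo estimate of the rectified-flow least-squares objective."""
    t = rng.uniform(size=(len(x0_batch), 1))          # random times in [0, 1]
    x_t = t * x1_batch + (1 - t) * x0_batch           # linear interpolation
    target = x1_batch - x0_batch                      # its time derivative
    pred = v_theta(x_t, t)
    return np.mean(np.sum((pred - target) ** 2, axis=1))
```

In practice, `v_theta` would be a neural network and the loss would be minimized by stochastic gradient descent over many sampled batches.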

The data pair ${\displaystyle (x_{0},x_{1})}$ can be any coupling of ${\displaystyle \pi _{0}}$ and ${\displaystyle \pi _{1}}$, typically independent (i.e., ${\displaystyle (x_{0},x_{1})\sim \pi _{0}\times \pi _{1}}$), obtained by randomly combining observations from ${\displaystyle \pi _{0}}$ and ${\displaystyle \pi _{1}}$. This process ensures that the trajectories closely mirror the density map of ${\displaystyle x_{t}}$ trajectories but reroute at intersections to ensure causality. This rectifying process is also known as Flow Matching,[27] Stochastic Interpolation,[28] and Alpha-Blending.[citation needed]

A distinctive aspect of rectified flow is its capability for "reflow", which straightens the trajectory of ODE paths. Denote the rectified flow ${\displaystyle \phi ^{0}=\{\phi _{t}:t\in [0,1]\}}$ induced from ${\displaystyle (x_{0},x_{1})}$ as ${\displaystyle \phi ^{0}={\mathsf {Rectflow}}((x_{0},x_{1}))}$. Recursively applying this ${\displaystyle {\mathsf {Rectflow}}(\cdot )}$ operator generates a series of rectified flows ${\displaystyle \phi ^{k+1}={\mathsf {Rectflow}}((\phi _{0}^{k}(x_{0}),\phi _{1}^{k}(x_{1})))}$. This "reflow" process not only reduces transport costs but also straightens the paths of rectified flows, making ${\displaystyle \phi ^{k}}$ paths straighter with increasing ${\displaystyle k}$.
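The data-generation half of a reflow step can be sketched as follows: integrate the current flow to pair each starting point with its endpoint, and use these re-coupled pairs to train the next, straighter flow. This is a hedged sketch with an explicit Euler integrator; names are illustrative.

```python
import numpy as np

def integrate_flow(v, x0, n_steps=100):
    """Euler-integrate dx/dt = v(x, t) from t=0 to t=1 (the flow map phi_1)."""
    x, dt = np.array(x0, dtype=float), 1.0 / n_steps
    for i in range(n_steps):
        x = x + dt * v(x, i * dt)
    return x

def reflow_pairs(v, x0_batch, n_steps=100):
    """One 'reflow' re-coupling: pair each x0 with its endpoint under the
    current flow; the next flow is then trained on these pairs."""
    x1_batch = np.stack([integrate_flow(v, x0, n_steps) for x0 in x0_batch])
    return x0_batch, x1_batch
```

Repeating this (train a flow, re-couple with it, train again) produces the sequence ${\displaystyle \phi ^{1},\phi ^{2},\dots }$ described above.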

Rectified flow includes a nonlinear extension where the linear interpolation ${\displaystyle x_{t}}$ is replaced with any time-differentiable curve that connects ${\displaystyle x_{0}}$ and ${\displaystyle x_{1}}$, given by ${\displaystyle x_{t}=\alpha _{t}x_{1}+\beta _{t}x_{0}}$. This framework encompasses DDIM and probability flow ODEs as special cases, with particular choices of ${\displaystyle \alpha _{t}}$ and ${\displaystyle \beta _{t}}$. However, when the path of ${\displaystyle x_{t}}$ is not straight, the reflow process no longer ensures a reduction in convex transport costs, and no longer straightens the paths of ${\displaystyle \phi _{t}}$.[25]

## Choice of architecture

### Diffusion model

For generating images by DDPM, we need a neural network that takes a time ${\displaystyle t}$ and a noisy image ${\displaystyle x_{t}}$, and predicts the noise ${\displaystyle \epsilon _{\theta }(x_{t},t)}$ in it. Since predicting the noise is equivalent to predicting the denoised image (which is recovered by subtracting the scaled noise from ${\displaystyle x_{t}}$), denoising architectures tend to work well. For example, the U-Net, which was found to be good for denoising images, is often used for denoising diffusion models that generate images.[29]
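The equivalence between predicting the noise and predicting the denoised image follows directly from the forward-noising formula. A minimal sketch of one noise-prediction training example, with illustrative names (`abar_t` is the cumulative product ᾱ_t of the noise schedule):

```python
import numpy as np

def ddpm_training_example(x0, abar_t, rng):
    """One (input, target) pair for noise-prediction training:
    the network sees x_t (and t) and is regressed onto eps with an L2 loss."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(abar_t) * x0 + np.sqrt(1 - abar_t) * eps
    return x_t, eps

def x0_from_eps(x_t, eps_pred, abar_t):
    """Invert the noising formula: a noise prediction determines a
    clean-image prediction, and vice versa."""
    return (x_t - np.sqrt(1 - abar_t) * eps_pred) / np.sqrt(abar_t)
```

Because `x0_from_eps` is an exact algebraic inversion, a network trained to output ${\displaystyle \epsilon }$ implicitly predicts the denoised image as well.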

For DDPM, the underlying architecture does not have to be a U-Net; it just has to predict the noise somehow. For example, the diffusion transformer (DiT) uses a Transformer to predict the mean and diagonal covariance of the noise, given the textual conditioning and the partially denoised image. It is otherwise the same as a standard U-Net-based denoising diffusion model, with a Transformer replacing the U-Net.[30]

DDPM can be used to model general data distributions, not just natural-looking images. For example, Human Motion Diffusion[31] models human motion trajectory by DDPM. Each human motion trajectory is a sequence of poses, represented by either joint rotations or positions. It uses a Transformer network to generate a less noisy trajectory out of a noisy one.

### Conditioning

The base diffusion model can only generate unconditionally from the whole distribution. For example, a diffusion model learned on ImageNet would generate images that look like a random image from ImageNet. To generate images from just one category, one would need to impose the condition. Whatever condition one wants to impose, one needs to first convert the conditioning into a vector of floating point numbers, then feed it into the underlying diffusion model neural network. However, one has freedom in choosing how to convert the conditioning into a vector.

Stable Diffusion, for example, imposes conditioning in the form of cross-attention mechanism, where the query is an intermediate representation of the image in the U-Net, and both key and value are the conditioning vectors. The conditioning can be selectively applied to only parts of an image, and new kinds of conditionings can be finetuned upon the base model, as used in ControlNet.[32]
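The mechanism can be illustrated with a bare-bones single-head cross-attention: queries come from image features, keys and values from the conditioning sequence. This is a hedged sketch; a real implementation (as in Stable Diffusion's U-Net) additionally applies learned projection matrices to form queries, keys, and values, which are omitted here.

```python
import numpy as np

def cross_attention(image_feats, cond_vecs):
    """Minimal single-head cross-attention: image features attend over
    the conditioning vectors (no learned projections in this sketch).

    image_feats: (n_pixels, d), cond_vecs: (n_tokens, d)."""
    d = image_feats.shape[-1]
    scores = image_feats @ cond_vecs.T / np.sqrt(d)   # (n_pixels, n_tokens)
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over tokens
    return weights @ cond_vecs                        # (n_pixels, d)
```

Each output row is a convex combination of conditioning vectors, which is how the text embedding steers intermediate image representations.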

As a particularly simple example, consider image inpainting. The conditions are ${\displaystyle {\tilde {x}}}$, the reference image, and ${\displaystyle m}$, the inpainting mask. The conditioning is imposed at each step of the backward diffusion process, by first sampling ${\displaystyle {\tilde {x}}_{t}\sim N\left({\sqrt {{\bar {\alpha }}_{t}}}{\tilde {x}},(1-{\bar {\alpha }}_{t})I\right)}$, a noisy version of ${\displaystyle {\tilde {x}}}$, then replacing ${\displaystyle x_{t}}$ with ${\displaystyle (1-m)\odot x_{t}+m\odot {\tilde {x}}_{t}}$, where ${\displaystyle \odot }$ means elementwise multiplication.[33]
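This replacement step is short enough to write out directly. A minimal NumPy sketch (names are illustrative; `abar_t` is ᾱ_t, and `mask == 1` marks the region taken from the reference image):

```python
import numpy as np

def inpaint_condition(x_t, x_ref, mask, abar_t, rng):
    """RePaint-style conditioning step: renoise the reference image to the
    current noise level t, then paste it into the known region."""
    ref_t = (np.sqrt(abar_t) * x_ref
             + np.sqrt(1 - abar_t) * rng.standard_normal(x_ref.shape))
    return (1 - mask) * x_t + mask * ref_t
```

The sampler alternates this pasting step with ordinary denoising steps, so the generated (unmasked) region stays consistent with the fixed (masked) region.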

Conditioning is not limited to generating images from a specific category or according to a specific caption (as in text-to-image). For example, Human Motion Diffusion[31] demonstrated generating human motion conditioned on an audio clip of a person walking (allowing motion to be synced to a soundtrack), on a video of a person running, or on a text description of human motion.

### Upscaling

Since generating an image takes a long time, one can generate a small image with a base diffusion model, then upscale it with other models. Upscaling can be done by GANs,[34] Transformers,[35] or signal-processing methods like Lanczos resampling.

Diffusion models themselves can be used to perform upscaling. A cascaded diffusion model stacks multiple diffusion models one after another, in the style of Progressive GAN. The lowest level is a standard diffusion model that generates a 32×32 image; the image is then upscaled by a diffusion model specifically trained for upscaling, and the process repeats.[29]

In more detail, the diffusion upscaler is trained as follows:[29]

• Sample ${\displaystyle (x_{0},z_{0},c)}$, where ${\displaystyle x_{0}}$ is the high-resolution image, ${\displaystyle z_{0}}$ is the same image scaled down to a low resolution, and ${\displaystyle c}$ is the conditioning, which can be the caption of the image, the class of the image, etc.
• Sample two white noises ${\displaystyle \epsilon _{x},\epsilon _{z}}$ and two time-steps ${\displaystyle t_{x},t_{z}}$. Compute the noisy versions of the high-resolution and low-resolution images: ${\displaystyle {\begin{cases}x_{t_{x}}&={\sqrt {{\bar {\alpha }}_{t_{x}}}}x_{0}+{\sqrt {1-{\bar {\alpha }}_{t_{x}}}}\epsilon _{x}\\z_{t_{z}}&={\sqrt {{\bar {\alpha }}_{t_{z}}}}z_{0}+{\sqrt {1-{\bar {\alpha }}_{t_{z}}}}\epsilon _{z}\end{cases}}}$
• Train the denoising network to predict ${\displaystyle \epsilon _{x}}$ given ${\displaystyle x_{t_{x}},z_{t_{z}},t_{x},t_{z},c}$. That is, apply gradient descent on ${\displaystyle \theta }$ on the L2 loss ${\displaystyle \|\epsilon _{\theta }(x_{t_{x}},z_{t_{z}},t_{x},t_{z},c)-\epsilon _{x}\|_{2}^{2}}$.
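The data-preparation steps above can be sketched as follows. This is a hedged NumPy illustration: `abar` stands for the array of cumulative schedule products ᾱ indexed by timestep, and the actual gradient-descent step on the denoising network is only indicated in a comment.

```python
import numpy as np

def upscaler_training_example(x0, z0, abar, t_x, t_z, rng):
    """Build one training example for the diffusion upscaler: independently
    noise the high-res image x0 and the low-res conditioning image z0."""
    eps_x = rng.standard_normal(x0.shape)
    eps_z = rng.standard_normal(z0.shape)
    x_t = np.sqrt(abar[t_x]) * x0 + np.sqrt(1 - abar[t_x]) * eps_x
    z_t = np.sqrt(abar[t_z]) * z0 + np.sqrt(1 - abar[t_z]) * eps_z
    # The network eps_theta(x_t, z_t, t_x, t_z, c) is regressed onto eps_x
    # with an L2 loss; that training step is omitted from this sketch.
    return (x_t, z_t, t_x, t_z), eps_x
```

Noising the low-resolution conditioning image as well (noise conditioning augmentation) makes the upscaler robust to imperfect outputs from the previous stage of the cascade.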

## Examples

This section collects some notable diffusion models, and briefly describes their architecture.

### OpenAI

 Main articles: DALL-E and Sora (text-to-video model)

The DALL-E series by OpenAI are text-conditional diffusion models of images.

The first version of DALL-E (2021) is not actually a diffusion model. Instead, it uses a Transformer architecture that generates a sequence of tokens, which is then converted to an image by the decoder of a discrete VAE. Released with DALL-E was the CLIP classifier, which was used by DALL-E to rank generated images according to how close the image fits the text.

GLIDE (2022-03)[36] is a 3.5-billion-parameter diffusion model, and a small version was released publicly.[4] Soon after, DALL-E 2 was released (2022-04).[37] DALL-E 2 is a 3.5-billion-parameter cascaded diffusion model that generates images from text by "inverting the CLIP image encoder", a technique they termed "unCLIP".

Sora (2024-02) is a diffusion Transformer model (DiT).

### Stability AI

 Main article: Stable Diffusion

Stable Diffusion (2022-08), released by Stability AI, consists of a denoising latent diffusion model (860 million parameters), a VAE, and a text encoder. The denoising network is a U-Net, with cross-attention blocks to allow for conditional image generation.[38][19]

Stable Diffusion 3 (2024-02)[39] changed the latent diffusion model from the UNet to a Transformer model, and so it is a DiT. It uses rectified flow.

### Google

Imagen (2022-05)[40][41] uses a T5 language model to encode the input text into embeddings. It is a cascaded diffusion model with three steps. The first step denoises white noise to a 64×64 image, conditional on the text embedding. The second step upscales the image from 64×64 to 256×256, conditional on the text embedding. The third step is similar, upscaling from 256×256 to 1024×1024. The three denoising networks are all U-Nets.

Imagen 2 (2023-12) is also diffusion-based. It can generate images based on a prompt that mixes images and text. No further information available.[42]

Veo (2024-05) generates videos by latent diffusion. The diffusion is conditioned on a vector that encodes both a text prompt and an image prompt.[43]

## External links

• Guidance: a cheat code for diffusion models. Overview of classifier guidance and classifier-free guidance, light on mathematical details.
• "Power of Diffusion Models". AstraBlog. 2022-09-25. Retrieved 2023-09-25.
• Weng, Lilian (2021-07-11). "What are Diffusion Models?". lilianweng.github.io. Retrieved 2023-09-25.

## References

1. ^ Chang, Ziyi; Koulieris, George Alex; Shum, Hubert P. H. (2023). "On the Design Fundamentals of Diffusion Models: A Survey". arXiv:2306.04542 [cs.LG].
2. ^ a b c Song, Yang; Sohl-Dickstein, Jascha; Kingma, Diederik P.; Kumar, Abhishek; Ermon, Stefano; Poole, Ben (2021-02-10). "Score-Based Generative Modeling through Stochastic Differential Equations". arXiv:2011.13456 [cs.LG].
3. ^ Gu, Shuyang; Chen, Dong; Bao, Jianmin; Wen, Fang; Zhang, Bo; Chen, Dongdong; Yuan, Lu; Guo, Baining (2021). "Vector Quantized Diffusion Model for Text-to-Image Synthesis". arXiv:2111.14822 [cs.CV].
4. ^ a b GLIDE, OpenAI, 2023-09-22, retrieved 2023-09-24
5. ^ Li, Yifan; Zhou, Kun; Zhao, Wayne Xin; Wen, Ji-Rong (August 2023). "Diffusion Models for Non-autoregressive Text Generation: A Survey". Proceedings of the Thirty-Second International Joint Conference on Artificial Intelligence. California: International Joint Conferences on Artificial Intelligence Organization. pp. 6692–6701. arXiv:2303.06574. doi:10.24963/ijcai.2023/750. ISBN 978-1-956792-03-4.
6. ^ Han, Xiaochuang; Kumar, Sachin; Tsvetkov, Yulia (2023). "SSD-LM: Semi-autoregressive Simplex-based Diffusion Language Model for Text Generation and Modular Control". Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics: 11575–11596. arXiv:2210.17432. doi:10.18653/v1/2023.acl-long.647.
7. ^ Xu, Weijie; Hu, Wenxiang; Wu, Fanyou; Sengamedu, Srinivasan (2023). "DeTiME: Diffusion-Enhanced Topic Modeling using Encoder-decoder based LLM". Findings of the Association for Computational Linguistics: EMNLP 2023. Stroudsburg, PA, USA: Association for Computational Linguistics: 9040–9057. arXiv:2310.15296. doi:10.18653/v1/2023.findings-emnlp.606.
8. ^ Zhang, Haopeng; Liu, Xiao; Zhang, Jiawei (2023). "DiffuSum: Generation Enhanced Extractive Summarization with Diffusion". Findings of the Association for Computational Linguistics: ACL 2023. Stroudsburg, PA, USA: Association for Computational Linguistics: 13089–13100. arXiv:2305.01735. doi:10.18653/v1/2023.findings-acl.828.
9. ^ a b Ho, Jonathan; Jain, Ajay; Abbeel, Pieter (2020). "Denoising Diffusion Probabilistic Models". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 6840–6851.
10. ^ Croitoru, Florinel-Alin; Hondru, Vlad; Ionescu, Radu Tudor; Shah, Mubarak (2023). "Diffusion Models in Vision: A Survey". IEEE Transactions on Pattern Analysis and Machine Intelligence. 45 (9): 10850–10869. arXiv:2209.04747. doi:10.1109/TPAMI.2023.3261988. PMID 37030794. S2CID 252199918.
11. ^ Sohl-Dickstein, Jascha; Weiss, Eric; Maheswaranathan, Niru; Ganguli, Surya (2015-06-01). "Deep Unsupervised Learning using Nonequilibrium Thermodynamics" (PDF). Proceedings of the 32nd International Conference on Machine Learning. 37. PMLR: 2256–2265.
12. ^ Weng, Lilian (2021-07-11). "What are Diffusion Models?". lilianweng.github.io. Retrieved 2023-09-24.
13. ^ "Generative Modeling by Estimating Gradients of the Data Distribution | Yang Song". yang-song.net. Retrieved 2023-09-24.
14. ^ Song, Yang; Sohl-Dickstein, Jascha; Kingma, Diederik P.; Kumar, Abhishek; Ermon, Stefano; Poole, Ben (2021-02-10). "Score-Based Generative Modeling through Stochastic Differential Equations". arXiv:2011.13456 [cs.LG].
15. ^ "Sliced Score Matching: A Scalable Approach to Density and Score Estimation | Yang Song". yang-song.net. Retrieved 2023-09-24.
16. ^ Anderson, Brian D.O. (May 1982). "Reverse-time diffusion equation models". Stochastic Processes and Their Applications. 12 (3): 313–326. doi:10.1016/0304-4149(82)90051-5. ISSN 0304-4149.
17. ^ Luo, Calvin (2022). "Understanding Diffusion Models: A Unified Perspective". arXiv:2208.11970v1 [cs.LG].
18. ^ Song, Jiaming; Meng, Chenlin; Ermon, Stefano (3 Oct 2023). "Denoising Diffusion Implicit Models". arXiv:2010.02502 [cs.LG].
19. ^ a b Rombach, Robin; Blattmann, Andreas; Lorenz, Dominik; Esser, Patrick; Ommer, Björn (13 April 2022). "High-Resolution Image Synthesis With Latent Diffusion Models". arXiv:2112.10752 [cs.CV].
20. ^ Dhariwal, Prafulla; Nichol, Alex (2021-06-01). "Diffusion Models Beat GANs on Image Synthesis". arXiv:2105.05233 [cs.LG].
21. ^ Ho, Jonathan; Salimans, Tim (2022-07-25). "Classifier-Free Diffusion Guidance". arXiv:2207.12598 [cs.LG].
22. ^ Yang, Ling; Zhang, Zhilong; Song, Yang; Hong, Shenda; Xu, Runsheng; Zhao, Yue; Zhang, Wentao; Cui, Bin; Yang, Ming-Hsuan (2022). "Diffusion Models: A Comprehensive Survey of Methods and Applications". arXiv:2209.00796 [cs.LG].
23. ^ Karras, Tero; Aittala, Miika; Aila, Timo; Laine, Samuli (2022). "Elucidating the Design Space of Diffusion-Based Generative Models". arXiv:2206.00364v2 [cs.CV].
24. ^ Tong, Alexander; Fatras, Kilian; Malkin, Nikolay; Huguet, Guillaume; Zhang, Yanlei; Rector-Brooks, Jarrid; Wolf, Guy; Bengio, Yoshua (2023-11-08). "Improving and generalizing flow-based generative models with minibatch optimal transport". Transactions on Machine Learning Research. ISSN 2835-8856.
25. ^ a b c d Liu, Xingchao; Gong, Chengyue; Liu, Qiang (2022-09-07). "Flow Straight and Fast: Learning to Generate and Transfer Data with Rectified Flow". arXiv:2209.03003 [cs.LG].
26. ^ Liu, Qiang (2022-09-29). "Rectified Flow: A Marginal Preserving Approach to Optimal Transport". arXiv:2209.14577 [stat.ML].
27. ^ Lipman, Yaron; Chen, Ricky T. Q.; Ben-Hamu, Heli; Nickel, Maximilian; Le, Matt (2023-02-08), Flow Matching for Generative Modeling, arXiv:2210.02747
28. ^ Albergo, Michael S.; Vanden-Eijnden, Eric (2023-03-09), Building Normalizing Flows with Stochastic Interpolants, arXiv:2209.15571
29. ^ a b c Ho, Jonathan; Saharia, Chitwan; Chan, William; Fleet, David J.; Norouzi, Mohammad; Salimans, Tim (2022-01-01). "Cascaded diffusion models for high fidelity image generation". The Journal of Machine Learning Research. 23 (1): 47:2249–47:2281. arXiv:2106.15282. ISSN 1532-4435.
30. ^ Peebles, William; Xie, Saining (March 2023). "Scalable Diffusion Models with Transformers". arXiv:2212.09748v2 [cs.CV].
31. ^ a b Tevet, Guy; Raab, Sigal; Gordon, Brian; Shafir, Yonatan; Cohen-Or, Daniel; Bermano, Amit H. (2022). "Human Motion Diffusion Model". arXiv:2209.14916 [cs.CV].
32. ^ Zhang, Lvmin; Rao, Anyi; Agrawala, Maneesh (2023). "Adding Conditional Control to Text-to-Image Diffusion Models". arXiv:2302.05543 [cs.CV].
33. ^ Lugmayr, Andreas; Danelljan, Martin; Romero, Andres; Yu, Fisher; Timofte, Radu; Van Gool, Luc (2022). "RePaint: Inpainting Using Denoising Diffusion Probabilistic Models". arXiv:2201.09865v4 [cs.CV].
34. ^ Wang, Xintao; Xie, Liangbin; Dong, Chao; Shan, Ying (2021). "Real-ESRGAN: Training Real-World Blind Super-Resolution With Pure Synthetic Data" (PDF). Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021. International Conference on Computer Vision. pp. 1905–1914. arXiv:2107.10833.
35. ^ Liang, Jingyun; Cao, Jiezhang; Sun, Guolei; Zhang, Kai; Van Gool, Luc; Timofte, Radu (2021). "SwinIR: Image Restoration Using Swin Transformer" (PDF). Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops. International Conference on Computer Vision, 2021. pp. 1833–1844. arXiv:2108.10257v1.
36. ^ Nichol, Alex; Dhariwal, Prafulla; Ramesh, Aditya; Shyam, Pranav; Mishkin, Pamela; McGrew, Bob; Sutskever, Ilya; Chen, Mark (2022-03-08). "GLIDE: Towards Photorealistic Image Generation and Editing with Text-Guided Diffusion Models". arXiv:2112.10741 [cs.CV].
37. ^ Ramesh, Aditya; Dhariwal, Prafulla; Nichol, Alex; Chu, Casey; Chen, Mark (2022-04-12). "Hierarchical Text-Conditional Image Generation with CLIP Latents". arXiv:2204.06125 [cs.CV].
38. ^ Alammar, Jay. "The Illustrated Stable Diffusion". jalammar.github.io. Retrieved 2022-10-31.
39. ^ Esser, Patrick; Kulal, Sumith; Blattmann, Andreas; Entezari, Rahim; Müller, Jonas; Saini, Harry; Levi, Yam; Lorenz, Dominik; Sauer, Axel (2024-03-05), Scaling Rectified Flow Transformers for High-Resolution Image Synthesis, arXiv:2403.03206
40. ^ "Imagen: Text-to-Image Diffusion Models". imagen.research.google. Retrieved 2024-04-04.
41. ^ Saharia, Chitwan; Chan, William; Saxena, Saurabh; Li, Lala; Whang, Jay; Denton, Emily L.; Ghasemipour, Kamyar; Gontijo Lopes, Raphael; Karagol Ayan, Burcu; Salimans, Tim; Ho, Jonathan; Fleet, David J.; Norouzi, Mohammad (2022-12-06). "Photorealistic Text-to-Image Diffusion Models with Deep Language Understanding". Advances in Neural Information Processing Systems. 35: 36479–36494. arXiv:2205.11487.
42. ^ "Imagen 2 - our most advanced text-to-image technology". Google DeepMind. Retrieved 2024-04-04.
43. ^ "Veo". Google DeepMind. 2024-05-14. Retrieved 2024-05-17.