Diffusion-III DDIM
We go over DDIM (Denoising Diffusion Implicit Models) by J. Song et al., 2020.
Non-Markovian
Recall that DDPM assumes the forward and backward processes are Markov: it prescribes $p(x_t\mid x_{t-1})$ and then derives $p(x_t\mid x_0)$ and $p(x_{t-1}\mid x_{t},x_0)$.
DDIM instead assumes that the forward process is Markov only when conditioned on $X_0$ (rather than fully Markov), and posits
\[X_t\mid X_0 \sim \mathcal{N}(\alpha_t X_0, \beta_t^2 I)\]where $\alpha_t^2+\beta_t^2=1.$ Why this parameterization? It reproduces the DDPM marginals: $\alpha_t=\bar\alpha^{\rm DDPM}_t$ and $\beta_t=\bar\beta_t^{\rm DDPM}$, using notations from DDPM.
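As a concrete numerical sketch (the linear schedule and all names below are our own illustrative choices, not from the paper; arrays are 0-indexed, so entry $t-1$ corresponds to step $t$):

```python
import numpy as np

# Hypothetical linear DDPM-style schedule, for illustration only.
T = 1000
betas_ddpm = np.linspace(1e-4, 0.02, T)      # per-step DDPM variances
alpha_bar = np.cumprod(1.0 - betas_ddpm)     # \bar\alpha_t in Ho et al.'s convention

alpha = np.sqrt(alpha_bar)                   # alpha_t: coefficient of X_0
beta = np.sqrt(1.0 - alpha_bar)              # beta_t: noise scale
assert np.allclose(alpha**2 + beta**2, 1.0)  # alpha_t^2 + beta_t^2 = 1

def sample_xt(x0, t, rng):
    """Draw X_t | X_0 ~ N(alpha_t X_0, beta_t^2 I)."""
    eps = rng.standard_normal(np.shape(x0))
    return alpha[t] * x0 + beta[t] * eps, eps
```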
As in DDPM, we still assume
\[X_{t-1}\mid X_t,X_0 \sim\mathcal{N}(\tilde\mu_t(X_t,X_0),\sigma_t^2I)\]but we are free to choose $\sigma_t$.
In general, for $1 \le s < t$,
\[X_{s}\mid X_{t},X_0 \sim \mathcal{N}(\tilde\mu_{s,t}(X_t,X_0), \sigma_{s,t}^2I)\]where $\tilde\mu_{s,t}(X_t,X_0)=\kappa_{s,t}X_t+ \lambda_{s,t}X_0$ with coefficients to be determined, and we set
\[\sigma_{s,t}^2= \sigma_{s+1}^2+\cdots+\sigma_t^2.\]Once the $\sigma_t$ are fixed, the coefficients are pinned down by the marginal consistency
\[p(x_{s}\mid x_0) = \int p(x_{s}\mid x_t,x_0)\,p(x_t\mid x_0)\,dx_t.\]We omit the subscripts when there is no ambiguity. Since
\[X_t=\alpha_t X_0 + \beta_t\varepsilon_t,\quad X_s=\kappa X_t+\lambda X_0 + \sigma \bar\varepsilon,\]where $\varepsilon_t,\bar\varepsilon=\bar\varepsilon_{s,t}$ are independent unit Gaussians,
\[\alpha_{s}X_0+\beta_{s}\varepsilon_{s} =X_{s} = \kappa(\alpha_tX_0+\beta_t\varepsilon_t)+\lambda X_0 + \sigma \bar\varepsilon.\]Then
\[\alpha_s=\alpha_t\kappa+\lambda,\quad \beta_{s}^2=\kappa^2\beta_t^2+\sigma^2,\]and thus
\[\boxed{\kappa_{s,t} = \beta_t^{-1}\sqrt{\beta_s^2-\sigma_{s,t}^2},\quad \lambda_{s,t} = \alpha_{s}-\kappa_{s,t}\alpha_t.}\]So, for $1\le s<t$,
\[\boxed{X_{s}\mid X_t,X_0\sim \mathcal{N}\left(\alpha_{s}X_0 + \kappa_{s,t}(X_t-\alpha_tX_0),\sigma_{s,t}^2I \right).}\]If we choose $\sigma_t^2=\frac{\beta_{t-1}^2}{\beta_t^2}\left(1-\frac{\alpha_t^2}{\alpha_{t-1}^2}\right)$, i.e. $\tilde\beta_t^2$ from DDPM, then for $s=t-1$ the mean $\tilde\mu_t$ coincides with that of DDPM.
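As a sanity check, the boxed coefficients can be verified numerically; the sketch below reuses `alpha`, `beta` from above, and the specific `s`, `t`, `sigma_st` are arbitrary illustrative values.

```python
import numpy as np

def kappa_lambda(s, t, sigma_st):
    """Coefficients of X_s | X_t, X_0 for a jump t -> s."""
    kappa = np.sqrt(beta[s]**2 - sigma_st**2) / beta[t]
    lam = alpha[s] - kappa * alpha[t]
    return kappa, lam

s, t = 400, 800
sigma_st = 0.5 * beta[s]                # any sigma_{s,t} <= beta_s is allowed
kappa, lam = kappa_lambda(s, t, sigma_st)

# The two constraints: alpha_s = kappa alpha_t + lambda,
#                      beta_s^2 = kappa^2 beta_t^2 + sigma^2.
assert np.isclose(alpha[s], kappa * alpha[t] + lam)
assert np.isclose(beta[s]**2, kappa**2 * beta[t]**2 + sigma_st**2)

# Monte-Carlo check of marginal consistency: composing X_t | X_0 with
# X_s | X_t, X_0 should recover X_s | X_0 ~ N(alpha_s x0, beta_s^2).
rng = np.random.default_rng(0)
x0 = 1.7                                # a scalar "image" for illustration
xt = alpha[t] * x0 + beta[t] * rng.standard_normal(100_000)
xs = alpha[s] * x0 + kappa * (xt - alpha[t] * x0) \
     + sigma_st * rng.standard_normal(100_000)
print(xs.mean(), alpha[s] * x0)         # means agree
print(xs.std(), beta[s])                # stds agree
```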
Guess & Improve
We model $q_\theta(x_s\mid x_t)$ by $p(x_{s}\mid x_t,x_0)$: first make a guess $f_t^\theta(x_t)$ of $x_0$ based on $x_t$, then plug this guess into $p(x_{s}\mid x_t,x_0)$ to refine it. Define, for $s<t$,
\[\boxed{ q_\theta(x_{s}\mid x_t) := p(x_{s}\mid x_t,x_0=f_t^\theta(x_t)),\quad q_\theta(x_0\mid x_1) := \mathcal{N}(x_0\mid f_1^\theta(x_1),\sigma_1^2I).}\]Then, just as in DDPM,
\[\begin{align*} &D_{\rm KL}(p(x_{0:T})\,\|\, q(x_{0:T})) = \int p_{\rm data}(x_0)\,dx_0 \int p(x_{1:T}\mid x_0)\log\frac{p(x_{1:T}\mid x_0)p_{\rm data}(x_0)}{q(x_{0:T})}\, dx_{1:T}\\ &= C - \sum_{t=1}^T\mathbf{E}_{p_{\rm data}} \int p(x_{t}\mid x_{t-1},x_0)p(x_{t-1}\mid x_0) \log {q(x_{t-1}\mid x_t)} \,d x_{t-1}dx_t \\ &= C' -\sum_{t=1}^T\mathbf{E}_{p_{\rm data}} D_{\rm KL}(p(x_{t-1}\mid x_t,x_0)\,\|\, q(x_{t-1}\mid x_t) )\\ &=: C'+\sum_{t=1}^T L_t. \end{align*}\]For $t>1$, (see details in DDPM)
\[\begin{align*} L_t&= C + \frac{1}{2\sigma_t^2} \mathbf{E}\left|\tilde\mu_t(X_t,X_0)-\tilde\mu_t(X_t,f^\theta_t(X_t))\right|^2 \\ &=C + \frac{b_t^2}{2\sigma_t^2} \mathbf{E}\left|X_0-f^\theta_t(\alpha_tX_0+\beta_t\varepsilon_t)\right|^2 \end{align*}\]where we used $D_{\rm KL}(\mathcal{N}(\mu_1,\sigma^2I)\,\|\,\mathcal{N}(\mu_2,\sigma^2I))=\frac{1}{2\sigma^2}|\mu_1-\mu_2|^2$, $b_t=\lambda_{t-1,t}$ is the coefficient of $X_0$ in $\tilde\mu_t$, $X_0\sim p_{\rm data}$, $X_t=\alpha_tX_0+\beta_t\varepsilon_t$, and $\varepsilon_t\sim \mathcal{N}.$
Similar to DDPM, inspired by $X_t=\alpha_tX_0+\beta_t\varepsilon_t$, define
\[\boxed{ X_t = \alpha_t f_t^\theta(X_t) + \beta_t\varepsilon_t^\theta(X_t).}\]Then
\[\begin{align*} L_t &= C + \frac{b_t^2}{2\sigma_t^2} \mathbf{E}\left|\alpha_t^{-1}(X_t-\beta_t\varepsilon_t)-f^\theta_t(X_t)\right|^2 \\ &= C + \frac{b_t^2\beta_t^2}{2\sigma_t^2\alpha_t^2} \mathbf{E}\left|\varepsilon_t - \varepsilon_t^\theta(\alpha_tX_0 + \beta_t\varepsilon_t)\right|^2, \end{align*}\]almost the same as DDPM.
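In code, the $x_0$-guess and the noise guess are two views of one network output; a one-line sketch, with `eps_model` a stand-in for any trained $\varepsilon_t^\theta$ (reusing `alpha`, `beta` from above):

```python
def f_from_eps(xt, t, eps_model):
    """x0-guess f_t^theta implied by a noise-prediction network:
    X_t = alpha_t f + beta_t eps  =>  f = (X_t - beta_t eps) / alpha_t."""
    return (xt - beta[t] * eps_model(xt, t)) / alpha[t]
```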
In practice, the DDIM objective is
\[\mathbf{E}_{t\sim U, \varepsilon\sim\mathcal{N},X_0\sim p_{\rm data}} \gamma_t\left|\varepsilon - \varepsilon_t^\theta(\alpha_tX_0+\beta_t\varepsilon)\right|^2,\]with weights $\gamma_t$; in DDPM, $\gamma_t\equiv 1$.
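A training step for this objective might look as follows; a minimal PyTorch sketch, assuming a noise-prediction network `eps_model(x, t)` (our stand-in name) and $\gamma_t\equiv 1$:

```python
import torch

def loss_step(eps_model, x0, alpha_t, beta_t):
    """One draw of the objective with gamma_t = 1; alpha_t, beta_t are
    1-D tensors holding the schedule (entry k is step k + 1)."""
    t = torch.randint(0, len(alpha_t), (x0.shape[0],))  # t ~ Uniform
    eps = torch.randn_like(x0)                          # eps ~ N(0, I)
    shape = (-1,) + (1,) * (x0.dim() - 1)               # broadcast over batch
    xt = alpha_t[t].view(shape) * x0 + beta_t[t].view(shape) * eps
    return ((eps - eps_model(xt, t)) ** 2).mean()
```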
Sampling
After training $\varepsilon_t^\theta$ and thus $f_t^\theta$, we use $q_\theta(x_s\mid x_t)$ to sample, where $s<t$.
The default choice is $s=t-1$ but we keep the general form for the accelerated sampling below.
Sample $X_T\sim\mathcal{N}$. For $t>1$,
\[\begin{align*} X_s &= \kappa_{s,t} X_t + (\alpha_{s}-\kappa_{s,t}\alpha_t)f_t^\theta(X_t) + \sigma_{s,t} \bar\varepsilon_{s,t}\\ &= \boxed{\alpha_{s}f_t^\theta(X_t) + \kappa_{s,t}\beta_t \varepsilon_t^\theta(X_t) + \sigma_{s,t}\bar\varepsilon_{s,t} } \end{align*}\]where $\bar\varepsilon_{s,t}\sim\mathcal{N}(0,I)$. Finally,
\[\boxed{X_0 = f_1^\theta(X_1) + \sigma_1 \bar\varepsilon_1.}\]
Choices of variance
Recall that in DDPM (second view), $\sigma_t=\tilde\beta_t^{\rm DDPM} = \frac{\beta_{t-1}}{\beta_{t}}\sqrt{1-\frac{\alpha_{t}^2}{\alpha_{t-1}^2}}$. DDIM chooses $\sigma_t=\eta\tilde\beta_t^{\rm DDPM}$ for $\eta\in[0,1]$.
A simple choice is $\sigma_t=\beta_t$.
An extreme choice is $\sigma\equiv 0$, so that $X_0$ is a deterministic function of $X_T$; this deterministic sampler is the implicit model that gives DDIM its name.
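These choices sit on one axis via $\eta$. Below is a sketch of a single reverse step $X_t\to X_s$, reusing `alpha`, `beta` from above, with `eps_model` again a stand-in name; $\eta=0$ gives the deterministic DDIM sampler and $\eta=1$ matches the DDPM variance.

```python
import numpy as np

def sigma_eta(s, t, eta):
    """eta * tilde_beta, generalized to a jump t -> s (eta in [0, 1])."""
    return eta * (beta[s] / beta[t]) * np.sqrt(1.0 - alpha[t]**2 / alpha[s]**2)

def reverse_step(xt, s, t, eta, eps_model, rng):
    """One step of q_theta(x_s | x_t): guess x0, then improve."""
    eps_hat = eps_model(xt, t)
    f = (xt - beta[t] * eps_hat) / alpha[t]          # guess of x0
    sig = sigma_eta(s, t, eta)
    kappa = np.sqrt(beta[s]**2 - sig**2) / beta[t]   # kappa_{s,t}
    noise = rng.standard_normal(np.shape(xt)) if eta > 0 else 0.0
    return alpha[s] * f + kappa * beta[t] * eps_hat + sig * noise
```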
Accelerated sampling
A key observation in DDIM is that we may "consider forward processes with lengths smaller than $T$, which accelerates the corresponding generative processes without having to train a different model."
That is, a model trained on the full process already fits any subsequence $1\le \tau_1<\cdots <\tau_d\le T$.
We train as usual, but when sampling we use the formula above with $s=\tau_{i-1},t=\tau_i$:
\[\boxed{X_s=\alpha_{s}f_t^\theta(X_t) + \kappa_{s,t}\beta_t \varepsilon_t^\theta(X_t) + \sigma_{s,t}\bar\varepsilon_{s,t}. }\]
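Putting everything together, accelerated sampling strides through the subsequence with the same update; a sketch reusing `reverse_step` from above, where $d=50$ and the evenly spaced $\tau_i$ are illustrative choices.

```python
import numpy as np

def ddim_sample(shape, eps_model, d=50, eta=0.0, seed=0):
    """Generate a sample with d << T reverse steps along tau_1 < ... < tau_d."""
    rng = np.random.default_rng(seed)
    taus = np.linspace(0, T - 1, d, dtype=int)   # 0-indexed: entry k is step k+1
    x = rng.standard_normal(shape)               # X_T ~ N(0, I)
    for i in range(d - 1, 0, -1):                # steps tau_d -> ... -> tau_1
        x = reverse_step(x, taus[i - 1], taus[i], eta, eps_model, rng)
    # Final step: X_0 = f_1(X_1) + sigma_1 * noise (deterministic when eta = 0).
    eps_hat = eps_model(x, taus[0])
    return (x - beta[taus[0]] * eps_hat) / alpha[taus[0]]
```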