Commit 1b81755

uncomment

1 parent b06a6fc


src/content/lessons/diffusion.mdx

Lines changed: 0 additions & 7 deletions
@@ -51,8 +51,6 @@ To achieve this, diffusion models imagine a forward noising process, which progr
 The generative process is then trained to reverse this noising process, and denoise the data step by step.
 
 
-{/*
-
 
 
 
@@ -288,8 +286,6 @@ More precisely, we want to minimize the KL divergence between the distribution i
 </figure>
 
 
-*/}
-
 ## From global to local KL
 
 This part details the key insight of how to transform a global KL loss into a sum of local ones, eventually expressed as square errors.
@@ -375,8 +371,6 @@ cal(L) & = C + EE_(x_0 tilde.op q(x_0)) sum_(t=1)^T lambda_t ||tilde(mu)(x_0, t
 with ~lambda_t = 1/(2 sigma_t^2)~.
 
 
-{/*
-
 <figure>
 <InlineSvg asset="diffusion" hide='#forward, #backward, #more, #bigkl, #qt, #qtt' />
 <figcaption>[<Counter label="fig:temporal-locality"/>] Temporal locality of the learning process.</figcaption>
@@ -464,7 +458,6 @@ It corresponds to a different decomposition of the joint distribution of the Mar
 && = q(x_T) times product_(t=1)^T q(x_(t-1) | x_t) \
 '/>
 
-*/}
 
 <T defines='
 #let x0 = $add(x_0)$;
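
For context on the third hunk: the lesson's loss weights a squared error between means by ~lambda_t = 1/(2 sigma_t^2)~, which is how the sum of local KL terms becomes weighted square errors. A minimal PyTorch sketch of that weighted objective, assuming hypothetical tensors `mu_tilde` (posterior means), `mu_pred` (model predictions), and `sigma` (per-step noise scales) — these names are illustrative and do not come from the lesson file:

```python
import torch

def weighted_denoising_loss(mu_tilde: torch.Tensor,
                            mu_pred: torch.Tensor,
                            sigma: torch.Tensor) -> torch.Tensor:
    """Illustrative lambda_t-weighted squared error, lambda_t = 1 / (2 * sigma_t**2).

    mu_tilde: posterior means tilde(mu)(x_0, t) at the sampled steps, shape (B, ...)
    mu_pred:  the model's predicted means at the same steps, shape (B, ...)
    sigma:    per-step noise scales sigma_t, shape (B,)
    """
    lambda_t = 1.0 / (2.0 * sigma ** 2)
    # Squared error summed over feature dimensions, one scalar per batch element.
    sq_err = ((mu_tilde - mu_pred) ** 2).flatten(1).sum(dim=1)
    # Weight each step's error by lambda_t, then average over the batch.
    return (lambda_t * sq_err).mean()
```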
