import T from '../../components/TypstMath.astro'
This is an introduction to the main settings encountered in generative modelling. The first Lectures will introduce the main algorithms and concepts for the vanilla unconditional generative modelling task.
At the end of the course, we will make excursions to class-conditional and text-conditional generative modelling.
The goal of this Lecture is to understand the relationship between vanilla unconditional generative modelling and industrial generative models such as DALL-E, Stable Diffusion, GPT, etc.
#### Unconditional Generative Modelling
<figure>
  <figcaption>
    A gallery of cat photos.
  </figcaption>
</figure>
**Assumption** The core underlying assumption of generative modelling is that the data $x_1, \dots, x_n$ is drawn from some *unknown* underlying distribution $p_{data}$: for all $i \in \{1, \dots, n\}$,

<T block v='x_i ~ underbrace(p_"data", "unknown").' />
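To make this setting concrete, here is a minimal toy sketch (not part of the course material) in which a two-dimensional Gaussian mixture plays the role of the unknown $p_{data}$: we only ever observe a finite dataset of samples, and a generative model must provide a sampler whose outputs look like further draws from the same distribution.

```python
# Toy sketch of the unconditional setting. Assumption: a 2D Gaussian mixture
# stands in for the unknown image distribution p_data.
import numpy as np

rng = np.random.default_rng(0)

def sample_p_data(n):
    """The 'unknown' data distribution p_data. In practice we never have
    access to this function, only to the finite dataset below."""
    means = np.array([[-2.0, 0.0], [2.0, 0.0]])
    components = rng.integers(0, 2, size=n)        # pick a mixture component
    return means[components] + 0.5 * rng.standard_normal((n, 2))

# What we actually observe: x_1, ..., x_n drawn from p_data.
x_train = sample_p_data(1000)

# What a generative model must provide: a sampler for new points x^new.
# Here, a deliberately crude baseline that fits a single Gaussian to the data.
mu, cov = x_train.mean(axis=0), np.cov(x_train, rowvar=False)
x_new = rng.multivariate_normal(mu, cov, size=5)
print(x_new.shape)                                  # (5, 2)
```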
#### Class-Conditional Generative Modelling
**Assumption** For *class-conditional* generative models, the assumption is that the data $x_1, \dots, x_n$ is drawn from some *unknown* underlying conditional probability distribution**s** $p_{data}(\cdot | y = y_i)$: for all $i \in \{1, \dots, n\}$,
<T block v='x_i ~ underbrace(p_"data" (dot | y = y_i), "unknown"), y_i in {"cat", "dog"}.' />
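Concretely, the labelled dataset can be pictured as a list of (image, class) pairs. A rough sketch, with random arrays standing in for the photos (shapes and names are purely illustrative):

```python
# Toy picture of the class-conditional dataset (x_i, y_i), y_i in {"cat", "dog"}.
import numpy as np

rng = np.random.default_rng(0)
classes = ["cat", "dog"]

# Random arrays stand in for the actual photos x_i (3 channels, 64x64 pixels).
dataset = [
    (rng.random((3, 64, 64)), classes[rng.integers(0, 2)])   # (x_i, y_i)
    for _ in range(8)
]

x_0, y_0 = dataset[0]
print(x_0.shape, y_0)   # (3, 64, 64) and "cat" or "dog"
```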
**Goal** Using the labelled data $(x_1, y_1), \dots, (x_n, y_n)$, the goal is to *generate* new samples $x^{\text{new}}$ that look like they were drawn from the same *unknown* distributions $p_{data}(\cdot | y)$. More precisely, we want to be able to generate new images of cats $x^{\text{new cat}}$ and dogs $x^{\text{new dog}}$ that follow the conditional probability distributions

<T block v='x^"new cat" ~ p_"data" (dot | y = "cat"), quad x^"new dog" ~ p_"data" (dot | y = "dog").' />
**Remark i)** To train class-conditional generative models, we could split the dataset into two parts, one with all the cat images and one with all the dog images, and train two separate unconditional generative models. However, this would not leverage similarities between the two classes: both cats and dogs have four legs, a tail, fur, etc. A class-conditional generative model can instead share information across classes.
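As an illustration of this sharing, a class-conditional generator can use a single network for both classes and inject the class through an embedding. The sketch below (a hypothetical architecture with made-up sizes, assuming PyTorch) is only meant to show the interface, not a model from the course.

```python
# Minimal sketch: one generator shared across classes via a class embedding.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, noise_dim=64, n_classes=2, out_dim=3 * 32 * 32):
        super().__init__()
        self.class_emb = nn.Embedding(n_classes, 16)   # "cat" -> 0, "dog" -> 1
        self.net = nn.Sequential(                      # weights shared across classes
            nn.Linear(noise_dim + 16, 256), nn.ReLU(),
            nn.Linear(256, out_dim),
        )

    def forward(self, z, y):
        # Concatenate noise with the class embedding: the same parameters are
        # used for both classes, only the conditioning vector changes.
        return self.net(torch.cat([z, self.class_emb(y)], dim=-1))

g = ConditionalGenerator()
z = torch.randn(4, 64)
y = torch.tensor([0, 0, 1, 1])      # two cats, two dogs
fake_images = g(z, y)               # shape (4, 3*32*32)
```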
**Remark ii)**
*Generative modelling is a very different task from standard supervised learning*. The usual classification task is the following: given labelled data $(x_1, y_1), \dots, (x_n, y_n)$, the goal is to estimate the probability that a given new image $x$ is a cat or a dog, i.e. we want to estimate $p_{data}(y = \text{cat} | x)$.
Conversely, in class-conditional generative modelling, we are given a class (e.g. cat), and we want to estimate the probability distribution of images of cats $p_{data}(x | y = \text{cat})$ and sample new images from this distribution.
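The difference can be summarised by the two interfaces below (hypothetical function names with dummy bodies, just to contrast the two tasks): a classifier maps an image to class probabilities, while a class-conditional generative model maps a class to new images.

```python
# Schematic contrast between the two tasks (illustrative, not a real library):
# a classifier estimates p_data(y | x), a conditional generator samples p_data(x | y).
import numpy as np

def classify(x: np.ndarray) -> dict[str, float]:
    """Supervised learning: image -> probabilities over the classes."""
    # ... a trained classifier would go here; dummy output for illustration
    return {"cat": 0.9, "dog": 0.1}

def generate(y: str, n: int = 1) -> np.ndarray:
    """Class-conditional generation: class -> new images from p_data(. | y)."""
    # ... a trained conditional generator would go here; dummy output for illustration
    return np.zeros((n, 3, 32, 32))

probs = classify(np.zeros((3, 32, 32)))   # "is this image a cat or a dog?"
cats = generate("cat", n=4)               # "draw me four new cat images"
```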
#### Text-Conditional Generative Modelling
**What** In *text-conditional* generative modelling, we are given a set of data (e.g. images) and their text descriptions:
<T block v='text("Data: ") underbrace({(x_1, y_1), dots, (x_n, y_n)}, n "images " x_i " and their text descriptions " y_i).' />
<figure>
  <figcaption>
    A dataset of cat and dog photos and their text descriptions (source: Pexels.com).
  </figcaption>
</figure>
For instance, [Stable Diffusion](https://stabledifffusion.com/) was trained on the [LAION-5B dataset](https://laion.ai/blog/laion-5b/), a dataset of 5 billion images and their textual descriptions.
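As a rough illustration of how such a trained text-conditional model is used in practice, here is a sketch based on the Hugging Face `diffusers` library; the checkpoint name, the use of a GPU, and the generation options are assumptions and may not match the exact Stable Diffusion release discussed in this Lecture.

```python
# Minimal sketch of sampling from a trained text-conditional model with diffusers.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed checkpoint name
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                  # assumes a CUDA GPU is available

# One text description y^new in, one new image x^new out.
image = pipe("a photo of a ginger cat sleeping on a sofa").images[0]
image.save("new_cat.png")
```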
**Assumption** For *text-conditional* generative models, the assumption is that the data $x_1, \dots, x_n$ is drawn from some *unknown* underlying conditional probability distribution**s** $p_{data}(\cdot | y = y_i)$: for all $i \in \{1, \dots, n\}$,
<T block v='x_i ~ underbrace(p_"data" (dot | y = y_i), "unknown"), y_i "is a text description".' />
The main difference with the class-conditional setting is that the conditioning variable $y_i$ is now a free-form text description rather than one of a fixed number of classes.
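The toy sketch below (assuming PyTorch; the "text encoder" is a deliberately crude bag-of-words stand-in, not a real one) illustrates this difference: a class label simply indexes a small embedding table, whereas a text description must first be encoded into a vector before it can condition the generator.

```python
# Class label vs. text description as conditioning information.
import torch
import torch.nn as nn

# Class-conditional: the conditioning space is finite, one embedding row per class.
class_emb = nn.Embedding(num_embeddings=2, embedding_dim=16)   # {cat, dog}
cond_cat = class_emb(torch.tensor([0]))                        # shape (1, 16)

# Text-conditional: the conditioning space is unbounded; any new description
# must be mapped into the same vector space by a text encoder.
vocab = {"a": 0, "photo": 1, "of": 2, "cat": 3, "dog": 4, "sleeping": 5}
word_emb = nn.Embedding(num_embeddings=len(vocab), embedding_dim=16)

def encode_text(description: str) -> torch.Tensor:
    ids = torch.tensor([vocab[w] for w in description.split() if w in vocab])
    return word_emb(ids).mean(dim=0, keepdim=True)             # shape (1, 16)

cond_text = encode_text("a photo of a cat sleeping")
# Both conditioning vectors can then be fed to the same kind of generator.
```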
**Goal** Using the data and their text descriptions $(x_1, y_1), \dots, (x_n, y_n)$, the goal is to *generate* new samples $x^{\text{new}}$ given a text description.
More precisely, given a text description $y^{\text{new}}$, we want to be able to generate new images $x^{\text{new}}$ that follow the conditional probability distribution

<T block v='x^"new" ~ p_"data" (dot | y = y^"new").' />
**Remark** Text-conditional generative modelling is very challenging in several respects:
- one usually observes only one sample $x_i$ per text description $y_i$, so one has to leverage similarities between text descriptions to learn the conditional distributions $p_{data}(\cdot | y)$.
- one has to handle *new text descriptions* $y^{\text{new}}$ that were not seen during training.
- text descriptions are complex objects that are not easy to handle (variable sequence lengths). Handling text requires a lot of engineering (tokenization, embeddings, transformers, etc.) and is out of the scope of this Lecture.