**Remark iii)** Text-conditional generative modelling is very challenging in multiple respects:
- one usually observes only one sample $x_i$ per textual description $y_i$, i.e., one has to leverage similarities between text descriptions $y_i$ to learn the conditional distributions $p_{data}(\cdot | y=y_i)$.
- one has to handle *new text descriptions* $y^{new}$ that *were not seen during training*, i.e., the model needs to be able to generalize to new text.
- text descriptions are complex objects that are not easy to handle (discrete objects with variable sequence length). Handling text conditioning requires a lot of engineering (tokenization, embeddings, transformers, etc.) and is beyond the scope of this introductory Lecture; see the sketch after this list.
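
As a rough illustration (not part of the lecture's material), the sketch below shows one simple way to turn variable-length text descriptions $y_i$ into fixed-size conditioning vectors: a toy whitespace tokenizer, a learned embedding table, and mean pooling. `ToyTextEncoder` and the tiny vocabulary are made up for illustration; real systems replace all of this with a subword tokenizer and a pretrained transformer text encoder (e.g., a CLIP-style text tower).

```python
# Minimal sketch: variable-length text -> fixed-size conditioning vector.
import torch
import torch.nn as nn

class ToyTextEncoder(nn.Module):
    def __init__(self, vocab, embed_dim=64):
        super().__init__()
        self.word_to_id = {w: i + 1 for i, w in enumerate(vocab)}  # 0 = pad/unknown
        self.embedding = nn.Embedding(len(vocab) + 1, embed_dim, padding_idx=0)

    def forward(self, texts):
        # Toy tokenization: split on whitespace; unknown words map to index 0.
        ids = [[self.word_to_id.get(w, 0) for w in t.lower().split()] for t in texts]
        max_len = max(len(s) for s in ids)
        # Pad all sequences to the same length so they can be batched.
        batch = torch.tensor([s + [0] * (max_len - len(s)) for s in ids])
        emb = self.embedding(batch)                # (B, L, D)
        mask = (batch != 0).unsqueeze(-1).float()  # ignore padding positions
        # Mean-pool over tokens: one fixed-size vector per description y.
        return (emb * mask).sum(1) / mask.sum(1).clamp(min=1.0)

encoder = ToyTextEncoder(vocab=["a", "photo", "of", "cat", "dog"])
e = encoder(["a photo of a cat", "a dog"])  # different lengths, same output size
print(e.shape)  # torch.Size([2, 64])
```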
**Remark iv)** Even though text-conditional generative modelling is very challenging, conceptually the tools, algorithms, and concepts used for unconditional generative modelling carry over unchanged to the text-conditional setting, as the sketch below illustrates.
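
To make Remark iv) concrete, here is a minimal sketch, assuming a generic denoising-style regression objective purely for illustration (the same pattern applies to other model families): the conditional model is just the unconditional one with the text embedding of $y_i$ concatenated to the input, and the training loss is unchanged. `ConditionalDenoiser` and all dimensions are hypothetical.

```python
# Minimal sketch: conditioning only adds an extra input to the network.
import torch
import torch.nn as nn

class ConditionalDenoiser(nn.Module):
    def __init__(self, x_dim, cond_dim, hidden=128):
        super().__init__()
        # The unconditional version would take x_dim inputs; here we simply
        # concatenate the text embedding e(y) to the (noisy) sample.
        self.net = nn.Sequential(
            nn.Linear(x_dim + cond_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, x_dim),
        )

    def forward(self, x_noisy, text_emb):
        return self.net(torch.cat([x_noisy, text_emb], dim=-1))

x_dim, cond_dim = 32, 64
model = ConditionalDenoiser(x_dim, cond_dim)
x = torch.randn(8, x_dim)        # data samples x_i
e_y = torch.randn(8, cond_dim)   # embeddings of their descriptions y_i
noise = torch.randn_like(x)
# Same regression loss as in the unconditional case; only the inputs differ.
loss = ((model(x + noise, e_y) - noise) ** 2).mean()
loss.backward()
```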