
Deep Unsupervised Learning

This special course is based on the Berkeley CS294-158 Deep Unsupervised Learning course.

Location

Building 321, 1st floor, room 134 ("lunchroom at the far end of the 1st floor"), 10:00-12:00.

Slack

Make sure to get access to the "Deep Unsupervised Learning" channel on the CogSys Slack: cogsys.slack.com, #deep_unsupervised_learning.

Format

Each two-hour session follows this format:

  • We recap the lecture for the week and discuss/clarify as needed.
  • A presenter goes through a paper (presenters are listed under Schedule; tentative papers for each week are listed under Reading).
  • We discuss the current homework.

In week 13, each person gives a 5-minute presentation of the results of their individual project.

Homework

Four homework assignments must be completed. Links to the PDF describing each homework are available in the schedule. The homework descriptions clearly outline a set of deliverables to be handed in on DTU Inside. We will discuss the homework as we go along (see Format). For each homework assignment you need to: complete the homework, fill out the LaTeX template, and upload a PDF to the appropriate assignment.

Final project

For the final part of the course (weeks 12-13) a small project is to be completed.

For the last session (week 13), a short 5-minute presentation on the project is to be given. This presentation and a report make up the deliverables for the last part of the course. The report should be an IEEE-style paper of at most 4 pages (excluding references); a folder with the needed template is available on DTU Inside.

Given the very limited time scope of the project, the expectation is that you investigate some aspect of your own choosing that relates to the homework but was not part of it. Examples: investigating the effect of batch size on PixelCNN performance on the coloured MNIST (homework 1), investigating some of the bonus questions, or investigating the performance under changes to the architecture. The report should discuss the results with a basis in the course theory. You are more than welcome to go beyond the homework and investigate your own data, but it is not a requirement for getting an approved project.
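To make the expected scope concrete, here is a minimal sketch of what a batch-size experiment could look like. It is not course code, and everything in it is an assumption: PyTorch is assumed, `TinyModel` is a hypothetical stand-in without the autoregressive masking of a real PixelCNN, and random tensors stand in for coloured MNIST. For an actual project you would swap in the homework's model and data loader.

```python
# Minimal sketch of a batch-size sweep for a project experiment.
# Assumptions (not from the course materials): PyTorch; TinyModel is a
# hypothetical stand-in with no autoregressive masking, i.e. NOT a real
# PixelCNN; random tensors stand in for coloured MNIST.
import math

import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

torch.manual_seed(0)
LEVELS = 4  # discrete intensity levels per colour channel

# Synthetic stand-in data: 512 "images", 3 channels, 28x28, values in [0, LEVELS).
images = torch.randint(0, LEVELS, (512, 3, 28, 28))
dataset = TensorDataset(images)

class TinyModel(nn.Module):
    """Stand-in model mapping an image to per-pixel, per-channel logits."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 3 * LEVELS, kernel_size=1),
        )

    def forward(self, x):
        b, c, h, w = x.shape
        logits = self.net(x.float() / (LEVELS - 1))
        # Put the class dimension second, as F.cross_entropy expects.
        return logits.view(b, LEVELS, c, h, w)

def train_once(batch_size, epochs=2, lr=1e-3):
    """Train a fresh model with the given batch size; return NLL in bits/dim."""
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    model = TinyModel()
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for (x,) in loader:
            loss = F.cross_entropy(model(x), x)  # mean NLL per dimension, in nats
            opt.zero_grad()
            loss.backward()
            opt.step()
    with torch.no_grad():
        nll_nats = F.cross_entropy(model(images), images).item()
    return nll_nats / math.log(2)  # convert nats to bits per dimension

for bs in [16, 64, 256]:
    print(f"batch size {bs:>3}: {train_once(bs):.3f} bits/dim")
```

Reporting negative log-likelihood in bits per dimension matches the metric typically used for likelihood-based models in the course, which makes results across runs directly comparable in the report.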

Paper presentations

Guidelines for presentation:

  • Read the paper to the best of your ability (you're not expected to understand or be able to explain all the details)
  • Prepare (minimal) slides that:
    • are structured under the same headlines as the paper, and generally make sure to go through:
      • abstract/overview,
      • study background,
      • aim/objective/hypothesis of the paper,
      • methods,
      • results, and
      • discussion/conclusion
    • have bullet points for the content under headlines
    • include the main tables and figures
    • include relevant personal considerations on e.g.:
      • design/methods used,
      • the authors' discussion/interpretation of the results and study drawbacks,
      • significance of the paper

Passing the course

To pass the course, you must have each of the following elements approved:

  • Homework 1: Autoregressive Models
  • Homework 2: Flows
  • Homework 3: Variational Autoencoders
  • Homework 4: Generative Adversarial Networks
  • Your presentation of assigned papers
  • Project presentation and project report

Schedule

| Week | Date | Subject | Presenter | Homework |
|------|------|---------|-----------|----------|
| 1 | Sep 6 | Likelihood-based models I: autoregressive models | Rasmus Høegh | HW1 (template, due: Sep 27) |
| 2 | Sep 13 | Lossless compression and Likelihood-based models II: flow models | Peter Ebert Christensen | HW1 continued |
| 3 | Sep 20 | Latent Variable Models I | Valentin Liévin | HW1 continued |
| 4 | Sep 27 | Latent Variable Models II and Bits-Back Coding | Frederik Boe Hüttel | HW2 (template, due: Oct 25) |
| 5 | Oct 4 | Implicit Models/Generative Adversarial Networks | Didrik Nielsen | HW2 continued |
| 6 | Oct 11 | Non-Generative Representation Learning I | | HW2 continued |
| | Oct 18 | Fall break | | |
| 7 | Oct 25 | Non-Generative Representation Learning II | Nicklas Hansen | HW3 (template, due: Nov 15) |
| 8 | Nov 1 | Semi-Supervised Learning and OpenAI: Reinforcement Learning | Andreas Brink-Kjær | HW3 continued |
| 9 | Nov 8 | Unsupervised Distribution Alignment and BAIR: Self-Supervision | Christoffer Riis | HW3 continued |
| 10 | Nov 15 | OpenAI: Language Models | Alexander Neergaard Olesen | HW4 (template, due: Nov 29) |
| 11 | Nov 22 | Representation Learning in Reinforcement Learning | Jonathan Foldager | HW4 continued |
| 12 | Nov 29 | DeepMind: Latent-Space Generative Models | Dimitris Kalatzis | HW4 continued |
| 13 | Dec 6 | Final project presentations | All | Project (presentation due Dec 6, report due Dec 20) |

Reading

Reading is based on papers central to each talk or homework. Optional readings are highlights beyond that paper from the various articles suggested here. Suggestions for important highlights are welcome. You are free to swap presentation dates (and thereby papers); coordinate between yourselves and notify Rasmus (rmth@dtu.dk).
