
Deep Learning for Visual Computing - Assignment 3

The third assignment encourages you to experiment with deep learning for tasks other than image classification. The task is simple: do something with deep learning that is not image classification and write about your findings. You don't have to train any models or code anything (unless you want to); simply use code from the Internet. You don't have to use PyTorch either (though in that case the code won't run on our servers). This link should get you started, but you can also search for other projects on GitHub or elsewhere. The only requirement is that the code is open source.

Aim for a total effort of around 10 hours, including getting the code to run and writing the report. This should be enough time for quite detailed experiments on a single task (such as comparing Faster R-CNN and YOLO on different images, or testing different approaches to semantic image segmentation). If you want, you can also cover multiple tasks.
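
If you go the PyTorch route, a minimal sketch of such an experiment could look like the following, using a COCO-pretrained Faster R-CNN from torchvision (the image path and score threshold are placeholders chosen for illustration, not part of the assignment):

    import torch
    import torchvision
    from torchvision import transforms
    from PIL import Image

    # Load a Faster R-CNN pretrained on COCO; depending on your torchvision
    # version you may need weights="DEFAULT" instead of pretrained=True.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    model.eval()

    # Hypothetical test image.
    image = Image.open("example.jpg").convert("RGB")
    tensor = transforms.ToTensor()(image)

    with torch.no_grad():
        prediction = model([tensor])[0]

    # Keep detections with a confidence above 0.5 and print them.
    keep = prediction["scores"] > 0.5
    print(prediction["labels"][keep])
    print(prediction["boxes"][keep])

Comparing the detections of such a model with those of a YOLO implementation on the same images would already be a reasonable basis for the report.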

Report

Write a short report that answers the following questions:

  • What did you do?
  • Where did you download the code from (include links)? Did you change anything, and if so, what?
  • Did you have problems running the code? If so, how did you fix them?
  • Which tests did you perform and what are the results? Discuss the results and include images and figures.

Submission

Submit your assignment by 31.01 at 11pm. To do so, create a zip archive containing the report and all the code. More precisely, after extracting the archive I should obtain the following:

group_x/
    report.pdf
    src/
        ...

Then upload that archive to the submission folder (assignment3) on the DLVC server. You will find this folder inside your group folder in your home directory. Make sure you've read the general assignment information here before your final submission.
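
If you want to create the archive programmatically, a small Python sketch (assuming your report and code already sit in a local group_x/ folder laid out as shown above) is:

    import shutil

    # Creates group_x.zip; extracting it yields group_x/ with report.pdf and src/ inside.
    shutil.make_archive("group_x", "zip", root_dir=".", base_dir="group_x")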