
iart AI session and H3K ML workshop


Main repository for the internal AI session @iart and the public H3K ML workshop.

All the info regarding the workshop, as well as direct links to the learning materials (slides, notebooks, examples, etc.), is accessible via the GitHub Pages site for this repository:

https://iartag.github.io/hek-ml-workshop/

Schedule

  • 11am - Start 😺
  • 11am - Introduction
  • 12pm - Lunch
  • 12.45pm - Software setup
  • 1.15pm - Experiments
  • 3.15pm - Presentation
  • 4pm - End 😿

Slides

  1. Slides for the ML workshop
  2. Slides for the internal presentation at iart

Samples

The samples folder contains different examples:

  • 00_styletransfer: simple style transfer example with live webcam feed
  • 01_styletransfer: style transfer with gui + realtime filter
  • 02_styletransfer: style transfer drawing
  • 03_styletransfer: style transfer feedback loop
  • 04_mobilenet: simple mobilenet example
  • 05_cocossd: cocossd example (box + label drawing)
  • 06_maskrcnn: simple maskrcnn example
  • 07_posenet_im2txt: The text from im2txt "follows" one body part
  • 08_posenet_im2txt: The text from im2txt is scaled / rotated according to the user's hands
  • 09_posenet_im2txt: The text from im2txt is turned into particles for interaction (WIP)
  • 10_im2txt_attngan: im2txt describes the input image and attngan generates a new image from the description
  • 11_pix2pix: pix2pix drawing
  • 12_pix2pix_facelandmarks: pix2pix face to facade (WIP)
  • ~~13_cocossd_facerecognition: (WIP)~~

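To give a taste of the glue code in the posenet samples, here is a minimal sketch in the spirit of 08_posenet_im2txt (the helper names are hypothetical; the actual sample code differs): it scales a caption according to the distance between the two wrist keypoints reported by PoseNet.

```javascript
// Hypothetical helper in the spirit of sample 08: scale the im2txt caption
// according to the distance between the user's wrists, as reported by PoseNet.
// PoseNet keypoints are { part, position: { x, y }, score } objects.

// Euclidean distance between two keypoint positions.
function keypointDistance(a, b) {
  const dx = a.position.x - b.position.x;
  const dy = a.position.y - b.position.y;
  return Math.sqrt(dx * dx + dy * dy);
}

// Map the wrist distance to a font size, clamped to a sensible range.
function captionSize(pose, minSize = 12, maxSize = 96, maxDist = 400) {
  const left = pose.keypoints.find((k) => k.part === "leftWrist");
  const right = pose.keypoints.find((k) => k.part === "rightWrist");
  if (!left || !right) return minSize; // wrists not detected
  const d = Math.min(keypointDistance(left, right), maxDist);
  return minSize + (d / maxDist) * (maxSize - minSize);
}

// Tiny demo with a fake pose: wrists 300px apart.
const pose = {
  keypoints: [
    { part: "leftWrist", position: { x: 100, y: 200 }, score: 0.9 },
    { part: "rightWrist", position: { x: 400, y: 200 }, score: 0.9 },
  ],
};
console.log(captionSize(pose)); // 12 + (300 / 400) * 84 = 75
```

In a p5.js sketch, the returned value would be passed to `textSize()` before drawing the caption each frame.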
Tools

System requirements

A modern machine with decent hardware and at least 20 GB of free space on the hard drive.

Runway

We are using Runway, a tool that makes deploying ML models easy, as middleware to build the interactive experiments. All workshop participants should have received an invitation with some GPU credits 🎉. For those who have not installed it prior to the workshop, we will go through the installation process together.

Docker

Docker is needed in order to deploy some of the models locally. This gives us some flexibility when running experiments locally, and also allows us to chain models (at the moment a user can only run one model instance at a time on the cloud GPU provided by Runway). A guide to getting started is available. For Linux users, these post-install steps could be useful as well.

Docker for Windows requires Microsoft Hyper-V, which is supported only in the Pro, Enterprise, and Education editions of Windows. If you don't have one of those editions, you will not be able to install Docker, and you will only be able to run some models using the cloud GPU.

P5.js

We will use p5.js for the front end. It's a high-level creative-coding framework with an intuitive API. If you have used Processing before, you should feel comfortable using p5.js. To get familiar with p5, you can go through this list of tutorials / guides:
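For those who haven't seen p5.js before, here is a minimal sketch showing the shape of the API (this is not one of the workshop samples): p5 calls `setup()` once at startup and `draw()` once per frame.

```javascript
// Minimal p5.js sketch. Run in a browser with p5.js loaded via a <script> tag;
// the p5 runtime calls setup() once, then draw() on every frame.

function setup() {
  createCanvas(640, 480); // p5 global: creates the drawing surface
}

function draw() {
  background(220);                 // clear the frame with a light grey
  ellipse(mouseX, mouseY, 50, 50); // a circle that follows the mouse
}
```

The samples in this repository follow the same structure, with the ML model queries and drawing code living inside `draw()` or in asynchronous callbacks.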

Code editor

If you don’t have a code editor, please install one. Some suggestions (in no particular order):

Web server

We need a simple web server to run the experiments locally. Some suggestions:

References / Reading list

Repository structure

├── docs
│   ├── _layouts
│   ├── assets            (img, etc.. for content)
│   │   ├── css
│   │   └── images
│   └── slides            (slides of the presentations)
│       ├── demos
│       └── static        (img, etc.. for slides)
├── samples               (code samples)
└── utilities             (scripts and notes)
