Scitator/workflow
ML project workflow

Project

  1. bin - bash files for running pipelines; place all .sh files here
  2. common - data preprocessing scripts, utils, everything like
    python common/scripts/{some-script}.py
    # or
    from common import utils
    
    typically, it is our project/library core
  3. docker - project Docker files for full reproducibility
  4. presets - datasets, notebooks, etc - everything you don't need to push to git; use
    • presets/data for datasets
    • presets/notebooks for notebooks
    • presets/serving for serving artefacts
  5. requirements - different project Python requirements for Docker, tests, CI, etc
  6. serving - microservices, etc - production with Reaction
  7. training - model, experiment, etc - research with Alchemy & Catalyst; use
    • training/configs - for all configs, just all .yml files
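The layout above can be bootstrapped with a few `mkdir` calls (a minimal sketch; the directory names come from the list, the `.gitkeep` convention is an assumption for keeping empty preset folders in git):

```shell
# Create the project skeleton described above.
mkdir -p bin docker requirements serving
mkdir -p common/scripts
mkdir -p presets/data presets/notebooks presets/serving
mkdir -p training/configs
# Placeholder files so the otherwise-ignored preset folders stay in git.
touch presets/data/.gitkeep presets/notebooks/.gitkeep presets/serving/.gitkeep
```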

Workflow

tip: you can save all answers to presets/_faq.md.

Before ML (miniFAQ)

  1. What problem are you trying to solve?
  2. How do you think it can be solved? What is your hypothesis?
  3. What is the value of the hypothesis you are testing?
  4. What are the main metrics? How to measure them?
  5. How can metrics prove that hypothesis works?
  6. Is it possible to check it without ML? How?
  7. How will your solution be integrated into the current system?
  8. What can go wrong? What kind of corner cases can occur?

ML (plan)

  1. Perform exploratory data analysis; check that the data and labeling are correct.
  2. Plot the main statistics, find outliers, and recheck them.
  3. Do data preprocessing to get clean data from the raw data.
  4. Split the data into train/valid/test parts and fix this split for all future experiments.
  5. Run an adversarial split check on your train/valid/test parts to verify split correctness.
  6. Overfit your model on one batch from the train part to ensure that the pipeline works correctly.
  7. Use the train/valid parts for model training (log all your experiments) and the valid part for final model postprocessing.
  8. Track metrics for all experiments (use tables for this). Do not forget to write tests for the metrics you use.
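For step 4, one way to fix the split for future experiments is to make it deterministic, e.g. by hashing a stable sample id instead of shuffling on every run (a minimal sketch; the `sample_id` naming and the 80/10/10 ratios are assumptions, not part of the workflow above):

```python
import hashlib

def assign_split(sample_id: str, valid_frac: float = 0.1, test_frac: float = 0.1) -> str:
    """Deterministically map a sample id to train/valid/test.

    The same id always lands in the same part, so the split survives
    re-runs, new machines, and appended data.
    """
    digest = hashlib.md5(sample_id.encode("utf-8")).hexdigest()
    bucket = int(digest, 16) % 10_000 / 10_000  # pseudo-uniform float in [0, 1)
    if bucket < test_frac:
        return "test"
    if bucket < test_frac + valid_frac:
        return "valid"
    return "train"

splits = {sid: assign_split(sid) for sid in (f"sample_{i}" for i in range(1000))}
```

Because the assignment depends only on the id, newly collected samples get a split label without disturbing the old ones.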

After ML (todo)

  1. Write down all experiments, check their performance on the test part, and select the best one.
  2. Trace the model :)
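Tracing here most likely refers to `torch.jit.trace`, which records the model's forward pass into a TorchScript artifact that can be deployed without the original Python class (a minimal sketch with a toy model; your real model and example input will differ):

```python
import torch
import torch.nn as nn

# Toy stand-in for the trained model selected above.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model.eval()

# Trace with a representative example input; note that control flow
# depending on input *values* is not captured by tracing.
example_input = torch.randn(1, 8)
traced = torch.jit.trace(model, example_input)

# The traced module can be saved and reloaded standalone.
traced.save("model_traced.pt")
loaded = torch.jit.load("model_traced.pt")
```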

Extra

To keep your code simple and readable, you can use catalyst-codestyle:

# install
pip install -U catalyst-codestyle
# and run
catalyst-make-codestyle && catalyst-check-codestyle
