
Template for MLOps projects with GPU

  1. The first thing to do on launch is to open a new shell and verify that the virtualenv is sourced; a quick check is sketched below.
  • TBD
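
A quick way to check, assuming a standard venv/virtualenv layout (this snippet is illustrative and not a script shipped with the repo):

  # Hypothetical check that Python is running inside a virtual environment;
  # in a venv/virtualenv, sys.prefix points at the environment, not the base install.
  import sys

  in_venv = sys.prefix != getattr(sys, "base_prefix", sys.prefix)
  print("virtualenv active:", in_venv, "->", sys.prefix)

If it prints False, source the environment's activate script before running anything else.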

Things included are:

  • Makefile

  • Pytest (a minimal test sketch follows this list)

  • pandas

  • Pylint

  • Dockerfile

  • GitHub Copilot

  • Jupyter and IPython

  • Most common Python libraries for ML/DL and Hugging Face

  • GitHub Actions
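
As a rough illustration of how the Pytest and pandas pieces above fit together, a minimal test might look like the following; the file and function names are illustrative rather than taken from the repo, and the Makefile presumably wraps the python -m pytest invocation in a test target:

  # test_example.py: hypothetical minimal pytest + pandas test
  import pandas as pd

  def test_dataframe_has_expected_columns():
      df = pd.DataFrame({"feature": [1, 2, 3], "label": [0, 1, 0]})
      assert list(df.columns) == ["feature", "label"]
      assert len(df) == 3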

Verify GPU works

The following examples test out the GPU:

  • run the PyTorch training test: python utils/quickstart_pytorch.py
  • run the PyTorch CUDA test: python utils/verify_cuda_pytorch.py (a minimal version of this check is sketched after this list)
  • run the TensorFlow training test: python utils/quickstart_tf2.py
  • run the NVIDIA monitoring test: nvidia-smi -l 1; it should show a GPU
  • run the Whisper transcription test: ./utils/transcribe-whisper.sh and verify the GPU is working with nvidia-smi -l 1
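
The CUDA test above boils down to asking PyTorch whether it can see a device; a minimal sketch follows (the repo's utils/verify_cuda_pytorch.py may include additional diagnostics):

  # Minimal sketch of a PyTorch CUDA check; not the exact contents of
  # utils/verify_cuda_pytorch.py.
  import torch

  if torch.cuda.is_available():
      print("CUDA device:", torch.cuda.get_device_name(0))
  else:
      print("No CUDA device detected")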

Additionally, this workspace is set up to fine-tune Hugging Face models.

Fine-tune

python hf_fine_tune_hello_world.py
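
For orientation, a Hugging Face fine-tuning "hello world" built on the Trainer API generally looks like the sketch below. The model name, dataset, and hyperparameters here are assumptions chosen for illustration; the actual hf_fine_tune_hello_world.py may differ:

  # Hypothetical minimal fine-tuning sketch using the Hugging Face Trainer;
  # the repo's hf_fine_tune_hello_world.py may use a different model/dataset.
  from datasets import load_dataset
  from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                            Trainer, TrainingArguments)

  model_name = "distilbert-base-uncased"  # assumption: any small encoder works
  tokenizer = AutoTokenizer.from_pretrained(model_name)
  model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

  # A small slice of a public dataset keeps the "hello world" fast.
  dataset = load_dataset("imdb", split="train[:1000]").train_test_split(test_size=0.2)

  def tokenize(batch):
      return tokenizer(batch["text"], padding="max_length", truncation=True)

  tokenized = dataset.map(tokenize, batched=True)

  args = TrainingArguments(
      output_dir="./results",
      num_train_epochs=1,
      per_device_train_batch_size=8,
  )

  trainer = Trainer(
      model=model,
      args=args,
      train_dataset=tokenized["train"],
      eval_dataset=tokenized["test"],
  )

  trainer.train()

The Trainer moves the model to the GPU automatically when CUDA is available, which is exactly what the checks in the previous section confirm.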

Used in the Following Projects

Used as the base and customized in the following Duke MLOps and Applied Data Engineering Coursera Labs:

References
