NVIDIA CUDA + cuDNN DevContainer Template with GPU Support

Build and run a DevContainer with Python 3, CUDA 11.8, and cuDNN. This is an easier way to run TensorFlow/AutoKeras on Windows with GPU support, avoiding frustrating installation and compatibility issues. Both .py and .ipynb scripts are supported without installing Anaconda or Jupyter Notebook.
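
The repository's actual .devcontainer configuration is not reproduced here, but a GPU-enabled Dev Container setup typically looks roughly like the sketch below. The Dockerfile path, the extension list, and hooking install-dev-tools.sh into postCreateCommand are assumptions; the repo may wire these up differently.

{
  "name": "cuda-cudnn-gpu",
  "build": { "dockerfile": "Dockerfile" },
  // Expose the host GPU to the container; on Windows this relies on
  // Docker Desktop's WSL 2 GPU support.
  "runArgs": ["--gpus=all"],
  "customizations": {
    "vscode": {
      "extensions": ["ms-python.python", "ms-toolsai.jupyter"]
    }
  },
  // Assumption: the repo may instead install these tools from its Dockerfile.
  "postCreateCommand": "bash install-dev-tools.sh"
}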

Prerequisites

See the TensorFlow documentation for detailed hardware and system requirements for running TensorFlow with GPU support.

Be warned that some deep learning models require more GPU memory than others and may cause the Python kernel to crash. You may need to use a smaller batch size for training.
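
Besides lowering the batch size, a common mitigation is to let TensorFlow allocate GPU memory on demand rather than reserving it all up front. A minimal sketch (not part of this repository's scripts):

import tensorflow as tf

# Allocate GPU memory incrementally instead of grabbing it all at startup.
# This must run before any tensors are placed on the GPU.
for gpu in tf.config.list_physical_devices('GPU'):
    tf.config.experimental.set_memory_growth(gpu, True)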

Start DevContainer

Modify requirements.txt to include packages you'd like to install. ipykernel is required for executing IPython notebook cells in VS Code.
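
For reference, a minimal requirements.txt for this setup might look like the following; the package choices beyond ipykernel are assumptions, and you should pin versions that match CUDA 11.8.

tensorflow    # pick a release built against CUDA 11.8
autokeras
ipykernel     # needed so VS Code can execute notebook cells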

Open the folder in VS Code, press F1 to bring up the Command Palette, and select Dev Containers: Open Folder in Container...

Wait until the DevContainer is up and running, then check whether TensorFlow can detect the GPU:

python3 -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"
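
If no GPU is listed, the slightly longer check below (not part of the repository) can help narrow down whether TensorFlow was built with CUDA and whether an op actually executes on the device:

import tensorflow as tf

print("Built with CUDA:", tf.test.is_built_with_cuda())
gpus = tf.config.list_physical_devices('GPU')
print("Visible GPUs:", gpus)

if gpus:
    # Run a small matrix multiplication pinned to the first GPU to confirm
    # that kernels really execute on the device.
    with tf.device('/GPU:0'):
        a = tf.random.normal((1024, 1024))
        b = tf.random.normal((1024, 1024))
        c = tf.matmul(a, b)
    print("Matmul norm:", float(tf.norm(c)))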

Test run using the example file:

python3 autokeras-test.py

Or open autokeras-test.ipynb and run the cells.
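
The contents of autokeras-test.py are not reproduced here; a minimal AutoKeras smoke test along the following lines (a hypothetical script based on the standard MNIST example, not necessarily the repository's) exercises both the GPU and the AutoKeras API:

import autokeras as ak
import tensorflow as tf

(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()

# Small search budget and a single epoch so the smoke test finishes quickly.
clf = ak.ImageClassifier(max_trials=1, overwrite=True)
clf.fit(x_train, y_train, epochs=1)
print("Test loss/accuracy:", clf.evaluate(x_test, y_test))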

From then on, simply start Docker and open the folder in VS Code to reuse the container that has already been built.

Resources

Check NVIDIA's cuDNN package listings for the latest versions of libcudnn8 and libcudnn8-dev referenced in install-dev-tools.sh.
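
As a sketch (not the repository's actual script), the available builds can be listed and pinned like this; the version string shown is only an example:

# List the cuDNN builds published for the configured CUDA apt repository.
apt-cache madison libcudnn8

# Pin libcudnn8 and libcudnn8-dev to a matching CUDA 11.8 build
# (substitute the version reported above).
apt-get install -y \
    libcudnn8=8.6.0.163-1+cuda11.8 \
    libcudnn8-dev=8.6.0.163-1+cuda11.8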
