AWS GPU Resources
Sean McDaniel edited this page Jan 2, 2020
Supporting GPU computation with SABER (and Docker more generally) requires some additional configuration to make sure the correct GPU libraries and environment variables are available. This page documents how to set up an EC2 instance to work with SABER and GPUs.
- Create a new EC2 Ubuntu instance with the Deep Learning AMI (Ubuntu).
- Connect to your instance (ssh with the user `ubuntu`, the IP address from AWS, and the PEM certificate you generated during step 1).
- Update the instance: `sudo apt-get update && sudo apt-get upgrade`
- Ensure Docker is installed (`which docker`); if not, follow https://docs.docker.com/install/linux/docker-ce/ubuntu/.
- Check that the NVIDIA drivers are present (`which nvcc`); if not, run `sudo apt-get install -y cuda-9-1 nvidia-cuda-toolkit`.
- Make NVIDIA the default Docker runtime (required by CWL, unfortunately) by editing `/etc/docker/daemon.json`:

  ```json
  {
      "default-runtime": "nvidia",
      "runtimes": {
          "nvidia": {
              "path": "nvidia-container-runtime",
              "runtimeArgs": []
          }
      }
  }
  ```
- Restart the Docker daemon so the change takes effect (e.g. `sudo systemctl restart docker`).
- Test that this is working: `docker run --runtime=nvidia --rm nvidia/cuda:7.5-devel nvidia-smi`
- Test your container! An example of a Theano/Keras container can be found in `saber/xbrain/unets`.
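The prerequisite checks in the steps above can be sketched as a small script. The `check` helper is hypothetical (not part of SABER), and the install hints simply mirror the commands listed above:

```shell
#!/bin/sh
# Report whether each required tool is on the PATH, with a hint if it is not.
# check <command> <hint-if-missing>  -- hypothetical helper for illustration.
check() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "$1: found at $(command -v "$1")"
    else
        echo "$1: missing ($2)"
    fi
}

check docker "install via https://docs.docker.com/install/linux/docker-ce/ubuntu/"
check nvcc   "sudo apt-get install -y cuda-9-1 nvidia-cuda-toolkit"
```

Running this before the daemon.json step makes it easy to see which of the install commands above you actually need.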
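Because a malformed `daemon.json` will stop the Docker daemon from starting, it is worth validating the edit before restarting. A minimal sketch in Python (the `check_daemon_config` helper is illustrative, not part of SABER or Docker):

```python
import json

# The daemon.json content from the step above (nvidia as default runtime).
expected = """
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
"""

def check_daemon_config(text):
    """Return True if the config parses and sets nvidia as the default runtime."""
    cfg = json.loads(text)  # raises ValueError on malformed JSON
    return (cfg.get("default-runtime") == "nvidia"
            and "nvidia" in cfg.get("runtimes", {}))

print(check_daemon_config(expected))  # → True
```

In practice you would read `/etc/docker/daemon.json` instead of the string literal; `json.loads` failing loudly is the point — it catches the typo before the daemon does.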