Dockerfiles for setting up GPUs with NVIDIA CUDA and other machine intelligence packages
GitHub Actions runner for training classifiers with CUDA acceleration, using nvidia-docker and TensorFlow Core
Containerized development workflow for the NorLab_MPPI and SNOW_AutoRally projects, leveraging nvidia-docker. Believe it or not, it's configured for developing with ROS Melodic in Python 3.6.
Documentation explaining how to install NVIDIA Docker on an offline network so your containers can leverage the power of your GPU.
How to compile OpenCV with CUDA support within a Docker image
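A rough sketch of that recipe, run inside an NVIDIA CUDA devel container (the image tag, OpenCV version 4.8.0, and CUDA_ARCH_BIN value below are assumptions for illustration, not taken from the repository):

# Sketch: compile OpenCV with CUDA inside a container started from
# nvidia/cuda:12.2.0-devel-ubuntu22.04 (tag, version, and arch are assumptions).
apt-get update && apt-get install -y build-essential cmake git
git clone --depth 1 --branch 4.8.0 https://github.com/opencv/opencv.git
git clone --depth 1 --branch 4.8.0 https://github.com/opencv/opencv_contrib.git
cmake -S opencv -B build \
      -D WITH_CUDA=ON -D CUDA_ARCH_BIN=8.6 \
      -D OPENCV_EXTRA_MODULES_PATH="$PWD/opencv_contrib/modules" \
      -D BUILD_TESTS=OFF -D BUILD_PERF_TESTS=OFF
cmake --build build -j"$(nproc)"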
Unofficial minimal instructions for managing NVIDIA Multi-Instance GPU (MIG) in a Docker container
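For orientation, a minimal MIG workflow along those lines looks roughly like this (the GPU index, profile ID 9 / 3g.20gb, and the image tag are assumptions for illustration):

# Sketch: enable MIG on GPU 0, carve out instances, and expose one to a container.
sudo nvidia-smi -i 0 -mig 1            # enable MIG mode (may require a GPU reset)
sudo nvidia-smi mig -i 0 -cgi 9,9 -C   # create two GPU instances with compute instances
nvidia-smi -L                          # list the resulting MIG device UUIDs
docker run --rm --gpus '"device=0:0"' nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi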
This repository contains the RunPod serverless component of the SDGP project "quizzifyme"
Sample YAML file for setting up the NVIDIA driver and NVIDIA device plugin for better scheduling
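Deploying such a manifest and checking that the scheduler now advertises GPU resources typically looks like this (the manifest file name is a placeholder, not the repository's actual file):

# Sketch: apply the device plugin manifest, then confirm nodes report nvidia.com/gpu.
kubectl apply -f nvidia-device-plugin.yml
kubectl describe nodes | grep -i "nvidia.com/gpu"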
Prometheus NVIDIA-Docker exporter
Build and run Docker containers leveraging NVIDIA GPUs, including a Fedora 33 RPM build: make fedora33
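The usual smoke test after building any of these GPU-enabled images (the CUDA image tag here is an assumption):

# Sketch: verify the container runtime can see the GPU.
docker run --rm --gpus all nvidia/cuda:12.2.0-base-ubuntu22.04 nvidia-smi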
NVIDIA CUDA base image on Ubuntu Linux, used to run machine learning workloads
FastNode: A Neuro-Graphic Self-Learnable Engine for Cognitive GUI Automation
Simple deep learning setup for running TensorFlow on Google Cloud GPU machines. It uses a prepackaged Docker container maintained by Google to set up TensorFlow.
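A quick check that such a container actually sees the GPU, shown here with the stock tensorflow/tensorflow GPU image purely as an illustration rather than Google's prepackaged one:

# Sketch: confirm TensorFlow inside the container detects the GPU.
docker run --rm --gpus all tensorflow/tensorflow:latest-gpu \
  python -c "import tensorflow as tf; print(tf.config.list_physical_devices('GPU'))"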
A docker container for ccminer
Docker image for running Udacity Deep Learning projects
Docker image of Claymore GPU miner
Docker-in-Docker with NVIDIA GPU support 🐳