Predicting time series with LSTMs

This repository contains data and a sample notebook for building a simple time series model with an LSTM network. The model is built and trained with the Keras framework on top of the TensorFlow library, and the code is executed on GPUs through nvidia-docker for efficiency.

Although the sample data and model are trivial and hence don't require GPUs, this should provide a starting point for more elaborate models and larger datasets.
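For orientation, a single-layer LSTM regressor in Keras looks roughly like the sketch below. The window length, layer size, and training settings here are illustrative, not the repository's actual values:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Toy data: 100 windows of 10 timesteps with 1 feature each, each
# labeled with the next value in the series (shapes are illustrative).
X = np.random.rand(100, 10, 1)
y = np.random.rand(100, 1)

model = Sequential()
model.add(LSTM(32, input_shape=(10, 1)))  # 32 units over 10 timesteps x 1 feature
model.add(Dense(1))                       # regress the next value in the series
model.compile(loss='mse', optimizer='adam')
model.fit(X, y, epochs=5, batch_size=16)
```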

Running it on Azure

Deploy to Azure | Visualize

Please note that N-series (GPU-enabled) instances are not available in all regions. Take this into account when creating the resource group, as the resource group's location is used as the basis for all resources in the deployment.

This provisions an N-series instance running Ubuntu on Azure. The machine has nvidia-docker installed and starts Jupyter with a sample notebook showing how to build an LSTM model using Keras. You can access Jupyter through the VM's DNS name and/or connect to the machine through SSH.
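Once Jupyter is up, one quick way to verify that TensorFlow actually sees the GPU is to list the local devices from within the notebook (a sketch assuming the TensorFlow 1.x API that was current for this repository; exact device names vary by version):

```python
# List the devices TensorFlow can use; expect a GPU entry such as
# '/gpu:0' alongside '/cpu:0' if the drivers and container are set up.
from tensorflow.python.client import device_lib

print([d.name for d in device_lib.list_local_devices()])
```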

Everything is set up to utilize the machine's GPU for training. Note that the sample notebook only utilizes a single GPU; with Keras you can currently only do model parallelization (training multiple models and averaging their outcomes) if you want to go for multiple GPUs. If you need data parallelization, you might have to access the underlying TensorFlow layers and implement it yourself.
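For completeness, placing the model on a specific GPU explicitly can be done through TensorFlow's device scopes, since layers created inside the scope are placed on that device (a sketch assuming the TensorFlow backend; by default TensorFlow already uses the first visible GPU, so this is rarely needed):

```python
import tensorflow as tf
from keras.models import Sequential
from keras.layers import LSTM, Dense

# Illustrative only: build the Keras model inside a tf.device scope so
# its ops land on the first GPU. Shapes match the earlier toy example.
with tf.device('/gpu:0'):
    model = Sequential()
    model.add(LSTM(32, input_shape=(10, 1)))
    model.add(Dense(1))
model.compile(loss='mse', optimizer='adam')
```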
