Stacked Denoising Autoencoders (SDA) implemented in TensorFlow to analyze clinical health records and construct deep learning models to predict future patient complications.


Deep learning project in TensorFlow and Torch to analyze clinical health records and construct deep learning models to predict future patient complications.


This project uses Stacked Denoising Autoencoders (SDA) [P. Vincent] to perform feature learning on a given dataset. Two steps are required to fully configure the network to encode the input data: pre-training and fine-tuning.

During unsupervised pre-training, the network's parameters are learned greedily, layer by layer, by minimizing the reconstruction loss between each input and its decoded counterpart. A supervised softmax classifier on top of the network then fine-tunes all parameters of the network (the weights and biases of each autoencoder layer plus the softmax weights and biases).
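The greedy layer-wise scheme can be sketched in plain NumPy. This is only a minimal illustration, not the repo's implementation: masking noise, tied encoder/decoder weights, squared-error loss, and all function names here are assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def corrupt(x, noise_level, rng):
    # Masking noise: zero out a random fraction of the input entries.
    return x * (rng.random(x.shape) >= noise_level)

def train_da(x, n_hidden, noise_level, lr=0.5, steps=200):
    """Pretrain one denoising autoencoder layer by gradient descent on the
    squared reconstruction error (tied encoder/decoder weights assumed)."""
    n, d = x.shape
    W = rng.normal(0.0, 0.1, (d, n_hidden))
    b_enc, b_dec = np.zeros(n_hidden), np.zeros(d)
    for _ in range(steps):
        x_tilde = corrupt(x, noise_level, rng)
        h = sigmoid(x_tilde @ W + b_enc)   # encode the corrupted input
        x_hat = sigmoid(h @ W.T + b_dec)   # decode back to input space
        # Backpropagate the mean squared error against the CLEAN input x.
        g_dec = (x_hat - x) * x_hat * (1 - x_hat) * (2.0 / x.size)
        g_enc = (g_dec @ W) * h * (1 - h)
        W -= lr * (x_tilde.T @ g_enc + g_dec.T @ h)  # tied weights: both paths
        b_enc -= lr * g_enc.sum(axis=0)
        b_dec -= lr * g_dec.sum(axis=0)
    return W, b_enc

# Greedy stacking: each layer is pretrained on the previous layer's output.
x = rng.random((64, 20))
W1, b1 = train_da(x, 8, noise_level=0.3)
h1 = sigmoid(x @ W1 + b1)
W2, b2 = train_da(h1, 4, noise_level=0.3)
h2 = sigmoid(h1 @ W2 + b2)   # the stacked representation
```

After stacking, fine-tuning would update every `W` and bias jointly through the softmax loss, which is what `finetune_parameters` does in this project.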

Following this configuration, the input data can be read into the model and encoded into a different representation, depending on the user's chosen parameters (layer dimensions, activations, noise level, etc.). For example, this technique can transform a sparse 30,000-dimensional feature space into a dense 400-dimensional one, priming the data for better training performance.
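At the shape level, the encoding step is just a forward pass through the trained encoder. A small sketch (with randomly initialized weights standing in for a trained encoder, and smaller dimensions than the 30,000 → 400 example, which works identically at scale):

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

# Stand-in for a trained encoder layer: 3000-dim sparse input -> 40-dim dense code.
W, b = rng.normal(0.0, 0.01, (3000, 40)), np.zeros(40)

x_sparse = np.zeros((5, 3000))
for i in range(5):
    x_sparse[i, rng.integers(0, 3000, 10)] = 1.0   # ~10 active features per row

x_dense = sigmoid(x_sparse @ W + b)   # dense 40-dimensional representation
```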


The current working source code is located in tf/. It currently reads train/test data from CSV files in batch style. The following three datasets must be present for the SDA to output newly learned features:

  • X training values
  • Y training values
  • X testing values

An additional dataset is needed if the output of SDA encoding is directly used for classification via the provided softmax classifier:

  • Y testing values
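Batch-style CSV reading might look like the following minimal generator. This is a sketch for illustration; the actual loader in tf/ may differ, and `csv_batches` is a name introduced here, not the repo's API.

```python
import csv

def csv_batches(path, batch_size):
    """Yield successive batches of float-valued rows from a CSV file
    (no header row assumed)."""
    batch = []
    with open(path) as f:
        for row in csv.reader(f):
            batch.append([float(v) for v in row])
            if len(batch) == batch_size:
                yield batch
                batch = []
    if batch:
        yield batch  # final partial batch
```

Yielding batches lazily keeps memory use constant even for datasets that do not fit in RAM.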

In the future, a version of the program will be optimized for a multi-GPU (4-GPU) system.

# Start a TensorFlow session
sess = tf.Session()

# Initialize an unconfigured autoencoder with specified dimensions, activations, etc.
sda = SDAutoencoder(dims=[784, 256, 64, 32],
                    activations=["sigmoid", "tanh", "sigmoid"])  # other arguments omitted

# Pretrain the weights and biases of each layer in the network.
sda.pretrain_network(X_TRAIN_PATH)  # method name assumed; see tf/ for the exact API

# Fine-tune all parameters with the softmax classifier on the training labels.
sda.finetune_parameters(X_TRAIN_PATH, Y_TRAIN_PATH, output_dim=10)

# Write the newly learned feature representation to file.
sda.write_encoded_input("../data/transformed.csv", X_TEST_PATH)

For an example of how training is performed and accuracy is subsequently evaluated, a basic procedure is implemented on the MNIST data set in tf/.


Testing on the MNIST data set, the softmax classifier on top of the features extracted by the SDA achieves approximately 98.3% accuracy in identifying the digits. To achieve this result, the model in tf/ is set up with the following parameters (not necessarily optimal), using 500,000 data points for layer-wise pretraining and 3,000,000 data points for fine-tuning:

sda = SDAutoencoder(dims=[784, 400, 200, 80],
                    activations=["sigmoid", "sigmoid", "sigmoid"])  # other hyperparameters omitted

Total execution time for feature learning, training, and evaluation was just under 9 minutes on a 1.3 GHz MacBook Air processor (under a minute on a GPU machine using one GTX 1080). This result improves upon the 92% benchmark achieved by a simple softmax classifier without feature learning, and is comparable to some simple 2D convolutional network models, which are specifically designed to exploit the 2D structure of image data.

In the future, we plan to do additional testing to optimize hyperparameters in the model and improve execution speed in various parts of the model.

Current status

  • (Done) SDA implemented in TensorFlow.
  • (Done) Implement softmax classifier.
  • (To do) Implement command-line execution of the program.
  • (WIP) Testing for any silent bugs.
  • (To do) Enable multi-GPU support in the architecture.
  • (WIP) Add compatibility for other data-loading methods.
  • (To do) Add pre-processing methods in TF.
  • (WIP) More documentation.