
RNN-Time-lagged-Autoencoder

Tensorflow 2.0 implementation of a time-lagged autoencoder using Recurrent Neural Networks. The model is inspired by the paper Time-lagged autoencoders: Deep learning of slow collective variables for molecular kinetics (Christoph Wehmeyer & Frank Noé).

We have a three-dimensional time series dataset. Data points in this time series can be grouped into four states (0, 1, 2, 3), which cannot be separated by simple geometric means.


Our goal is to reduce the dataset from 3D to 1D in such a way that the four states become separable. We assume that the training set contains only observations, not labels.

To perform the task, I modified the time-lagged autoencoder idea from the original paper: instead of a dense autoencoder, I used an RNN autoencoder to exploit the fact that the given data is a time series.
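In a time-lagged autoencoder, the network reconstructs a future frame of the series rather than the input itself. For the RNN variant, the training pairs are windows of the series shifted by a lag τ. A minimal sketch of building such pairs (the helper name, lag, and window size are illustrative assumptions, not taken from the notebook):

```python
import numpy as np

def make_lagged_pairs(series, lag, window):
    # Inputs: windows starting at t; targets: the same windows shifted by `lag`.
    # The RNN autoencoder is then trained to map X[i] -> Y[i].
    X, Y = [], []
    for t in range(len(series) - window - lag + 1):
        X.append(series[t : t + window])
        Y.append(series[t + lag : t + lag + window])
    return np.array(X), np.array(Y)
```

With `lag=0` this reduces to an ordinary sequence autoencoder; a positive lag forces the latent code to capture the slow dynamics that persist from `t` to `t + lag`.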

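The encoder compresses each window to a 1D bottleneck and the decoder expands it back to a full window. A minimal Keras sketch of such an RNN autoencoder, assuming GRU cells and a hidden size of 32 (the actual layer types and sizes in the notebook may differ):

```python
import tensorflow as tf

def build_rnn_tla(timesteps, n_features=3, latent_dim=1):
    # Encoder: RNN summarizes the window, Dense squeezes it to the 1D bottleneck.
    inputs = tf.keras.Input(shape=(timesteps, n_features))
    hidden = tf.keras.layers.GRU(32)(inputs)
    latent = tf.keras.layers.Dense(latent_dim, name="latent")(hidden)
    # Decoder: repeat the code over time and unroll it back to a full window.
    x = tf.keras.layers.RepeatVector(timesteps)(latent)
    x = tf.keras.layers.GRU(32, return_sequences=True)(x)
    outputs = tf.keras.layers.TimeDistributed(
        tf.keras.layers.Dense(n_features))(x)
    model = tf.keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model
```

Training minimizes the MSE between the decoder output and the time-lagged target windows; after training, the `latent` layer's activations give the 1D embedding used for clustering.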

The obtained results are surprisingly good: thanks to the nonlinear transformation, practically all data points are split into 4 separate clusters. The accuracy on the validation set (for which we have both observations and labels) is about 99.5%.
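Since the model is trained without labels, validation accuracy has to account for the arbitrary ordering of discovered clusters. One way to score it, sketched here with k-means on the 1D embedding and a best one-to-one matching of clusters to labels (this evaluation helper is an assumption, not the notebook's exact code):

```python
import numpy as np
from sklearn.cluster import KMeans
from scipy.optimize import linear_sum_assignment

def clustering_accuracy(latent, labels, n_clusters=4):
    # Cluster the 1D embedding, then find the permutation of cluster ids
    # that best matches the true labels (Hungarian algorithm).
    preds = KMeans(n_clusters=n_clusters, n_init=10,
                   random_state=0).fit_predict(latent.reshape(-1, 1))
    cm = np.zeros((n_clusters, n_clusters), dtype=int)
    for p, l in zip(preds, labels):
        cm[p, l] += 1
    row, col = linear_sum_assignment(-cm)  # maximize matched counts
    return cm[row, col].sum() / len(labels)
```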


Getting Started

To run the project you need Jupyter Notebook installed, or you can run it in Google Colab. The main file is RNN Time-lagged autoencoder.ipynb.

Prerequisites

- tensorflow 2.0
- numpy
- tqdm
- matplotlib
- sklearn
- mpl_toolkits

Authors

License

This project is licensed under the MIT License - see the LICENSE.md file for details
