Software for unsupervised deep architectures
Get easy access to unsupervised deep neural networks, from building their architecture to training and evaluating them.

Ruta is based on the well-known open source deep learning library Keras, through its R interface, and has been developed to work with the TensorFlow backend. Installing these dependencies requires a Python interpreter as well; you can install them via the Python package manager pip, or possibly via your distribution's package manager if you are running Linux:
```sh
$ sudo pip install tensorflow
$ sudo pip install keras
```
Otherwise, you can follow the official installation guides for TensorFlow and Keras.
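Alternatively, the Keras R interface can set up both Python dependencies for you. A minimal sketch, assuming the `keras` R package is already installed (it creates a dedicated Python environment; see `?keras::install_keras` for the available options):

```r
# Install Keras and the TensorFlow backend from within R
keras::install_keras()
```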
```r
# Just get Ruta from CRAN
install.packages("ruta")

# Or get the latest development version from GitHub
devtools::install_github("fdavidcl/ruta")
```
All R dependencies will be automatically installed. These include the Keras R interface and `purrr`. For convenience, we also recommend installing and loading either `magrittr` or `purrr`, so that the pipe operator `%>%` is available.
The easiest way to start working with Ruta is the `autoencode()` function. It allows you to select a type of autoencoder and transform the feature space of a data set into another one with desirable properties, depending on the chosen type.
```r
iris[, 1:4] %>% as.matrix %>% autoencode(2, type = "denoising")
```
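The call returns the encoded data set, so the result can be stored and used like any other matrix. A minimal sketch, assuming `autoencode()` keeps its default training settings and returns one row per instance:

```r
library(ruta)
library(purrr)

# Encode the four iris features into 2 learned features
codes <- iris[, 1:4] %>% as.matrix %>% autoencode(2, type = "denoising")
dim(codes)  # one row per instance, 2 columns
```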
You can learn more about the different variants of autoencoders by reading *A practical tutorial on autoencoders for nonlinear feature fusion*.
Ruta provides the functionality to build diverse neural architectures (see `autoencoder()`), train them as autoencoders (see `train()`) and perform different tasks with the resulting models (see `reconstruct()`), including evaluation (see `evaluate_mean_squared_error()`). The following is a basic example of a natural pipeline with an autoencoder:
```r
library(ruta)
library(purrr)

# Shuffle the rows and normalize the dataset
x <- iris[sample(nrow(iris)), 1:4] %>% as.matrix %>% scale
x_train <- x[1:100, ]
x_test <- x[101:150, ]

autoencoder(
  input() + dense(256) + dense(36, "tanh") + dense(256) + output("sigmoid"),
  loss = "mean_squared_error"
) %>%
  make_contractive(weight = 1e-4) %>%
  train(x_train, epochs = 40) %>%
  evaluate_mean_squared_error(x_test)
```
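Since `train()` returns the fitted model, storing it instead of piping it straight into evaluation lets you reuse it for other tasks such as reconstruction. A minimal sketch, assuming `x_train` and `x_test` from the example above:

```r
# Keep the trained model around for later use
model <- autoencoder(
  input() + dense(36, "tanh") + output("sigmoid"),
  loss = "mean_squared_error"
) %>% train(x_train, epochs = 40)

# Map the test instances through the encoder and decoder
reconstructions <- model %>% reconstruct(x_test)
head(reconstructions)
```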
Note: if using TensorFlow 2, you may need to run the following lines before training an autoencoder:
```r
if (tensorflow::tf$executing_eagerly())
  tensorflow::tf$compat$v1$disable_eager_execution()
```
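The guard makes this safe to run more than once, so one option is to place it at the top of your script, before any model is trained; a sketch:

```r
library(ruta)
library(purrr)

# TensorFlow 2 enables eager execution by default; the workaround above
# switches back to graph mode before any training happens
if (tensorflow::tf$executing_eagerly())
  tensorflow::tf$compat$v1$disable_eager_execution()

# ... build and train autoencoders as usual from here on
```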