Efficient Unitary Neural Network (EUNN) implementation in TensorFlow


Unitary neural networks can solve the gradient vanishing and gradient explosion problems and help learn long-term dependencies. EUNN is an efficient, strictly enforced unitary parametrization based on the SU(2) group. This repository contains a TensorFlow implementation of the Efficient Unitary Neural Network (EUNN).
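To see why unitarity matters, here is a minimal NumPy sketch (illustrative only, not code from this repository): repeatedly applying a unitary matrix preserves the norm of the hidden state across many recurrent steps, while a generic non-unitary matrix shrinks or blows it up.

```python
import numpy as np

theta = 0.3
# A 2x2 rotation matrix: the simplest real unitary (orthogonal) matrix.
unitary = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
# A contraction: eigenvalues inside the unit circle, so signals decay.
generic = 0.9 * np.eye(2)

h_u = np.array([1.0, 0.0])
h_g = np.array([1.0, 0.0])
for _ in range(200):          # 200 recurrent steps
    h_u = unitary @ h_u
    h_g = generic @ h_g

print(np.linalg.norm(h_u))    # stays at 1.0: no vanishing or explosion
print(np.linalg.norm(h_g))    # ~0.9**200, effectively vanished
```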

If you find this work useful, please cite arXiv:1612.05231.

I am working on submitting this code to tf.contrib so that in the future it can be used directly from official TensorFlow.


Requires TensorFlow > 1.2.0.




To use EUNN in your model, simply copy `eunn.py` into your project.

Then you can use EUNNCell in the same way you use the built-in LSTM cell:

```python
from eunn import EUNNCell
cell = EUNNCell(hidden_size, capacity, fft, complex)
```


  • hidden_size: Integer. The size of the hidden state.
  • capacity: Optional. Integer. Only used for the tunable style.
  • fft: Optional. Bool. If True, EUNN uses the FFT style. Default is False.
  • complex: Optional. Bool. If True, EUNN operates in the complex domain. Default is True.


  • For the complex domain, the input data type should be tf.complex64.
  • For the real domain, the input data type should be tf.float32.

Example tasks for EUNN

The copying memory task and the pixel-permuted MNIST task from the paper are included here. Due to copyright issues, we cannot release the TIMIT task.

Copying Memory Task

```
python --model eunn --T 200 --fft
```
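For reference, here is a hedged NumPy sketch of how copying-memory data is typically generated (illustrative, not this repository's code): the model sees 10 random symbols, then T blanks, then a "go" marker, and must reproduce the 10 symbols after the marker; larger T demands longer-range memory.

```python
import numpy as np

def copying_data(T, batch, n_symbols=8, seq_len=10):
    blank, marker = n_symbols, n_symbols + 1
    seq = np.random.randint(0, n_symbols, (batch, seq_len))
    # Input: symbols, then T blanks, then the marker, then blanks.
    x = np.full((batch, seq_len + T + 1 + seq_len), blank, dtype=np.int64)
    x[:, :seq_len] = seq
    x[:, seq_len + T] = marker
    # Target: blanks everywhere except the final seq_len positions.
    y = np.full_like(x, blank)
    y[:, -seq_len:] = seq
    return x, y

x, y = copying_data(T=200, batch=4)
print(x.shape)   # (4, 221): 10 symbols + 200 blanks + 1 marker + 10 outputs
```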

Pixel-Permuted MNIST Task

```
python --model eunn --iter 20000 --hidden 512 --complex False
```
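The pixel-permuted MNIST setup can be sketched as follows (a hedged NumPy illustration, not this repository's code): each 28x28 image is flattened into a length-784 pixel sequence, and one fixed random permutation is applied to every image, destroying local spatial structure so the RNN must track long-range dependencies.

```python
import numpy as np

rng = np.random.RandomState(0)
perm = rng.permutation(28 * 28)   # one fixed permutation shared by all images

def to_permuted_sequence(image):
    """image: (28, 28) array -> (784, 1) permuted pixel sequence."""
    return image.reshape(-1)[perm].reshape(-1, 1)

image = rng.rand(28, 28)          # stand-in for an MNIST digit
seq = to_permuted_sequence(image)
print(seq.shape)                  # (784, 1)
```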


MIT License
