LogSumExp Optimisation #1563

mgermain opened this Issue Oct 15, 2013 · 6 comments

4 participants


It would be nice to have a more numerically stable LogSumExp built in.
Instead of writing T.log(T.sum(T.exp(x))) we could use something like LogSumExp(x).

def LogSumExp(x):
    # Shift by the max so that exp never overflows.
    x_max = T.max(x)
    return T.log(T.sum(T.exp(x - x_max))) + x_max

Or even better with axis support.

def LogSumExp(x, axis=None):
    x_max = T.max(x, axis=axis, keepdims=True)
    return T.log(T.sum(T.exp(x - x_max), axis=axis, keepdims=True)) + x_max
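The effect of the max-shift is easy to demonstrate outside of Theano. Here is a minimal NumPy sketch (the function names `naive_logsumexp` and `stable_logsumexp` are mine, for illustration) comparing the direct translation of `T.log(T.sum(T.exp(x)))` against the shifted version proposed above:

```python
import numpy as np

def naive_logsumexp(x):
    # Direct translation of T.log(T.sum(T.exp(x))):
    # exp overflows to inf for large inputs.
    return np.log(np.sum(np.exp(x)))

def stable_logsumexp(x, axis=None):
    # Same max-shift trick as the proposed LogSumExp:
    # the largest argument to exp becomes 0, so it cannot overflow.
    x_max = np.max(x, axis=axis, keepdims=True)
    out = np.log(np.sum(np.exp(x - x_max), axis=axis, keepdims=True)) + x_max
    return out.item() if axis is None else out

x = np.array([1000.0, 1000.0])
print(naive_logsumexp(x))   # inf (exp(1000) overflows)
print(stable_logsumexp(x))  # ~ 1000.6931, i.e. 1000 + log(2)
```

The exact answer here is 1000 + log(2), which the shifted version recovers while the naive one returns inf.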
Theano member

This implementation already exists in Pylearn2:


There is a PR that adds tests for it: lisa-lab/pylearn2#1390

Maybe we can just port it to Theano.

Theano member

Yes, we want an optimization for that in Theano. This is the point of ticket #2082.

Theano member

"an optimization for that in theano"

Is there an example I could look at to learn how it should be implemented ?


+1 This would be very useful.

@nouiz nouiz added the CCW label Jun 13, 2016