LogSumExp Optimisation #1563

Open · mgermain opened this Issue Oct 15, 2013 · 6 comments · 4 participants

@mgermain

It would be nice to have a numerically stable LogSumExp built-in. Instead of writing T.log(T.sum(T.exp(x))), which overflows to inf as soon as any entry of x is large, we could use something like LogSumExp(x). The trick is that log(sum(exp(x))) == m + log(sum(exp(x - m))) for any m, so choosing m = max(x) keeps every exponent non-positive.

import theano.tensor as T

def LogSumExp(x):
    x_max = T.max(x)  # shift by the max so the largest exponent is 0
    return T.log(T.sum(T.exp(x - x_max))) + x_max

Or, even better, a version with axis support:

def LogSumExp(x, axis=None):
    # keepdims=True so x_max broadcasts against x in the subtraction
    x_max = T.max(x, axis=axis, keepdims=True)
    return T.log(T.sum(T.exp(x - x_max), axis=axis, keepdims=True)) + x_max
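For intuition, here is a quick NumPy sketch of the instability the max-shift fixes (NumPy rather than Theano purely for concreteness; the numbers are illustrative):

import numpy as np

x = np.array([1000.0, 1000.0])

# Naive form: exp(1000) overflows to inf, so the whole result is inf.
naive = np.log(np.sum(np.exp(x)))

# Shifted form: every exponent is <= 0, so nothing overflows.
x_max = np.max(x)
stable = np.log(np.sum(np.exp(x - x_max))) + x_max   # == 1000 + log(2)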
@nouiz
Theano member

This implementation already exists in Pylearn2:

http://deeplearning.net/software/pylearn2/library/expr.html?highlight=log_sum_exp#pylearn2.expr.basic.log_sum_exp

There is a PR that adds tests for it: lisa-lab/pylearn2#1390

Maybe we can just port it to Theano.

@abergeron
Theano member

Yes, we want an optimization for that in Theano. This is the point of ticket #2082.
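For anyone curious, here is a rough sketch of what such a graph rewrite could look like: a local optimizer, registered in the stabilization phase, that pattern-matches log(sum(exp(x))) and returns the max-shifted form. This is an illustration written against Theano's optimizer API, not existing Theano code; the name local_log_sum_exp and the simplified matching are assumptions.

import theano.tensor as T
from theano import scalar
from theano.gof.opt import local_optimizer
from theano.tensor.opt import register_stabilize
from theano.tensor.elemwise import Elemwise, Sum

@register_stabilize
@local_optimizer([Elemwise])
def local_log_sum_exp(node):
    # Match the outer log(...)
    if not (isinstance(node.op, Elemwise)
            and isinstance(node.op.scalar_op, scalar.Log)):
        return False
    # ... whose input is sum(..., axis) ...
    sum_node = node.inputs[0].owner
    if not (sum_node and isinstance(sum_node.op, Sum)):
        return False
    # ... whose input is exp(x).
    exp_node = sum_node.inputs[0].owner
    if not (exp_node and isinstance(exp_node.op, Elemwise)
            and isinstance(exp_node.op.scalar_op, scalar.Exp)):
        return False
    # Rewrite as the max-shifted, numerically stable form over the same axis.
    x = exp_node.inputs[0]
    axis = sum_node.op.axis
    x_max = T.max(x, axis=axis, keepdims=True)
    stable = T.log(T.sum(T.exp(x - x_max), axis=axis)) + T.max(x, axis=axis)
    return [stable]

The shape of the replacement matches the original output because both the sum and the final max reduce over the same axis without keepdims.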

@mgermain

"an optimization for that in theano"

Is there an example I could look at to learn how it should be implemented?

@NeilGirdhar

+1 This would be very useful.

@nouiz nouiz added the CCW label Jun 13, 2016