
[FR] smoothed log sum implementation #18

Open
floswald opened this issue May 2, 2021 · 1 comment

floswald commented May 2, 2021

Thanks for this cool little package. I have a feature request/question. In discrete choice models (for example, equation 14 in this paper) we often have a smoothed version of the log-sum function. That is, instead of

alpha + log(sum(exp(x - alpha)))

we'd have

alpha + σ * log(sum(exp((x - alpha) / σ)))

I was trying to think about how to add this to the package (maybe in my own fork, if it turns out not to be of interest here), but I'm not totally sure where. My best guess would have been to do the division by sigma in places like here, but I'm not totally certain. Thanks for any hints! For concreteness, here is a sketch of the quantity I mean (the name smoothed_logsumexp is just illustrative, not an existing function), shifting by the maximum for numerical stability:
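```julia
# Illustrative sketch only: a direct translation of the smoothed
# log-sum formula above, with alpha = maximum(x) so that every
# exponent is <= 0 and the sum cannot overflow.
function smoothed_logsumexp(x, σ)
    alpha = maximum(x)
    return alpha + σ * log(sum(exp((xi - alpha) / σ) for xi in x))
end
```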

@devmotion (Member) commented

I've encountered this smoothed version in OT (e.g., https://papers.nips.cc/paper/2016/file/2a27b8144ac02f67687f76782a3b5d8f-Paper.pdf) but just used logsumexp with an iterator to implement it (https://github.com/devmotion/StochasticOptimalTransport.jl/blob/7960dc8668c27236da3b0429a86fb2ad57f9c382/src/utils.jl#L111-L113; it seems I should switch to LogExpFunctions now that it's reexported from StatsFuns). The iterator trick boils down to the identity σ * logsumexp(x ./ σ); a minimal sketch, assuming only that logsumexp accepts a generator:
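```julia
using LogExpFunctions

# Sketch of the iterator approach: apply logsumexp to the rescaled
# values and scale the result back by σ. The helper name is illustrative.
smoothed_logsumexp(xs, σ) = σ * logsumexp(x / σ for x in xs)
```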

Unfortunately, the implementation of logsumexp is not as simple and straightforward to read as I would like it to be, mainly because it is supposed to cover a variety of different use cases in an optimal way, e.g., regular arrays, iterators, and GPU arrays (JuliaStats/StatsFuns.jl#97). It is an implementation of the one-pass algorithm. To me it seems one would have to change

```julia
return @. first(xmax_r) + log1p(last(xmax_r))
```

and

```julia
_logsumexp_onepass_result((xmax, r)::Tuple) = xmax + log1p(r)
```

(there the final result is computed in a numerically stable way from alpha and sum(exp(x - alpha)) - 1), and one would have to modify the reductions in `_logsumexp_onepass_op(x1, x2)`, `_logsumexp_onepass_op(x, (xmax, r)::Tuple)`, and `_logsumexp_onepass_op((xmax1, r1)::Tuple, (xmax2, r2)::Tuple)`.
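To make the needed change concrete, here is a hedged sketch of a serial one-pass loop that threads σ through both the accumulator updates and the final log1p step. It mirrors the (xmax, r) state described above, but it is only an illustration of the idea, not the package's actual multi-method implementation:

```julia
# Sketch only: serial one-pass smoothed logsumexp over an iterator.
# Invariant: xmax is the running maximum and r accumulates
# sum(exp((x - xmax) / σ)) - 1, so the result is xmax + σ * log1p(r).
function smoothed_logsumexp_onepass(X, σ)
    xmax, r = -Inf, zero(σ)
    for x in X
        if x <= xmax
            # accumulate relative to the current maximum
            r += exp((x - xmax) / σ)
        else
            # new maximum: rescale the accumulator to the new reference
            r = (r + one(r)) * exp((xmax - x) / σ)
            xmax = x
        end
    end
    return xmax + σ * log1p(r)
end
```

For instance, smoothed_logsumexp_onepass([1.0, 2.0, 3.0], 0.5) should agree with 0.5 * logsumexp([1.0, 2.0, 3.0] ./ 0.5).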
