This repository has been archived by the owner on Aug 15, 2019. It is now read-only.

Add tf.log_sigmoid #916

Merged
merged 6 commits into tensorflow:master from jgartman:log_sigmoid on Apr 14, 2018

Conversation

@jgartman (Contributor) commented Apr 2, 2018

This PR adds tf.log_sigmoid to the API.
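For context, the op computes log(sigmoid(x)) = log(1 / (1 + exp(-x))) elementwise. A minimal usage sketch, written against today's public @tensorflow/tfjs API where the op ultimately landed as tf.logSigmoid (the call below is illustrative, not the exact API shape in this PR):

import * as tf from '@tensorflow/tfjs';

// log_sigmoid(x) = log(1 / (1 + exp(-x))), applied elementwise.
const x = tf.tensor1d([-2, 0, 2]);
tf.logSigmoid(x).print();  // approx. [-2.1269, -0.6931, -0.1269]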



@nsthorat (Contributor) commented Apr 9, 2018

Apologies for the delay, I wanted to chat with some colleagues about this PR.

TF (Python) is pretty large and has a lot of ops we might not need in the JS world, so I think going forward we'll try to be use-case oriented (if somebody needs an op, we'll add it).

Secondly, when we're deciding whether to implement a kernel for an op (adding it to backend.ts) we're going to try to mirror TF's python / C API split. Our Node.js binding will be through the backend.ts API so we want to make sure there's a 1:1 relationship between the TF C API and our backend.ts.

If you check out tf.log_sigmoid: https://www.tensorflow.org/api_docs/python/tf/log_sigmoid

You'll see that its implementation is here: https://github.com/tensorflow/tensorflow/blob/r1.7/tensorflow/python/ops/math_ops.py#L2345

It looks like the Python code for log_sigmoid actually calls -softplus(-x), where softplus is part of the C API.

Would you mind reworking this a little so that our backend.ts WebGL implementation is for softplus, and the log_sigmoid op just calls that?
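For illustration, the identity at work here is log_sigmoid(x) = log(1 / (1 + exp(-x))) = -softplus(-x), with softplus(x) = log(1 + exp(x)). A minimal scalar TypeScript sketch of that layering (function names are illustrative, and nothing is assumed about the actual tfjs-core kernels; the softplus uses the standard numerically stable form):

// Numerically stable softplus: log(1 + exp(x)) = max(x, 0) + log(1 + exp(-|x|)).
function softplus(x: number): number {
  return Math.max(x, 0) + Math.log(1 + Math.exp(-Math.abs(x)));
}

// log_sigmoid delegates to softplus, mirroring the TF Python formulation -softplus(-x).
function logSigmoid(x: number): number {
  return -softplus(-x);
}

console.log(logSigmoid(0));    // ≈ -0.69315 (log 0.5)
console.log(logSigmoid(10));   // ≈ -0.0000454
console.log(logSigmoid(-10));  // ≈ -10.0000454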

Also related, was this PR driven by a use case or just parity with TF in general (which is not unreasonable)?

@jgartman (Contributor, Author) commented Apr 9, 2018

Thanks for the feedback @nsthorat, I really appreciate it. This PR was not use-case driven; it was just for parity with the Python API and to save a user the couple of keystrokes between typing x.log_sigmoid() and x.sigmoid().log() (obviously a potentially huge time saver). I'll go ahead and rework this and will definitely try to keep what you said in mind going forward.

@nsthorat (Contributor) commented Apr 9, 2018

Thanks for understanding and for putting the hard work into this PR!

@nsthorat (Contributor)

Awesome, this is looking really good. A couple of comments inline. All the math looks great, nice work.


Review status: 0 of 8 files reviewed at latest revision, all discussions resolved, some commit checks failed.


src/kernels/webgl/unaryop_gpu.ts, line 129 at r3 (raw file):

export const SOFTPLUS = `
  // mirrors the implementation of tf.nn.softplus: https://goo.gl/vkcvwX

Can you move these comments outside the string? These comments will make it all the way into the bundle, but if we can move them outside the SOFTPLUS implementation here they'll get stripped out (and not bloat bundle size). I realize this isn't the best for readability, but I think you should be able to just say what "too_large" and "too_small" are.
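As a sketch of the suggested structure (the GLSL body and threshold below paraphrase the tf.nn.softplus approach and are illustrative, not the exact kernel in this PR), the notes live in TypeScript comments outside the template literal so a minifier can strip them from the bundle:

// "too_large": x is large enough that softplus(x) is effectively x.
// "too_small": x is small enough that softplus(x) is effectively exp(x).
// Because these comments sit outside the string, they never reach the shipped bundle.
export const SOFTPLUS = `
  float threshold = log(1.1920928955078125e-7) + 2.0;
  bool too_large = x > -threshold;
  bool too_small = x < threshold;
  float result;
  if (too_large) {
    result = x;
  } else if (too_small) {
    result = exp(x);
  } else {
    result = log(exp(x) + 1.0);
  }
  return result;
`;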


src/ops/unary_ops.ts, line 460 at r3 (raw file):

        (backend, save) => save(backend.sigmoid(x)), {x}, grad);
  }

Can you also add a softplus top-level op that just calls the kernel (and gradient)?
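As an illustration of what such an op can look like, here is a sketch written against today's public tf.customGrad API rather than the 2018 internal @operation/backend plumbing (names are illustrative). The gradient uses d/dx softplus(x) = sigmoid(x), and logSigmoid then reduces to -softplus(-x):

import * as tf from '@tensorflow/tfjs';

// Top-level softplus op that wires up its own gradient: d/dx softplus(x) = sigmoid(x).
const softplus = tf.customGrad((x: tf.Tensor, save: (tensors: tf.Tensor[]) => void) => {
  save([x]);
  // Forward pass: log(1 + exp(x)). (Not numerically hardened; a real kernel
  // would guard very large and very small x.)
  const value = tf.log(tf.add(tf.exp(x), 1));
  const gradFunc = (dy: tf.Tensor, saved: tf.Tensor[]) =>
      [dy.mul(tf.sigmoid(saved[0]))];  // backward pass: dy * sigmoid(x)
  return {value, gradFunc};
});

// logSigmoid is then just a thin wrapper over the softplus op.
const logSigmoid = (x: tf.Tensor) => tf.neg(softplus(tf.neg(x)));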


src/ops/unary_ops.ts, line 474 at r3 (raw file):

  @doc({heading: 'Operations', subheading: 'Basic math'})
  @operation
  static log_sigmoid<T extends Tensor>(x: T): T {

Can you rename this to logSigmoid() to be consistent with lowerCamelCase?



@jgartman (Contributor, Author)

Review status: 0 of 8 files reviewed at latest revision, 3 unresolved discussions.


src/kernels/webgl/unaryop_gpu.ts, line 129 at r3 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

Can you move these comments outside the string? These comments will make it all the way into the bundle, but if we can move them outside the SOFTPLUS implementation here they'll get stripped out (and not bloat bundle size). I realize this isn't the best for readability, but I think you should be able to just say what "too_large" and "too_small" are.

Done.


src/ops/unary_ops.ts, line 460 at r3 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

can you also add a softplus top-level op that just calls the kernel (and gradient)?

Done.


src/ops/unary_ops.ts, line 474 at r3 (raw file):

Previously, nsthorat (Nikhil Thorat) wrote…

can you make this called logSigmoid() to be consistent with lowerCamel

Done.



@nsthorat (Contributor)

:lgtm_strong:

Thanks a lot, nice work on these PRs!


Reviewed 7 of 8 files at r4.
Review status: 7 of 8 files reviewed at latest revision, all discussions resolved.



@nsthorat nsthorat merged commit 68a0561 into tensorflow:master Apr 14, 2018
@jgartman jgartman deleted the log_sigmoid branch April 14, 2018 17:12