Align API with TF #956
Conversation
@dsmilkov I just saw this. Would you be fixing tensorflow/tfjs#195 in this PR?

Nice observation! Yes, those will be removed as part of that PR.
Reviewed 9 of 18 files at r1, 8 of 14 files at r2.

src/index.ts, line 68 at r2 (raw file):
Put this under webgl.

src/kernels/webgl/pool_gpu.ts, line 108 at r2 (raw file):
Why keep an accumulator around when you know this statically?

src/ops/lrn.ts, line 44 at r2 (raw file):
This will break Caffe. Can you just throw for the TF backend? I'd like to not break the API if possible. Going forward we can reject these changes.

src/ops/multinomial_test.ts, line 31 at r2 (raw file):
The API still allows `normalized`, but we now don't have coverage.

src/ops/pool.ts, line 165 at r2 (raw file):
Hmm... I depend on this for some channel-normalization stuff in the activation visualization demos. What about just throwing in the backend? Alternatively, we could think about finding a way to expose the WebGL backend directly so we can call a WebGL kernel that does this.
Thanks for the fast review!

Review status: 17 of 24 files reviewed at latest revision, 5 unresolved discussions, some commit checks failed.

src/index.ts, line 68 at r2 (raw file): Previously, nsthorat (Nikhil Thorat) wrote…
Can't do it since it's a type.

src/kernels/webgl/pool_gpu.ts, line 108 at r2 (raw file): Previously, nsthorat (Nikhil Thorat) wrote…
To align with TF avgPool behavior, where padded values are ignored (different than treating them as 0).

src/ops/lrn.ts, line 44 at r2 (raw file): Previously, nsthorat (Nikhil Thorat) wrote…
Chatted offline. LRN is rarely used and we are keeping the default behavior.

src/ops/multinomial_test.ts, line 31 at r2 (raw file): Previously, nsthorat (Nikhil Thorat) wrote…
Chatted offline. We have webgl and cpu coverage, and there is something we can do later on for other backends.

src/ops/pool.ts, line 165 at r2 (raw file): Previously, nsthorat (Nikhil Thorat) wrote…
We can use …
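The avgPool padding point above can be illustrated with a minimal sketch in plain TypeScript (a 1-D, stride-1 illustration, not tfjs code): TF-aligned average pooling divides by the count of in-bounds elements, whereas treating padding as zeros divides by the full window size.

```typescript
// Sketch: TF-style avgPool ignores padded positions by dividing by the
// count of in-bounds elements, rather than by the full window size
// (which would amount to treating the padding as zeros).
function avgPool1d(x: number[], windowSize: number, ignorePadding: boolean): number[] {
  const out: number[] = [];
  const pad = Math.floor((windowSize - 1) / 2);  // 'same' padding, stride 1
  for (let i = 0; i < x.length; i++) {
    let sum = 0;
    let count = 0;
    for (let w = 0; w < windowSize; w++) {
      const j = i - pad + w;
      if (j >= 0 && j < x.length) {
        sum += x[j];
        count++;
      }
    }
    // TF semantics: divide by in-bounds count; zero-pad semantics: divide by windowSize.
    out.push(sum / (ignorePadding ? count : windowSize));
  }
  return out;
}

console.log(avgPool1d([2, 4, 6], 3, true));   // TF-aligned: [3, 4, 5]
console.log(avgPool1d([2, 4, 6], 3, false));  // padding-as-zero: [2, 4, 10/3]
```

The edge positions are where the two conventions diverge; interior windows with no padding are identical.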
**NOTE: Depends on tensorflow/tfjs-core#956**

- Implement: `argMin`, `argMax`, `logicalXor`, `log1p`, `eluDer`, `clip`, `mod`, `round`, `sign`, `rsqrt`, `reciprocal`, `asinh`, `acosh`, `atanh`, `squaredDifference`, `expm1`, `atan2`, `conv2d`, `conv2dDerInput`, `conv2dDerFilter`, `maxPool`, `maxPoolBackprop`, `avgPool`, `avgPoolBackprop`, `tile`, `resizeBilinear`, `batchNorm`, `LRN`, `multinomial`, `softplus`.
- Implement `depthwiseConv2D` for `dilations=1`. `dilations>1` requires `spaceToBatch` to be added to core.
- Implement the backend methods `time` and `memory`.
- Add binding functionality for the attribute types `float` and `list(int)`.
- Test the ops using the `tfjs-core` tests.
- Throw an error if:
  - the padding type is a `number`, since the TF backend doesn't have kernels for that; only `valid` and `same` are supported. We support padding type `number` in the webgl and cpu backends.
  - `multinomial` gets called with normalized probabilities. The TF backend only supports logits (and you can't undo softmax), while we support both logits and probabilities in the webgl and cpu backends.
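The "you can't undo softmax" point can be shown with a minimal sketch in plain TypeScript (hypothetical helper, not the tfjs API): logit vectors that differ by an additive constant map to the same probabilities, so normalized probabilities cannot be converted back to unique logits.

```typescript
// Sketch: softmax is shift-invariant, so its inverse is not unique --
// which is why a backend that only accepts logits can't recover them
// from already-normalized probabilities.
function softmax(logits: number[]): number[] {
  const max = Math.max(...logits);                 // subtract max for numerical stability
  const exps = logits.map(l => Math.exp(l - max));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / sum);
}

const p1 = softmax([1, 2, 3]);
const p2 = softmax([101, 102, 103]);  // shifted by +100, identical probabilities
console.log(p1, p2);
```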
Aligns the backend API and functionality (NaN propagation, dtype strictness, kernel signatures) with TensorFlow Python.
- Remove `backend.minPool` since TF doesn't have it.
- Remove the `normRegion` param in the `localResponseNormalization` kernel since TF doesn't support it.
- Remove `leakyRelu`, `prelu` and `preluDer` from the backend and implement using higher-level ops, aligning with TF Python.
- `multinomial` takes `logits` instead of `probabilities`, and a `normalized: boolean` param for backwards compatibility.
- `argMin` and `argMax` take a single `axis: number` instead of `axes: number[]`.
- Change the `eluDer(x: T): T` signature to `eluDer(dy: T, y: T): T` to align with TF.
- Change `max/avgPool` and `conv2d` to align with TF.
- Change `avgPool` out-of-bounds (padding) behavior to align with TF.
- Require `indices` in `oneHot` and `gather` to be of dtype `int32`.
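As a minimal sketch (plain TypeScript, scalar case, assuming `alpha = 1`) of why the `eluDer(dy, y)` signature suffices: for `x <= 0`, `elu(x) = exp(x) - 1`, so the derivative `exp(x)` equals `y + 1` and can be recovered from the output `y` alone, without keeping the input `x`.

```typescript
// Sketch: elu and its derivative expressed in terms of the output.
// elu'(x) = 1 for x > 0, and exp(x) = y + 1 for x <= 0.
function elu(x: number): number {
  return x > 0 ? x : Math.exp(x) - 1;
}

function eluDer(dy: number, y: number): number {
  return dy * (y >= 0 ? 1 : y + 1);
}

// For x = -1 this matches the direct derivative exp(-1):
console.log(eluDer(1, elu(-1)), Math.exp(-1));
```

Expressing the gradient through the forward output is what lets the kernel drop `x` from the signature.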
Fixes tensorflow/tfjs#195