
Conversation

@manrajgrover
Contributor

manrajgrover commented Feb 23, 2018

This PR adds absolute difference loss to the API.
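(For reference, a quick usage sketch of the op this PR adds, using the tf.losses namespace where it ends up; the default reduction reflects the review below and is an assumption, not part of this PR's description:)

    import * as tf from '@tensorflow/tfjs-core';

    const labels = tf.tensor1d([1, 2, 3]);
    const predictions = tf.tensor1d([1.5, 2.5, 2.5]);
    // Weighted |labels - predictions|; with no weights and the default
    // SUM_BY_NONZERO_WEIGHTS reduction this is (0.5 + 0.5 + 0.5) / 3.
    const loss = tf.losses.absoluteDifference(labels, predictions);
    loss.print();  // ~0.5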



manrajgrover changed the title from "[WIP] Loss Operations: Adds Absolute Difference loss" to "Loss Operations: Adds Absolute Difference loss" on Feb 24, 2018
@nsthorat
Contributor

nsthorat commented Mar 4, 2018

Sorry for the delay! A few comments inline.


Review status: 0 of 5 files reviewed at latest revision, 10 unresolved discussions.


src/ops/loss_ops.ts, line 65 at r1 (raw file):

  /**
   * Adds an Absolute Difference loss to the training procedure.

I know this is what TF says, but TF has a graph model; how about:

Computes the absolute difference loss between two tensors.


src/ops/loss_ops.ts, line 33 at r2 (raw file):

export class LossOps {
  /**
   * Computes the weighted loss.

between two tensors


src/ops/loss_ops.ts, line 35 at r2 (raw file):

   * Computes the weighted loss.
   *
   * @param losses `Tensor` of shape `[batch_size, d1, ... dN]`.

remove ticks around Tensor; we parse the type automatically in the docs


src/ops/loss_ops.ts, line 38 at r2 (raw file):

   * @param weights `Tensor` whose rank is either 0, or the same rank as
   *    `losses`, and must be broadcastable to `losses` (i.e., all
   * dimensions must be either `1`, or the same as the corresponding

indent these 2 lines 4 more spaces


src/ops/loss_ops.ts, line 44 at r2 (raw file):

  @operation
  static computeWeightedLoss<T extends Tensor, O extends Tensor>(
      losses: T, weights?: Tensor, reduction = Reduction.NONE): O {

TF has a default of Reduction.SUM_BY_NONZERO_WEIGHTS, can we keep that as well?
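(For context: the Reduction enum referenced throughout this review has four modes. A sketch of their behavior, assuming the enum in loss_ops.ts mirrors TF's tf.losses.Reduction:)

    export enum Reduction {
      NONE,                    // return the weighted losses unreduced
      MEAN,                    // sum(weighted losses) / sum(weights)
      SUM,                     // sum(weighted losses)
      SUM_BY_NONZERO_WEIGHTS   // sum(weighted losses) / count(weights != 0)
    }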


src/ops/loss_ops.ts, line 45 at r2 (raw file):

  static computeWeightedLoss<T extends Tensor, O extends Tensor>(
      losses: T, weights?: Tensor, reduction = Reduction.NONE): O {
    if (weights === undefined) {

use == null
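(The loose check treats null and undefined as equal, so it also covers a caller passing an explicit null; a minimal illustration:)

    undefined == null;   // true  -- an omitted optional param is caught
    null == null;        // true  -- and so is an explicit null
    undefined === null;  // false -- strict equality distinguishes the two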


src/ops/loss_ops.ts, line 46 at r2 (raw file):

      losses: T, weights?: Tensor, reduction = Reduction.NONE): O {
    if (weights === undefined) {
      weights = ops.scalar(1);

instead of doing it like this, can we conditionally do the ".mul" below? We will save a shader call and a texture.
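(A sketch of the suggested pattern; the same line reappears later in this review:)

    // Skip the multiply when no weights were passed: multiplying by
    // scalar(1) costs an extra shader call and texture on the GPU backend.
    const weightedLoss = (weights == null) ? losses : losses.mul(weights);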


src/ops/loss_ops.ts, line 80 at r2 (raw file):

  @operation
  static absoluteDifference<T extends Tensor, O extends Tensor>(
      labels: T, predictions: T, weights?: Tensor, reduction = Reduction.NONE):

TF has a default of SUM_BY_NONZERO_WEIGHTS, can we keep that?


src/ops/lossop_tests.ts, line 3 at r2 (raw file):

/**
 * @license
 * Copyright 2017 Google Inc. All Rights Reserved.

2018


src/ops/lossop_tests.ts, line 134 at r2 (raw file):

         Math.abs(3 - 6) * 2) /
            20);
  });

would you mind also testing computeWeightedLoss directly?
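(A sketch of what such a direct test might look like, in the jasmine style of the surrounding tests; the import paths and the expectArraysClose helper are assumptions based on the repo's test_util:)

    import * as tf from '@tensorflow/tfjs-core';
    import {expectArraysClose} from '../test_util';

    it('computeWeightedLoss with SUM reduction', () => {
      const losses = tf.tensor1d([1, 2, 3]);
      const weights = tf.tensor1d([0.1, 0.2, 0.3]);
      const y = tf.losses.computeWeightedLoss(losses, weights, tf.Reduction.SUM);
      // 1*0.1 + 2*0.2 + 3*0.3 = 1.4
      expectArraysClose(y, [1.4]);
    });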



@manrajgrover
Contributor Author

@nsthorat Hi, not a problem :)

      losses: T, weights?: Tensor, reduction = Reduction.NONE): O {
    if (weights === undefined) {
      weights = ops.scalar(1);

instead of doing it like this, can we conditionally do the ".mul" below? We will save a shader call and a texture.

I'm not sure if this would be possible considering recent changes. Could you please recheck?

Made the rest of the changes.

@nsthorat
Contributor

:lgtm_strong:


Review status: 0 of 4 files reviewed at latest revision, 1 unresolved discussion.


src/ops/loss_ops.ts, line 61 at r4 (raw file):

        loss = loss.div(ops.onesLike(losses).mul(weights).sum()) as O;
      } else if (reduction === Reduction.SUM_BY_NONZERO_WEIGHTS) {
        const mask = ops.where(

qq, in sum by non-zero weights, do we need to do all this? doesn't a multiplication by a weight of 0 do the same thing as manually checking?



@manrajgrover
Contributor Author

qq, in sum by non-zero weights, do we need to do all this? doesn't a multiplication by a weight of 0 do the same thing as manually checking?

Hi, I'm not sure if I get it. Could you please elaborate more on it?

@dsmilkov
Contributor

Left a few comments, but generally looks great!


Reviewed 1 of 2 files at r3, 1 of 2 files at r5.
Review status: 2 of 4 files reviewed at latest revision, 1 unresolved discussion, some commit checks failed.


src/ops/loss_ops.ts, line 55 at r5 (raw file):

    if (reduction === Reduction.NONE) {
      loss = weightedLoss as O;

Return weightedLoss here directly,

and remove the else { }, un-indenting the code that was inside it.


src/ops/loss_ops.ts, line 58 at r5 (raw file):

    } else {
      loss = weightedLoss.sum() as O;
      if (reduction === Reduction.MEAN) {

Make it explicit: if (reduction === Reduction.SUM) { return loss; }


src/ops/loss_ops.ts, line 59 at r5 (raw file):

      loss = weightedLoss.sum() as O;
      if (reduction === Reduction.MEAN) {
        loss = loss.div(ops.onesLike(losses).mul(weights).sum()) as O;

can this be simplified to return loss.div(weights.sum())?


src/ops/loss_ops.ts, line 63 at r5 (raw file):

        const mask = ops.where(
            weights.equal(ops.scalar(0)), ops.zerosLike(weights),
            ops.onesLike(weights));

can this be simplified to:

const numNonZeros = weights.notEqual(0).sum();
return loss.div(numNonZeros);
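(A quick sanity check of the equivalence, with illustrative numbers:)

    const weights = tf.tensor1d([0, 2, 3]);
    // Old: where(weights.equal(scalar(0)), zerosLike, onesLike) builds the
    //      mask [0, 1, 1] and sums it -> 2, using three extra ops.
    // New: a single comparison yields the same mask and the same count.
    const numNonZeros = weights.notEqual(tf.scalar(0)).sum();  // Scalar 2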

src/ops/lossop_tests.ts, line 1 at r5 (raw file):

/**

rename this file to loss_ops_tests.ts so that the prefix loss_ops matches the file where the implementation is.



@manrajgrover
Contributor Author

@dsmilkov Addressed the comments

@dsmilkov
Contributor

dsmilkov commented Apr 1, 2018

Reviewed 1 of 6 files at r6.
Review status: 1 of 4 files reviewed at latest revision, 1 unresolved discussion, some commit checks failed.


src/ops/loss_ops.ts, line 48 at r6 (raw file):

      reduction = Reduction.SUM_BY_NONZERO_WEIGHTS): O {
    if (weights == null) {
      weights = ops.onesLike(losses);

Keep weights null and check for their value inside the if statements, so we can save shader calls and improve numerical stability. See comments below.


src/ops/loss_ops.ts, line 51 at r6 (raw file):

    }

    const weightedLoss = losses.mul(weights);

we can save a shader call if we check whether weights is null here, that is:
const weightedLoss = (weights == null) ? losses : losses.mul(weights);


src/ops/loss_ops.ts, line 52 at r6 (raw file):

    const weightedLoss = losses.mul(weights);
    let loss;

Remove this loss variable. See comments below.


src/ops/loss_ops.ts, line 57 at r6 (raw file):

      return weightedLoss as O;
    } else {
      loss = weightedLoss.sum() as O;

Don't precompute the sum of weightedLoss, since we won't use it in the Reduction.MEAN case. Do it inside the if statement. That is:

if (reduction == Reduction.SUM) {
  return weightedLoss.sum();
}
if (reduction == Reduction.MEAN) {
  // see comment below
}


src/ops/loss_ops.ts, line 62 at r6 (raw file):

        return loss;
      } else if (reduction === Reduction.MEAN) {
        return loss.div(weights.sum());

return (weights == null) ? weightedLoss.mean() : weightedLoss.sum().div(weights.sum())

Note that we want mean() directly, since it avoids the potential overflow that x.sum().div(n) has. Also note we call weightedLoss.sum(), since the temp loss variable should go away.
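(So the MEAN branch would look roughly like this sketch:)

    if (reduction === Reduction.MEAN) {
      // mean() in a single op; sum().div(n) can overflow float32 on the
      // intermediate sum even when the mean itself is representable.
      return ((weights == null) ?
          weightedLoss.mean() :
          weightedLoss.sum().div(weights.sum())) as O;
    }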


src/ops/loss_ops.ts, line 65 at r6 (raw file):

      } else if (reduction === Reduction.SUM_BY_NONZERO_WEIGHTS) {
        const numNonZeros = weights.notEqual(ops.scalar(0)).sum();
        return loss.div(numNonZeros);

weightedLoss.sum().div(numNonZeros)


src/ops/lossop_tests.ts, line 1 at r5 (raw file):

Previously, dsmilkov (Daniel Smilkov) wrote…

rename this file to loss_ops_tests.ts so that the prefix loss_ops matches the file where the implementation is.

My apologies for the typo: the filename must end with test.ts, not tests.ts, for the actual test runner to pick it up.



@dsmilkov
Contributor

dsmilkov commented Apr 1, 2018

Thanks. Took another pass. There are a couple of small improvements we can make regarding perf and numerical stability, but this looks great already!

@manrajgrover
Contributor Author

@dsmilkov In Reduction.SUM_BY_NONZERO_WEIGHTS, if weights is undefined, weights.notEqual will throw an error. I'm not sure what to do in this case, since we are no longer assigning weights a value of 1. I don't think we can fully avoid the extra shader call. Or can we?

TensorFlow defaults weights to 1 in this case.

Also, TypeScript is complaining about not returning anything at the end. Should we return weightedLoss as the default?

@dsmilkov
Contributor

dsmilkov commented Apr 2, 2018

When weights is undefined, you don't need to call weights.notEqual(). The number of non-zero weights is losses.size, since all the weights are implicitly 1 and of the same shape as losses.

Regarding TypeScript, throw an Error(`Unknown reduction: ${reduction}`) at the end if none of the if statements fired.
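(Putting the thread's suggestions together, computeWeightedLoss would look roughly like the following sketch; the merged code may differ in details:)

    static computeWeightedLoss<T extends Tensor, O extends Tensor>(
        losses: T, weights?: Tensor,
        reduction = Reduction.SUM_BY_NONZERO_WEIGHTS): O {
      // Leave weights null; each branch handles the missing case itself,
      // saving the shader call a mul by scalar(1) would cost.
      const weightedLoss = (weights == null) ? losses : losses.mul(weights);
      if (reduction === Reduction.NONE) {
        return weightedLoss as O;
      }
      if (reduction === Reduction.SUM) {
        return weightedLoss.sum() as O;
      }
      if (reduction === Reduction.MEAN) {
        // mean() directly avoids the overflow risk of sum().div(n).
        return ((weights == null) ?
            weightedLoss.mean() :
            weightedLoss.sum().div(weights.sum())) as O;
      }
      if (reduction === Reduction.SUM_BY_NONZERO_WEIGHTS) {
        if (weights == null) {
          // All weights are implicitly 1, so the non-zero count is losses.size.
          return weightedLoss.sum().div(ops.scalar(losses.size)) as O;
        }
        const numNonZeros = weights.notEqual(ops.scalar(0)).sum();
        return weightedLoss.sum().div(numNonZeros) as O;
      }
      throw new Error(`Unknown reduction: ${reduction}`);
    }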

@manrajgrover
Contributor Author

manrajgrover commented Apr 2, 2018

@dsmilkov I had thought about throwing an error, but then I realized an error will automatically be thrown if anything other than a Reduction is passed to the reduction param.

Made the required changes. :)

@dsmilkov
Contributor

dsmilkov commented Apr 6, 2018

:lgtm_strong: Great work Manraj!!


Review status: 1 of 4 files reviewed at latest revision, 1 unresolved discussion, some commit checks failed.



tensorflow deleted a comment from googlebot on Apr 6, 2018
@dsmilkov
Contributor

dsmilkov commented Apr 6, 2018

Hi Manraj,

A few tests are failing when I synced to master (see log). Nothing major; I think it's related to the fact that we made tf.div(a, b) strict, expecting a and b to be of the same dtype, to align with TF Python. Ping me when you fix those and I'll merge! Thank you!

@googlebot

So there's good news and bad news.

👍 The good news is that everyone that needs to sign a CLA (the pull request submitter and all commit authors) have done so. Everything is all good there.

😕 The bad news is that it appears that one or more commits were authored or co-authored by someone other than the pull request submitter. We need to confirm that all authors are ok with their commits being contributed to this project. Please have them confirm that here in the pull request.

Note to project maintainer: This is a terminal state, meaning the cla/google commit status will not change from this state. It's up to you to confirm consent of the commit author(s) and merge this pull request when appropriate.

nsthorat merged commit ecfefd1 into tensorflow:master on Apr 7, 2018