
Adding tf.image.non_max_suppression_with_overlaps #16543

Merged
merged 4 commits into tensorflow:master from chrert:non_max_suppression_overlaps on Jul 10, 2018

Conversation

@chrert
Contributor

@chrert chrert commented Jan 29, 2018

This commit introduces the op tf.image.non_max_suppression_overlaps.
It allows performing non-max suppression with an overlap criterion other than IoU by providing an n-by-n matrix of precomputed overlap values for each box pair.

In this way it is possible to use a custom overlap criterion for NMS (e.g. intersection-over-area) without implementing a specialized op.
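For illustration, a minimal sketch of how the op could be called with a precomputed intersection-over-area matrix. It uses the TF 1.x session API seen later in this thread; the endpoint name and argument names follow the description above and may differ slightly from the merged API.

import numpy as np
import tensorflow as tf

def intersection_over_area(boxes):
    """n-by-n matrix: intersection(i, j) divided by the area of box i."""
    ymin, xmin, ymax, xmax = np.split(boxes, 4, axis=1)
    areas = (ymax - ymin) * (xmax - xmin)  # shape (n, 1)
    inter_h = np.clip(np.minimum(ymax, ymax.T) - np.maximum(ymin, ymin.T), 0.0, None)
    inter_w = np.clip(np.minimum(xmax, xmax.T) - np.maximum(xmin, xmin.T), 0.0, None)
    return ((inter_h * inter_w) / areas).astype(np.float32)

boxes = np.array([[0.0, 0.0, 1.0, 1.0],
                  [0.1, 0.1, 0.9, 0.9],
                  [0.5, 0.5, 1.0, 1.0]], dtype=np.float32)  # [ymin, xmin, ymax, xmax]
scores = np.array([0.9, 0.8, 0.7], dtype=np.float32)

overlaps = intersection_over_area(boxes)  # custom overlap criterion
selected = tf.image.non_max_suppression_overlaps(  # name as proposed in this PR
    overlaps, scores, max_output_size=3, overlap_threshold=0.5)

with tf.Session() as sess:
    print(sess.run(selected))  # indices of the kept boxes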

There are currently no unit tests. If you are willing to integrate this pull request, I will copy and rewrite the existing unit tests for NonMaxSuppressionV2.

@googlebot

Thanks for your pull request. It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA).

📝 Please visit https://cla.developers.google.com/ to sign.

Once you've signed, please reply here (e.g. I signed it!) and we'll verify. Thanks.


  • If you've already signed a CLA, it's possible we don't have your GitHub username or you're using a different email address on your commit. Check your existing CLA data and verify that your email is set on your git commits.
  • If your company signed a CLA, they designated a Point of Contact who decides which employees are authorized to participate. You may need to contact the Point of Contact for your company and ask to be added to the group of authorized contributors. If you don't know who your Point of Contact is, direct the project maintainer to go/cla#troubleshoot. The email used to register you as an authorized contributor must be the email used for the Git commit.
  • In order to pass this check, please resolve this problem and have the pull request author add another comment and the bot will run again. If the bot doesn't comment, it means it doesn't think anything has changed.

@chrert
Contributor Author

chrert commented Jan 29, 2018

signed cla

@googlebot

CLAs look good, thanks!

@jhseu jhseu added the API review label Feb 5, 2018
@jhseu jhseu requested review from rmlarsen and rjpower and removed request for rmlarsen February 5, 2018 23:43
Contributor

@rjpower rjpower left a comment


Looks fine to me overall. Adding jch1 to double-check from an object detection perspective.

Please:

  • Double check the performance now that we're jumping through a double indirection for suppress_check_fn?

  • Since the op doesn't replace the existing NonMaxSuppression, let's use a different name so it's more apparent it can be used concurrently: NonMaxSuppressionWithOverlaps?

  • Add or adapt the NonMaxV2 tests.
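Regarding the last point, a rough sketch of what an adapted test could look like, assuming the Python endpoint and argument names from the PR description (the actual test added to the PR may differ in detail):

import numpy as np
import tensorflow as tf

class NonMaxSuppressionWithOverlapsTest(tf.test.TestCase):

    def testSelectFromProvidedOverlaps(self):
        # Boxes 0 and 1 overlap above the threshold; box 2 is independent.
        overlaps = np.array([[1.0, 0.7, 0.0],
                             [0.7, 1.0, 0.0],
                             [0.0, 0.0, 1.0]], dtype=np.float32)
        scores = np.array([0.9, 0.8, 0.6], dtype=np.float32)
        with self.test_session():
            selected = tf.image.non_max_suppression_overlaps(
                overlaps, scores, max_output_size=3, overlap_threshold=0.5)
            # Box 1 is suppressed by box 0; boxes 0 and 2 survive.
            self.assertAllEqual(selected.eval(), [0, 2])

if __name__ == '__main__':
    tf.test.main()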

@chrert chrert changed the title Adding tf.image.non_max_suppression_overlaps Adding tf.image.non_max_suppression_with_overlaps Feb 8, 2018
@chrert
Contributor Author

chrert commented Feb 8, 2018

The latest commit changes the name and adds the missing unit tests.
Is there a convenient way to check performance implications?

@rjpower
Contributor

rjpower commented Feb 8, 2018

Thanks.

I'm not aware of any existing benchmark.

If you run through with a reasonable number of boxes (50000-100000?) before and after, that would be sufficient. I just want to double-check that it's not a major performance change for existing code.

@chrert
Contributor Author

chrert commented Feb 9, 2018

I used the following script to measure the average runtime with > 1,000,000 boxes:

import tensorflow as tf
import numpy as np
import timeit

def create_boxgrid():
    coord_linspace = np.linspace(start=0.0, stop=1.0, num=600)
    x_grid, y_grid = np.meshgrid(coord_linspace, coord_linspace)
    box_centers = np.stack((x_grid, y_grid), axis=2).reshape((-1,2))
    
    def create_boxes(width, height):
        xmin = box_centers[:, 0] - width/2
        xmax = box_centers[:, 0] + width/2
        ymin = box_centers[:, 1] - height/2
        ymax = box_centers[:, 1] + height/2
        return np.stack((ymin, xmin, ymax, xmax), axis=1)
    
    return np.concatenate((create_boxes(0.2, 0.2), create_boxes(0.5, 0.3), create_boxes(0.3, 0.5)))

if __name__ == '__main__':
    boxes = create_boxgrid()
    num_boxes = len(boxes)
    scores = np.ones(shape=(num_boxes,), dtype=np.float32)
    print('Number of boxes: {:d}'.format(num_boxes))

    keep_idx = tf.image.non_max_suppression(boxes, scores, num_boxes)
    with tf.Session() as sess:
        def run():
            return sess.run(keep_idx)
        
        # warmup
        print('Warming up...')
        for _ in range(1000):
            np_keep_idx = run()
        
        num_runs = 100000
        print('Measuring time for {:d} runs...'.format(num_runs))
        elapsed_time = timeit.timeit(run, number=num_runs)
        avg_time = elapsed_time / num_runs
        print('Elapsed time: {:f} seconds'.format(elapsed_time))
        print('Average time: {:f} seconds'.format(avg_time))
        print('Keep {:d} boxes'.format(len(np_keep_idx)))

Version in master:

Number of boxes: 1080000
Warming up...
Measuring time for 100000 runs...
Elapsed time: 5.595638 seconds
Average time: 0.000056 seconds
Keep 410 boxes

New version:

Number of boxes: 1080000
Warming up...
Measuring time for 100000 runs...
Elapsed time: 5.595950 seconds
Average time: 0.000056 seconds
Keep 410 boxes

Looks like the performance is pretty much the same!
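Note that the script above only exercises the existing tf.image.non_max_suppression path, which is the point here: checking for a regression in existing code. Timing the new overlaps path requires a much smaller box count, since an n-by-n overlap matrix for a million boxes would not fit in memory. A rough variant, again assuming the endpoint name from the PR description, might look like this:

import numpy as np
import tensorflow as tf
import timeit

num_boxes = 5000  # 5000 x 5000 float32 overlap matrix is roughly 100 MB
overlaps = np.random.uniform(size=(num_boxes, num_boxes)).astype(np.float32)
scores = np.random.uniform(size=(num_boxes,)).astype(np.float32)

# Endpoint name as described in this PR; adjust if the merged API differs.
keep_idx = tf.image.non_max_suppression_overlaps(
    overlaps, scores, max_output_size=num_boxes, overlap_threshold=0.5)

with tf.Session() as sess:
    sess.run(keep_idx)  # warmup
    num_runs = 100
    avg_time = timeit.timeit(lambda: sess.run(keep_idx), number=num_runs) / num_runs
    print('Average time: {:f} seconds'.format(avg_time))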

@asimshankar
Contributor

(From API review: We'd be interested in @jch1's opinion as well)

@rjpower rjpower requested a review from jch1 February 13, 2018 18:25
@rjpower
Contributor

rjpower commented Feb 13, 2018

Thanks for running the tests! Looks good to me pending feedback from @jch1.

@tensorflowbutler
Copy link
Member

Nagging Assignees @rjpower, @jch1: It has been 14 days with no activity and this issue has an assignee. Please update the label and/or status accordingly.

rjpower previously approved these changes Mar 8, 2018
@rjpower
Contributor

rjpower commented Mar 8, 2018

Sorry for the delay. Everything looks good from here.

@rjpower rjpower unassigned jch1 Mar 8, 2018
@asimshankar asimshankar added the awaiting testing (then merge) and kokoro:force-run labels and removed the API review label Mar 8, 2018
@kokoro-team kokoro-team removed the kokoro:force-run label Mar 8, 2018
@kokoro-team kokoro-team removed the kokoro:run and kokoro:force-run labels Jun 22, 2018
@gunan
Contributor

gunan commented Jun 22, 2018

Here is the API failure. Note that you have to run the following in a Python 2 environment.
Once the API update is in place, I will direct this to the TensorFlow API reviewers to get an API change approval.

ERROR:tensorflow:TensorFlow API backwards compatibility test
This test ensures all changes to the public API of TensorFlow are intended.
If this test fails, it means a change has been made to the public API. Backwards
incompatible changes are not allowed. You can run the test as follows to update
test goldens and package them with your change.
    $ bazel build tensorflow/tools/api/tests:api_compatibility_test
    $ bazel-bin/tensorflow/tools/api/tests/api_compatibility_test \
          --update_goldens True
You will need an API approval to make changes to the public TensorFlow API. This
includes additions to the API.
ERROR:tensorflow:1 differences found between API and golden.
Issue 1	: 'path: "tensorflow.image"\ntf_module {\n  member {\n    name: "ResizeMethod"\n   [truncated]... != 'path: "tensorflow.image"\ntf_module {\n  member {\n    name: "ResizeMethod"\n   [truncated]...
Diff is 11284 characters long. Set self.maxDiff to None to see it.

@chrert
Contributor Author

chrert commented Jun 25, 2018

Thanks, I've added the changes to the API golden files (I hope that's how it is supposed to be done).

@gunan gunan added the API review, kokoro:run, and kokoro:force-run labels Jun 26, 2018
@kokoro-team kokoro-team removed the kokoro:run and kokoro:force-run labels Jun 26, 2018
@gunan
Contributor

gunan commented Jun 26, 2018

The change looks correct to me.
I will rerun the tests and have added this to the API review queue.

@josh11b
Contributor

josh11b commented Jul 9, 2018

Approved in API review.

@josh11b josh11b removed the API review label Jul 9, 2018
@gunan
Contributor

gunan commented Jul 9, 2018

Thank you very much for the API review.
@chrert May I ask you for one last conflict resolution? Then I will merge this PR.

@chrert
Contributor Author

chrert commented Jul 10, 2018

Thanks for the approval!
I resolved the conflict.

@kokoro-team kokoro-team removed the kokoro:run and kokoro:force-run labels Jul 10, 2018
@gunan gunan added the ready to pull label Jul 10, 2018
@tensorflow-copybara tensorflow-copybara merged commit 38e8524 into tensorflow:master Jul 10, 2018
tensorflow-copybara pushed a commit that referenced this pull request Jul 10, 2018
@chrert chrert deleted the non_max_suppression_overlaps branch July 11, 2018 06:17
@chrert chrert restored the non_max_suppression_overlaps branch July 11, 2018 06:17
@chrert chrert deleted the non_max_suppression_overlaps branch July 11, 2018 06:17