
Fuzzing benchmark for FFT operators #47872

Closed · peterbell10 wants to merge 6 commits into gh/peterbell10/26/base from gh/peterbell10/26/head

Conversation

@peterbell10 (Collaborator) commented Nov 12, 2020

Stack from ghstack:

Differential Revision: D25237499

peterbell10 added a commit that referenced this pull request Nov 12, 2020
ghstack-source-id: 66d51f2883cded714a4ce3460e70cfe87dcb3baa
Pull Request resolved: #47872
dr-ci bot commented Nov 12, 2020

💊 CI failures summary and remediations

As of commit c547000:

  • 1/1 failures possibly introduced in this PR
    • 1/1 non-CircleCI failure(s)

codecov.io: 1 failed



@mruberry (Collaborator)

Hey @peterbell10!

I've asked @ngimel to suggest a reviewer for this and #47871 since she's more familiar with benchmarking.

ngimel requested a review from robieta on November 13, 2020 at 17:47.
robieta commented Nov 16, 2020

At a high level this looks good. I would suggest moving SpectralOpFuzzer to torch/utils/benchmark/op_fuzzers and moving the benchmark script itself to torch/utils/benchmark/examples. I have a TODO to extend the op microbenchmarks to use more fuzzed values so having a canned fuzzer for spectral ops would be quite helpful. (As well as just generally good to have.)

The one question I have on the fuzzer is that you are restricting shapes to those that factor nicely; do you think it makes sense to also test some more pathological shapes? (Albeit a small fraction of the time.)
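
For readers not familiar with the fuzzing utilities being discussed, the sketch below shows one way a spectral-op fuzzer could be assembled from torch.utils.benchmark's Fuzzer, FuzzedParameter, and FuzzedTensor, drawing only "regular" lengths (lengths that factor into small primes). This is an illustrative sketch, not the SpectralOpFuzzer from this PR; the size range, prime set, and parameter names are arbitrary choices.

```python
import torch
from torch.utils.benchmark import Fuzzer, FuzzedParameter, FuzzedTensor

def regular_sizes(max_size, primes=(2, 3, 5, 7)):
    """Enumerate sizes <= max_size whose prime factors all come from `primes`."""
    sizes, frontier = {1}, {1}
    while frontier:
        new = set()
        for s in frontier:
            for p in primes:
                v = s * p
                if v <= max_size and v not in sizes:
                    sizes.add(v)
                    new.add(v)
        frontier = new
    return sorted(sizes)

# Weight all regular lengths in [16, 2**14] uniformly.
lengths = [n for n in regular_sizes(2 ** 14) if n >= 16]
length_dist = {n: 1.0 / len(lengths) for n in lengths}

fuzzer = Fuzzer(
    parameters=[
        FuzzedParameter("n", distribution=length_dist),
    ],
    tensors=[
        FuzzedTensor(
            "x",
            size=("n",),                  # 1-D signal of fuzzed length n
            probability_contiguous=0.75,  # occasionally yield strided inputs
            max_elements=2 ** 22,
            dtype=torch.float32,
        ),
    ],
    seed=0,
)

for tensors, tensor_params, params in fuzzer.take(5):
    print(params["n"], tuple(tensors["x"].shape))
```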

@peterbell10 (Collaborator, Author)

> moving the benchmark script itself to torch/utils/benchmark/examples

Shouldn't it be somewhere in the benchmarks/ folder, or is that being phased out?

> The one question I have on the fuzzer is that you are restricting shapes to those that factor nicely; do you think it makes sense to also test some more pathological shapes? (Albeit a small fraction of the time.)

In general when comparing two FFT libraries this is definitely worth doing. But in my case I want to benchmark the pytorch interface code and how it calls the underlying FFT library, not necessarily the FFT library itself. For my purposes, using non-regular sizes just increases the benchmark runtime.

Perhaps it could still be useful as a parameter to the SpectralOpFuzzer. e.g. probability_regular=0.8 similar to FuzzedTensor's probability_contiguous.
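
As a rough illustration of what a probability_regular knob could mean, here is a minimal stdlib-only sketch that biases the drawn length toward sizes that factor into small primes. The parameter name and the rejection-sampling approach are assumptions for illustration, not the implementation that landed.

```python
import random

def is_regular(n, primes=(2, 3, 5, 7)):
    """True if n factors entirely into the given small primes."""
    for p in primes:
        while n % p == 0:
            n //= p
    return n == 1

def draw_length(rng, min_size, max_size, probability_regular=0.8):
    # With probability `probability_regular`, resample until we hit a regular
    # size (fast FFT path); otherwise accept any size, including primes, which
    # exercise the slower generic FFT code paths.
    if rng.random() < probability_regular:
        while True:
            n = rng.randint(min_size, max_size)
            if is_regular(n):
                return n
    return rng.randint(min_size, max_size)

rng = random.Random(0)
print([draw_length(rng, 16, 4096) for _ in range(8)])
```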

peterbell10 added a commit to peterbell10/pytorch that referenced this pull request Nov 16, 2020
ghstack-source-id: 66d51f2883cded714a4ce3460e70cfe87dcb3baa
Pull Request resolved: pytorch#47872
robieta commented Nov 16, 2020

> Shouldn't it be somewhere in the benchmarks/ folder, or is that being phased out?

I think the main thing is not having ad-hoc code in the operator benchmarks folder. @ngimel It seems like a lot of these one-off scripts have popped up recently. Any strong preferences for where they should live?

> Perhaps it could still be useful as a parameter to the SpectralOpFuzzer. e.g. probability_regular=0.8 similar to FuzzedTensor's probability_contiguous.

SGTM.
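
For context on what an examples/-style driver could look like, below is a small self-contained sketch that times torch.fft.fft for two regular lengths and two prime lengths with torch.utils.benchmark's Timer and Compare. The chosen sizes and min_run_time are arbitrary; this is not the benchmark script from this PR.

```python
import torch
from torch.utils.benchmark import Timer, Compare

# 4096 and 8192 are regular sizes; 4099 and 8191 are primes ("pathological").
results = []
for n in (4096, 4099, 8192, 8191):
    timer = Timer(
        stmt="torch.fft.fft(x)",
        globals={"x": torch.randn(n), "torch": torch},
        label="torch.fft.fft",
        sub_label=f"n = {n}",
        description="cpu, float32",
    )
    results.append(timer.blocked_autorange(min_run_time=0.5))

Compare(results).print()
```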

peterbell10 added a commit that referenced this pull request Nov 16, 2020
ghstack-source-id: 6bc25e4ab3136fd9355d5211d193ca8e2714f444
Pull Request resolved: #47872
codecov bot commented Nov 17, 2020

Codecov Report

Merging #47872 (c547000) into gh/peterbell10/26/base (d47770f) will decrease coverage by 0.03%.
The diff coverage is 0.00%.

@@                    Coverage Diff                     @@
##           gh/peterbell10/26/base   #47872      +/-   ##
==========================================================
- Coverage                   81.14%   81.11%   -0.04%     
==========================================================
  Files                        1839     1841       +2     
  Lines                      198447   198537      +90     
==========================================================
- Hits                       161036   161034       -2     
- Misses                      37411    37503      +92     

@peterbell10 (Collaborator, Author)

@ngimel, @robieta any new thoughts on where to put the benchmark files, or is examples the best place for now?

@mruberry (Collaborator)

> @ngimel, @robieta any new thoughts on where to put the benchmark files, or is examples the best place for now?

Just FYI: the team is on holiday this week, so it may take a few (working) days to get a response.

@ngimel (Collaborator) commented Nov 28, 2020

I don't have a strong preference; we can leave the code in examples.

@robieta left a comment

LGTM. I'll start merging the stack. Thanks for your patience.

@facebook-github-bot (Contributor)

@robieta merged this pull request in 74d6a61.

facebook-github-bot deleted the gh/peterbell10/26/head branch on December 5, 2020 at 15:30.