observers: use clamp instead of min/max in calculate_qparams #43150

Closed
wants to merge 1 commit into from

Conversation

vkuzo (Contributor) commented Aug 17, 2020

Stack from ghstack:

Summary:

The current logic was expensive because it created tensors on the CUDA device on every call. Switching to clamp avoids creating those tensors.

This yields a ~20% latency improvement on the CUDA microbenchmark for small tensors
(prev diff: P139074571, this diff: P139074706)
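A minimal sketch of the pattern this change targets, assuming the zero-comparison in `calculate_qparams`; the variable names and values below are illustrative stand-ins, not the exact diff:

```python
import torch

# Run on CUDA when available so the sketch also works on CPU-only machines.
device = "cuda" if torch.cuda.is_available() else "cpu"

# Hypothetical stand-ins for the observer's running min/max statistics.
min_val = torch.tensor(-1.5, device=device)
max_val = torch.tensor(2.0, device=device)

# Before: enforce min_val <= 0 and max_val >= 0 by comparing against a freshly
# created scalar tensor. On CUDA this allocates a new device tensor on every
# call, which dominates the cost for small inputs.
zero = torch.tensor(0.0, device=device, dtype=min_val.dtype)
min_val_neg = torch.min(min_val, zero)
max_val_pos = torch.max(max_val, zero)

# After: clamp accepts Python scalars directly, so no extra tensor is created.
min_val_neg = torch.clamp(min_val, max=0.0)
max_val_pos = torch.clamp(max_val, min=0.0)
```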

Test Plan:

benchmarks
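The benchmark referenced above is an internal paste; a rough stand-in for how one might time the two patterns on CUDA (the helper below is illustrative, not the actual benchmark):

```python
import time
import torch

def bench(fn, iters=1000):
    # Warm up, then time `iters` calls; synchronize so queued CUDA work is counted.
    for _ in range(10):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        fn()
    if torch.cuda.is_available():
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

device = "cuda" if torch.cuda.is_available() else "cpu"
min_val = torch.tensor(-1.5, device=device)

old_style = lambda: torch.min(min_val, torch.tensor(0.0, device=device))
new_style = lambda: torch.clamp(min_val, max=0.0)
print(f"min + tensor creation: {bench(old_style) * 1e6:.2f} us/iter")
print(f"clamp with scalar:     {bench(new_style) * 1e6:.2f} us/iter")
```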


Differential Revision: D23170427

dr-ci bot commented Aug 17, 2020

💊 CI failures summary and remediations

As of commit 9724ed7 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



facebook-github-bot (Contributor) commented:
This pull request has been merged in 3264ba0.

facebook-github-bot deleted the gh/vkuzo/127/head branch on August 21, 2020 14:16