
[te] Fix casting of unsigned char, and abs(int) #44157

Closed
wants to merge 7 commits

Conversation

bertmaher (Contributor) commented Sep 3, 2020

Stack from ghstack:

Differential Revision: D23528507

bertmaher added a commit that referenced this pull request Sep 3, 2020
ghstack-source-id: 9f939c4a78b832da338d851725a3b3b77c409b9e
Pull Request resolved: #44157
dr-ci bot commented Sep 3, 2020

💊 CI failures summary and remediations

As of commit 0dd0f65 (more details on the Dr. CI page):


  • 1/1 failures possibly* introduced in this PR
    • 1/1 non-CircleCI failure(s)

ci.pytorch.org: 1 failed



ZolotukhinM left a comment

Looks good! I guess you'll need to replace self.assertEqual(ref, t(x)) with np.testing.assert_allclose(...) to appease the CI gods though.

@@ -1152,20 +1152,48 @@ def rand(dtype, device="cuda"):
     torch.int16,
     torch.int32,
     torch.int64,
-    torch.float16,
-    torch.bfloat16,
+    # torch.float16,


Nit: maybe add a comment whether we want these to be supported at all?

bertmaher (Contributor, Author) replied:

Oh, yeah, float16 should be coming in soon-ish. The commented lines are a transient state for this stack :-)

@@ -186,7 +186,7 @@ void CudaPrinter::visit(const For* v) {
 }

 void CudaPrinter::visit(const Cast* v) {
-  os() << cudaDtypeCppString(v->dtype());
+  os() << "(" << cudaDtypeCppString(v->dtype()) << ")";


Ouch, how did it work? :)

Copy link
Contributor Author

Choose a reason for hiding this comment

The reason will be displayed to describe this comment to others. Learn more.

I don't know when it became a thing but apparently you can cast by saying float(x), like it's a constructor. But that sorta doesn't parse if you do unsigned char(x). Clearly we need whitespace in identifiers! (I hope this reddit post is a joke: https://www.reddit.com/r/ProgrammerHumor/comments/4on81i/til_c_allows_u200b_zero_width_space_in_identifiers/ )
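
For background (an illustration of mine, not part of the PR): a C++ function-style cast requires the type to be named by a single word, which is why float(x) parses but unsigned char(x) does not; multi-word types need a C-style cast or a type alias. A minimal sketch:

    int x = 300;
    float f = float(x);                      // OK: function-style cast, one-word type name
    // unsigned char b = unsigned char(x);   // ill-formed: two-word type name
    unsigned char c = (unsigned char)(x);    // OK: C-style cast, the form the printer now emits
    using uchar = unsigned char;             // an alias turns the type back into a single word
    unsigned char d = uchar(x);              // OK again

Hence the fix above prints the type wrapped in parentheses before the expression, instead of printing it as a constructor-style call.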

@@ -209,6 +209,9 @@ void CudaPrinter::visit(const Intrinsics* v) {
   if (returnType == ScalarType::Half || returnType == ScalarType::Float) {
     func_name = func_name + "f";
   }
+  if (v->op_type() == IntrinsicsOp::kFabs && is_integral(returnType)) {
Collaborator commented:

does nvrtc not handle std::abs or ::abs? All this exp/expf is so last decade (and is not used in eager part of the codebase at all).

bertmaher (Contributor, Author) replied:

I think it does, actually. I don't know why we went the + "f" route, except that maybe the old fuser uses it? I'll take a note to use std:: in a follow-up.
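
To see the hazard this branch guards against (a sketch of mine, not code from the PR; the hunk above is truncated, but presumably func_name gets remapped to the integer abs overload): calling the float intrinsic on an integer argument implicitly converts it to float, which changes the result type and can lose precision for wide integers:

    #include <cmath>
    #include <cstdint>
    #include <cstdio>
    #include <cstdlib>

    int main() {
      int64_t v = -((int64_t{1} << 53) + 1);  // magnitude 2^53 + 1
      float f = fabsf(v);                     // argument converted to float; low bits lost
      int64_t i = std::llabs(v);              // integer abs; exact
      printf("fabsf: %.0f\nllabs: %lld\n", (double)f, (long long)i);
      return 0;
    }

Mapping kFabs to the integer abs for integral return types keeps the computation in the integer domain, which is the abs(int) half of the PR title.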

codecov bot commented Sep 8, 2020

Codecov Report

❗ No coverage uploaded for pull request base (gh/bertmaher/15/base@f6a7861).
The diff coverage is n/a.


@@                   Coverage Diff                   @@
##             gh/bertmaher/15/base   #44157   +/-   ##
=======================================================
  Coverage                        ?   69.29%           
=======================================================
  Files                           ?      381           
  Lines                           ?    47652           
  Branches                        ?        0           
=======================================================
  Hits                            ?    33022           
  Misses                          ?    14630           
  Partials                        ?        0           

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

facebook-github-bot (Contributor) commented:

@bertmaher merged this pull request in 960c088.

Labels: Merged, oncall: jit