Conversation

bdhirsh
Contributor

@bdhirsh bdhirsh commented Jul 11, 2023

cc @jcaip: this is an E2E test of compiling a small model with a `SparseSemiStructuredTensor` subclass tensor used as one of the parameters.

The generated inductor code looks like this (P788647425): the subclass desugars the matmul into a `sparse_mm()` + `contiguous()` call, and inductor is able to fuse the `contiguous()` call into the `relu()` that follows it.
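The setup being exercised can be sketched roughly like this. This is a minimal sketch, not the PR's actual test: I'm assuming the `torch.sparse.to_sparse_semi_structured` entry point (available in later PyTorch releases), the 2:4 kernels require a CUDA device with sparse tensor core support, and `apply_2_4_mask` is a hypothetical helper for producing the required sparsity pattern:

```python
import torch
import torch.nn as nn

def apply_2_4_mask(weight: torch.Tensor) -> torch.Tensor:
    """Zero the 2 smallest-magnitude values in every group of 4,
    producing the 2:4 structured-sparsity pattern the subclass expects."""
    w = weight.reshape(-1, 4)
    # indices of the two smallest |values| in each group of 4
    idx = w.abs().argsort(dim=1)[:, :2]
    mask = torch.ones_like(w)
    mask.scatter_(1, idx, 0.0)
    return (w * mask).reshape(weight.shape)

class TinyModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = nn.Linear(128, 128, bias=False)
        self.relu = nn.ReLU()

    def forward(self, x):
        # the subclass turns this matmul into sparse_mm() + contiguous(),
        # and inductor can fuse the contiguous() into the relu()
        return self.relu(self.linear(x))

if torch.cuda.is_available():
    from torch.sparse import to_sparse_semi_structured
    model = TinyModel().half().cuda().eval()
    with torch.no_grad():
        model.linear.weight.copy_(apply_2_4_mask(model.linear.weight))
        model.linear.weight = nn.Parameter(
            to_sparse_semi_structured(model.linear.weight)
        )
    compiled = torch.compile(model)
    out = compiled(torch.randn(64, 128, dtype=torch.half, device="cuda"))
```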

A few things to note:

(1) The test actually... fails. I haven't figured out why, but the results don't match the non-sparse version. FWIW, the code that inductor outputs looks reasonable, so it's not immediately clear whether it's a `compile()` bug, sparsity giving less accurate results, or both.
(2) Inference mode is still a bit broken with `torch.compile`: in the test, I needed to make sure that the model was instantiated, compiled, and run all inside of `inference_mode`.
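Concretely, the workaround in note (2) follows a pattern like the one below. This is a minimal sketch of the pattern rather than the PR's test code; the `backend` argument is parameterized here only so the pattern can be exercised without a working inductor toolchain:

```python
import torch
import torch.nn as nn

def run_under_inference_mode(make_model, example_input, backend="inductor"):
    # Everything happens inside a single inference_mode() context:
    # instantiating the model outside the context and then compiling or
    # running it inside (or vice versa) is what ran into problems.
    with torch.inference_mode():
        model = make_model()                              # instantiate
        compiled = torch.compile(model, backend=backend)  # compile
        return compiled(example_input)                    # first call traces

# Example:
# out = run_under_inference_mode(lambda: nn.Linear(8, 8), torch.randn(2, 8))
```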

Stack from ghstack (oldest at bottom):

@pytorch-bot

pytorch-bot bot commented Jul 11, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/104974

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit bb20c9b with merge base 7921243:

NEW FAILURE - The following job has failed:

FLAKY - The following job failed but was likely due to flakiness present on trunk:

This comment was automatically generated by Dr. CI and updates every 15 minutes.

bdhirsh added a commit that referenced this pull request Jul 12, 2023
bdhirsh added a commit that referenced this pull request Jul 12, 2023
@bdhirsh bdhirsh force-pushed the gh/bdhirsh/430/head branch from e755cfa to 7aa167d Compare October 10, 2023 00:13
bdhirsh added a commit that referenced this pull request Oct 11, 2023

Looks like this PR hasn't been updated in a while so we're going to go ahead and mark this as Stale.
Feel free to remove the Stale label if you feel this was a mistake.
If you are unable to remove the Stale label please contact a maintainer in order to do so.
If you want the bot to never mark this PR stale again, add the no-stale label.
Stale pull requests will automatically be closed after 30 days of inactivity.

@github-actions github-actions bot added the Stale label Dec 10, 2023
@github-actions github-actions bot closed this Jan 9, 2024
@github-actions github-actions bot deleted the gh/bdhirsh/430/head branch February 19, 2024 01:58

Labels

release notes: sparse (release notes category), Stale
