
[AOTInductor] Switch ProxyExecutor to use AtenTensorHandle #109748

Closed
wants to merge 1 commit

Conversation

@SherlockNoMad (Contributor) commented Sep 20, 2023

@pytorch-bot bot commented Sep 20, 2023

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/109748

Note: Links to docs will display an error until the docs builds have been completed.

❌ 1 New Failure, 1 Unrelated Failure

As of commit 67da854 with merge base 6138750:

NEW FAILURE - The following job has failed:

BROKEN TRUNK - The following job failed but was present on the merge base:

👉 Rebase onto the `viable/strict` branch to avoid these failures

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

@SherlockNoMad added the "topic: not user facing" label Sep 20, 2023
@SherlockNoMad changed the title from "[PT2 Inference] Enable ProxyExecutor with Runtime" to "[AOTInductor] Switch ProxyExecutor to use AtenTensorHandle" Sep 20, 2023
@desertfire (Contributor) left a comment:

LGTM. You need to rebase to the latest code, and adapt to the API changes in RAIIAtenTensorHandle.

Review comment on torch/csrc/inductor/aoti_torch/c/shim.h (resolved).
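
For context on the handle type this PR moves to, here is a minimal sketch (not code from this PR; error handling is simplified and the helper name is made up) of the raw C-shim lifecycle that RAIIAtenTensorHandle automates:

// Illustrative sketch only. An AtenTensorHandle is an opaque pointer to an
// at::Tensor owned by libtorch, created and destroyed through the stable
// C ABI declared in shim.h (the file referenced in the review comment above).
#include <torch/csrc/inductor/aoti_torch/c/shim.h>

void raw_handle_lifecycle() {
  AtenTensorHandle handle = nullptr;
  // Create an uninitialized tensor behind the ABI boundary; the shim reports
  // failures through AOTITorchError return codes.
  if (aoti_torch_new_uninitialized_tensor(&handle) != AOTI_TORCH_SUCCESS) {
    return;  // creation failed; nothing to clean up
  }
  // ... hand `handle` to other shim calls or to the proxy executor ...
  // Without an RAII wrapper the caller must remember to free the handle;
  // RAIIAtenTensorHandle does exactly this in its destructor.
  aoti_torch_delete_tensor_object(handle);
}
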
SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 21, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 21, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

@chenyang78 (Contributor) left a comment:

Not necessarily in this PR, but can we come up with a test? In particular, since we changed fill_output_arg to use the correct APIs, it seems we should be able to run some tests with extern kernels?

self.writeline(
    f"AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&{arg}_handle));"
)
self.writeline(f"RAIIAtenTensorHandle {arg}({arg}_handle);")
Reviewer comment (Contributor):

Hmm, wondering if we need to guard these lines with config.aot_inductor.abi_compatible?

Reply from the author (@SherlockNoMad):

No, generate_extern_kernel_args_decl_if_needed is only used in the fbcode path, which always needs abi_compatible=True.
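
For concreteness, here is roughly what the two writeline calls in the snippet above emit into the generated C++ wrapper, using a hypothetical argument name buf2 (the name, and the assumption that the AtenTensorHandle declaration for buf2_handle is emitted elsewhere, are illustrative, not taken from this PR):

// Emitted by the first writeline above (buf2 is a hypothetical arg name; the
// declaration of buf2_handle is generated by other codegen).
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_new_uninitialized_tensor(&buf2_handle));
// Emitted by the second writeline: wrap the raw handle so its lifetime is
// managed automatically inside the generated wrapper.
RAIIAtenTensorHandle buf2(buf2_handle);

Since this helper only runs on the fbcode path, where abi_compatible is always enabled, the generated lines need no extra config guard, as discussed above.
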

@@ -1813,7 +1825,9 @@ def generate_extern_kernel_alloc_and_find_schema_if_needed_fbcode(

     tensor_args_var = f"tensor_args_var_{next(self.kernel_callsite_id)}"
     tensor_call_args_str = ", ".join(tensor_call_args)
-    self.writeline(f"void* {tensor_args_var}[] = {{{tensor_call_args_str}}};")
+    self.writeline(
+        f"AtenTensorHandle {tensor_args_var}[] = {{{tensor_call_args_str}}};"
Reviewer comment (Contributor):

Same here - do we need to guard AtenTensorHandle with config.aot_inductor.abi_compatible?

Reply from the author (@SherlockNoMad):

Ditto; this is in the fbcode path.
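
To tie the hunk above to the runtime, here is a rough sketch of the kind of call site the fbcode path generates after this change. The buffer names, argument counts, and the proxy-executor entry point shown are assumptions for illustration, not taken from this PR's diff; the key point is that the flattened tensor arguments are now an AtenTensorHandle array rather than void*:

// Hypothetical generated call site; buf0_handle, buf1_handle, and
// proxy_executor are produced earlier in the generated wrapper.
int64_t int_args_var_0[] = {42};  // flattened non-tensor (scalar) arguments
AtenTensorHandle tensor_args_var_0[] = {buf0_handle, buf1_handle};  // was void*[]
// The arrays are handed to the proxy executor, which looks up the serialized
// extern node and dispatches the fallback ATen kernel with these arguments.
// (The exact shim function name and signature are assumed, not shown in this PR.)
AOTI_TORCH_ERROR_CODE_CHECK(aoti_torch_proxy_executor_call_function(
    proxy_executor,
    /*extern_node_index=*/0,
    /*num_ints=*/1,
    int_args_var_0,
    /*num_tensors=*/2,
    tensor_args_var_0));
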

SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 22, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

1 similar comment
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 22, 2023
Summary:
Pull Request resolved: pytorch#109748

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659

fbshipit-source-id: 9b5f4c560099f9cd1ca979ff4db7d3eb9caee405
@SherlockNoMad force-pushed the export-D49471659 branch 2 times, most recently from 8b1a1b3 to ebef713, on September 26, 2023 16:26
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 26, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 26, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 26, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 26, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

SherlockNoMad added a commit to SherlockNoMad/pytorch that referenced this pull request Sep 27, 2023
Summary:

Switch ProxyExecutor to use AtenTensorHandle.

bypass-github-pytorch-ci-checks
OSS CI has an irrelevant failure.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

Summary:

Switch ProxyExecutor to use AtenTensorHandle.

bypass-github-pytorch-ci-checks
OSS CI has an irrelevant failure.

Test Plan: E2E Test

Reviewed By: yifuwang

Differential Revision: D49471659
@facebook-github-bot (Contributor) commented:

This pull request was exported from Phabricator. Differential Revision: D49471659

@facebook-github-bot (Contributor) commented:

@pytorchbot merge -f 'Landed internally'

(Initiating merge automatically since Phabricator Diff has merged, using force because this PR might not pass merge_rules.json but landed internally)

@pytorchmergebot (Collaborator) commented:

Merge started

Your change will be merged immediately since you used the force (-f) flag, bypassing any CI checks (ETA: 1-5 minutes). Please use -f as a last resort and instead consider -i/--ignore-current to continue the merge ignoring current failures. This will allow currently pending tests to finish and report signal before the merge.

Learn more about merging in the wiki.

Questions? Feedback? Please reach out to the PyTorch DevX Team

Advanced Debugging: check the merge workflow status here.

@huydhn (Contributor) commented Sep 27, 2023

@pytorchbot drci

(Please ignore this, I'm testing Dr.CI)
