use ET_CHECK macro for sanity checks in memory shim layer #14690
Conversation
Summary: This diff introduces `aoti_torch_delete_tensor_object`, used to delete tensors created during CUDA backend inference. Reviewed By: [ghstack-poisoned]
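The lifetime contract for backend-owned tensors can be illustrated with a small toy model (all names below are illustrative stand-ins, not the real shim API):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdlib>

// Toy stand-in for a tensor handle whose buffer is owned by the backend
// (e.g. one allocated during inference). Not the real shim types.
struct OwnedTensor {
  void* data;
};

inline OwnedTensor* owned_tensor_new(std::size_t nbytes) {
  return new OwnedTensor{std::malloc(nbytes)};
}

// Models the contract of aoti_torch_delete_tensor_object for an owning
// tensor: the buffer is released and the handle is destroyed, exactly once.
// Returns false if there was nothing to delete.
inline bool owned_tensor_delete(OwnedTensor* t) {
  if (t == nullptr) {
    return false;
  }
  std::free(t->data);  // owning tensor: the underlying buffer is freed here
  delete t;            // the handle object itself is destroyed too
  return true;
}
```

The point of the sketch is the single-owner rule: whoever created the tensor during inference must pair it with exactly one delete call.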
Summary: This diff introduces `aoti_torch_create_tensor_from_blob_v2`, a function that creates a tensor from a data blob with custom sizes and strides. Note that, unlike `aoti_torch_empty_strided`, a tensor created by `aoti_torch_create_tensor_from_blob_v2` does not take ownership of the memory blob; deleting the tensor therefore does not free that memory. Reviewed By: Differential Revision: [ghstack-poisoned]
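The ownership difference described above can be modeled with a toy tensor that carries an ownership flag (illustrative names only, not the real shim API):

```cpp
#include <cassert>
#include <cstdlib>

// Toy model: tensors from an empty_strided-style allocator own their buffer;
// tensors wrapped around a caller-provided blob do not.
struct BlobTensor {
  void* data;
  bool owns_data;
};

// Models aoti_torch_create_tensor_from_blob_v2: wrap an existing blob
// WITHOUT taking ownership of it.
inline BlobTensor* blob_tensor_wrap(void* blob) {
  return new BlobTensor{blob, /*owns_data=*/false};
}

// Models deletion under this contract: the handle is always destroyed, but
// the underlying buffer is freed only if the tensor owns it.
// Returns true if the buffer was freed.
inline bool blob_tensor_delete(BlobTensor* t) {
  const bool freed = t->owns_data;
  if (freed) {
    std::free(t->data);
  }
  delete t;  // a non-owning tensor leaves the caller's blob untouched
  return freed;
}
```

Usage-wise, this means the caller of the blob-wrapping path remains responsible for freeing the blob after the tensor is deleted.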
Summary: Introduced aoti_torch__reinterpret_tensor, which creates a new tensor view that reinterprets the same underlying memory with custom shape and strides. Reviewed By: Differential Revision: [ghstack-poisoned]
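The view semantics can be sketched as follows (a simplified host-side model with illustrative names, not the real shim API): a reinterpreted tensor shares the same data pointer but carries its own sizes and strides, so writes through the view are visible in the original.

```cpp
#include <cassert>
#include <cstdint>
#include <utility>
#include <vector>

// Toy model of a strided tensor view: the data pointer is shared and not
// owned by the view; only the metadata (sizes/strides) is per-view.
struct ViewTensor {
  float* data;                   // shared storage, not owned
  std::vector<int64_t> sizes;
  std::vector<int64_t> strides;
};

// Models aoti_torch__reinterpret_tensor: new shape/stride metadata over the
// same underlying memory, with no copy.
inline ViewTensor reinterpret_view(const ViewTensor& src,
                                   std::vector<int64_t> sizes,
                                   std::vector<int64_t> strides) {
  return ViewTensor{src.data, std::move(sizes), std::move(strides)};
}
```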
Summary: This diff introduces `aoti_torch_copy_`, the function for copying tensors inside the CUDA backend. For now it only supports copies between tensors with the same dtype. Reviewed By: Differential Revision: [ghstack-poisoned]
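The same-dtype restriction can be modeled with a simple host-side copy that rejects mismatched element types up front (illustrative names; a real CUDA-backend copy would go through `cudaMemcpy` rather than `memcpy`):

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>

// Toy dtype tag and flat tensor descriptor for the sketch.
enum class Dtype { Float32, Int64 };

struct FlatTensor {
  void* data;
  std::size_t nbytes;
  Dtype dtype;
};

// Models the aoti_torch_copy_ contract described above: the copy is only
// valid when src and dst share a dtype (and, in this flat model, a size).
// Returns false to signal an error on mismatch.
inline bool toy_copy_(FlatTensor& dst, const FlatTensor& src) {
  if (dst.dtype != src.dtype || dst.nbytes != src.nbytes) {
    return false;  // dtype conversion is not supported yet
  }
  std::memcpy(dst.data, src.data, src.nbytes);
  return true;
}
```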
Summary: This is a comprehensive update that replaces the original if/else checks with the ET_CHECK macro, to better follow ExecuTorch (ET) conventions. Reviewed By: Differential Revision: [ghstack-poisoned]
🔗 Helpful Links: 🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/executorch/14690
Note: Links to docs will display an error until the docs builds have been completed. ❌ 1 New Failure, 8 Pending as of commit 9f1fb01 with merge base 65100f6. This comment was automatically generated by Dr. CI and updates every 15 minutes.
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #14690 * #14689 * #14688 * __->__ #14687 * #14686 Summary: This diff introduces `aoti_torch_create_tensor_from_blob_v2`, a function that creates a tensor from a data blob with custom sizes and strides. Note that, unlike `aoti_torch_empty_strided`, a tensor created by `aoti_torch_create_tensor_from_blob_v2` does not take ownership of the memory blob; deleting the tensor therefore does not free that memory. Reviewed By: Differential Revision:
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #14690 * #14689 * #14688 * __->__ #14700 * #14686 Summary: This is a manual cherry-pick of #14687. This diff introduces `aoti_torch_create_tensor_from_blob_v2`, a function that creates a tensor from a data blob with custom sizes and strides. Note that, unlike `aoti_torch_empty_strided`, a tensor created by `aoti_torch_create_tensor_from_blob_v2` does not take ownership of the memory blob; deleting the tensor therefore does not free that memory. Reviewed By: Differential Revision:
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #14690 * #14689 * __->__ #14688 * #14687 * #14686 Summary: Introduced aoti_torch__reinterpret_tensor, which creates a new tensor view that reinterprets the same underlying memory with custom shape and strides. Reviewed By: Differential Revision:
Stack from [ghstack](https://github.com/ezyang/ghstack) (oldest at bottom): * #14690 * __->__ #14689 * #14688 * #14687 * #14686 Summary: This diff introduces `aoti_torch_copy_`, the function for copying tensors inside the CUDA backend. For now it only supports copies between tensors with the same dtype. Reviewed By: Differential Revision:
Stack from ghstack (oldest at bottom):
Summary:
This is a comprehensive update that replaces the original if/else checks with the ET_CHECK macro, to better follow ExecuTorch (ET) conventions.
Reviewed By:
Differential Revision:
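The control-flow change this PR makes can be sketched with a minimal stand-in macro. `TOY_ET_CHECK` below is a toy reimplementation for illustration only; ExecuTorch's real `ET_CHECK` family lives in `runtime/platform/assert.h` and does proper logging before aborting.

```cpp
#include <cassert>
#include <cstdio>
#include <cstdlib>

// Toy stand-in for ExecuTorch's ET_CHECK: abort with a message when the
// condition is false. Illustrates the control flow only.
#define TOY_ET_CHECK(cond)                               \
  do {                                                   \
    if (!(cond)) {                                       \
      std::fprintf(stderr, "Check failed: %s\n", #cond); \
      std::abort();                                      \
    }                                                    \
  } while (0)

// Before: manual if/else validation, with an error code the caller
// must remember to check.
inline int scale_before(const int* data, int n) {
  if (data == nullptr || n <= 0) {
    return -1;  // silent-ish failure path
  }
  return data[0] * n;
}

// After: the precondition is asserted up front with the check macro, so an
// invalid state fails loudly instead of propagating an error code.
inline int scale_after(const int* data, int n) {
  TOY_ET_CHECK(data != nullptr && n > 0);
  return data[0] * n;
}
```

The trade-off is that ET_CHECK-style checks terminate on failure, which suits internal sanity checks (invariants that should never be violated) rather than recoverable errors.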