Clean up tensor_util and torch_util #68427
Conversation
desertfire commented on Nov 16, 2021
- Merge torch_util into tensor_util to reduce the number of util files
- Remove unused functions in tensor_util.cpp
- Move TensorHash into hash.h
- Move TensorCompare into lazy_graph_executor.cpp
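To make the reorganization concrete, here is a hypothetical sketch of the kind of TensorHash helper that moves into hash.h, reconstructed from the diff excerpt quoted later in this thread. The hash_t return type, the Float case, and the contiguous() call are assumptions for illustration, not verified PR code:

```cpp
// Hypothetical reconstruction (not the actual PR code): the kind of
// TensorHash helper moved into hash.h. It computes the storage size in
// bytes, then dispatches on dtype so data_ptr<T>() yields a typed
// pointer for the DataHash primitive to hash.
#include <ATen/ATen.h>
#include <torch/csrc/lazy/core/hash.h>

torch::lazy::hash_t TensorHash(const at::Tensor& tensor) {
  // Assumption: densify the data so numel() * element_size() covers it.
  at::Tensor ctensor = tensor.contiguous();
  int64_t size = ctensor.numel() * ctensor.element_size();
  switch (ctensor.scalar_type()) {
    case at::ScalarType::Bool:
      return torch::lazy::DataHash(ctensor.data_ptr<bool>(), size);
    case at::ScalarType::Float:
      return torch::lazy::DataHash(ctensor.data_ptr<float>(), size);
    // ... one case per supported scalar type ...
    default:
      TORCH_CHECK(
          false, "Unsupported scalar type: ", ctensor.scalar_type());
  }
}
```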
CI Flow Status: ⚛️ CI Flow

You can add a comment to the PR and tag @pytorchbot with the following commands ("ciflow/default" will always be added automatically):

# ciflow rerun
@pytorchbot ciflow rerun

# ciflow rerun with additional labels "-l <ciflow/label_name>", which is equivalent to adding these labels manually and triggering the rerun
@pytorchbot ciflow rerun -l ciflow/scheduled -l ciflow/slow

For more information, please take a look at the CI Flow Wiki.
🔗 Helpful links

💊 CI failures summary and remediations, as of commit 99b6d48 (more details on the Dr. CI page):

🕵️ 1 new failure recognized by patterns. The following CI failures do not appear to be due to upstream breakages:

| Job | Step | Action |
|---|---|---|
| | Run mypy | 🔁 rerun |
| | Fail if there were any warnings | 🔁 rerun |

ci.pytorch.org: 1 failed

This comment was automatically generated by Dr. CI. Please report bugs/suggestions to the (internal) Dr. CI Users group.
Excerpt from the diff under review:

    int64_t size = ctensor.numel() * ctensor.element_size();
    switch (ctensor.scalar_type()) {
      case at::ScalarType::Bool:
        return DataHash(ctensor.data_ptr<bool>(), size);
This seems needlessly complicated, doesn't it? I wonder why we can't just:
- assert that scalar_type() is supported (maybe not necessary, or maybe use a c10 util for it)
- treat data_ptr as uint8_t* and call DataHash

Not sure why it was necessary to handle all the cases separately, but maybe it's better not to change it now.
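For comparison, a minimal sketch of what the reviewer is suggesting, assuming DataHash accepts an untyped pointer plus a byte count (the signature is inferred from the excerpt above, not verified). Whether raw-byte hashing is valid for every dtype is exactly the question the comment leaves open:

```cpp
// Hypothetical simplification along the lines of the review comment:
// skip the per-dtype switch and hash the underlying storage as raw
// bytes. Assumes DataHash takes a pointer plus a byte count, as the
// excerpt above suggests.
#include <cstdint>
#include <ATen/ATen.h>
#include <torch/csrc/lazy/core/hash.h>

torch::lazy::hash_t TensorHash(const at::Tensor& tensor) {
  at::Tensor ctensor = tensor.contiguous();
  int64_t size = ctensor.numel() * ctensor.element_size();
  // The untyped data_ptr() overload returns void*; view it as bytes.
  return torch::lazy::DataHash(
      static_cast<const uint8_t*>(ctensor.data_ptr()), size);
}
```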
Summary: Some changes to torch/csrc/lazy/core were done on the lazy_tensor_staging branch (#68427). Merge those back into the trunk. [ghstack-poisoned]
Summary:
Pull Request resolved: #69012

Some changes to torch/csrc/lazy/core were done on the lazy_tensor_staging branch (#68427). Merge those back into the trunk.

Test Plan: Imported from OSS
Reviewed By: wconstab
Differential Revision: D32708696
Pulled By: desertfire
fbshipit-source-id: e54b978f2bdb9c7db27880f60246fdf1e8b41019