Feat empty op #5659
Conversation
consistent_empty op
…' into feat-add_empty_op
} else {
  // output i is inplaced.
  // check thread_local TensorMeta and tensor_impl TensorMeta.
  CHECK_OR_RETURN(tensor_impl->tensor_meta()->shape() == output_tensor_metas->at(i)->shape());
  CHECK_OR_RETURN(tensor_impl->tensor_meta()->dtype() == output_tensor_metas->at(i)->dtype());
If the output is inplace, directly check the inferred result.
// using thread_local TensorMeta pointer if inplace.
// using tensor_impl TensorMeta pointer if not inplace.
return output_tensor_metas->at(i);
When not inplace, inference writes into the thread_local TensorMeta as usual; when inplace, inference writes into the actual tensor_impl.
CI failed, removing label automerge
CHECK_OR_RETURN(ParseSbpParallelFromString(sbp_str, &sbp_parallel));
CHECK_OR_RETURN(
    (sbp_parallel.has_split_parallel() && sbp_parallel.split_parallel().axis() == 0)
    || sbp_parallel.has_broadcast_parallel());
Should this CHECK be removed?
CHECK_OR_RETURN(ParseSbpParallelFromString(sbp_str, &sbp_parallel));
CHECK_OR_RETURN(
    (sbp_parallel.has_split_parallel() && sbp_parallel.split_parallel().axis() == 0)
    || sbp_parallel.has_broadcast_parallel());
And this one as well.
This PR adds EmptyOp, so that Tensor creation is uniformly performed through an Op.
Documentation: