Feat empty op #5659

Merged: 59 commits, merged Aug 7, 2021
Changes shown from 19 commits

59 commits
c94183d
fix bugs in sharing EagerBlobObject::blob_desc_.shape and EagerBlobO…
lixinqi Jul 28, 2021
631ed38
Merge branch 'master' into refactor_eager_blob_object_shape
lixinqi Jul 28, 2021
75be6a9
feat(EmptyOp): add flow.empty
wyg1997 Jul 29, 2021
f355238
docs(EmptyOp): add doctest and refine document
wyg1997 Jul 29, 2021
275d4b7
docs(EmptyOp): refine document
wyg1997 Jul 29, 2021
7631138
refactor(Tensor): Tensor constructor use empty_op
wyg1997 Jul 29, 2021
55e5c83
refactor(Tensor): remove useless code
wyg1997 Jul 29, 2021
8221bc6
Merge remote-tracking branch 'origin/master' into feat-add_empty_op
wyg1997 Aug 3, 2021
fd72499
feat(EmptyOp): support construct in given device and add
wyg1997 Aug 3, 2021
6512197
feat(EmptyOp): support unpacked tuple shape
wyg1997 Aug 3, 2021
a86be56
Merge remote-tracking branch 'origin/master' into refactor_eager_blob…
wyg1997 Aug 3, 2021
a5dd716
refine array functor code
wyg1997 Aug 3, 2021
52e3966
Merge remote-tracking branch 'origin/master' into feat-add_empty_op
wyg1997 Aug 3, 2021
f4faf11
Merge remote-tracking branch 'origin/refactor_eager_blob_object_shape…
wyg1997 Aug 3, 2021
a2d2147
docs(EmptyOp): update empty op document
wyg1997 Aug 3, 2021
fdcaa11
refine code
wyg1997 Aug 3, 2021
38c3827
docs(EmptyOp): add test and document for consistent empty op
wyg1997 Aug 3, 2021
a341960
update document
wyg1997 Aug 3, 2021
d9a1dd6
Merge branch 'master' into feat-add_empty_op
wyg1997 Aug 4, 2021
b52f857
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
87a02af
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
a5f1e73
fix merge bugs
wyg1997 Aug 4, 2021
143d471
fix(*): fix infer distribution
wyg1997 Aug 4, 2021
dd09067
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
6a1df24
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
d00b380
test(EmptyOp): fix ConsistentEmptyOp CPU_ONLY test bug
wyg1997 Aug 4, 2021
9d9fccf
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
7429b74
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
b9dea29
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
e4e80db
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
911e765
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
e210383
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 4, 2021
3b2d05c
fix(*): init shape when InitBlob
wyg1997 Aug 5, 2021
2223c0b
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 5, 2021
25717bc
fix(*): Constant and Empty Op use broadcast sbp
wyg1997 Aug 5, 2021
a790dfc
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 5, 2021
3560c43
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 5, 2021
31184f7
fix(indexing): replace MakeTensor with functional::Empty
wyg1997 Aug 5, 2021
5478eff
fix(*): fix compile bug
wyg1997 Aug 5, 2021
476afdc
refine code
wyg1997 Aug 5, 2021
a88ed9b
fix(nnGraph): make eager tensor
wyg1997 Aug 5, 2021
6367fb3
Merge branch 'master' into feat-add_empty_op
wyg1997 Aug 5, 2021
b9faece
auto format by CI
oneflow-ci-bot Aug 5, 2021
0da4d90
fix(Stride): infer stride before initializing shape
wyg1997 Aug 5, 2021
66b571e
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 5, 2021
a75bdfb
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 5, 2021
2fbd207
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 5, 2021
30d74d6
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 6, 2021
8713b41
Merge branch 'master' into feat-add_empty_op
wyg1997 Aug 6, 2021
8caeb3f
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 6, 2021
76a932e
Merge branch 'master' into feat-add_empty_op
wyg1997 Aug 6, 2021
cdb16ad
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 6, 2021
ae9d3a7
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 6, 2021
3701588
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 6, 2021
90e4ad1
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 7, 2021
fa7f007
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 7, 2021
6373845
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 7, 2021
d898283
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 7, 2021
43ce019
Merge branch 'master' into feat-add_empty_op
oneflow-ci-bot Aug 7, 2021
3 changes: 2 additions & 1 deletion docs/source/oneflow.rst
Expand Up @@ -22,6 +22,7 @@ oneflow
load,
masked_fill,
matmul,
empty,
mish,
ones,
ones_like,
Expand All @@ -47,4 +48,4 @@ oneflow
zeros,
zeros_like

.. autofunction:: oneflow.data.load_mnist(train_batch_size=100, test_batch_size=100, data_format='NCHW')
.. autofunction:: oneflow.data.load_mnist(train_batch_size=100, test_batch_size=100, data_format='NCHW')
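The docs change registers `empty` alongside `ones` and `zeros`. For readers unfamiliar with the contract of an empty op, NumPy's `empty` follows the same convention and can serve as a stand-in illustration (an analogy only, not OneFlow code): memory is allocated, but no fill kernel runs.

```python
import numpy as np

# Allocate without initializing: shape and dtype are set, contents are arbitrary.
t = np.empty((2, 3), dtype=np.float32)
assert t.shape == (2, 3)
assert t.dtype == np.float32

# Unlike ones/zeros there is no fill pass, so never read values before writing.
t[:] = 0.0
assert (t == 0.0).all()
```

Because the contents are whatever was left in memory, `empty` is only a win when every element will be overwritten anyway, for example as the destination of a copy, which is how this PR uses it in tensor construction.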
36 changes: 8 additions & 28 deletions oneflow/api/python/framework/tensor.cpp
Expand Up @@ -51,23 +51,6 @@ const DType* GetTensorDType(const Tensor& tensor) {
return DType::Get(tensor.dtype()).GetOrThrow().get();
}

std::shared_ptr<Tensor> MakeLocalTensor(const std::shared_ptr<const Shape>& shape,
const DType* dtype, const Symbol<Device>& device,
bool is_lazy, bool requires_grad, bool is_leaf) {
return MirroredTensor::MakeTensor(shape, dtype->data_type(), device, is_lazy, requires_grad,
is_leaf)
.GetPtrOrThrow();
}

std::shared_ptr<Tensor> MakeConsistentTensor(
const std::shared_ptr<const Shape>& shape, const DType* dtype,
Symbol<cfg::ParallelDistribution>& parallel_distribution, Symbol<ParallelDesc> parallel_desc,
bool is_lazy, bool requires_grad, bool is_leaf) {
return ConsistentTensor::MakeTensor(shape, dtype->data_type(), parallel_distribution,
parallel_desc, is_lazy, requires_grad, is_leaf)
.GetPtrOrThrow();
}

Maybe<void> EagerMirroredTensorZeros(const std::shared_ptr<Tensor>& t) {
const auto& tensor = JUST(t->AsMirroredTensor());
CHECK_OR_RETURN(tensor->is_eager()) << "eager tensors supported only";
Expand Down Expand Up @@ -180,21 +163,17 @@ Maybe<Tensor> MakeLocalTensorByNumpy(py::object array, const DType* desired_dtyp
auto* np_arr = reinterpret_cast<PyArrayObject*>(np_arr_pyobject);
bool init_from_numpy = py::isinstance<py::array>(array);
const npy_intp* dims_ptr = PyArray_SHAPE(np_arr);
const auto shape = std::make_shared<Shape>(DimVector(dims_ptr, dims_ptr + PyArray_NDIM(np_arr)));
const Shape shape = Shape(DimVector(dims_ptr, dims_ptr + PyArray_NDIM(np_arr)));
DataType flow_dtype = JUST(numpy::GetOFDataTypeFromNpArray(np_arr));
std::shared_ptr<Tensor> tensor =
MirroredTensor::MakeTensor(shape, flow_dtype, device, /* is_lazy */ false, requires_grad,
/* is_leaf */ true)
.GetPtrOrThrow();
std::shared_ptr<Tensor> tensor = JUST(functional::Empty(shape, flow_dtype, device));
JUST(SwitchCopyMirroredTensorFromUntypedArray(SwitchCase(flow_dtype), tensor, np_arr_raii));
if (flow_dtype == DataType::kDouble && !init_from_numpy && desired_dtype == nullptr) {
desired_dtype = DType::Float().get();
}
if (desired_dtype != nullptr) {
autograd::NoGradGuard no_grad;
tensor = JUST(functional::Cast(tensor, desired_dtype->data_type()));
tensor->set_requires_grad(requires_grad);
}
tensor->set_requires_grad(requires_grad);
return tensor;
}

Expand Down Expand Up @@ -318,11 +297,12 @@ Maybe<Tensor> NewTensor(py::args args, py::kwargs kwargs, const DType* desired_d
return Error::ValueError("invalid arg: " + py::str(arg).cast<std::string>());
}
}
const Shape shape = Shape(dim_vector);
CHECK_NOTNULL_OR_RETURN(desired_dtype);
std::shared_ptr<MirroredTensor> tensor = JUST(
MirroredTensor::MakeTensor(std::make_shared<Shape>(dim_vector), desired_dtype->data_type(),
device, /* is_lazy */ false, requires_grad, /* is_leaf */ true));
return std::static_pointer_cast<Tensor>(tensor);
std::shared_ptr<Tensor> tensor =
JUST(functional::Empty(shape, desired_dtype->data_type(), device));
tensor->set_requires_grad(requires_grad);
return tensor;
}

std::shared_ptr<Tensor> ApiNewTensor(py::args args, py::kwargs kwargs) {
Expand Down
6 changes: 6 additions & 0 deletions oneflow/core/common/shape.cpp
Expand Up @@ -68,6 +68,12 @@ Shape& Shape::operator=(const Shape& shape) {
return *this;
}

Shape& Shape::assign(const DimVector& dim_vec) {
dim_vec_ = dim_vec;
UpdateElemCnt();
return *this;
}

Shape& Shape::CheckNumAxesIdenticalAndAssign(const ShapeView& shape_view) {
CHECK_EQ(NumAxes(), shape_view.NumAxes());
std::copy(shape_view.ptr(), shape_view.ptr() + shape_view.NumAxes(), dim_vec_.data());
Expand Down
1 change: 1 addition & 0 deletions oneflow/core/common/shape.h
Expand Up @@ -41,6 +41,7 @@ class Shape final {
Shape(const std::initializer_list<int64_t>& dim_vec);
~Shape() = default;
Shape& operator=(const Shape& shape);
Shape& assign(const DimVector& dim_vec);
Shape& CheckNumAxesIdenticalAndAssign(const ShapeView& shape_view);
Shape& LeftOnesExtendedAssign(const ShapeView& shape_view);

Expand Down
2 changes: 1 addition & 1 deletion oneflow/core/common/shape_view.cpp
Expand Up @@ -73,7 +73,7 @@ template<typename DimT>
void ShapeViewBase<DimT>::ToShape(Shape* shape) const {
DimVector dim_vec;
this->ToDimVector(&dim_vec);
*shape = Shape(std::move(dim_vec));
shape->assign(dim_vec);
}

template class ShapeViewBase<const int64_t>;
Expand Down
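`ToShape` now mutates the existing `Shape` through `assign` rather than replacing it with a freshly constructed one. A plausible motivation is pointer stability: other components may hold a pointer into the shape's dimension storage, and an in-place write keeps such aliases valid where swapping in new storage would not. The aliasing concern can be sketched with NumPy views (an analogy only, not the OneFlow implementation):

```python
import numpy as np

dims = np.array([2, 3, 4], dtype=np.int64)   # the shape's dim storage
header = dims[:]                             # another component views the same buffer

dims[:] = [5, 6, 7]                          # in-place update (like Shape::assign)
assert header.tolist() == [5, 6, 7]          # the view stays in sync

dims = np.array([8, 9, 10], dtype=np.int64)  # rebinding to fresh storage
assert header.tolist() == [5, 6, 7]          # old viewers are left behind
```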
12 changes: 8 additions & 4 deletions oneflow/core/eager/eager_blob_object.cpp
Expand Up @@ -53,10 +53,14 @@ Maybe<void> EagerBlobObject::TryInitBlob() {

Maybe<void> EagerBlobObject::InitBlob() {
CHECK_NE_OR_RETURN(blob_desc_.data_type(), DataType::kInvalidDataType);
if (!blob_desc_.shape().is_initialized()) { blob_desc_.set_shape(Shape(DimVector{})); }
char* header_buffer =
reinterpret_cast<char*>(const_cast<int64_t*>(blob_desc_.shape().dim_vec().data()));
blob_.reset(new Blob(*mem_case_, &blob_desc_, header_buffer, nullptr));
{
header_buffer_.reset();
int64_t header_byte_size = blob_desc_.AlignedByteSizeOfBlobHeader();
const auto& FreeHeader = [header_byte_size](char* dptr) { std::free(dptr); };
char* ptr = reinterpret_cast<char*>(std::malloc(header_byte_size));
header_buffer_ = std::unique_ptr<char, std::function<void(char*)>>(ptr, FreeHeader);
}
blob_.reset(new Blob(*mem_case_, &blob_desc_, header_buffer_.get(), nullptr));
Comment on lines +57 to +64 (Contributor Author): Revert the earlier change; the shape's memory is no longer shared.
return Maybe<void>::Ok();
}

Expand Down
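`InitBlob` now gives the blob header its own `malloc`'d buffer, owned by a `unique_ptr` with a custom deleter, instead of pointing the header into the shape's `dim_vec` storage. The ownership pattern (a buffer paired with a caller-supplied release action) can be sketched in Python with `weakref.finalize`; all names here are illustrative, not OneFlow APIs:

```python
import weakref

freed_sizes = []  # records what the "deleter" released


class RawHeader:
    """Toy stand-in for a malloc'd blob-header buffer."""
    def __init__(self, nbytes):
        self.data = bytearray(nbytes)


def make_header_buffer(nbytes):
    buf = RawHeader(nbytes)
    # Attach a custom deleter, much as
    # std::unique_ptr<char, std::function<void(char*)>> pairs storage with std::free:
    weakref.finalize(buf, freed_sizes.append, nbytes)
    return buf


buf = make_header_buffer(64)
assert freed_sizes == []   # deleter has not run while the buffer is owned
del buf                    # dropping the last reference triggers the deleter
assert freed_sizes == [64]
```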
2 changes: 2 additions & 0 deletions oneflow/core/eager/eager_blob_object.h
Expand Up @@ -51,6 +51,7 @@ class EagerBlobObject final : public BlobObject {
~EagerBlobObject() override {
non_pod_initer_.reset();
tensor_buffer_.reset();
header_buffer_.reset();
blob_.reset();
}

Expand Down Expand Up @@ -78,6 +79,7 @@ class EagerBlobObject final : public BlobObject {

private:
std::unique_ptr<Blob> blob_;
std::unique_ptr<char, std::function<void(char*)>> header_buffer_;
std::shared_ptr<TensorBuffer> tensor_buffer_;
std::size_t blob_body_bytes_;
std::unique_ptr<MemoryAllocator> non_pod_initer_;
Expand Down
Expand Up @@ -52,6 +52,30 @@ Maybe<EagerMirroredTensorImpl*> TensorImpl4Tensor(const std::shared_ptr<Tensor>&
return tensor->mut_eager_mirrored_tensor_impl();
}

class MutMirroredTensorMeta : public TensorMeta {
public:
MutMirroredTensorMeta() : TensorMeta(std::make_shared<const Shape>(), kInvalidDataType) {}
MutMirroredTensorMeta(const MutMirroredTensorMeta&) = default;
MutMirroredTensorMeta(MutMirroredTensorMeta&&) = default;
~MutMirroredTensorMeta() override = default;
};

std::vector<TensorMeta*>* ThreadLocalDefaultOutputMutTensorMetas(int64_t size) {
static thread_local std::vector<MutMirroredTensorMeta> struct_vec;
static thread_local std::vector<TensorMeta*> ptr_vec;
struct_vec.resize(size);
ptr_vec.resize(size);
if (size == 1) {
ptr_vec.at(0) = &struct_vec.at(0); // unfold loop
} else if (size == 2) {
ptr_vec.at(0) = &struct_vec.at(0); // unfold loop
ptr_vec.at(1) = &struct_vec.at(1); // unfold loop
} else {
for (int i = 0; i < size; ++i) { ptr_vec.at(i) = &struct_vec.at(i); }
}
return &ptr_vec;
}

} // namespace
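`ThreadLocalDefaultOutputMutTensorMetas` keeps two `thread_local` vectors so each interpreter call can reuse scratch `TensorMeta` objects without reallocating them. A rough Python analogue using `threading.local` (illustrative names, not OneFlow code):

```python
import threading

_tls = threading.local()


def thread_local_metas(size):
    """Per-thread reusable scratch objects: survivors keep their identity
    across calls, so no per-call allocation is needed."""
    if not hasattr(_tls, "metas"):
        _tls.metas = []
    metas = _tls.metas
    while len(metas) < size:
        metas.append({})   # stand-in for MutMirroredTensorMeta
    del metas[size:]       # shrink, keeping the leading survivors intact
    return metas


a = thread_local_metas(2)
a[0]["shape"] = (4,)
b = thread_local_metas(2)
assert b[0] is a[0]        # same scratch object reused on the same thread
assert b[0]["shape"] == (4,)
```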

Maybe<void> NaiveInterpret(const UserOpExpr& user_op_expr, const TensorTuple& inputs,
Expand All @@ -69,12 +93,16 @@ Maybe<void> NaiveInterpret(const UserOpExpr& user_op_expr, const TensorTuple& in
}
std::shared_ptr<EagerBlobObjectList> output_eager_blob_objects =
std::make_shared<EagerBlobObjectList>(outputs->size());
auto* output_tensor_metas = ThreadLocalDefaultOutputMutTensorMetas(outputs->size());
for (int i = 0; i < outputs->size(); i++) {
if (!outputs->at(i)) {
outputs->at(i) =
const auto& tensor_impl =
std::make_shared<MirroredTensor>(std::make_shared<EagerMirroredTensorImpl>());
}
if (JUST(outputs->at(i)->has_eager_blob_object())) {
outputs->at(i) = tensor_impl;
output_tensor_metas->at(i) = tensor_impl->mut_tensor_meta();
} else {
bool has_eager_blob_object = JUST(outputs->at(i)->has_eager_blob_object());
CHECK_OR_RETURN(has_eager_blob_object);
output_eager_blob_objects->at(i) = JUST(outputs->at(i)->eager_blob_object());
}
lixinqi marked this conversation as resolved.
}
Expand Down Expand Up @@ -109,14 +137,21 @@ Maybe<void> NaiveInterpret(const UserOpExpr& user_op_expr, const TensorTuple& in
return CHECK_JUST(TensorImpl4Tensor(inputs.at(i)))->mut_tensor_meta();
},
[&](int32_t i) -> TensorMeta* {
return CHECK_JUST(TensorImpl4Tensor(outputs->at(i)))->mut_tensor_meta();
// Use the thread-local TensorMeta pointer for inplace outputs,
// and the tensor_impl's TensorMeta pointer otherwise.
return output_tensor_metas->at(i);
Comment on lines +141 to +143 (Contributor Author): For inplace outputs, inference writes into the thread-local TensorMeta and is checked afterwards; for non-inplace outputs, it writes into the actual tensor_impl's TensorMeta as usual.
}));

for (int i = 0; i < output_eager_blob_objects->size(); i++) {
auto* tensor_impl = JUST(TensorImpl4Tensor(outputs->at(i)));
if (!output_eager_blob_objects->at(i)) {
auto* tensor_impl = JUST(TensorImpl4Tensor(outputs->at(i)));
JUST(tensor_impl->InitEagerBlobObject(JUST(outputs->at(i)->device())->mem_case()));
output_eager_blob_objects->at(i) = JUST(tensor_impl->eager_blob_object());
} else {
// output i is inplaced.
// check thread_local TensorMeta and tensor_impl TensorMeta.
CHECK_OR_RETURN(tensor_impl->tensor_meta()->shape() == output_tensor_metas->at(i)->shape());
CHECK_OR_RETURN(tensor_impl->tensor_meta()->dtype() == output_tensor_metas->at(i)->dtype());
Comment on lines +152 to +156 (Contributor Author): For inplace outputs, directly check the inferred result.

}
}

Expand Down
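The reworked loop infers output metas into two different destinations: a fresh output's own meta, or, for an inplaced output, a thread-local scratch meta that is then checked against the tensor's existing meta. A hypothetical Python sketch of that control flow (not the actual interpreter; `infer(i)` stands in for the shape/dtype inference):

```python
def infer_output_metas(outputs, infer):
    """infer(i) -> (shape, dtype). Fresh outputs (None) receive the inferred
    meta directly; pre-existing outputs are treated as inplace, and their
    stored meta is checked against a scratch copy of the inference instead."""
    for i in range(len(outputs)):
        shape, dtype = infer(i)
        if outputs[i] is None:
            outputs[i] = {"shape": shape, "dtype": dtype}   # fresh tensor
        else:
            scratch = {"shape": shape, "dtype": dtype}      # thread-local stand-in
            # inplace: the inferred meta must agree with the existing one
            assert outputs[i]["shape"] == scratch["shape"]
            assert outputs[i]["dtype"] == scratch["dtype"]
    return outputs


outs = [None, {"shape": (2, 3), "dtype": "float32"}]
result = infer_output_metas(outs, lambda i: ((2, 3), "float32"))
assert result[0] == {"shape": (2, 3), "dtype": "float32"}
```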
6 changes: 1 addition & 5 deletions oneflow/core/framework/tensor.cpp
Expand Up @@ -42,11 +42,7 @@ namespace one {
} else {
const auto& impl =
std::make_shared<EagerMirroredTensorImpl>(tensor_meta, requires_grad, is_leaf);
const auto& tensor = std::make_shared<MirroredTensor>(impl);
const auto& outputs = std::make_shared<TensorTuple>();
outputs->push_back(tensor);
JUST(RunEmptyOp(outputs.get()));
Comment (Contributor): It looks like this deletion is what stops MakeTensor from creating a blob_object.

Comment (Contributor Author): Right; later this will be removed entirely, so eager tensors can only be created through op interfaces.

return tensor;
return std::make_shared<MirroredTensor>(impl);
}
}

Expand Down
8 changes: 4 additions & 4 deletions oneflow/core/framework/tensor.h
Original file line number Diff line number Diff line change
Expand Up @@ -106,7 +106,7 @@ class Tensor {
virtual bool has_autograd_meta() const = 0;
virtual void set_autograd_meta(const std::shared_ptr<AutogradMeta>& autograd_meta) = 0;

virtual user_op::TensorDesc* mut_tensor_meta() = 0;
virtual TensorMeta* mut_tensor_meta() = 0;

virtual Maybe<MirroredTensor> AsMirroredTensor() = 0;

Expand Down Expand Up @@ -221,7 +221,7 @@ class Parameter final : public TensorIf<Parameter> {
return tensor_->set_autograd_meta(autograd_meta);
}

user_op::TensorDesc* mut_tensor_meta() override { return tensor_->mut_tensor_meta(); }
TensorMeta* mut_tensor_meta() override { return tensor_->mut_tensor_meta(); }

Maybe<MirroredTensor> AsMirroredTensor() override {
if (const auto& mirrored_tensor = std::dynamic_pointer_cast<MirroredTensor>(tensor_)) {
Expand Down Expand Up @@ -310,7 +310,7 @@ class MirroredTensor final : public TensorIf<MirroredTensor>,
Maybe<EagerMirroredTensorImpl*> mut_eager_mirrored_tensor_impl() override {
return impl_->mut_eager_mirrored_tensor_impl();
}
user_op::TensorDesc* mut_tensor_meta() override { return impl_->mut_tensor_meta(); }
TensorMeta* mut_tensor_meta() override { return impl_->mut_tensor_meta(); }

Maybe<MirroredTensor> MakeEagerTensor(
const std::shared_ptr<vm::EagerBlobObject> eager_blob_object, const Symbol<Device>& device,
Expand Down Expand Up @@ -411,7 +411,7 @@ class ConsistentTensor final : public TensorIf<ConsistentTensor> {
return impl_->tensor_meta();
}

user_op::TensorDesc* mut_tensor_meta() override { return impl_->mut_tensor_meta(); }
TensorMeta* mut_tensor_meta() override { return impl_->mut_tensor_meta(); }

Maybe<MirroredTensor> AsMirroredTensor() override { UNIMPLEMENTED_THEN_RETURN(); }

Expand Down
12 changes: 10 additions & 2 deletions oneflow/core/framework/tensor_impl.cpp
Original file line number Diff line number Diff line change
Expand Up @@ -29,6 +29,7 @@ limitations under the License.
#include "oneflow/core/vm/vm_util.h"
#include "oneflow/core/operator/operator.h"
#include "oneflow/core/control/global_process_ctx.h"
#include "oneflow/core/register/ofblob.h"

namespace oneflow {
namespace one {
Expand Down Expand Up @@ -134,9 +135,16 @@ const std::shared_ptr<const Shape>& EagerMirroredTensorImpl::shape() const {

std::atomic<bool> synced(false);

const auto& shape_ptr = eager_blob_object_->blob_desc().shape_ptr();
CHECK_JUST(PhysicalRun([&](InstructionsBuilder* builder) -> Maybe<void> {
JUST(builder->AccessBlobByCallback(
this, [&synced](uint64_t) { synced = true; }, "const"));
this,
[&synced, &shape_ptr](uint64_t of_blob_ptr) {
const auto* of_blob = reinterpret_cast<OfBlob*>(of_blob_ptr);
of_blob->blob().shape_view().ToShape(const_cast<Shape*>(shape_ptr.get()));
synced = true;
},
"const"));
return Maybe<void>::Ok();
}));

Expand All @@ -146,7 +154,7 @@ const std::shared_ptr<const Shape>& EagerMirroredTensorImpl::shape() const {
});

eager_blob_object_->set_is_shape_synced(true);
return eager_blob_object_->blob_desc().shape_ptr();
return shape_ptr;
}

Maybe<MirroredTensorImpl> EagerMirroredTensorImpl::detach() const {
Expand Down
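`EagerMirroredTensorImpl::shape()` now schedules a callback that copies the device-side `ShapeView` into the host `Shape` and then flips the `synced` flag the caller spins on. The same publish-then-signal pattern can be sketched with a `threading.Event` standing in for the atomic flag (illustrative only, not OneFlow code):

```python
import threading


def sync_shape(schedule_async, shape_holder):
    """Block until an asynchronously scheduled callback has published the
    device-side shape into shape_holder (mirrors the atomic<bool> `synced`
    plus the AccessBlobByCallback closure)."""
    done = threading.Event()

    def on_blob_ready(device_shape):
        shape_holder[:] = device_shape   # copy the shape out of the blob
        done.set()                       # equivalent of `synced = true`

    schedule_async(on_blob_ready)
    done.wait()                          # the C++ side spin-waits on `synced`
    return tuple(shape_holder)


holder = []
schedule = lambda cb: threading.Thread(target=cb, args=([2, 3],)).start()
assert sync_shape(schedule, holder) == (2, 3)
```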
8 changes: 8 additions & 0 deletions oneflow/core/functional/functional_api.yaml
Expand Up @@ -271,6 +271,14 @@
signature: "Tensor ConsistentConstant(*, Shape shape, Scalar value, DataType dtype, Placement placement, SbpList sbp_tuple)"
bind_python: True

- name: "empty"
signature: "Tensor Empty(*, Shape shape, DataType dtype, Device device=None)"
bind_python: True

- name: "consistent_empty"
signature: "Tensor ConsistentEmpty(*, Shape shape, DataType dtype, Placement placement, SbpList sbp_tuple)"
bind_python: True

- name: "zeros_like"
signature: "Tensor ZerosLike(Tensor x)"
bind_python: True
Expand Down