
add new flatten op test=develop #25393

Merged · 19 commits merged into PaddlePaddle:develop on Aug 6, 2020

Conversation

@yaoxuefeng6 (Contributor) commented Jul 6, 2020

PR types

New features

PR changes

APIs

Describe

add new flatten op

Following the Paddle 2.0 API standard:
1. Add a flatten API in the manipulation file and a kernel (named 'flatten_contiguous_range') that supports flattening a tensor over a contiguous range of axes, with related unit tests.
2. Add a Flatten class for imperative mode, with related unit tests.
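The semantics described above, collapsing a contiguous range of axes into one, can be sketched with NumPy; the `start_axis`/`stop_axis` parameter names follow the PR's API, but this helper is a hypothetical illustration, not the PR's implementation:

```python
import numpy as np

def flatten_contiguous_range(x, start_axis=0, stop_axis=-1):
    # Hypothetical NumPy sketch of the new flatten's semantics:
    # collapse axes [start_axis, stop_axis] (inclusive) into a single axis.
    ndim = x.ndim
    start = start_axis + ndim if start_axis < 0 else start_axis
    stop = stop_axis + ndim if stop_axis < 0 else stop_axis
    new_shape = x.shape[:start] + (-1,) + x.shape[stop + 1:]
    return x.reshape(new_shape)

x = np.ones([5, 2, 3, 4], dtype='float32')
out = flatten_contiguous_range(x, start_axis=1, stop_axis=2)
print(out.shape)  # (5, 6, 4): axes 1 and 2 (sizes 2 and 3) are merged
```

With the defaults (`start_axis=0`, `stop_axis=-1`) all axes are merged, matching the familiar full flatten.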

@paddle-bot-old (bot) commented Jul 6, 2020

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

@paddle-bot-old (bot) commented Jul 7, 2020

Thanks for your contribution!
Please add test=develop to your commit message to trigger CI and ensure your PR can be merged.
See the Paddle CI Manual for details.

@@ -9808,7 +9809,7 @@ def soft_relu(x, threshold=40.0, name=None):
return out


def flatten(x, axis=1, name=None):
def flatten_2d(x, axis=1, name=None):
Contributor:

This would break existing code once the new version is released.
You can keep the old flatten unchanged and add the new flatten implementation directly under paddle/tensor/manipulation.py (using flatten as the function name in both places).


inp_np = np.ones([5, 2, 3, 4]).astype('float32')
with paddle.imperative.guard():
inp_np = to_variable(inp_np)
Contributor:

use paddle.enable_imperative to enable imperative mode


Args:
x (Variable): A tensor of rank >= axis. A tensor with type float32,
float64, int8, int32, int64.
Contributor:

rank -> number of dimensions, to avoid confusion with the other meaning: the number of linearly independent rows of a matrix.
Or simply drop the requirement on the tensor's number of dimensions?

with type -> with data type

Returns:
Variable: A tensor with the contents of the input tensor, with input \
axes flattened by indicated start axis and end axis. \
A Tensor with type same as input x.
Contributor:

start_axis and stop_axis
type -> data type


Raises:
ValueError: If x is not a variable.
ValueError: If axis is illegal.
Contributor:

variable -> Variable
axis -> start_axis or stop_axis
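The `Raises` contract discussed above (a ValueError for illegal axes) can be illustrated with a minimal argument check; the function and error-message wording here are hypothetical, not the PR's actual code:

```python
def check_flatten_axes(ndim, start_axis, stop_axis):
    # Hypothetical sketch of the validation implied by the Raises section:
    # normalize negative axes, then require both to be in range with
    # start_axis <= stop_axis.
    start = start_axis + ndim if start_axis < 0 else start_axis
    stop = stop_axis + ndim if stop_axis < 0 else stop_axis
    if not (0 <= start < ndim) or not (0 <= stop < ndim):
        raise ValueError("start_axis or stop_axis is out of range")
    if start > stop:
        raise ValueError("start_axis must not be larger than stop_axis")
    return start, stop

print(check_flatten_axes(4, 1, -1))  # (1, 3)
```

Normalizing negative indices before comparing lets callers write `stop_axis=-1` for "last axis" while keeping a single range check.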

out = helper.create_variable_for_type_inference(x.dtype)
x_shape = helper.create_variable_for_type_inference(x.dtype)
helper.append_op(
type='flatten_new',
Contributor:

Even though users never see it, flatten_new is probably not a good name for this newly introduced OP; it will make later maintenance harder to understand.
Consider something like flatten_contiguous_range?

Contributor (Author):

OK.

jzhang533 previously approved these changes Jul 13, 2020

@jzhang533 left a comment: lgtm

cryoco previously approved these changes Jul 14, 2020

@cryoco left a comment: LGTM for no_check_set

@yaoxuefeng6 dismissed stale reviews from cryoco and jzhang533 via 560810d July 14, 2020 12:58

cryoco previously approved these changes Jul 14, 2020

@jzhang533 left a comment: lgtm

jzhang533 previously approved these changes Jul 29, 2020

@jzhang533 left a comment: lgtm (resolved conflict)

out = self._helper.create_variable_for_type_inference(input.dtype)
x_shape = self._helper.create_variable_for_type_inference(input.dtype)
self._helper.append_op(
type="flatten2",
Contributor:

Upgrade this to flatten_contiguous_range.

@yaoxuefeng6 dismissed stale reviews from jzhang533 and cryoco via ee096b1 August 4, 2020 13:22

@jzhang533 left a comment: lgtm

@XiaoguangHu01 left a comment: LGTM

@yaoxuefeng6 merged commit 2246200 into PaddlePaddle:develop Aug 6, 2020
4 participants