
improve unique op #26537

Merged 5 commits into PaddlePaddle:develop from the unique branch on Aug 25, 2020
Conversation

@zhangting2020 (Contributor) commented Aug 21, 2020

PR types

Function optimization

PR changes

OPs

Describe

improve unique op


@paddle-bot-old commented:

Thanks for your contribution!
Please wait for the result of CI first. See the Paddle CI Manual for details.

zhangting2020 changed the title from "add unique_v2 op" to "improve unique op" on Aug 22, 2020
wangchaochaohu (Contributor) previously approved these changes Aug 22, 2020:

LGTM

zhiqiu (Contributor) previously approved these changes Aug 24, 2020:

LGTM for op_function_generator.cc

luotao1 (Contributor) left a comment:

LGTM

Inline review on the docstring example:

    np_inverse = inverse.numpy() # [1 2 2 0 3 2]
    np_counts = counts.numpy()   # [1 1 3 1]

    x_data = np.array([[2, 1, 3], [3, 0, 1], [2, 1, 3]])

A reviewer (Contributor) commented:

    x = paddle.to_tensor(x_data)

zhangting2020 (Author) replied:

I will fix the documentation issue in a separate PR.
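For reference, a runnable version of the docstring snippet under discussion. This is a sketch assuming the Paddle 2.0-style dygraph API (paddle.to_tensor plus paddle.unique with return_inverse/return_counts flags); the 1D input array is an assumption chosen so the outputs match the commented values quoted above:

    import numpy as np
    import paddle

    paddle.disable_static()  # needed on 2.0 beta; dynamic mode is the default in later releases

    x_data = np.array([2, 3, 3, 1, 5, 3])
    x = paddle.to_tensor(x_data)  # the conversion the reviewer pointed at
    out, inverse, counts = paddle.unique(x, return_inverse=True, return_counts=True)

    np_inverse = inverse.numpy()  # [1 2 2 0 3 2]
    np_counts = counts.numpy()    # [1 1 3 1]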

Inline review on the Python wrapper:

        dtype=x.dtype, stop_gradient=True)
    inverse = helper.create_variable_for_type_inference(
        dtype=core.VarDesc.VarType.INT64, stop_gradient=True)
    outputs = {"Out": out, "Index": inverse}

A reviewer (Contributor) asked:

The names in "Index": inverse do not correspond, which looks odd. Is there a particular reason for it?

zhangting2020 (Author) replied:

The new unique API can return 4 outputs: out, indices, inverse, and counts.
The old unique API returns 2 outputs: out and index (which is actually the inverse, although its values are also indices).

Because the old op named the inverse output "Index", the name cannot be changed for compatibility. This is called out explicitly in the OpMaker; see unique.cc line 103:

AddOutput("Index",  "Equivalent to inverse in numpy.unique, ..."

So the "Index" output here corresponds to numpy.unique's unique_inverse output.
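To make that mapping concrete, a minimal numpy sketch; the input array is an assumption chosen so the values line up with the docstring comments quoted earlier in this thread:

    import numpy as np

    x = np.array([2, 3, 3, 1, 5, 3])
    out, indices, inverse, counts = np.unique(
        x, return_index=True, return_inverse=True, return_counts=True)

    # out:     [1 2 3 5]      the unique values
    # indices: [3 0 1 4]      first position of each unique value in x
    # inverse: [1 2 2 0 3 2]  index into out for each element of x
    # counts:  [1 1 3 1]      occurrences of each unique value

    # The legacy op's "Index" output holds what numpy calls unique_inverse,
    # hence outputs = {"Out": out, "Index": inverse} in the wrapper.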


Inline review on the attrs dict:

    helper = LayerHelper('unique', **locals())
    attrs = {
        'dtype': int(core.VarDesc.VarType.INT32),

A reviewer (Contributor) commented:

numpy's return values default to INT64. Alternatively, add a dtype parameter with 'int64' as the default.

zhangting2020 (Author) replied:

  • The old unique interface under fluid has a dtype parameter: paddle.fluid.layers.unique(x, dtype='int32') (API link). In the C++ OpMaker this attribute is required.
  • The 2.0 beta unique API removed the dtype parameter. To keep the op backward compatible, the attribute cannot be deleted from the OpMaker, so the new unique API under tensor still has to set the dtype attribute (it is required). It is never actually used, however, so the int32 set here has no effect.
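Illustrating the reviewer's point, a quick check of numpy's default index dtype; note that numpy actually uses the platform's intp type, which is int64 on typical 64-bit systems:

    import numpy as np

    x = np.array([2, 3, 3, 1, 5, 3])
    out, inverse = np.unique(x, return_inverse=True)
    print(inverse.dtype)  # int64 on typical 64-bit platforms (numpy's intp)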

swtkiwi (Contributor) left a comment:

LGTM

zhangting2020 merged commit 0a895bc into PaddlePaddle:develop on Aug 25, 2020
zhangting2020 deleted the unique branch on August 25, 2020 04:59