
[QualcommQnn] add ops #9538

Merged — 3 commits merged into PaddlePaddle:develop on Oct 17, 2022
Conversation

@zhupengyang (Collaborator) commented Oct 11, 2022

Support the following ops:

- fusion_elementwise_mul_activation
- fusion_elementwise_sub_activation
- fusion_elementwise_div_activation
- fusion_elementwise_min_activation
- fusion_elementwise_max_activation
- fusion_elementwise_pow_activation
- instance_norm
- prelu
- arg_max
- arg_min
- flatten
- flatten2
- norm
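For readers unfamiliar with the fused ops named above: a fused elementwise+activation op computes a binary elementwise operation and an activation in a single pass, instead of two separate kernel launches. The following NumPy sketch is purely illustrative — the function name, signature, and supported activations are hypothetical, not Paddle-Lite's actual kernel API:

```python
import numpy as np

# Map of the binary ops covered by the fusion patterns in this PR.
BINARY_OPS = {
    "mul": np.multiply, "sub": np.subtract, "div": np.divide,
    "min": np.minimum, "max": np.maximum, "pow": np.power,
}

def fused_elementwise_activation(x, y, op, act=None):
    out = BINARY_OPS[op](x, y)      # elementwise binary op (NumPy broadcasting)
    if act == "relu":
        out = np.maximum(out, 0.0)  # activation applied to the fused result
    return out

x = np.array([-1.0, 2.0, 3.0])
y = np.array([4.0, -5.0, 6.0])
print(fused_elementwise_activation(x, y, "mul", "relu"))  # [ 0.  0. 18.]
```

Fusing the two steps avoids materializing the intermediate elementwise result, which is why graph passes match patterns like `fusion_elementwise_mul_activation` before handing the graph to the QNN backend.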

@paddle-bot commented Oct 11, 2022

Thanks for your contribution!

@hong19860320 (Collaborator) previously approved these changes Oct 11, 2022:

LGTM

@hong19860320 (Collaborator) approved these changes:

LGTM

@zhupengyang zhupengyang merged commit f8656fd into PaddlePaddle:develop Oct 17, 2022
@zhupengyang zhupengyang deleted the qnn_add_ops_2 branch October 17, 2022 02:13
csy0225 pushed a commit to csy0225/Paddle-Lite that referenced this pull request Oct 20, 2022
[QualcommQnn] add ops (#9538)
zhupengyang added a commit to zhupengyang/Paddle-Lite that referenced this pull request Oct 27, 2022
[QualcommQnn] add ops (#9538)
zhupengyang added a commit that referenced this pull request Oct 31, 2022
* windows ci fix (#9559)

* [NNAdapter] support device data (#9493)

* [QualcommQnn] support exp, log, reduce_mean, reduce_max, reduce_sum, floor (#9505)

* [QualcommQnn] add ops (#9538)

* [NNAdapter] support vit model (#9583)

* [NNAdapter] set output lod according to input lod

* [NNAdapter] slice support EndsTensorList

* [NNAdapter] fuse pass (5d->4d)

* fix cmake cxx flags (#9467)
csy0225 added a commit that referenced this pull request Nov 4, 2022
* [QualcommQnn] add ops (#9538)

* add float64 type to lite

* add float64 kernel for set value

* change the third-party-libs url due to flatbuf update.

* fix include files conflict

* fix bug

* Fix heterogeneous execution errors

* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug

* fix comment

Co-authored-by: zhupengyang <zhu_py@qq.com>
lishicheng1996 pushed a commit to lishicheng1996/Paddle-Lite that referenced this pull request Nov 18, 2022
…le#9580)

* [QualcommQnn] add ops (PaddlePaddle#9538)

* add float64 type to lite

* add float64 kernel for set value

* change the third-party-libs url due to flatbuf update.

* fix include files conflict

* fix bug

* Fix heterogeneous execution errors

* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug

* fix comment

Co-authored-by: zhupengyang <zhu_py@qq.com>
QShiX pushed a commit to QShiX/Paddle-Lite that referenced this pull request Nov 18, 2022
…le#9580)

* [QualcommQnn] add ops (PaddlePaddle#9538)

* add float64 type to lite

* add float64 kernel for set value

* change the third-party-libs url due to flatbuf update.

* fix include files conflict

* fix bug

* Fix heterogeneous execution errors

* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug

* fix comment

Co-authored-by: zhupengyang <zhu_py@qq.com>
mjp9527 pushed a commit that referenced this pull request Nov 22, 2022
* [X86] Add set value op and double data type to framework. (#9580)

* [QualcommQnn] add ops (#9538)

* add float64 type to lite

* add float64 kernel for set value

* change the third-party-libs url due to flatbuf update.

* fix include files conflict

* fix bug

* Fix heterogeneous execution errors

* fix control_flow_op_control_flow_op_shared_inputs_and_outputs_place_sync_pass bug

* fix comment

Co-authored-by: zhupengyang <zhu_py@qq.com>

* [PaddleSpeech] Add OPs and others needed by fastspeech_2 model (#9706)

* [Host] add 3 OPs: set_value, round, share_data
test=develop

* [Host] add expand_v2 OP registration with type kBool
test=develop

* [Arm] add reduce_sum OP Int64 registration and neon implement & add reduce_max OP kInt32 registration
test=develop

* [X86] fix bug in set_value OP
test=develop

* [Extra] move 2 OPs (round, share_data) to extra
test=develop

* [proto] fix a bug
test=develop

Co-authored-by: csy0225 <78470701+csy0225@users.noreply.github.com>
Co-authored-by: zhupengyang <zhu_py@qq.com>
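The `set_value` OP added in the commit above assigns a value into a slice of a tensor along given axes. A minimal NumPy analogue can sketch the semantics — the helper name, signature, and slice handling here are a hypothetical simplification, not the Paddle kernel:

```python
import numpy as np

def set_value(x, value, axes, starts, ends):
    # Assign `value` into the region of `x` selected by per-axis slices.
    out = x.copy()
    idx = [slice(None)] * x.ndim     # default: take everything on each axis
    for ax, s, e in zip(axes, starts, ends):
        idx[ax] = slice(s, e)        # restrict the listed axes to [s, e)
    out[tuple(idx)] = value
    return out

x = np.zeros((2, 3))
print(set_value(x, 1.0, axes=[1], starts=[0], ends=[2]))
```

The real OP also handles steps, negative indices, and tensor-valued inputs; this sketch only shows the basic slice-assignment behavior that the float64 kernel in this series extends to double precision.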