
add activation xpu gelu, test=develop, test=xpu #7527

Merged: 1 commit, Nov 5, 2021

Conversation

@Gradie (Contributor) commented Nov 2, 2021:

No description provided.

@paddle-bot-old (bot) commented Nov 2, 2021:

Thanks for your contribution!

@zhupengyang (Collaborator) left a comment:

Remember to add a unit test!

@Gradie (Contributor, Author) commented Nov 3, 2021:

> Remember to add a unit test!

The gelu unit test is in Paddle-Lite/lite/tests/kernels/activation_compute_test.cc, lines 905-940.

@zhupengyang (Collaborator) left a comment:

LGTM

@zhupengyang zhupengyang merged commit f210c6e into PaddlePaddle:develop Nov 5, 2021
newway pushed a commit to newway/Paddle-Lite that referenced this pull request Jan 5, 2022
zhupengyang pushed a commit that referenced this pull request Jan 17, 2022
* [XPU] change match_matrix_tensor op from old version to refector verison (#7012)

* [XPU] change sequence concat op from old version to refector verison (#6847)

* [XPU] change sequence_reverse api from old version to refector version (#6798)

* [XPU] fix some bugs for transformer (#7014)

* [XPU] Mul quant (#6850)

* [XPU] bugfix on fc max (#7152)

* expand_v2 supports dynamic shape (#7116)

* [XPU] change search_noaligned_mat_mul op to fc_batched_vsl op (#7081)

* [xpu] support qkv-fused weight reuse in scs_tran_match (#7293)

* [XPU] Scs trans match (#7307)

* [XPU] add lod_array_length; argmax support int32 (#7314)

* [XPU] change fc_int16 op to fc_fusion (#7029)

* [XPU] super big ernie support (#7184)

* [XPU] free default workspace of xpu_ctx before setting up a new gm workspace (#7422)

* [XPU] use get_max_ptr function (#7482)

* [xpu] Fc int31 (#7514)

* [xpu] fix continuous encoder fuse and fc max size

* [xpu] refactor fc int31 for KL2

* use get_max_ptr function (#7529)

* add activation xpu gelu (#7527)

* [xpu] more check with multi_encoder pass (#7593)

* [XPU]use get_max_ptr_size in search attention op (#7528)

* [XPU]use get_max_ptr_size in bigru op (#7498)

* [XPU] change sequence_topk_avg_pooling op from old version to refector verison (#7411)

* update xpu api sequence_unpad (#7640)

* update xpu api l2_norm (#7724)

* [XPU] __xpu__resnet_fuse_pass should not match ResNeXt50 (#7824)

* [xpu] support encoder mul shape without equal length and more check (#7753)

* [XPU] use new search_varconv (#7865)

* [XPU] use new sequence_topk_avg_pooling (#7834)

* [XPU] fix xpu memory leak bug for arm arch (#8010)

* [XPU] new new op in mmdnn (#7998)

* [XPU] fix xpu l2_norm bug (#7983)

* [XPU] grnn_cell op  in mmdnn (#8139)

* [XPU] use new concat in mmdnn (#8184)

* [XPU] build 2.10 depending on new xpu_sdk_url: xdnn 2.3.0 and xre 4.0.7.1