
[XPU] add lod_array_length; argmax support int32 #7314

Merged — 1 commit, Oct 21, 2021

Conversation

zhupengyang
Collaborator

test=develop
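For context on what this PR title refers to: a minimal, hedged sketch (plain numpy, not Paddle-Lite's actual kernel code) of the two behaviors named in the title — an argmax that can emit int32 indices in addition to the usual int64, and a `lod_array_length` op that reports how many tensors a tensor array holds. The function names and the `out_dtype` parameter here are illustrative assumptions, not the real XPU kernel signatures.

```python
import numpy as np

def argmax(x, axis=-1, out_dtype=np.int64):
    # Illustrative only: compute indices, then cast to the requested
    # integer dtype (int32 support is what this PR adds on XPU).
    idx = np.argmax(x, axis=axis)
    return idx.astype(out_dtype)

def lod_array_length(tensor_array):
    # Illustrative only: the length of a LoDTensorArray is simply the
    # number of tensors it stores, returned as a 1-element int64 tensor.
    return np.array([len(tensor_array)], dtype=np.int64)

x = np.array([[0.1, 0.9, 0.3],
              [0.7, 0.2, 0.5]], dtype=np.float32)
print(argmax(x, axis=1, out_dtype=np.int32))  # int32 indices: [1 0]
print(lod_array_length([x, x, x]))            # [3]
```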

@paddle-bot-old

Thanks for your contribution!

Collaborator

@hong19860320 hong19860320 left a comment


LGTM

@zhupengyang zhupengyang merged commit f4260a8 into PaddlePaddle:develop Oct 21, 2021
@zhupengyang zhupengyang deleted the xpu_argmax branch October 21, 2021 01:31
newway pushed a commit to newway/Paddle-Lite that referenced this pull request Jan 5, 2022
zhupengyang pushed a commit that referenced this pull request Jan 17, 2022
* [XPU] change match_matrix_tensor op from old version to refactor version (#7012)

* [XPU] change sequence concat op from old version to refactor version (#6847)

* [XPU] change sequence_reverse api from old version to refactor version (#6798)

* [XPU] fix some bugs for transformer (#7014)

* [XPU] Mul quant (#6850)

* [XPU] bugfix on fc max (#7152)

* expand_v2 supports dynamic shape (#7116)

* [XPU] change search_noaligned_mat_mul op to fc_batched_vsl op (#7081)

* [xpu] support qkv-fused weight reuse in scs_tran_match (#7293)

* [XPU] Scs trans match (#7307)

* [XPU] add lod_array_length; argmax support int32 (#7314)

* [XPU] change fc_int16 op to fc_fusion (#7029)

* [XPU] super big ernie support (#7184)

* [XPU] free default workspace of xpu_ctx before setting up a new gm workspace (#7422)

* [XPU] use get_max_ptr function (#7482)

* [xpu] Fc int31 (#7514)

* [xpu] fix continuous encoder fuse and fc max size

* [xpu] refactor fc int31 for KL2

* use get_max_ptr function (#7529)

* add activation xpu gelu (#7527)

* [xpu] more check with multi_encoder pass (#7593)

* [XPU]use get_max_ptr_size in search attention op (#7528)

* [XPU]use get_max_ptr_size in bigru op (#7498)

* [XPU] change sequence_topk_avg_pooling op from old version to refactor version (#7411)

* update xpu api sequence_unpad (#7640)

* update xpu api l2_norm (#7724)

* [XPU] __xpu__resnet_fuse_pass should not match ResNeXt50 (#7824)

* [xpu] support encoder mul shape without equal length and more check (#7753)

* [XPU] use new search_varconv (#7865)

* [XPU] use new sequence_topk_avg_pooling (#7834)

* [XPU] fix xpu memory leak bug for arm arch (#8010)

* [XPU] new op in mmdnn (#7998)

* [XPU] fix xpu l2_norm bug (#7983)

* [XPU] grnn_cell op in mmdnn (#8139)

* [XPU] use new concat in mmdnn (#8184)

* [XPU] build 2.10 depending on new xpu_sdk_url: xdnn 2.3.0 and xre 4.0.7.1