
[pull] develop from PaddlePaddle:develop #116

Merged

pull[bot] merged 31 commits into lsdlab:develop from PaddlePaddle:develop on Apr 27, 2023

Conversation


pull[bot] commented on Apr 27, 2023

See Commits and Changes for more details.



mengziheng and others added 30 commits April 27, 2023 09:49
* add pad op

* add_some_code

* modify some code

* add some code

* add some code

* modify some code

* add some code

* modify some code

* Update composite_backward_api.h

* modify some code

* add some code

* add some code

* add some code
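The pad commits above, together with the Update composite_backward_api.h entry, point at a composite backward rule for a pad op. As a framework-free sketch of the idea (helper names are mine, not Paddle's API): the gradient of a constant pad with respect to its input is simply the slice that removes the padding again.

```python
import numpy as np

def pad_forward(x, paddings, value=0.0):
    """Constant pad; `paddings` is [(before, after), ...] per axis."""
    return np.pad(x, paddings, mode="constant", constant_values=value)

def pad_backward(grad_out, paddings):
    """Gradient of constant pad w.r.t. x: slice the padding back off."""
    slices = tuple(slice(b, g - a) for (b, a), g in zip(paddings, grad_out.shape))
    return grad_out[slices]

x = np.ones((2, 2))
y = pad_forward(x, [(1, 1), (0, 2)])                  # shape (4, 4)
gx = pad_backward(np.ones_like(y), [(1, 1), (0, 2)])  # shape (2, 2)
```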
* update CMake from 3.16 to 3.18

* test

* Update Dockerfile.ubuntu
* [XPU] remove scale_loss in parallel.py

* [XPU] throw Unimplemented when using Reducer
* test,test=develop

* test,test=develop

* test,test=develop

* test,test=develop

* test,test=develop

* test,test=develop

* test,test=develop

* test,test=develop
* update Adamw.py

out.backward()  -> loss.backward()

* Update adamw.py
* modify concat_grad add sum comp rule

* modify opcompat
* add jacobian and hessian in paddle.autograd

* disable unittest 'func_multi_input' due to a bug in the high-order gradient of multiply

* add dimension checks

* add support for 0-D tensor

* change return type from Jacobian to Hessian in hessian function

* refine Jacobian _flatten function for single xs

* refine support for 0-D tensor

* 1. add 'func_multi_input' unittest now that the multiply_grad_kernel bug is fixed.
2. support non-inplace math operations via magic-method overriding.

* add unittest for math operations and raise an error when a 0-D tensor is indexed

* add ndim check on ys and xs according to is_batched, and add one unittest

* refine docstring of jacobian and hessian

* move paddle.incubate.autograd.Jacobian/Hessian to paddle.incubate.autograd.functional.Jacobian/Hessian

* remove single_input unittest case because the numerical differentiation result is wrong

* remove 3 unittests whose numerical (reference) results are wrong

* 1. rename autodiff.py to autograd.py
2. increase TIMEOUT to 100

* cancel modification for functional Jacobian/Hessian

* 1. use tuple as return type instead of list
2. refine docstring

* add more unittest cases to improve coverage

* remove 2 Hessian unittests whose numerical results are wrong

* remove 1 Hessian unittest whose numerical result is wrong

* remove 1 Hessian unittest whose numerical result is wrong

* change unit test to shape check

* correct doc and replace the incubate API with the stable API in _grad
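The real API added above is paddle.incubate.autograd.Jacobian/Hessian. Several of these commits drop unittests whose numerical (finite-difference) reference results were wrong; as a framework-free sketch of that kind of reference computation (function names are mine, not Paddle's), central differences give the Jacobian, and the Hessian is the Jacobian of the gradient:

```python
import numpy as np

def numerical_jacobian(f, x, eps=1e-6):
    """Central-difference Jacobian of f: R^n -> R^m at x; shape (m, n)."""
    x = np.asarray(x, dtype=float)
    f0 = np.asarray(f(x), dtype=float)
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        step = np.zeros_like(x)
        step[i] = eps
        J[:, i] = (np.asarray(f(x + step)) - np.asarray(f(x - step))).ravel() / (2 * eps)
    return J

def numerical_hessian(f, x, eps=1e-4):
    """Hessian of scalar-valued f: the Jacobian of the (numerical) gradient."""
    grad = lambda y: numerical_jacobian(f, y, eps).ravel()
    return numerical_jacobian(grad, x, eps)

# f(x) = x0^2 * x1 at (3, 2): grad = [12, 9], Hessian = [[4, 6], [6, 0]]
x = np.array([3.0, 2.0])
J = numerical_jacobian(lambda v: np.array([v[0] ** 2 * v[1]]), x)
H = numerical_hessian(lambda v: v[0] ** 2 * v[1], x)
```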
…ecessary (#53352)

* [Fix CppExtension Unittest] Change CUDAExtension to CppExtension if necessary

* Temporarily test cpp_extension under GPU

* Split mixed_extension unittest
* support OD level and skip dynamic loss scaling for bf16
* move fused_feedforward Compute function to phi

* add register info

* remove maxfunctor

* move fused_feedforward to phi

* remove sig file

* remove fluid include

* add include

* add include

* add sig file

* add output register info

* fix sig file

* Update fused_feedforward_sig.cc

* fix grad kernel

* update output register info

* fix

* open fused_feedforward static build

* add optional and fix code style

* fix output info for fused attention

* add optional param

* merge
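fused_feedforward, ported to phi above, fuses the transformer feed-forward block (LayerNorm, two linears, an activation, residual add) into one kernel. An unfused, dropout-free NumPy sketch of the math being fused (pre-LayerNorm variant and names are my assumptions, not the kernel's exact signature):

```python
import numpy as np

def feedforward(x, w1, b1, w2, b2, eps=1e-5):
    """Unfused transformer FFN: LayerNorm -> linear1 -> ReLU -> linear2 -> residual."""
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    h = (x - mu) / np.sqrt(var + eps)   # pre-LayerNorm
    h = np.maximum(h @ w1 + b1, 0.0)    # linear1 + ReLU
    return x + (h @ w2 + b2)            # linear2 + residual connection

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
w1, b1 = rng.standard_normal((8, 32)), np.zeros(32)
w2, b2 = rng.standard_normal((32, 8)), np.zeros(8)
y = feedforward(x, w1, b1, w2, b2)  # shape (4, 8)
```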
* support fp16 for maxout op

* format code

* change api

* add test for static float16

* format code

* formatting code

* atol alignment

* experiment-1

* experiment-2

* experiment-3

* format code
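maxout, which gains fp16 support above, takes the elementwise max over groups of channels, so the output has C // groups channels. A minimal NumPy sketch of the op's semantics (the helper name, NCHW layout, and consecutive-channel grouping are my assumptions):

```python
import numpy as np

def maxout(x, groups, axis=1):
    """Maxout over channel groups: C must divide by `groups`; the output
    has C // groups channels, each the elementwise max of its group."""
    x = np.asarray(x)
    c = x.shape[axis]
    assert c % groups == 0, "channel count must be divisible by groups"
    new_shape = x.shape[:axis] + (c // groups, groups) + x.shape[axis + 1:]
    return x.reshape(new_shape).max(axis=axis + 1)

x = np.arange(24, dtype=np.float16).reshape(1, 6, 2, 2)  # N=1, C=6, H=W=2
y = maxout(x, groups=2)  # shape (1, 3, 2, 2)
```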
…ly (#53382)

* [CINN Support 0D-Tensor] CINN supports 0D-Tensor with trick temporarily

* Add unittest
* [static op generation] triangular_solve

* [phi] mv triangular_solve_grad to static_backward

* [phi] fix import

* [phi] mv to ops.yaml, backward.yaml

* fix forward attr

* [phi] fix triangular_solve_grad args
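triangular_solve, given static op generation above, solves A @ x = b where A is triangular, which needs only substitution rather than a general factorization. A sketch of forward substitution for the lower-triangular case (helper name mine):

```python
import numpy as np

def solve_lower_triangular(L, b):
    """Forward substitution: solve L @ x = b for lower-triangular L."""
    n = L.shape[0]
    x = np.zeros(n)
    for i in range(n):
        # everything left of the diagonal is already solved
        x[i] = (b[i] - L[i, :i] @ x[:i]) / L[i, i]
    return x

L = np.array([[2.0, 0.0], [1.0, 3.0]])
b = np.array([4.0, 11.0])
x = solve_lower_triangular(L, b)  # x = [2.0, 3.0]
```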
* Update slim approve list

* Fix id, test=document_fix
…dient (#53250)

[Dy2St] Get grad names when calling append backward, to fix high-order gradient (#53250)
* [phi] move sequence_pool kernel to phi

* mv kernels impl

* fix parameter error

* clean include

* fix compat filename

* [phi] move fluid sequence_pool_grad to phi

* [phi][compat] sig rm GradVarName

* [phi] fix sequence_pool out type

* [phi] rm impl, add const string

* [phi] fix const str

* fix sequence_pooling cmake

* [phi] mv sequence_pooling_test

* [phi] fix grad sig

* [phi] fix sequence_pool is_test error

* [phi] fix sequence_pooling gpu include

* [phi] mv to impl

* [phi] fix SequencePoolFunctor cu include

* [phi] modify out max_index int32_t

* [phi] add pooltype mapping determination

* [phi] fix sequence_pool_sig

* [phi] fix sequence_pool_sig sum

* [phi] try ci

* [phi] fix max_index optional
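sequence_pool, moved to phi above, pools each variable-length sequence in a batch, with sequence boundaries given as LoD offsets; for MAX pooling it also records the argmax row indices (stored as int32 per the "modify out max_index int32_t" commit) for the backward pass. A NumPy sketch of the semantics (helper name and signature are my assumptions):

```python
import numpy as np

def sequence_pool(x, lod, pooltype="SUM"):
    """Pool each sequence; `lod` holds offsets, e.g. [0, 2, 5] means two
    sequences covering rows 0:2 and 2:5 of x."""
    outs, idxs = [], []
    for start, end in zip(lod[:-1], lod[1:]):
        seg = x[start:end]
        if pooltype == "SUM":
            outs.append(seg.sum(axis=0))
        elif pooltype == "MAX":
            # absolute row index of each column's maximum, kept for backward
            idxs.append(seg.argmax(axis=0).astype(np.int32) + start)
            outs.append(seg.max(axis=0))
    out = np.stack(outs)
    return (out, np.stack(idxs)) if pooltype == "MAX" else out

x = np.array([[1.0, 4.0], [3.0, 2.0], [0.0, 5.0]])
out = sequence_pool(x, lod=[0, 2, 3], pooltype="SUM")       # [[4, 6], [0, 5]]
mx, idx = sequence_pool(x, lod=[0, 2, 3], pooltype="MAX")
```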
@pull pull bot added the ⤵️ pull label Apr 27, 2023
…tion (#52093)

* change judgement for DropoutGradGPUKernelDriver

* add UnrollerWithoutVecSize; Loaddata to be refined after this

* pass unittest

* use same unroller with XPU

* BroadcastWithInt64Index

* BroadcastDataLoader template partial specialization

* fix compile errors on ROCm

* PR comment
@pull pull bot merged commit 3474e09 into lsdlab:develop Apr 27, 2023
