[OpenCL] fix opencl multi-thread CL_INVALID_CONTEXT #10529

Merged

hong19860320 merged 1 commit into PaddlePaddle:develop from wasupandceacar:fix_opencl_multi_thread_cl_invalid_context on Jul 24, 2024
Conversation
Thanks for your contribution!

@zhupengyang @hong19860320 Still nobody looking at this.

@zhupengyang @hong19860320 Is anyone still maintaining this?

@hong19860320 @zhupengyang Two weeks and still no reply?

@hong19860320 @zhupengyang
zhupengyang approved these changes on Jul 24, 2024
LGTM

I suggest testing the case where two threads run two OpenCL models at the same time. I vaguely remember that @zhaoyang-star added this code earlier to solve that problem.
hong19860320 pushed a commit that referenced this pull request on Aug 26, 2024
* [Doc] fix doc/index.rst (#10530)
* [PASS] xpu__fc_fuse_pass batchnorm fusion (#10532): modify xpu_fc_pass to fuse batchnorm into xpu_fc; add int8 support for bn+fc fusion; fix code style; remove unused code
* [OpenCL] Fix multi-thread CL_INVALID_CONTEXT (#10529)
* [Metal] Fix project build metal shader bugs (#10544)
* [Metal] Fix concat error, fix fetch typo error (#10541)
* [Metal] Fix build_xcode.sh error, add building-with-metal option (#10542)
* [XPU] Add greedy L3 tune strategy (#10546)
* Support search GAN model (#10537): add pixel_unshuffle kernel with tests; enable fill_constant calc offline on ARM and OpenCL; enable reshape_calc_offline_pass on ARM and OpenCL
* [Doc] Update python_demo.md (#10555)
* [OpenCL] Fix OpenCL init bugs; do not create OpenCL when the user does not use it (#10557): create the OpenCL runtime and context lazily on first use instead of at static load time; skip creating them entirely for ARM-only models; enable OpenCL automatically when an OpenCL model is loaded or the environment check passes; keep the build option on by default so users notice no change, and add an API for advanced users to disable OpenCL and reduce memory

Co-authored-by: cmcamdy <1027740945@qq.com>, chenhuan09 <chenhuan09@baidu.com>, GaoYuYang <gaomeyy@gmail.com>, wasupandceacar <wasupandceacar@gmail.com>, newway <237745+newway@users.noreply.github.com>, xiebaiyuan <xiebaiyuan@139.com>, Kayzwer <68285002+Kayzwer@users.noreply.github.com>
PR devices
OpenCL
PR types
Bug fixes
PR changes
Backends
Description
Fixes a CL_INVALID_CONTEXT error that occurs when a model runs predictions from multiple threads.
Previous issues: #9931 #10267
Caused by Paddle-Lite/lite/backends/opencl/cl_runtime.cc, line 27 (commit bd60e69).

Using `local_thread` here gives each thread its own OpenCL context, as noted in #9931 (comment). It is also not a good solution in general, as discussed in #7888 (comment), because it causes other bugs.

New solution: use a single instance, so that all threads share one global `CLRuntime`.