deadlock with updates in epics base 3.14.12.6-rc1 and 3.15.5-rc1 #29
Here is how the deadlock happens:
Hi Xiaoqiang,
Hi Bruce, I used simscope.py as the test program. After building pcaspy 0.6.2 or the current HEAD of the repo, start simscope.py in one terminal.
In another terminal, launch the MEDM panel, change the update time to 0.01 s, and click Run.
After a few seconds (in my case), the sinusoidal wave stops updating. Now if you attach gdb to the pcaspy program and execute the command
The official release of EPICS base 3.14.12.6 has fixed this deadlock.
Note that the fix mentioned above only remedies one particular race condition, but there are more... I filed a bug report here
The deadlock you discovered is indeed serious. In theory the deadlock should happen quickly if I do "caput -c " on a fast-changing PV. I used simscope.py as the test program (updating at 100 Hz) and ran "caput -c " at 50 Hz.
However, I could not reproduce it. In what usage do you see this deadlock, in a cagateway?
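The 100 Hz update plus 50 Hz "caput -c" stress test described above can be sketched as follows. This is a hedged sketch, not code from the thread: `stress_put` and `caput_cb` are names I made up, the PV name in the commented example is hypothetical, and actually running it requires EPICS base's `caput` on PATH plus a running simscope.py server.

```python
import subprocess
import time

def stress_put(put, rate_hz=50.0, count=250):
    """Call put(value) `count` times, paced at roughly rate_hz."""
    period = 1.0 / rate_hz
    for value in range(count):
        start = time.monotonic()
        put(value)
        # Sleep off the remainder of this period to hold the target rate.
        time.sleep(max(0.0, period - (time.monotonic() - start)))

def caput_cb(pvname):
    """Build a put function that shells out to EPICS `caput -c`
    (wait-for-callback mode). Assumes caput is on PATH; the PV name
    must be one served by the simscope.py test program."""
    def put(value):
        subprocess.run(["caput", "-c", pvname, str(value)], check=False)
    return put

# Example against a running simscope.py server (PV name is hypothetical):
# stress_put(caput_cb("MTEST:Run"), rate_hz=50.0, count=500)
```

Each `caput -c` waits for the put-callback, so at 50 Hz this keeps a steady stream of client requests hitting the server while the driver thread posts updates at 100 Hz.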
Currently all pcaspy applications assume that they can call
Driver::updatePVs()
from a separate thread, and the main thread will pick up this request and send monitor events to clients. In the 3.14.12.6-rc1 and 3.15.5-rc1 releases, the pcas server received an update to support dynamic-length arrays. The relevant change is the additional call to [chan.getPVI().nativeCount()](epics-base/epics-base@0743417#diff-4eafaaeaa6480eaed1cfd866d63cd0adR886) inside casStrmClient::monitorResponse.
When the deadlock happens, here is the backtrace from both the main thread 1 and the python thread 2:
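The opposite lock-ordering between the two threads can be reduced to a small stdlib-only Python illustration. This is a minimal sketch of the general pattern, not the actual pcas internals: the lock names are stand-ins, and acquire timeouts are used so the demonstration terminates instead of hanging.

```python
import threading

# Illustrative stand-ins (not the real pcas identifiers): the server
# lock held by the main thread while it sends monitor events, and the
# PV lock taken by Driver.updatePVs().
server_lock = threading.Lock()
pv_lock = threading.Lock()

both_holding = threading.Barrier(2)   # both threads hold their first lock
both_tried = threading.Barrier(2)     # both second acquires have finished
results = {}

def thread_body(name, first, second):
    with first:
        both_holding.wait()
        # Try the other thread's lock, as monitorResponse (via the added
        # nativeCount() call) and updatePVs() do, in opposite order.
        got = second.acquire(timeout=0.2)
        if got:
            second.release()
        results[name] = got
        both_tried.wait()   # keep holding `first` until both have tried

# Main thread: server lock first, then it needs the PV lock.
t1 = threading.Thread(target=thread_body, args=("main", server_lock, pv_lock))
# Python thread: PV lock first, then the server lock -- the reverse order.
t2 = threading.Thread(target=thread_body, args=("python", pv_lock, server_lock))
t1.start(); t2.start()
t1.join(); t2.join()

# Neither thread can take its second lock: the classic ABBA deadlock.
print(sorted(results.items()))   # [('main', False), ('python', False)]
```

In the real server there are no timeouts, so both threads block forever, which matches the stalled waveform and the two stuck backtraces seen in gdb.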