OPC-N2 remains stuck in an infinite wait - advice #73
Hi @teokino. Thanks for raising this issue. I haven't seen or heard of this happening before, and I'm not really sure what could be causing it under the hood. As far as I'm aware, this shouldn't be possible: SPI is timing-based, and it should immediately feed you back nonsensical data if something is truly wrong. Before I dig too deep into it, could you tell me the version of the library you're using?
Thank you @dhhagan. My setup is an RPi 3 with an OPC-N2 connected through the USB-SPI device, running Python 3.5.3. I have py-opc 1.5.0 and firmware 18.2 alpha (Alphasense OPC-N2v18.2).
Since you are using the USB-SPI device, please state your pyusbiss version.
Hi @DancingQuanta. My pyusbiss version is 0.2.0.
@teokino Did you install the package (py-opc) via pip, or through GitHub? We've made some changes recently that aren't yet reflected in the version you're using (1.5.0). You could give the most recent version a try if you like; otherwise it will be available through PyPI within the next few days.
I used pip, but updating the libraries is impossible: unfortunately I do not have an internet connection and I cannot access the SD card. The RPi is in a hard-to-reach place that I only reach over remote desktop through an ethernet cable. For this reason I was wondering whether there is a way to work around the problem. I was considering using threads, comparing the timestamp of the last captured data with the current time on the RPi.
On second thought, I could download the file from GitHub and install it offline remotely using pip. In that case the problem might be resolved.
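The thread-plus-timestamp idea mentioned above could be sketched roughly like this. This is a hypothetical sketch, not part of py-opc: the `Watchdog` class, its method names, and the threshold value are all assumptions.

```python
import threading
import time


class Watchdog:
    """Track the time of the last successful sample and report
    when it is older than a threshold (hypothetical helper)."""

    def __init__(self, threshold_s=60.0):
        self.threshold_s = threshold_s
        self._last = time.monotonic()
        self._lock = threading.Lock()

    def beat(self):
        # Call this after every successful sensor read.
        with self._lock:
            self._last = time.monotonic()

    def is_stale(self):
        # True if no sample has arrived within threshold_s seconds.
        with self._lock:
            return time.monotonic() - self._last > self.threshold_s
```

A separate monitor thread could poll `is_stale()` and escalate (log, alert, or kill and restart the collector process) when it returns True. Note that simply re-launching the blocked call in a new thread may not unblock the stuck SPI transaction; killing and restarting the process is what is reported to restore operation later in this thread.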
@teokino That would probably be the easiest solution. v1.6.0 was just released!
Thank you! I will try :)
Running a Python application for a long time opens a window in which almost anything could happen, so your experience is interesting. It would be useful to describe how the function get_data_points is called.
Another question: are you able to find out whereabouts in alpha.pm() the execution gets stuck?
The function get_data_points is called in a while-True loop at a specific sampling period (sampling_period = 5 seconds), then the data are stored in an InfluxDB database:

```python
try:
    while True:
        datapoints, alpha = get_data_points(alpha)
        try:
            bResult = client.write_points(datapoints)
            print("Write points {0} Bresult:{1}".format(datapoints, bResult))
        except Exception:
            # Ignore transient database write errors
            pass
        # Wait for the next sample
        time.sleep(sampling_period)
# Run until keyboard Ctrl-C
except KeyboardInterrupt:
    print("Program stopped by keyboard interrupt [CTRL-C] by user.")
```

I don't think the problem is related to long-running Python data collection: I have another two Python scripts collecting environmental data, one capturing temperature, humidity and pressure every 0.5 seconds, and another reading a PMS5003 every 30 seconds. Regarding the second question, I am quite sure that Python is stuck at alpha.pm(). In the code I placed some print-outs to identify every stage (I know this is not a very elegant way to work, but it is effective in the development phase). After 20 days, looking at the shell, I saw that the last print-out was just before values_PM = alpha.pm().

Okay, so it looks like we don't know where in alpha.pm() the issue is. Are you happy to keep running your current system and monitor it weekly? Also, are you able to edit a file on your system? If so, you could try putting a print statement in the middle of alpha.pm() in site-packages; if it happens again, we will then know roughly where the issue may be.
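A generic way to stop a blocking read like alpha.pm() from hanging the loop forever is a SIGALRM-based timeout. This is only a sketch: it is Unix-specific, works only in the main thread, and the `read_with_timeout` / `SampleTimeout` names are assumptions, not py-opc API.

```python
import signal


class SampleTimeout(Exception):
    """Raised when a sensor read takes longer than the allowed time."""


def _on_alarm(signum, frame):
    raise SampleTimeout()


signal.signal(signal.SIGALRM, _on_alarm)


def read_with_timeout(read_fn, timeout_s=10):
    # Arm a one-shot alarm; if read_fn blocks for more than timeout_s
    # seconds, SIGALRM fires and SampleTimeout is raised inside the call.
    signal.alarm(timeout_s)
    try:
        return read_fn()
    finally:
        signal.alarm(0)  # always disarm the alarm
```

In the loop above one could then call `values_PM = read_with_timeout(alpha.pm)` and catch `SampleTimeout` to log the event and retry, instead of waiting indefinitely. Note that `signal.alarm` accepts only whole seconds.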
Hi @DancingQuanta! Unfortunately it is impossible for me to access the filesystem: all packages were installed by the company's computer technicians at my suggestion. As for the monitoring timeline, what do you mean by "Are you happy to keep running your current system and monitor weekly"?
Monitoring weekly means checking the data on the same day each week.
Hmm, the same day is difficult, because it depends on the tasks of the company where the sensor is installed.
Okay. Let's hope 1.6.0 helps.
Hello to all,
thanks to your suggestions I have carried out continuous monitoring using the OPC-N2. For two months the data were collected without any kind of problem.
Recently I realized that for 10 days the code execution had been blocked waiting for data. The connection had been established, but execution was stuck at values_PM = alpha.pm(). Once the process was killed and restarted, everything went back to working. I would like to ask for advice on how to proceed. In your opinion, is this a case for working with threads? Having the timestamp of the last data collected available, I could compare it with the current time; if the difference exceeds a certain threshold, I could re-launch the command in a new thread.
In your opinion, is this the right way to operate?
Below is an extract of the code used.