Error: unpack_from requires a buffer of at least 4 bytes #7

Closed
saihenjin opened this issue May 25, 2017 · 4 comments


saihenjin commented May 25, 2017

I'm running into an issue with this library. I set up a function and then run it twice, as two concurrent threads.

The function waits for a rising edge on a given Bool tag, then reads from a string tag. It does this continuously (wait for rising edge, read; wait for rising edge, read; ...).

The function works fine on its own, and also works fine as a thread as long as only one thread is running. As soon as I start both threads, I get the error "unpack_from requires a buffer of at least 4 bytes".
Here is the traceback:

Exception in thread 1:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\LogixInterface\main.py", line 90, in readOnTrigger
    triggerFunc(PLC, triggerTag, trigStateOrEdgeType)
  File "C:\LogixInterface\main.py", line 15, in waitForEdge
    oldVal = PLC.Read(tagName)
  File "C:\Python27\lib\eip.py", line 81, in Read
    return _readTag(self, args[0], 1)
  File "C:\Python27\lib\eip.py", line 148, in _readTag
    if not _connect(self): return None
  File "C:\Python27\lib\eip.py", line 439, in _connect
    self.OTNetworkConnectionID = unpack_from('<I', retData, 44)[0]
error: unpack_from requires a buffer of at least 4 bytes

Shortly afterwards, thread 2 also fails with this traceback:

Exception in thread 2:
Traceback (most recent call last):
  File "C:\Python27\lib\threading.py", line 801, in __bootstrap_inner
    self.run()
  File "C:\Python27\lib\threading.py", line 754, in run
    self.__target(*self.__args, **self.__kwargs)
  File "C:\LogixInterface\main.py", line 102, in writeOnTrigger
    triggerFunc(PLC, triggerTag, trigStateOrEdgeType)
  File "C:\LogixInterface\main.py", line 15, in waitForEdge
    oldVal = PLC.Read(tagName)
  File "C:\Python27\lib\eip.py", line 81, in Read
    return _readTag(self, args[0], 1)
  File "C:\Python27\lib\eip.py", line 151, in _readTag
    InitialRead(self, t, b)
  File "C:\Python27\lib\eip.py", line 1059, in InitialRead
    retData = self.Socket.recv(1024)
timeout: timed out

Any idea what's going on here? As I said, a single thread runs with no errors and everything is happy, but as soon as I run two I get the unpack error followed by a timeout.

I've attached my code in case you want to see what I'm trying to do: main.txt
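Roughly, the structure looks like this (a simplified sketch, not my exact code -- the tag names, IP address, and connection setup are placeholders):

import threading
import time
from eip import PLC

comm = PLC()
comm.IPAddress = '192.168.1.10'  # placeholder address

def waitForEdge(plc, tagName):
    # Block until the Bool tag goes False -> True (rising edge).
    oldVal = plc.Read(tagName)
    while True:
        newVal = plc.Read(tagName)
        if newVal and not oldVal:
            return
        oldVal = newVal
        time.sleep(0.05)  # small poll interval

def readOnTrigger(plc, triggerTag, stringTag):
    # Wait for a rising edge, then read the string tag, forever.
    while True:
        waitForEdge(plc, triggerTag)
        print(plc.Read(stringTag))

# Both threads share the single PLC instance.
t1 = threading.Thread(target=readOnTrigger, args=(comm, 'Trigger1', 'String1'))
t2 = threading.Thread(target=readOnTrigger, args=(comm, 'Trigger2', 'String2'))
t1.start()
t2.start()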

@dmroeder (Owner)

Thanks for posting the code you are using; it will be very helpful. I'll dig into this as soon as I get a chance.

@dmroeder (Owner)

First, the instance of PLC() that you pass to your functions is named PLC. I would avoid that, since the module has the same name and the instance shadows it.

Second, put a small delay between starting your threads, say 100 ms. Your second thread is requesting data before the first thread has established the connection, so it ends up trying to establish the connection twice.
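Something like this (just a sketch with placeholder workers -- keep your real functions and arguments):

import time
import threading

def readOnTrigger():
    pass  # placeholder for your actual read worker

def writeOnTrigger():
    pass  # placeholder for your actual write worker

t1 = threading.Thread(target=readOnTrigger)
t2 = threading.Thread(target=writeOnTrigger)

t1.start()
time.sleep(0.1)  # ~100 ms head start so thread 1 can finish establishing the connection
t2.start()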

Let me know how this works for you.

main (2).txt

@saihenjin (Author)

This improves it a little, but I still run into the error occasionally (something like once every 100 starts). The more reliable solution I have found is to not share the PLC object between the threads at all -- give each thread its own instance.
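Roughly like this (again a simplified sketch, not my exact code -- tag names, the IP address, and the connection setup are placeholders):

import threading
import time
from eip import PLC

def readOnTrigger(ip, triggerTag, stringTag):
    comm = PLC()          # this thread's own instance
    comm.IPAddress = ip   # placeholder connection setup
    oldVal = comm.Read(triggerTag)
    while True:
        newVal = comm.Read(triggerTag)
        if newVal and not oldVal:          # rising edge
            print(comm.Read(stringTag))
        oldVal = newVal
        time.sleep(0.05)  # small poll interval

t1 = threading.Thread(target=readOnTrigger, args=('192.168.1.10', 'Trigger1', 'String1'))
t2 = threading.Thread(target=readOnTrigger, args=('192.168.1.10', 'Trigger2', 'String2'))
t1.start()
t2.start()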

Thanks a lot!

@dmroeder (Owner)

That is actually the way I would do it: have an instance per thread. That is how I tested initially, then I worked my way back to something closer to what you were doing.

I think your code will be pretty useful to me. I will do some more stability tests, and it should help me come up with better error handling.
