Performance #10
Comments
Yes, I've noticed that there are significant performance constraints in the processing of the "client" side of EtherNet/IP CIP requests. I haven't looked into it in too much detail. I've noticed that PyPy seems to help, but it really shouldn't be that slow. I'll take a look, too, and see what I can find out. Let's take a look at the mix of requests you're trying to parse, and get this code tightened up for you; you should be able to use this efficiently in production. I'm on my way back from Munich over the next 36 hours, so I may not be immediately responsive...
Sounds great! I won't have access to a PLC until Monday anyway, so it'll have to wait if you want full info. But I've essentially been using the "getattr.py" script in server/enip to do the testing, and the tags were a mixed bunch of single (non-array) types.
OK, I finally have some time to sit down with a PLC. It's a CompactLogix. I'm running this command (these are all boolean tags):
... and the output, minus the very long slab of individual tag values, is:
I came up with the numbers for depth and multiple by experimentation; larger numbers either gave errors or did not increase performance noticeably. I am wondering a bit about the -m number; normally we're able to use a request size of almost 500 bytes, but I don't know if this corresponds exactly to that number. Running the same command with pypy (only increasing the -r to 10000 to account for warm-up time) gives a much better result:
However, in both tests the CPU is pegged at 100%, suggesting that the bottleneck is not in the network or the PLC. A similar test using a library written in C (https://github.com/EPICSTools/ether_ip.git) gives performance roughly at the pypy level, while causing no measurable CPU load. Tell me if you need more details about any of this, or if there are other interesting tests to perform.
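For reference, the pipelined, Multiple Service Packet style of reading discussed above can also be driven from cpppo's Python client API rather than the command line. The sketch below is illustrative only: the host address and tag names are made up, and the exact parameter spellings (e.g. `multiple=` for the request size limit and `depth=` for the number of requests in flight) should be checked against the client module's documentation.

```python
from cpppo.server.enip import client

host = "192.168.1.10"                       # hypothetical CompactLogix address
tags = ["Bool_%d" % i for i in range(100)]  # hypothetical boolean tag names

with client.connector(host=host) as conn:
    # Pipeline the reads, packing several requests into each
    # Multiple Service Packet and keeping a few packets in flight.
    for idx, descr, op, reply, status, value in conn.pipeline(
            operations=client.parse_operations(tags),
            multiple=500,   # approximate request size limit, in bytes
            depth=2):       # requests outstanding on the wire
        print("%s: %r" % (descr, value))
```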
Interesting. Well, the pypy test tells us that we can probably do significantly better. I've been working on a branch 'feature-performance' in the cpppo Git repo.
I can get up to 300 TPS using CPython 2/3, and up to 700 TPS using pypy now, on my i7 Mac. There is still work to do, but performance is probably no longer at the top of the priority list...
Squashed commit messages:

1. Initial foray into support for generic CIP Service Code requests
2. No requirement for existence of .multiple segment in failed responses
3. Correct handling of service_code operations in client connector I/O
4. HART Requests almost working
   - Cannot derive HART from Logix; service codes overlap
5. Initial working HART I/O card request
6. Support intermixed Tags and already-parsed operations in parse_operations
7. Test and decode the Read primary variable response, however:
   - Still broken; the CIP Encapsulation path is still supposed to be to the Connection Manager @0x06/1! The 0x52 Route Path is Port 1, Address 2, and the message path should be to @0x035D/8.
8. Success. Still needs cleanup
9. Further attempts to refine HART pass-thru
   - HART I/O card is not responding as defined in documentation
10. Cleanups for python3, source analysis, unit tests
11. Attempt to parse Read Dynamic Variables reply; 3 unrecognized bytes?
12. Update to attempt to parse real HART I/O card response
    - Minimal Read Dynamic Variables status response? Not successful
    - Implement minimal simulated pass-thru Init/Query, HART commands 1, 2, 3
    - Minor changes to client.py Send RR Data, to have timeout and ticks compatible with RSLogix; no difference
While investigating the possibility of using cpppo in "production" to read thousands of tags as quickly as possible, I've noticed that performance is limited by CPU usage rather than by the network or the PLC. Some testing with PyPy showed a significant increase in throughput (similar to the library we're using now, which is written in C), but throughput still appears to be CPU-limited.
What are your thoughts about performance? Have you considered options like Cython for optimizing "bottlenecks"?
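Before reaching for Cython, it is worth pinning down exactly where the CPU time goes. As a generic sketch (not taken from this thread), Python's built-in cProfile can rank the hottest functions in the client read loop; here `read_tags` is a hypothetical placeholder for whatever cpppo client code is being benchmarked.

```python
import cProfile
import pstats

def read_tags():
    """Hypothetical placeholder: the cpppo client read loop under test."""
    ...

# Profile one run of the read loop and save the raw stats to a file.
cProfile.run("read_tags()", "client.prof")

# Print the 20 functions with the highest cumulative CPU time.
stats = pstats.Stats("client.prof")
stats.sort_stats("cumulative").print_stats(20)
```

If the hot spots turn out to be a few small parsing functions, targeted optimization (or Cython/PyPy) is plausible; if the time is spread thinly across the state machinery, algorithmic changes tend to pay off more.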