Is it possible to send the next message immediately after the last response is received? #169
Hi, there might be some improvement to be done in the user-space timing management, but I doubt you'll ever get into the millisecond range. Maybe if we could have a condition variable (cvar) set by python-can...? I don't know. Hope that could help |
Yes, by using PythonIsotpSocketConnection. The libraries are all at the latest version. I do not have the code right now, sorry.
Do you have any idea? Thank you |
Careful, like I said, it is really hard to get good timing from user space. I might have an extra thread that could be removed... PRs are welcomed :) |
Yes, I do. Tomorrow, I will give it a try in the office. If it does work better, I will try to open a PR |
Right! Using a notification mechanism has been on my todo list for a while.. I get an issue raised every 3 months or so about someone who reads the bus in parallel for debugging and consumes all the messages. The notifier pattern is the solution. Let me know if you plan to fix the CanStack object to use that. |
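A minimal sketch of the notifier pattern described above, using python-can's Notifier to fan messages out to multiple listeners (interface and channel are assumptions):

```python
import can

bus = can.Bus(interface='socketcan', channel='vcan0')  # assumed interface/channel

stack_reader = can.BufferedReader()  # queue meant for the isotp stack
debug_printer = can.Printer()        # a parallel debug listener no longer steals frames

# A single Notifier thread reads the bus and hands every listener its own copy
notifier = can.Notifier(bus, [stack_reader, debug_printer])

msg = stack_reader.get_message(timeout=1.0)  # consume from the stack's own queue

notifier.stop()
bus.shutdown()
```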
Yes, I did try to replace parts of CanStack with can.Notifier/Listener. Although performance is good on a virtual bus simulation, it is bad on a real bus: the timing using can.Notifier/Listener is worse, because getting a message takes more time than bus.recv(), and it is not stable. Maybe can.AsyncBufferedReader is another way; my trial used the synchronous way. |
In any case, I think the Notifier pattern should be used, at least to fix the issue of people depleting the rx queue with their debug task. As for the timing... I know it's disappointing. I built everything based on the fact that strict timing couldn't be achieved, making the Linux driver the most performant option. I'm not saying it can't be improved, but working from Python in user space with no direct control over the hardware... well, it has its limits. |
After switching to the notifier/listener, the isotp.rxfn method directly uses get_message, getting the message from the Listener's receive queue instead of bus.recv(), which should obviously improve the speed. But disappointingly, it takes more time. Then I tried adding some logger prints in isotp.rxfn to see how it works; with those prints in place it actually got much faster than ever. Here is an example with the CanStack (see the sketch below).
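For illustration, a hedged sketch of that kind of rxfn override, feeding the isotp v1.x transport layer from a Notifier-fed BufferedReader instead of bus.recv() (channel and IDs are assumptions):

```python
import can
import isotp

bus = can.Bus(interface='socketcan', channel='vcan0')  # assumed channel
stack_reader = can.BufferedReader()
notifier = can.Notifier(bus, [stack_reader])

def listener_rxfn():
    # v1.x expects a non-blocking rxfn: poll the listener queue instead of the bus
    msg = stack_reader.get_message(timeout=0.0)
    if msg is None:
        return None
    return isotp.CanMessage(arbitration_id=msg.arbitration_id,
                            dlc=msg.dlc, data=msg.data,
                            extended_id=msg.is_extended_id)

def txfn(isotp_msg):
    bus.send(can.Message(arbitration_id=isotp_msg.arbitration_id,
                         data=isotp_msg.data,
                         is_extended_id=isotp_msg.extended_id))

address = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x123, rxid=0x456)
layer = isotp.TransportLayer(rxfn=listener_rxfn, txfn=txfn, address=address)
```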
|
This module uses sleeps to wait on the data, so there's no synchronicity between the IO and the user thread. You can get lucky and get called right after the data arrives, or unlucky and right before the data arrives. I am doing some reading about this; I have a knowledge gap when it comes to these mechanisms. I just learned what epoll was; the solution is probably in that direction. I will switch to the notifier pattern very soon anyway, then will investigate what I can do to improve the speed. These problems are reported too often. |
Hi, I will try to improve the timings, but that may require breaking some interfaces of the python can-isotp module. That will yield v2.0 |
Just to let you know that I got very promising results after refactoring the codebase a bit. I'm locally working on isotp v2.x, which will offer a threaded version of the transport layer that does a blocking read. This performs MUCH better. Still under development in my free time. Left capture: isotp v1.x with sleep(). |
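A rough sketch of the v2.x idea, assuming the threaded convention where rxfn receives a timeout and is allowed to block (channel is an assumption):

```python
import can
import isotp

bus = can.Bus(interface='socketcan', channel='can0')  # assumed channel

def blocking_rxfn(timeout):
    # Called from the stack's dedicated rx thread: block directly on IO and
    # wake as soon as a frame arrives, instead of sleep()-based polling
    msg = bus.recv(timeout)
    if msg is None:
        return None
    return isotp.CanMessage(arbitration_id=msg.arbitration_id,
                            dlc=msg.dlc, data=msg.data,
                            extended_id=msg.is_extended_id)
```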
Hello @pylessard, thank you for your time. Sorry, I do not have too much time to debug it currently. Thanks, |
Also, I tried to find out why the previous isotp v1.9 takes a bit more time between two diagnostic requests. |
If you want to test the new logic, try these branches. If you use a simple CanStack and the standard Connection to support it, it should be quite seamless. |
Yes, I tried the two branches. Recently I have not had much time for it. Maybe I will try again later, after it is released. |
The improvement I made reduced the latency between the reception of a flow control message and the next message. I just pushed to the reduce-context-switch-latency branch. I added a parameter that allows for a small wait window to send a new frame before going back to a blocking read after completing a frame reception. I've locally been able to send multiple requests in a ping/pong manner between 2 stacks running in their own thread, and the delay for the new frame to start going out was <0.5 ms. Give it a try please; the default value of the param should be good for you. |
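A minimal sketch of that ping/pong measurement between two stacks over a Linux vcan interface, assuming the v2.x-style threaded CanStack API (IDs and counts are placeholders):

```python
import time
import can
import isotp

# Two sockets on the same vcan interface; each sees the other's frames
bus_a = can.Bus(interface='socketcan', channel='vcan0')
bus_b = can.Bus(interface='socketcan', channel='vcan0')

addr_a = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x123, rxid=0x456)
addr_b = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x456, rxid=0x123)

stack_a = isotp.CanStack(bus_a, address=addr_a)
stack_b = isotp.CanStack(bus_b, address=addr_b)
stack_a.start()  # threaded mode: rx/tx run in their own threads
stack_b.start()

t0 = time.perf_counter()
for _ in range(100):
    stack_a.send(b'ping')
    echoed = stack_b.recv(block=True, timeout=1)
    stack_b.send(echoed)  # answer immediately, ping/pong style
    stack_a.recv(block=True, timeout=1)
print('avg round trip: %.3f ms' % ((time.perf_counter() - t0) * 10))  # t/100*1000

stack_a.stop(); stack_b.stop()
bus_a.shutdown(); bus_b.shutdown()
```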
Hello @pylessard, I suspect it may be caused by the GIL (my guess). Regards, |
You're using the CanStack object, right? I'd like to have timestamps, DLCs, IDs, and the first data byte. |
And if you could enable isotp debug logging, that would be very helpful as well. The logger name is "isotp". Thank you for your help. |
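Enabling that logger is a one-liner with the standard logging module (the logger name "isotp" comes from the comment above):

```python
import logging

logging.basicConfig(level=logging.DEBUG)            # emit records to stderr
logging.getLogger('isotp').setLevel(logging.DEBUG)  # the library's logger
```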
Thank you, very interesting. What I notice is that the delay is introduced before the next request is enqueued, meaning that the delay is introduced at a higher level than the isotp stack. Do you know why this happens? Do you have a short snippet of high-level code that shows how the UDS requests are sent? |
```python
def send(self, payload):
    self.conn.empty_rxqueue()
    self.conn.send(payload)

def recv(self, IggPending=True, timeout_val=1):
    done_receiving = False
    while not done_receiving:
        try:
            done_receiving = True
            data = self.conn.wait_frame(timeout=timeout_val, exception=False)
            if not data:
                return None
            # Ignore the "response pending" negative response (0x7F <sid> 0x78):
            # extend the timeout and keep waiting
            if ((data[0] & 0xFF == 0x7F) and (data[2] & 0xFF == 0x78)) and IggPending:
                timeout_val += 5
                done_receiving = False
            else:
                return data
        except KeyboardInterrupt:
            print("KeyboardInterrupt")
            self.close()
        except Exception as e:
            print(e)
            return None
```
Use the send()/recv() methods to send or receive messages. Sorry, I do not have the latest code, but it is something like:
```python
send(bytearray([0x02, 0x10, 0x02]))
msg = recv()
if msg and msg[0] == 0x50:
    return True, msg
return False, msg

send(bytearray([0x02, 0x10, 0x03]))
msg = recv()
if msg and msg[0] == 0x50:
    return True, msg
return False, msg
```
First, I suspected it was caused by the GIL, and I tried sys.setswitchinterval(1e-6), but nothing seemed to improve. |
Thanks, will review tonight. Cheers |
So, I decided to turn toward Stack Overflow: here. I have implemented something that works even better than a spinwait after the reception of a frame. On my side, I get very good timings by making 2 processes talk together via a vcan interface. You may want to give it a try, same 2 branches. I still think that something at a high level on your side introduces that delay, but I don't see it in your code snippet. Could you have a thread in your Python application that never sleeps? |
I tried the latest branches as you said. From my previous experience, vcan does have much better performance than the real bus; it seems to send/receive directly on the same queue. I also ran another trial, (isotp_v2 + udsoncan.client) vs (isotp_v2 + udsoncan.connection + my send()/recv() methods), on a real CAN bus.
And here is the result
|
Sorry, I am stuck on Ethernet work these days and recently have had no time to validate it. But once I try it, I will share the news. Thank you |
Sure, no issue. I will go forward with the release of 2.x anyhow. |
These 2 should have better performance. I suggest using a NotifierBasedCanStack. Will close as I believe it is fixed; reopen if you think it deserves more work. |
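A minimal sketch of the suggested setup, assuming isotp v2.x (interface, channel, and IDs are placeholders):

```python
import can
import isotp

bus = can.Bus(interface='socketcan', channel='can0')
notifier = can.Notifier(bus, [])  # the stack registers its own listener

addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x7E0, rxid=0x7E8)
stack = isotp.NotifierBasedCanStack(bus=bus, notifier=notifier, address=addr)
stack.start()

stack.send(b'\x10\x03')  # example request payload
response = stack.recv(block=True, timeout=1)

stack.stop()
notifier.stop()
bus.shutdown()
```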
Resurrecting this thread since it seems recent and relevant. I'm encountering some performance issues on large block transfers, and I believe it is due to the behaviour noted in the screenshot below: monkey-patching the connection's empty_rxqueue to perform a no-op results in fast transfers with <1 ms response time between transfers, but that is of course a hack and prone to issues. I had followed the advice on this thread regarding NotifierBasedCanStacks etc., and digging into the call stack led me to the can-isotp implementation. I'm not too familiar with the handshaking that goes on between the rx threads and the caller, but it seems like a symmetrical event to i.e.
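For concreteness, a hedged sketch of the described workaround, assuming conn is a udsoncan PythonIsoTpConnection (a hack, as noted; properly fixed later in isotp v2.0.4):

```python
# Workaround only: make empty_rxqueue a no-op so large transfers stay fast.
# Do not keep this in production code.
conn.empty_rxqueue = lambda: None
```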
|
Will investigate; I have not tested the performance when calling empty_rxqueue. Maybe I forgot something. |
In case it helps, my specific use case is looping on transfer_data after performing a request_download. Here's some example (untested) code to work with:
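A minimal sketch of that pattern, assuming a udsoncan Client over an existing connection conn (address, size, and firmware payload are placeholders):

```python
import udsoncan
from udsoncan.client import Client

fw = open('firmware.bin', 'rb').read()  # placeholder payload

with Client(conn) as client:  # conn: an existing udsoncan Connection
    memloc = udsoncan.MemoryLocation(address=0x08000000, memorysize=len(fw))
    resp = client.request_download(memory_location=memloc)
    # maxNumberOfBlockLength includes the SID and sequence byte, hence the -2
    block_size = resp.service_data.max_length - 2
    seq = 1
    for offset in range(0, len(fw), block_size):
        client.transfer_data(seq, fw[offset:offset + block_size])
        seq = (seq + 1) & 0xFF  # the sequence counter wraps after 0xFF
    client.request_transfer_exit()
```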
|
@tkeairns: You put your finger on the issue. When I wrote the v2.x version of empty_rxqueue... Try this branch please: improve-empty-rxqueue-performance. If it works, I will merge and release right away. |
I released isotp v2.0.4 that should solve this. |
You updated before I had a chance to test; I tested the latest release and it looks good. Thanks for the speedy fix! |
Sorry for that. I never know if I'll get an answer or not and I got impatient :) Cheers |
Dear Developer,
I am using this library and python-can to write a flashing program, which is expected to flash as fast as possible.
I set the sleep_time to 0, and stmin to 0 too.
When one request is answered by the ECU (T0), the program sends the next request immediately (T1), without any delay.
But what I observe is that there is always a time gap of about 3 ms (T1-T0) before the message can be sent out.
Please see the CAN log below.
How could I narrow this 3 ms time gap?
I focused on the process_tx method in ISOTP and suspect some logic there wastes too much time, because I find that when I send a request that has a multi-frame response, upon receiving the FirstFrame of the response the program is able to send out the FlowControl frame to the ECU immediately (within 1 ms).
Many thanks in advance
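A minimal sketch of the timing configuration described above, assuming the isotp v1.x API current at the time of this report (interface, channel, and IDs are placeholders):

```python
import can
import isotp

bus = can.Bus(interface='socketcan', channel='can0')
addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x7E0, rxid=0x7E8)

params = {
    'stmin': 0,      # request no separation time between consecutive frames
    'blocksize': 0,  # no flow control pause between blocks
}
stack = isotp.CanStack(bus, address=addr, params=params)
stack.set_sleep_timing(0, 0)  # v1.x: remove the worker loop's idle sleep
```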