
Is it possible to send the next message immediately after the last response is received? #169

Closed
xRowe opened this issue Aug 24, 2023 · 36 comments


@xRowe

xRowe commented Aug 24, 2023

Dear Developer,

I am using this library together with python-can to write a flashing program, which should flash as fast as possible.
I set sleep_time to 0, and stmin to 0 as well.

When a request is answered by the ECU (T0), the program sends the next request immediately (T1), without any delay.
But what I observe is that there is always a gap of about 3 ms (T1 - T0) before the message actually goes out.
Please see the CAN log below:

Timestamp: 1692882442.954155        ID: 0733    S Rx                DL:  8    02 10 03 00 00 00 00 00     Channel: 0
Timestamp: 1692882442.960545        ID: 0633    S Rx                DL:  8    06 50 03 00 32 01 f4 00     Channel: 0
Timestamp: 1692882442.963371        ID: 0733    S Rx                DL:  8    02 10 01 00 00 00 00 00     Channel: 0
Timestamp: 1692882442.970310        ID: 0633    S Rx                DL:  8    06 50 01 00 32 01 f4 00     Channel: 0
Timestamp: 1692882442.973529        ID: 0733    S Rx                DL:  8    02 10 02 00 00 00 00 00     Channel: 0
Timestamp: 1692882442.980591        ID: 0633    S Rx                DL:  8    06 50 02 00 32 01 f4 00     Channel: 0

How can I narrow this 3 ms time gap?
I have been looking at the process_tx method in the isotp layer, suspecting that some logic there wastes too much time.

Interestingly, when I send a request that has a multi-frame response, the program is able to send the FlowControl frame to the ECU immediately (within 1 ms) after receiving the FirstFrame response.

Many thanks in advance

@pylessard
Owner

Hi,
What connection are you using? I assume you are interfacing with my isotp module.
If so, and it talks to python-can, then everything runs in user space, and it is quite hard to get tight timing there. Your 3 ms delay tells me you are not on Windows (it would be much worse there). Your best bet is to use the isotp kernel module instead and interface with it through an IsoTPSocketConnection. The kernel module has timing precise to the microsecond range.
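For reference, a minimal sketch of that approach (Linux only; it assumes the can-isotp kernel module is loaded and an interface named can0 exists, and the IDs are taken from your log; constructor arguments vary between udsoncan versions):

    import isotp
    from udsoncan.connections import IsoTPSocketConnection

    # The kernel module performs isotp segmentation and flow-control timing
    # in kernel space, so the user thread never has to poll.
    addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x733, rxid=0x633)
    conn = IsoTPSocketConnection('can0', address=addr)
    conn.open()
    conn.send(b'\x10\x03')                # DiagnosticSessionControl, extended session
    payload = conn.wait_frame(timeout=2)  # full isotp payload, reassembled by the kernel
    conn.close()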

There might be some improvement to be made in the user-space timing management, but I doubt you'll ever get to the millisecond range. Maybe if we could have a condition variable set by python-can...? I don't know.

Hope that helps

@xRowe
Author

xRowe commented Aug 24, 2023

Yes, by using PythonIsotpSocketConnection. The libraries are all the latest versions.
Actually, I am on Windows 10 with Python 3.11.3, which provides more accurate timing.

I do not have the code at hand right now, sorry.
My send/recv sequence looks like this:

conn.send(bytearray([0x10, 0x03]))
payload = conn.recv(timeout=2)
conn.send(bytearray([0x10, 0x01]))  # Puts the new message in the queue, but it actually goes out on the CAN bus with an average 3 ms delay
payload = conn.recv(timeout=2)
conn.send(bytearray([0x10, 0x02]))  # Same: queued immediately, but sent on the bus ~3 ms later
payload = conn.recv(timeout=2)

Do you have any idea?

Thank you

@pylessard
Owner

pylessard commented Aug 24, 2023

Careful: PythonIsotpSocketConnection doesn't work on Windows. There's PythonIsoTpConnection, which uses python-can (sorry for the name similarity).

Like I said, it is really hard to get good timing from user space. I might have an extra thread that could be removed... PRs are welcome :)
But something bothers me: you are writing flashing software, yet you use ReadDataByIdentifier and send multiple small payloads. Generally, one uses something like TransferData and sends long payloads (1 KB+), which makes these small holes much more acceptable.
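For illustration, a minimal sketch of that python-can setup (the bus configuration is a placeholder for whatever hardware is actually in use; txid/rxid are taken from the log above; the TransferData payload is made up):

    import can
    import isotp
    from udsoncan.connections import PythonIsoTpConnection

    bus = can.Bus(interface='vector', channel=0, bitrate=500000)  # placeholder bus config
    addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x733, rxid=0x633)
    stack = isotp.CanStack(bus=bus, address=addr, params={'stmin': 0, 'blocksize': 0})
    conn = PythonIsoTpConnection(stack)
    conn.open()
    conn.send(b'\x36\x01' + bytes(1024))  # one long TransferData payload instead of many small requests
    payload = conn.wait_frame(timeout=2)
    conn.close()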

@xRowe
Author

xRowe commented Sep 7, 2023

Yes, I do use TransferData for the download, but the package is a bit big and needs many blocks: if one block costs 3 ms, 1000 blocks will take 3 × 1000 ms. So I would prefer to find a way to make it faster.

Using PythonIsoTpConnection, I tried substituting bus.recv with a bus notifier inside the CanStack.
The timing gap seems better when I simulate on a virtual CAN bus.

Tomorrow I will try it in the office. If it works better, I will open a PR.

@pylessard
Owner

Right! Using a notification mechanism has been on my todo list for a while. I get an issue raised every three months or so from someone who reads the bus in parallel for debugging and consumes all the messages. The notifier pattern is the solution.

Let me know if you plan to fix the CanStack object to use it.
As for the timing, maybe it can be improved, but I doubt we can do magic here. Someone once told me that he got much better performance with a spinwait rather than a sleep... Seems a bit overkill to me, but if that works.
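For illustration, a sketch of the spinwait idea (not the library's code): a busy loop burns a CPU core, but it avoids the OS sleep granularity (typically around 1-15 ms on Windows), which is what limits a sleep()-based polling loop.

    import time

    def spinwait(duration_s: float) -> None:
        # Busy-wait until the deadline instead of calling time.sleep(),
        # trading CPU usage for wakeup precision.
        deadline = time.perf_counter() + duration_s
        while time.perf_counter() < deadline:
            pass  # burn CPU instead of yielding to the scheduler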

@xRowe
Author

xRowe commented Sep 20, 2023

Yes, I do try to replace somewhere on CanStack with can.Notifier/Listener.

Although performance good on virtual bus simulation, bad on real bus. the timing using can.Notifier/Listener is worse, because getting message is taking much time than bus.recv(), and it is not stable.
Maybe can.Notifier is created by a new thread. Or some reason on Windows

Maybe can.AsyncBufferedReader is another way, while my trial is using the Sync way.
I don't know. I get in a trap.

@pylessard
Owner

In any case, I think the notifier pattern should be used, at least to fix the issue of people depleting the rx queue with their debug task.

As for the timing... I know it's disappointing. I built everything on the assumption that strict timing couldn't be achieved, making the Linux driver the most performant option. I'm not saying it can't be improved, but working from Python in user space with no direct control over the hardware... well, it has its limits.

@xRowe
Author

xRowe commented Oct 2, 2023

After switching to notifier/listener, the isotp rxfn directly calls get_message, fetching the message from the listener's receive queue instead of bus.recv(), which should obviously improve the speed.

But disappointingly, it takes more time. Then I added some logger prints to the isotp rxfn to see how it works, and with the prints in place it actually gets much faster than ever.
Could there be a race condition between isotp's polling thread and the notifier/listener thread?
Do you have any opinion? Thanks.

Here is my version of the CanStack:

class CanStack(TransportLayer):
    """
    The IsoTP transport using `python-can <https://python-can.readthedocs.io>`__ as CAN layer. python-can must be installed in order to use this class.
    All parameters except the ``bus`` parameter will be given to the :class:`TransportLayer<isotp.TransportLayer>` constructor

    :param bus: A python-can bus object implementing ``recv`` and ``send``
    :type bus: BusABC

    :param address: The address information of CAN messages. Includes the addressing mode, txid/rxid, source/target address and address extension. See :class:`isotp.Address<isotp.Address>` for more details.
    :type address: isotp.Address

    :param error_handler: A function to be called when an error has been detected. An :class:`isotp.protocol.IsoTpError<isotp.protocol.IsoTpError>` (inheriting Exception class) will be given as sole parameter
    :type error_handler: Callable

    :param params: List of parameters for the transport layer
    :type params: dict

    """

    def _tx_canbus_3plus(self, msg):
        self.bus.send(can.Message(arbitration_id=msg.arbitration_id, data = msg.data, is_extended_id=msg.is_extended_id, is_fd=msg.is_fd, bitrate_switch=msg.bitrate_switch))

    def _tx_canbus_3minus(self, msg):
        self.bus.send(can.Message(arbitration_id=msg.arbitration_id, data = msg.data, extended_id=msg.is_extended_id, is_fd=msg.is_fd, bitrate_switch=msg.bitrate_switch))

    def rx_canbus(self):
        # msg = self.bus.recv(self.timeout)
        msg = self.buffer.get_message(self.timeout)
        if msg is not None:
            return CanMessage(arbitration_id=msg.arbitration_id, data=msg.data, extended_id=msg.is_extended_id, is_fd=msg.is_fd, bitrate_switch=msg.bitrate_switch)
        else:
            self.logger.warning("Here a None Msg")  # oddly, commenting this line out makes the stack slower; with it enabled, it runs faster
            return None

    def __init__(self, bus, timeout=0.0, *args, **kwargs):
        # Backward compatibility stuff.
        message_input_args =  can.Message.__init__.__code__.co_varnames[:can.Message.__init__.__code__.co_argcount]
        if 'is_extended_id' in message_input_args:
            self.tx_canbus = self._tx_canbus_3plus
        else:
            self.tx_canbus = self._tx_canbus_3minus

        self.set_bus(bus)
        self.timeout = timeout

        self.buffer = can.BufferedReader()
        self.notifier = can.Notifier(self.bus, [self.buffer], timeout=0.1)

        TransportLayer.__init__(self, rxfn=self.rx_canbus, txfn=self.tx_canbus, *args, **kwargs)

    def set_bus(self, bus):
        if not isinstance(bus, can.BusABC):
            raise ValueError('bus must be a python-can BusABC object')
        self.bus=bus

@pylessard
Owner

This module uses sleeps to wait on the data, so there's no synchronization between the IO and the user thread. You can get lucky and be scheduled right after the data arrives, or unlucky and right before it.

I am doing some reading about this; I have a knowledge gap when it comes to these mechanisms. I just learned what epoll is, and the solution is probably in that direction.

I will switch to the notifier pattern very soon anyway, then investigate what I can do to improve the speed. These problems are reported too often.
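For illustration, a sketch of that direction (the names rx_worker and rx_queue are illustrative, not the library's): a dedicated rx thread does a blocking read, so the wakeup is driven by the IO instead of a sleep() guess.

    import can
    import queue
    import threading

    bus = can.Bus(interface='virtual', channel=0)  # placeholder bus
    rx_queue: queue.Queue = queue.Queue()

    def rx_worker() -> None:
        while True:
            msg = bus.recv(timeout=1.0)  # blocks inside python-can, not in time.sleep()
            if msg is not None:
                rx_queue.put(msg)  # a consumer blocked on rx_queue.get() wakes up immediately

    threading.Thread(target=rx_worker, daemon=True).start()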

@pylessard
Owner

Hi,
I have been looking into how async IO (the concept, not the Python lib) technically works. Since I wrote the pure-Python implementation of the protocol, I have been claiming that tight user-space timing was not really an option. I might have been wrong all along; I just didn't know how to do it properly.

I will try to improve the timings, but that may require breaking some interfaces of the python can-isotp module. That will yield v2.0.

@pylessard
Owner

Just to let you know that I got very promising results after refactoring the codebase a bit. I'm locally working on isotp v2.x, which will offer a threaded version of the transport layer that does blocking reads. This performs MUCH better.

Still under development in my free time.

left: isotp v1.x with sleep()
right: isotp v2.x with blocking read()

[screenshot: timing comparison between the two versions]

@xRowe
Author

xRowe commented Nov 10, 2023

Hello @pylessard,

Thanks for your time.
I saw you created a branch in the udsoncan repository that is now aiming for compatibility with isotp v2.x.
I tried it just now, but it does not seem to work efficiently.
Running the same FLASH procedure now takes around 53 seconds, whereas the previous setup (isotp v1.9 + udsoncan with the Notifier/Listener CanStack) takes only about 19 seconds.

Sorry, I do not have much time to debug it at the moment.

Thanks,
Rowe

@xRowe
Author

xRowe commented Nov 10, 2023

I also tried to find out why the previous isotp v1.9 takes a bit more time between two diagnostic requests.
I added some logging to check how much time each step takes.
But weirdly, it shows that process() costs the most time, yet the time measured inside process() does not add up to the time measured around it:
in udsoncan's connections.py, process() measures as 125.60 us;
inside isotp's protocol.py, the same process() measures only 4100 ns in total.
I am curious about what eats the time.

2023-11-10 19:47:58.321 [WARNING] isotp: 1000ns   400ns   2700ns  Total: 4100ns
2023-11-10 19:47:58.321 [DEBUG] Connection: 0st:200 ns,1st:900 ns, process:125.60 us, 2nd:1.30 us, sleep:100 ns, Total1:127.90 us, Total2:128.10 us
udsoncan - connections.py
    def polling_task(self):
        t1 = self.clock()
        try:
            while self._running:
                t0 = self.clock()
                #self.logger.debug(f"polling_task start at   @{self.clock()}, in total takes {self.format_ns(t0 - t1)}")
                # self.timings.append(t0 - t1)
                
                t2 = self.clock()
                while not self.toIsoTPQueue.empty():
                    self.isotp_layer.send(self.toIsoTPQueue.get())
                t3 = self.clock()

                self.isotp_layer.process()
                t4 = self.clock()
                
                while self.isotp_layer.available():
                    self.fromIsoTPQueue.put(self.isotp_layer.recv())
                t5 = self.clock()

                # time.sleep(self.isotp_layer.sleep_time())

                t1 = self.clock()
                self.logger.debug(f"0st:{self.format_ns(t2 - t0)},1st:{self.format_ns(t3 - t2)}, process:{self.format_ns(t4 - t3)}, 2nd:{self.format_ns(t5 - t4)}, sleep:{self.format_ns(t1 - t5)}, Total1:{self.format_ns(t1 - t2)}, Total2:{self.format_ns(t1 - t0)}")
                # self.logger.debug(f"polling_task stop at   @{self.clock()}, in total takes {self.format_ns(t1 - t0)}")

        except Exception as e:
            self.logger.error(str(e))

&

isotp - protocol.py
    def process(self) -> None:
        """
        Function to be called periodically, as fast as possible.
        This function is non-blocking.
        """
        t0 = self.clock()
        self.check_timeouts_rx()
        self.rate_limiter.update()
        t1= self.clock()
        msg = True
        while msg is not None:
            msg = self.rxfn()
            if msg is not None:
                self.logger.debug(f"Timestamp: @{self.clock()} Receiving : <{msg.arbitration_id:03X}> ({len(msg.data)})\t {binascii.hexlify(msg.data)}")
                self.process_rx(msg)
        t2 = self.clock()
        msg = True
        while msg is not None:
            msg = self.process_tx()
            if msg is not None:
                self.logger.debug(f"Timestamp: @{self.clock()} Sending : <{msg.arbitration_id:03X}> ({len(msg.data)})\t {binascii.hexlify(msg.data)}")
                self.txfn(msg)
        t3 = self.clock()
        self.logger.warning(f"{t1-t0}ns   {t2-t1}ns   {t3-t2}ns  Total: {t3-t0}ns")

@pylessard
Owner

pylessard commented Nov 10, 2023

If you want to test the new logic, try these branches:
isotp: https://github.com/pylessard/python-can-isotp/tree/reduce-context-switch-latency
udsoncan: https://github.com/pylessard/python-udsoncan/tree/support-isotp-v2

If you use a simple CanStack and the standard Connection on top of it, it should be quite seamless.

@xRowe
Author

xRowe commented Nov 10, 2023

Yes, I tried the two branches, but they do not seem to work efficiently.
Running the same FLASH procedure now takes around 53 seconds, whereas the previous setup (isotp v1.9 + udsoncan using the Notifier/Listener CanStack) takes only about 19 seconds.

I do not have much time for this at the moment. Maybe I will try again later, after it is released.

@pylessard
Owner

pylessard commented Nov 12, 2023

The improvement I made reduced the latency between the reception of a flow control message and the next consecutive frame.
In the case of two consecutive isotp frames, I confirm that the delay was still present.

I just pushed to the reduce-context-switch-latency branch. I added a parameter that allows a small wait window for sending a new frame before going back to a blocking read after completing a frame reception. Locally, I've been able to send multiple requests in a ping-pong manner between two stacks running in their own threads, and the delay before the new frame started going out was <0.5 ms.

Please give it a try; the default value of the parameter should be good for you.
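For reference, a minimal sketch of how that parameter could be set (the parameter name is the one used later in this thread; the unit is assumed to be seconds and the exact default is whatever the branch defines, so treat this as illustrative):

    import can
    import isotp

    bus = can.Bus(interface='virtual', channel=0)  # placeholder
    addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x640, rxid=0x6C0)
    stack = isotp.CanStack(bus=bus, address=addr, read_timeout=0.0, params={
        'stmin': 0,
        'blocksize': 0,
        'wait_for_tx_after_rx_time': 0.001,  # brief tx window after a completed rx (assumed seconds)
    })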

@xRowe
Author

xRowe commented Nov 16, 2023

Hello @pylessard,
Today I tried the two branches, reduce-context-switch-latency and support-isotp-v2, again; I see there are new commits.
But bad news, as below. I ran two trials:

1st (udsoncan -> support-isotp-v2 + isotp -> reduce-context-switch-latency):
with 'wait_for_tx_after_rx_time' set to 1e-6, 1e-5, or not set at all,
and read_timeout = 0.0, 0.1, or not set,
flashing takes 43 s.

vs

2nd (udsoncan -> support-isotp-v2 + isotp -> v1.x):
with sleep_time always set to 0
and timeout = 0.0,
flashing takes 20 s.

I suspect this may be caused by the GIL (a guess): with many threads in one process, contention occurs, as if each thread were frequently acquiring and releasing the GIL.

Regards,
Rowe

@pylessard
Owner

You're using the CanStack object, right?
Would it be possible to share a CAN log of the transmission? I only care about the first byte (the isotp header); the rest can be hidden without problem.

I'd like to have timestamps, DLC, IDs, and the first data byte.
Do you think that is possible?

@pylessard
Owner

And if you could enable isotp debug logging, that would be very helpful as well. The logger name is "isotp"

Thank you for your help.

@xRowe
Author

xRowe commented Nov 20, 2023

The FC timing is good, but the FF or SF (sent after the final response to the previous request is received) is very slow.

[screenshot: CAN log]

@pylessard
Owner

Thank you, very interesting. What I notice is that the delay is introduced before the next request is enqueued, meaning that the delay comes from a level higher than the isotp stack. Do you know why this happens? Do you have a short snippet of the high-level code that shows how the UDS requests are sent?

[screenshot: annotated isotp debug log]

@xRowe
Author

xRowe commented Nov 20, 2023

    def send(self, payload):
        self.conn.empty_rxqueue()
        self.conn.send(payload)

    def recv(self, IggPending=True, timeout_val=1):
        done_receiving = False
        while not done_receiving:
            try:
                done_receiving = True

                data = self.conn.wait_frame(timeout=timeout_val, exception=False)

                if not data:
                    return None

                # Ignore a pending-response message (NRC 0x78) and keep waiting with a longer timeout
                if ((data[0] & 0xff == 0x7f) and (data[2] & 0xff == 0x78)) and IggPending:
                    timeout_val += 5
                    done_receiving = False
                else:
                    return data

            except KeyboardInterrupt:
                print("KeyboardInterrupt")
                self.close()
            except Exception as e:
                print(e)
                return None

I use these send()/recv() methods to send and receive messages.

Sorry, I do not have the latest code at hand, but it looks something like this:

send(bytearray([0x02, 0x10, 0x02]))
msg = recv()
if msg and msg[0] == 0x50:
    return True, msg
return False, msg

send(bytearray([0x02, 0x10, 0x03]))
msg = recv()
if msg and msg[0] == 0x50:
    return True, msg
return False, msg

First, I suspected it was caused by the GIL and tried sys.setswitchinterval(1e-6), but nothing seemed to improve.

@pylessard
Owner

Thanks, will review tonight. Cheers

@pylessard
Owner

So, I decided to turn to Stack Overflow here.

I have implemented something that works even better than a spinwait after the reception of a frame. On my side, I get very good timings by making two processes talk to each other over a vcan interface. You may want to give it a try; same two branches.

I still think that something at a higher level on your side introduces that delay, but I don't see it in your code snippet. Could you have a thread in your Python application that never sleeps?

@xRowe
Author

xRowe commented Nov 21, 2023

I tried the latest branches as you said.
The timing seems to perform a bit better.

In my previous experience, vcan does have much better performance than a real bus; it seems to send/receive directly on the same queue.

I also ran another trial, (isotp v2 + udsoncan.client) vs (isotp v2 + udsoncan.connection + my send()/recv() methods), on a real CAN bus.
Code in short:

        self.isotp_params_fd = {
            'stmin': 0,                             # Request the sender to leave no minimum gap between consecutive frames (0-127 ms, or 100-900 ns with values 0xF1-0xF9)
            'blocksize': 0,                         # Request the sender to send all consecutive frames without waiting for another flow control message
            'wftmax': 0,                            # Number of wait frames allowed before triggering an error
            'tx_data_length': 64,                   # Link layer (CAN layer) works with 64-byte payloads (CAN FD)
            'tx_data_min_length': 8,                # Minimum length of CAN messages. When different from None, messages are padded to meet this length. Works with CAN 2.0 and CAN FD.
            'tx_padding': 0xAA,                     # Will pad all transmitted CAN messages with byte 0xAA.
            'rx_flowcontrol_timeout': 1000,         # Triggers a timeout if a flow control is awaited for more than 1000 milliseconds
            'rx_consecutive_frame_timeout': 1000,   # Triggers a timeout if a consecutive frame is awaited for more than 1000 milliseconds
            'squash_stmin_requirement': False,      # When sending, respect the stmin requirement of the receiver. If set to True, go as fast as possible.
            'max_frame_size': 0xFFFFFFFF,           # Limit the size of received frames.
            'can_fd': True,                         # Sets the can_fd flag on the output CAN messages
            'bitrate_switch': True,                 # Sets the bitrate_switch flag on the output CAN messages
            'rate_limit_enable': False,             # Disable the rate limiter
            'rate_limit_max_bitrate': 1000000,      # Ignored when rate_limit_enable=False. Sets the max bitrate when rate_limit_enable=True
            'rate_limit_window_size': 0.2,          # Ignored when rate_limit_enable=False. Sets the averaging window size for bitrate calculation when rate_limit_enable=True
            'listen_mode': False                    # Do not use listen mode, which prevents transmission.
            }

        self.tp_addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=self.tx_id, rxid=self.rx_id)
        self.stack = isotp.CanStack(bus=self.bus, read_timeout=0.0,address=self.tp_addr, params=self.isotp_params_fd)
        self.conn = PythonIsoTpConnection(self.stack)
        # self.conn.subconn.set_sleep_timing(0, 0)
        # self.conn.isotp_layer.set_sleep_timing(1e-6, 1e-6)
        
        # self.open()
        with Client(self.conn,  request_timeout=2) as client:
            # try:
                client.change_session(DiagnosticSessionControl.Session.extendedDiagnosticSession)
                client.change_session(DiagnosticSessionControl.Session.defaultSession)
                client.change_session(DiagnosticSessionControl.Session.extendedDiagnosticSession)
                client.change_session(DiagnosticSessionControl.Session.defaultSession)

                client.transfer_data(1, bytes(range(64)))
                client.transfer_data(1, bytes(range(64)))

                client.ecu_reset(ECUReset.ResetType.hardReset)  # HardReset = 0x01

                time.sleep(1)

                self.send([0x10, 0x03])
                self.recv()
                self.send([0x10, 0x01])
                self.recv()
                self.send([0x10, 0x03])
                self.recv()
                self.send([0x10, 0x01])
                self.recv()

                self.send(bytearray([0x36, 0x01] + [0xFF]*64))
                self.recv()
                self.send(bytearray([0x36, 0x01] + [0xFF]*64))
                self.recv()

And here is the result

2023-11-21 16:58:46.002 [DEBUG] isotp: Given rxfn is considered blocking
2023-11-21 16:58:46.003 [DEBUG] isotp: Starting CanStack
2023-11-21 16:58:46.005 [DEBUG] isotp: Thread has started
2023-11-21 16:58:46.006 [DEBUG] isotp: Relay thread has started
2023-11-21 16:58:46.057 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:46.057 [DEBUG] isotp: Tx: <640> (08) [ ]	 021003aaaaaaaaaa
2023-11-21 16:58:46.059 [DEBUG] isotp: Rx: <640> (08) [i]	 021003aaaaaaaaaa
2023-11-21 16:58:46.076 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065003003200c8aa
2023-11-21 16:58:46.128 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:46.129 [DEBUG] isotp: Tx: <640> (08) [ ]	 021001aaaaaaaaaa
2023-11-21 16:58:46.140 [DEBUG] isotp: Rx: <640> (08) [i]	 021001aaaaaaaaaa
2023-11-21 16:58:46.147 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065001003200c8aa
2023-11-21 16:58:46.200 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:46.202 [DEBUG] isotp: Tx: <640> (08) [ ]	 021003aaaaaaaaaa
2023-11-21 16:58:46.203 [DEBUG] isotp: Rx: <640> (08) [i]	 021003aaaaaaaaaa
2023-11-21 16:58:46.219 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065003003200c8aa
2023-11-21 16:58:46.271 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:46.271 [DEBUG] isotp: Tx: <640> (08) [ ]	 021001aaaaaaaaaa
2023-11-21 16:58:46.283 [DEBUG] isotp: Rx: <640> (08) [i]	 021001aaaaaaaaaa
2023-11-21 16:58:46.290 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065001003200c8aa
2023-11-21 16:58:46.341 [DEBUG] isotp: Enqueuing a SendRequest for 66 bytes and TAT=Physical
2023-11-21 16:58:46.342 [DEBUG] isotp: Tx: <640> (64) [ ]	 10423601000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b
2023-11-21 16:58:46.342 [DEBUG] isotp: TxState=WAIT_FC - RxState=IDLE
2023-11-21 16:58:46.343 [DEBUG] isotp: Rx: <640> (64) [i]	 10423601000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b
2023-11-21 16:58:46.344 [DEBUG] isotp: Rx: <6C0> (08) [p]	 300000aaaaaaaaaa
2023-11-21 16:58:46.344 [DEBUG] isotp: Tx: <640> (08) [ ]	 213c3d3e3faaaaaa
2023-11-21 16:58:46.345 [DEBUG] isotp: TxState=IDLE - RxState=IDLE
2023-11-21 16:58:46.346 [DEBUG] isotp: Rx: <640> (08) [i]	 213c3d3e3faaaaaa
2023-11-21 16:58:46.362 [DEBUG] isotp: Rx: <6C0> (08) [p]	 037f367faaaaaaaa
2023-11-21 16:58:46.415 [DEBUG] isotp: Enqueuing a SendRequest for 66 bytes and TAT=Physical
2023-11-21 16:58:46.416 [DEBUG] isotp: Tx: <640> (64) [ ]	 10423601000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b
2023-11-21 16:58:46.417 [DEBUG] isotp: TxState=WAIT_FC - RxState=IDLE
2023-11-21 16:58:46.418 [DEBUG] isotp: Rx: <640> (64) [i]	 10423601000102030405060708090a0b0c0d0e0f101112131415161718191a1b1c1d1e1f202122232425262728292a2b2c2d2e2f303132333435363738393a3b
2023-11-21 16:58:46.419 [DEBUG] isotp: Rx: <6C0> (08) [p]	 300000aaaaaaaaaa
2023-11-21 16:58:46.419 [DEBUG] isotp: Tx: <640> (08) [ ]	 213c3d3e3faaaaaa
2023-11-21 16:58:46.419 [DEBUG] isotp: TxState=IDLE - RxState=IDLE
2023-11-21 16:58:46.420 [DEBUG] isotp: Rx: <640> (08) [i]	 213c3d3e3faaaaaa
2023-11-21 16:58:46.436 [DEBUG] isotp: Rx: <6C0> (08) [p]	 037f367faaaaaaaa
2023-11-21 16:58:46.488 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:46.488 [DEBUG] isotp: Tx: <640> (08) [ ]	 021101aaaaaaaaaa
2023-11-21 16:58:46.499 [DEBUG] isotp: Rx: <640> (08) [i]	 021101aaaaaaaaaa
2023-11-21 16:58:46.507 [DEBUG] isotp: Rx: <6C0> (08) [p]	 037f1178aaaaaaaa
2023-11-21 16:58:46.606 [DEBUG] isotp: Rx: <6C0> (08) [p]	 025101aaaaaaaaaa
2023-11-21 16:58:47.659 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:47.659 [DEBUG] isotp: Tx: <640> (08) [ ]	 021003aaaaaaaaaa
2023-11-21 16:58:47.671 [DEBUG] isotp: Rx: <640> (08) [i]	 021003aaaaaaaaaa
2023-11-21 16:58:47.678 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065003003200c8aa
2023-11-21 16:58:47.730 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:47.731 [DEBUG] isotp: Tx: <640> (08) [ ]	 021001aaaaaaaaaa
2023-11-21 16:58:47.732 [DEBUG] isotp: Rx: <640> (08) [i]	 021001aaaaaaaaaa
2023-11-21 16:58:47.749 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065001003200c8aa
2023-11-21 16:58:47.801 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:47.802 [DEBUG] isotp: Tx: <640> (08) [ ]	 021003aaaaaaaaaa
2023-11-21 16:58:47.812 [DEBUG] isotp: Rx: <640> (08) [i]	 021003aaaaaaaaaa
2023-11-21 16:58:47.819 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065003003200c8aa
2023-11-21 16:58:47.871 [DEBUG] isotp: Enqueuing a SendRequest for 2 bytes and TAT=Physical
2023-11-21 16:58:47.871 [DEBUG] isotp: Tx: <640> (08) [ ]	 021001aaaaaaaaaa
2023-11-21 16:58:47.882 [DEBUG] isotp: Rx: <640> (08) [i]	 021001aaaaaaaaaa
2023-11-21 16:58:47.889 [DEBUG] isotp: Rx: <6C0> (08) [p]	 065001003200c8aa
2023-11-21 16:58:47.940 [DEBUG] isotp: Enqueuing a SendRequest for 66 bytes and TAT=Physical
2023-11-21 16:58:47.941 [DEBUG] isotp: Tx: <640> (64) [ ]	 10423601ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
2023-11-21 16:58:47.942 [DEBUG] isotp: TxState=WAIT_FC - RxState=IDLE
2023-11-21 16:58:47.943 [DEBUG] isotp: Rx: <640> (64) [i]	 10423601ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
2023-11-21 16:58:47.943 [DEBUG] isotp: Rx: <6C0> (08) [p]	 300000aaaaaaaaaa
2023-11-21 16:58:47.944 [DEBUG] isotp: Tx: <640> (08) [ ]	 21ffffffffaaaaaa
2023-11-21 16:58:47.944 [DEBUG] isotp: TxState=IDLE - RxState=IDLE
2023-11-21 16:58:47.945 [DEBUG] isotp: Rx: <640> (08) [i]	 21ffffffffaaaaaa
2023-11-21 16:58:47.961 [DEBUG] isotp: Rx: <6C0> (08) [p]	 037f367faaaaaaaa
2023-11-21 16:58:48.012 [DEBUG] isotp: Enqueuing a SendRequest for 66 bytes and TAT=Physical
2023-11-21 16:58:48.012 [DEBUG] isotp: Tx: <640> (64) [ ]	 10423601ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
2023-11-21 16:58:48.013 [DEBUG] isotp: TxState=WAIT_FC - RxState=IDLE
2023-11-21 16:58:48.014 [DEBUG] isotp: Rx: <640> (64) [i]	 10423601ffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffffff
2023-11-21 16:58:48.015 [DEBUG] isotp: Rx: <6C0> (08) [p]	 300000aaaaaaaaaa
2023-11-21 16:58:48.015 [DEBUG] isotp: Tx: <640> (08) [ ]	 21ffffffffaaaaaa
2023-11-21 16:58:48.016 [DEBUG] isotp: TxState=IDLE - RxState=IDLE
2023-11-21 16:58:48.017 [DEBUG] isotp: Rx: <640> (08) [i]	 21ffffffffaaaaaa
2023-11-21 16:58:48.034 [DEBUG] isotp: Rx: <6C0> (08) [p]	 037f367faaaaaaaa
2023-11-21 16:58:48.034 [DEBUG] isotp: Stopping CanStack
2023-11-21 16:58:48.052 [DEBUG] isotp: Thread is exiting
2023-11-21 16:58:48.053 [DEBUG] isotp: CanStack Stopped
2023-11-21 16:58:48.432 [DEBUG] isotp: Stopping CanStack
2023-11-21 16:58:48.433 [DEBUG] isotp: CanStack Stopped

@pylessard
Owner

isotp v2.x now has a NotifierBasedCanStack. See this example:

[screenshot: NotifierBasedCanStack example]

I also made some updates to isotp v2.x. I really think I have reached the maximum performance I can achieve.
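For reference, a hedged sketch of a NotifierBasedCanStack setup (check the isotp v2.x documentation for the exact signature; the bus configuration and IDs below are placeholders):

    import can
    import isotp

    bus = can.Bus(interface='virtual', channel=0)  # placeholder bus
    notifier = can.Notifier(bus, listeners=[])     # shared notifier; debug listeners can be added without stealing frames
    addr = isotp.Address(isotp.AddressingMode.Normal_11bits, txid=0x640, rxid=0x6C0)
    stack = isotp.NotifierBasedCanStack(bus=bus, notifier=notifier, address=addr,
                                        params={'stmin': 0, 'blocksize': 0})
    stack.start()
    stack.send(b'\x10\x03')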

@xRowe
Author

xRowe commented Jan 8, 2024

Sorry, I am stuck with Ethernet work these days and recently have had no time to validate it. But once I have tried it, I will share the news.

Thank you

@pylessard
Owner

pylessard commented Jan 8, 2024

Sure, no problem. I will go forward with the release of 2.x anyhow.
Thanks for your feedback, it has been very useful.

@pylessard
Owner

  • isotp v2.x is released
  • udsoncan v1.21 is released

These two should have better performance. I suggest using a NotifierBasedCanStack.

I will close this as I believe it is fixed; reopen if you think it deserves more work.

@tkeairns

Resurrecting this thread since it seems recent and relevant. I'm encountering some performance issues on large block transfers, and I believe they are due to:
https://github.com/pylessard/python-can-isotp/blob/v2.x/isotp/protocol.py#L1644-L1645

The screenshot below shows a candump -t d where the first packet is the start of a multi-frame block transfer, followed by a subsequent block transfer that encounters a ~50 ms delay before starting.
[screenshot: candump -t d output]

Monkey-patching the connection's empty_rxqueue to perform a no-op results in fast transfers with <1 ms response time between transfers, but that is of course a hack and prone to issues.

I had followed the advice in this thread regarding NotifierBasedCanStack etc., and digging into the call stack led me to the can-isotp implementation.

I'm not too familiar with the handshaking that goes on between the rx thread and the caller, but it seems like an event symmetrical to reset_rx could be used (while adding some robustness by having a timeout instead of a potential while 1), i.e.:

self.events.reset_rx.set()

# the rx thread sets this confirmation event as well as clearing the reset_rx event
self.events.reset_rx_confirmation.wait(timeout=x)
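For illustration, a self-contained sketch of that symmetric-event handshake (not the library's actual internals; the names mirror the snippet above):

    import queue
    import threading

    reset_rx = threading.Event()
    reset_rx_confirmation = threading.Event()
    rx_queue: queue.Queue = queue.Queue()

    def request_reset(timeout: float = 0.5) -> bool:
        # Caller side: ask the rx thread to flush the queue, then wait
        # with a bounded timeout instead of looping forever.
        reset_rx_confirmation.clear()
        reset_rx.set()
        return reset_rx_confirmation.wait(timeout=timeout)

    def service_reset() -> None:
        # Rx-thread side, called from its main loop: drain the queue,
        # clear the request, and confirm back to the caller.
        if reset_rx.is_set():
            while not rx_queue.empty():
                rx_queue.get_nowait()
            reset_rx.clear()
            reset_rx_confirmation.set()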

@pylessard pylessard reopened this Jan 31, 2024
@pylessard
Owner

Will investigate; I have not tested the performance when calling empty_rxqueue. Maybe I forgot something.

@tkeairns

tkeairns commented Jan 31, 2024

In case it helps, my specific use case is looping on transfer_data after performing a request_download. Here's some example (untested) code to work with:

    def block_generator(data, block_size):
        total_size = len(data)
        slice_index = 0
        sequence_number = 1

        while slice_index < total_size:
            next_index = min(slice_index + block_size, total_size)  # advance by one block, clamped to the end of the data
            yield sequence_number % 256, data[slice_index : next_index]
            sequence_number += 1
            slice_index = next_index

    try:
        big_data = bytes([_ for _ in range(256)]) * 1000
        location = MemoryLocation.MemoryLocation(
            address=0x0, memorysize=len(big_data), address_format=32
        )
        response = client.request_download(location)
        block_size = response.service_data.max_length

        for sequence_number, data in block_generator(big_data, block_size):
            log.debug("Transmitting sequence number = %s", sequence_number)
            # the ~50 ms delay is encountered on each call to this method
            client.transfer_data(sequence_number, data)

        client.request_transfer_exit()
    except udsoncan.exceptions.NegativeResponseException:
        log.error("Failed to execute download sequence")

@pylessard
Owner

pylessard commented Feb 3, 2024

@tkeairns: You put your finger on the issue. When I wrote the v2.x version of stop_receiving, I forgot it would be called all the time by the UDS client. That was a mistake on my part.

Please try this branch: improve-empty-rxqueue-performance

If it works, I will merge and release right away.

@pylessard
Owner

pylessard commented Feb 6, 2024

I released isotp v2.0.4, which should solve this.
Closing the issue; reopen if needed.

@tkeairns

tkeairns commented Feb 7, 2024

I released isotp v2.0.4, which should solve this. Closing the issue; reopen if needed.

You updated before I had a chance to test; I tested the latest release and it looks good. Thanks for the speedy fix!

@pylessard
Owner

Sorry about that. I never know whether I'll get an answer, and I got impatient :)
Thanks for the good bug report.

Cheers
