
[Bug]: Exception while parsing the data received #206

Closed
3 tasks
Lurker00 opened this issue Apr 23, 2024 · 37 comments
Labels
bug Something isn't working

Comments

@Lurker00

LocalTuya Version

3.2.5.2b5

Home Assistant Version

2024.4.3

Environment

  • Does the device work using the Home Assistant Tuya Cloud component?
  • Is this device connected to another local integration, including Home Assistant and any other tools?
  • The devices are within the same HA subnet, and they get discovered automatically when I add them

What happened?

I see such log records regularly, I think since I first installed LocalTuya.

It may be that the HA engine sometimes delivers wrong data, but it may also be a parser error. The following lines:

if prefix_offset_55AA < 0 and prefix_offset_6699 < 0:
    header_len = header_len_55AA
    self.buffer = self.buffer[1 - prefix_len :]
else:
    header_len = header_len_6699
    prefix_offset = (
        prefix_offset_6699 if prefix_offset_55AA < 0 else prefix_offset_55AA
    )
    self.buffer = self.buffer[prefix_offset:]

look to me like you always expect valid data, presuming that either the 55AA or the 6699 prefix exists even when neither was found, and they do not handle well the case when both are present. Also, the expression "1 - prefix_len" scares me, because it must always be a negative value (I'm not familiar with Python!). But that's only my humble opinion.
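For reference, a standalone check of what that slice actually does (plain Python slice semantics, independent of LocalTuya):

```python
PREFIX_55AA_BIN = b"\x00\x00\x55\xaa"
prefix_len = len(PREFIX_55AA_BIN)  # 4, so 1 - prefix_len == -3

buffer = b"\x01\x02\x03\x04\x05\x06"
# A negative slice start counts from the end: this keeps the last 3 bytes,
# a tail too short to contain a complete 4-byte prefix, but which could be
# the beginning of a prefix split across two reads.
tail = buffer[1 - prefix_len:]
assert tail == b"\x04\x05\x06"

# On a buffer shorter than 3 bytes, the slice simply keeps everything:
assert b"\x05\x06"[1 - prefix_len:] == b"\x05\x06"
```

So the negative value is deliberate: it discards scanned bytes while retaining a tail that might still become a valid prefix once more data arrives.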

I tried to uncomment the line

# self.debug("received data=%r", binascii.hexlify(data))

and turned on debug for both custom_components.localtuya and custom_components.localtuya.pytuya, but I see no related records in the log. Anyway, even if that line logged something, it would be overkill: the data content is useful only when the exception happens, and that is only a few times per day.

If the actual data that causes the exceptions is required, I believe it would be better to write it into files in binary form, piece by piece, with autogenerated file names. If you can give me a piece of such code, I'd be happy to collect the data.
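Something like this minimal sketch is the kind of dump helper I mean (all names hypothetical, not part of LocalTuya):

```python
import os
import time


def dump_packet(data: bytes, directory: str = "tuya_dumps") -> str:
    """Write one received chunk to its own binary file.

    Hypothetical helper: file names are generated from a nanosecond
    timestamp, so consecutive chunks are very unlikely to collide.
    """
    os.makedirs(directory, exist_ok=True)
    path = os.path.join(directory, "packet_%d.bin" % time.time_ns())
    with open(path, "wb") as f:
        f.write(data)
    return path
```

It could be called with each raw chunk right before the exception is raised, so only the suspicious data ends up on disk.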

Steps to reproduce.

Don't know

Relevant log output

2024-04-23 05:52:14.206 ERROR (MainThread) [homeassistant] Error doing job: Fatal error: protocol.data_received() call failed.
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 1017, in _read_ready__data_received
    self._protocol.data_received(data)
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 900, in data_received
    self.dispatcher.add_data(data)
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 643, in add_data
    header = parse_header(self.buffer)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 489, in parse_header
    raise DecodeError(
custom_components.localtuya.core.pytuya.DecodeError: Header claims the packet size is over 1000 bytes! It is most likely corrupt. Claimed size: 543552210 bytes. fmt:>4I unpacked:(21930, 9, 32, 543552210)

2024-04-23 11:45:22.701 ERROR (MainThread) [homeassistant] Error doing job: Fatal error: protocol.data_received() call failed.
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 1017, in _read_ready__data_received
    self._protocol.data_received(data)
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 900, in data_received
    self.dispatcher.add_data(data)
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 643, in add_data
    header = parse_header(self.buffer)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 489, in parse_header
    raise DecodeError(
custom_components.localtuya.core.pytuya.DecodeError: Header claims the packet size is over 1000 bytes! It is most likely corrupt. Claimed size: 4154224665 bytes. fmt:>4I unpacked:(21930, 9, 32, 4154224665)

Diagnostics information.

No response

@Lurker00 Lurker00 added the bug Something isn't working label Apr 23, 2024
@xZetsubou
Owner

xZetsubou commented Apr 23, 2024

I have seen this error a few times before, and I thought it was fixed by adjusting header_len? It never showed up for me after that. The corrupted data isn't necessarily needed, because only the necessary part will be handled. Did this issue happen to you before or after beta5?

Try to delete both lines "header_len = header_len_6699" and "header_len = header_len_55AA" inside the while loop.

@Lurker00
Author

Did this issue happen to you before or after beta5?

Yes, I've seen it long before beta5.

Try to delete both lines "header_len = header_len_6699" and "header_len = header_len_55AA" inside the while loop.

Done. I'll report back on how it works.

@xZetsubou
Owner

xZetsubou commented Apr 24, 2024

I'm really not sure about this line. However, it caused issues for some devices, probably something similar to the "IR Remote": when you learn a button, it sends a payload with the base64 code of the learned button, and that raises an error... so I may just stop it from raising errors. Even though, yeah, the payload length you received, 4154224665, is too much :) and both of them are 55AA, so I'm not sure what that even was.

if payload_len > 1000:
    raise DecodeError(
        "Header claims the packet size is over 1000 bytes! It is most likely corrupt. Claimed size: %d bytes. fmt:%s unpacked:%r"
        % (payload_len, fmt, unpacked)
    )

xZetsubou added a commit that referenced this issue Apr 24, 2024
@Lurker00
Author

Even though, yeah, the payload length you received, 4154224665, is too much :)

All the exceptions of this kind that I experienced are for "gigabytes"! That's why I believe it is either a packet with a wrong format (not actually related to LocalTuya), or a bug somewhere in the data flow. BTW, at first I thought they were negative numbers converted to unsigned values, but they are not: the inverted values are also too big.

Not a big issue if it is unrelated data. But I'd prefer to make sure it is not a bug that leads to information loss.

@Lurker00
Author

it caused issues for some devices, probably something similar to the "IR Remote": when you learn a button, it sends a payload with the base64 code of the learned button

You may compare the calculated payload_len with the size of the data received, to make sure you actually have payload_len bytes in the buffer. This would pass IR codes through but detect the problem in my case.
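A rough sketch of that check (names hypothetical; the 1000-byte limit is the one parse_header already uses):

```python
# Hypothetical guard, not the component's code: only treat a claimed
# payload length as corrupt when the buffer could never plausibly hold it.
MAX_REASONABLE_LEN = 1000  # limit used by parse_header in the traceback


def looks_corrupt(payload_len: int, buffered: int) -> bool:
    # A large IR-code payload that is actually present in the buffer
    # passes; a multi-gigabyte claim against a tiny buffer does not.
    return payload_len > MAX_REASONABLE_LEN and payload_len > buffered


assert looks_corrupt(543552210, 120)   # the corrupt case from the log above
assert not looks_corrupt(1500, 1520)   # large but fully received payload
```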

@Lurker00
Author

That fix didn't help:

2024-04-24 16:08:53.098 ERROR (MainThread) [homeassistant] Error doing job: Fatal error: protocol.data_received() call failed.
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/asyncio/selector_events.py", line 1017, in _read_ready__data_received
    self._protocol.data_received(data)
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 900, in data_received
    self.dispatcher.add_data(data)
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 643, in add_data
    header = parse_header(self.buffer)
             ^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/config/custom_components/localtuya/core/pytuya/__init__.py", line 489, in parse_header
    raise DecodeError(
custom_components.localtuya.core.pytuya.DecodeError: Header claims the packet size is over 1000 bytes! It is most likely corrupt. Claimed size: 356474677 bytes. fmt:>4I unpacked:(21930, 9, 32, 356474677)

@Lurker00
Author

Also, I believe that here:

prefix_offset = (
    prefix_offset_6699 if prefix_offset_55AA < 0 else prefix_offset_55AA
)

if both 55AA and 6699 are found, prefix_offset should be the minimum of the two offsets...

@Lurker00
Author

Sorry, I was wrong and deleted messages I wrote without paying enough attention!

@xZetsubou
Owner

xZetsubou commented Apr 25, 2024

if both 55AA and 6699 found, prefix_offset should be the minimum of both offsets...

This cannot happen. 6699 will only be found in payloads from protocol 3.5 devices.

Actually, looking at this, I may need to adjust some lines.
This is useless.

if prefix_offset_55AA < 0 and prefix_offset_6699 < 0:

This will only handle 55aa

self.buffer = self.buffer[header_len - 4 + header.length :]

xZetsubou added a commit that referenced this issue Apr 25, 2024
@xZetsubou
Owner

I'm not sure if this will make a difference with corrupted data. However, if it still shows up, I'll make a change to ignore any payload that claims a length in the millions.

Can you test whether the last commit makes any difference? I don't know how to trigger this on my device.

@Lurker00
Author

This will only handle 55aa

Yesterday I made some changes to this line and the code around it, and, so far, my log confirms that:

This is useless.

if prefix_offset_55AA < 0 and prefix_offset_6699 < 0:

But only presuming that corrupted data will never come.

6699 will only be found in payloads 3.5 protocol devices.

Strictly speaking, the buffer contains random bytes (encrypted payload, crc/hmac) which may have any value. The chance is small, and may even be zero (cryptographic data usually does not contain zero byte sequences).

The current code does not presume that the incoming data always contains one or more full messages. It is implemented as a stream reader, which only presumes that, if a message header is found, the whole message must be in the buffer. Instead, it would be enough to always check the first bytes of the buffer rather than search through it.

Note that I still can't explain how those DecodeError exceptions can happen. I noticed that they all have the same prefix, seqno and cmd. The only difference is the length, which is always weird, and I can't find cmd=0x20 in your source code. The only explanation I have is that the prefix was found inside the data, by the search within the buffer, but I have no confirmation of that in my log yet. I'm logging events when both 55AA and 6699 are found in the buffer, hoping this will be the explanation. But the log is clear so far.
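The check-the-first-bytes idea above can be sketched like this (an illustration, not the component's code; the prefix constants mirror the ones used in pytuya):

```python
PREFIX_55AA_BIN = b"\x00\x00\x55\xaa"
PREFIX_6699_BIN = b"\x00\x00\x66\x99"


def buffer_kind(buffer: bytes):
    """Return which header format the buffer starts with, or None when
    the head is not a known prefix (resync/drop instead of parsing)."""
    if buffer.startswith(PREFIX_55AA_BIN):
        return "55AA"
    if buffer.startswith(PREFIX_6699_BIN):
        return "6699"
    return None


assert buffer_kind(PREFIX_55AA_BIN + b"\x00" * 12) == "55AA"
assert buffer_kind(b"\xdc\xa1\xdc\x51\xe7\x92") is None  # damaged tail
```

Unlike find(), startswith() can never be fooled by a prefix-looking byte sequence in the middle of an encrypted payload.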

@Lurker00
Author

Can you test whether the last commit makes any difference? I don't know how to trigger this on my device

I'm currently running a test with more comprehensive changes, and with logging of suspicious events. I'll be back 😄

@xZetsubou
Owner

xZetsubou commented Apr 25, 2024

Maybe you can also add cmd to the exception message; I wonder what command actually raises this. 😺

@Lurker00
Author

I log the buffer bytes, and I do it in add_data. So cmd can be found out, and the device id should be known as well. But, so far, there are no events. Maybe I have to wait at least a couple of days...

@Lurker00
Author

@xZetsubou Please help me to understand, because I found nothing in the sources.

I looked into the add_data method in pytuya. It calls parse_header and unpack_message, which may raise an exception because the self.buffer content is invalid. But then add_data will be called with new data, while self.buffer still contains that invalid content. Why shouldn't it raise exceptions again and again, making the buffer grow infinitely?

I can't find a line where the buffer is reset because of its invalid content!

@xZetsubou
Owner

xZetsubou commented Apr 25, 2024

Because this was never an issue before adding support for protocol 3.5, LocalTuya never expected devices to send a payload millions of bytes in size, so there is no handling for it.

And this is what I meant by saying I may make changes to ignore it: add a handler for this and reset the buffer when it happens. But I was curious what this massive payload contains.

But then add_data will be called with a new data

Note: when we raise an error up to data_received, it will disconnect from the device and reset the instances, and the buffer will be reset as well.

@xZetsubou
Owner

Does the invalid payload still show up?

@Lurker00
Author

No, nothing yet.

@Lurker00
Author

It happened, and I was right: both signatures were found in one good message! The message is from a gateway:

2024-04-26 11:55:56.014 WARNING (MainThread) [custom_components.localtuya.core.pytuya] [bf5...wrf] Both signatures found: 168-168
b'000055aa00006699000000080000009800000000ebfe429b975af6d1c7d6fe44cf061b06cabf81b5cda4b0476182d909217f598bf231fd0bf48a83284240472fc091f1606bc38cd222459be9b31ebe6f204e0c09922ba75fbbf4f583b8cdc2ade73baef44ef75e27bbd3e114b0d7ce36e62dcf49c610f4d59b2cb09f10aa456ac6ec5bb8001a80ff06c02e7993a465c26d623a973e97db8869d223a0a8a2afab26e7da4f0000aa55'
b'000055aa00006699000000080000009800000000ebfe429b975af6d1c7d6fe44cf061b06cabf81b5cda4b0476182d909217f598bf231fd0bf48a83284240472fc091f1606bc38cd222459be9b31ebe6f204e0c09922ba75fbbf4f583b8cdc2ade73baef44ef75e27bbd3e114b0d7ce36e62dcf49c610f4d59b2cb09f10aa456ac6ec5bb8001a80ff06c02e7993a465c26d623a973e97db8869d223a0a8a2afab26e7da4f0000aa55'

It has "00006699" as the seqno, "00000008" as the cmd, and "00000098" as the length (152 bytes). There is no decoded message in the log, because I didn't turn on debug for it. But there are no other errors, meaning it was decoded and dispatched.

The code I run:
  def add_data(self, data):
      """Add new data to the buffer and try to parse messages."""
      self.buffer += data

      header_len_55AA = struct.calcsize(MESSAGE_RECV_HEADER_FMT)
      header_len_6699 = struct.calcsize(MESSAGE_HEADER_FMT_6699)

      header_len = header_len_55AA
      prefix_len = len(PREFIX_55AA_BIN)

      while self.buffer:
          prefix_offset_55AA = self.buffer.find(PREFIX_55AA_BIN)
          prefix_offset_6699 = self.buffer.find(PREFIX_6699_BIN)
          if prefix_offset_55AA > 0 or prefix_offset_6699 > 0:
              self.warning("Odd bytes, 6699: %d, 55AA: %d, buffer: %d %r", prefix_offset_6699, prefix_offset_55AA, len(self.buffer), binascii.hexlify(self.buffer))

          if prefix_offset_55AA < 0 and prefix_offset_6699 < 0:
              self.warning("Odd data: %d-%d\n%r\n%r", len(data), len(self.buffer), binascii.hexlify(data), binascii.hexlify(self.buffer))
              self.buffer = self.buffer[1 - prefix_len :]
          else:
              if prefix_offset_55AA < 0:
                  prefix_offset = prefix_offset_6699
              elif prefix_offset_6699 < 0:
                  prefix_offset = prefix_offset_55AA
              else:
                  prefix_offset = (
                      prefix_offset_55AA if prefix_offset_55AA < prefix_offset_6699 else prefix_offset_6699
                  )
                  self.warning("Both signatures found: %d-%d\n%r\n%r", len(data), len(self.buffer), binascii.hexlify(data), binascii.hexlify(self.buffer))

              header_len = (
                  header_len_6699 if prefix_offset == prefix_offset_6699 else header_len_55AA
              )

          # Check if enough data for message header
          if len(self.buffer) < header_len:
              break

          self.buffer = self.buffer[prefix_offset:]
          header = parse_header(self.buffer)
          if header.length > 2000:
              self.error("Packet size is too big: %d, size: %d buffer: %r", header.length, len(self.buffer), binascii.hexlify(self.buffer))

          hmac_key = self.local_key if self.version >= 3.4 else None
          no_retcode = False
          msg = unpack_message(
              self.buffer,
              header=header,
              hmac_key=hmac_key,
              no_retcode=no_retcode,
              logger=self,
          )
          self.buffer = self.buffer[header.total_length:]
          self._dispatch(msg)
The gateway details:

localtuya-944c1616039a1d1ec6b9745179bff74e-ВС_ Wired Zigbee gateway-6fd9643a958b0dc41ebc2e63ecb38c1f.json

So please implement the full check of the signature offsets, like I do, including the checks for -1, in case of damaged data!

Meanwhile, I'll turn the debugging on, because I'm curious what the gateway is sending.

@xZetsubou
Owner

Assuming that both of them exist, how would this cause an issue? "prefix_offset" will prioritize 55aa over 6699, but for some reason it actually hasn't found either of them in your payload, since both of them returned -1.

So instead we can add a check before the data goes to parse_header: if both prefixes failed to be found, then ignore it.

@Lurker00
Author

Assuming that both of them exist, how would this cause an issue? "prefix_offset" will prioritize finding 55aa over 6699

If both values can be found in one message, why can't "000055AA" be found in a 6699 message? This log record is a proof of the concept. It means the check that both offsets are != -1 is a must, and the lowest offset should be taken.

Or would you rather wait for proof of "000055AA" being found inside a 6699 message?

I strongly believe that good code should check for all possible values without presuming "it will never happen" when the data comes from untrusted sources.

@Lurker00
Author

Can you decode that payload? You have the local key in the diagnostics data.

@xZetsubou

This comment was marked as off-topic.

@Lurker00
Author

"else" means that it didn't find any of them?

"else" there means "both found" (both are not < 0), and that's what the log record says. I don't understand why you think the opposite...

@xZetsubou
Owner

xZetsubou commented Apr 26, 2024

Probably because I'm half asleep 👀 sorry for that, not sure why I thought that either. I'll look into this in depth later. Thanks for the debugging ❤️ really useful information.

edit: btw, was the error raised after this payload?

@Lurker00
Author

btw, was the error raised after this payload?

No, because it was correctly handled :)

@xZetsubou
Owner

xZetsubou commented Apr 26, 2024

I can't decode the payload because protocols >= 3.4 have a session local key that changes every time we connect to the device. However, the CMD is 8, which is a STATUS payload.

edit: I'm pretty sure this is unrelated to the massive payload issue and this message was handled normally. When I wake up I'll double-check this.

@xZetsubou
Owner

I can understand your point that the offset may cause issues if 55aa exists inside a 6699 message, although no reported issues have been related to this matter.

So let's see: if 55aa existed inside a 6699 message and we took the 6699 offset, would this break the data?

I think that after changing the buffer offset to use header.total_length, we can just remove prefix_offset; it is no longer required.

Technically, I added prefix_offset only for the buffer offset, so everything should work as intended like this:

add_data

    def add_data(self, data):
        """Add new data to the buffer and try to parse messages."""
        self.buffer += data

        header_len = struct.calcsize(MESSAGE_RECV_HEADER_FMT)
        while self.buffer:
            # Check if enough data for message header
            if len(self.buffer) < header_len:
                break

            header = parse_header(self.buffer, logger=self)
            hmac_key = self.local_key if self.version >= 3.4 else None
            no_retcode = False
            msg = unpack_message(
                self.buffer,
                header=header,
                hmac_key=hmac_key,
                no_retcode=no_retcode,
                logger=self,
            )
            self.buffer = self.buffer[header.total_length :]
            self._dispatch(msg)

We can also add these lines to always take the minimum prefix. But then again, is there any point in adding them when we have already fixed the buffer offset? Because the main issue was that the buffer wasn't being cleaned correctly before.

takes the lowest prefix

prefix_offset_55AA = self.buffer.find(PREFIX_55AA_BIN)
prefix_offset_6699 = self.buffer.find(PREFIX_6699_BIN)
prefixes = (prefix_offset_6699, prefix_offset_55AA)
prefix_offset = min(prefix for prefix in prefixes if not prefix < 0)
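One caveat with that one-liner: when neither prefix is found, both offsets are -1, the generator is empty, and min() raises ValueError. A guarded sketch (hypothetical helper name):

```python
def lowest_prefix(prefix_offset_55AA: int, prefix_offset_6699: int) -> int:
    """Return the lowest non-negative prefix offset, or -1 when neither
    prefix was found, instead of letting min() raise ValueError on an
    empty sequence."""
    valid = [p for p in (prefix_offset_55AA, prefix_offset_6699) if p >= 0]
    return min(valid) if valid else -1


assert lowest_prefix(6, -1) == 6    # only 55AA found
assert lowest_prefix(6, 0) == 0     # both found: take the earlier one
assert lowest_prefix(-1, -1) == -1  # neither found: caller should resync
```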

Note that having 6699 in 55aa messages was already taken into account here:

def parse_header(data, logger=_LOGGER):
    """Unpack bytes into a TuyaHeader."""
    if data[:4] == PREFIX_6699_BIN:
        fmt = MESSAGE_HEADER_FMT_6699
    else:
        fmt = MESSAGE_HEADER_FMT_55AA

@Lurker00
Author

So let's see if 55aa existed in 6699 and we take 6699 offset would this break the data?

It will definitely break if it happens. But waiting is a waste of time: it is the case when the seqno is 0x55AA, and it seems all my devices have already passed that boundary. I have had many such exceptions in the past.

Offtopic

Looks like you have no experience writing software for something like a missile, where it is not possible to test all the cases in real life! 😄

The simplified version of your code, without any search, is good, presuming the data is always correct. But, anyway, I'd put an explicit check for PREFIX_55AA_BIN at line 461, and raise an exception if no known signature is found at the beginning of the buffer. Consider also a new device with a new signature, not only incorrect data!

@xZetsubou
Owner

xZetsubou commented Apr 27, 2024

For sure, I have never had that experience 😄 I always go with the saying "if it's not broken, don't fix it".

Then let's go with your suggestion. However, let's not pass the data to parse_header unless it's valid; the check should happen before that, most likely in add_data, with warnings added when it triggers.

xZetsubou added a commit that referenced this issue Apr 29, 2024
* Reset the buffer if we can't find valid message.
* Ensure that the prefix at the start of the message.
@xZetsubou xZetsubou added the master/next-release Fixed in master branch, Will be ready in the next release label Apr 29, 2024
@Lurker00
Author

This is an example of 55AA inside 6699, finally:

2024-04-29 04:26:43.813 WARNING (MainThread) [custom_components.localtuya.core.pytuya] [bf0...sti] Both signatures found: 54-54
b'000066990000000055aa0000000900000020c6501291af72d32c2bdd6323fbadf0cb999b39be028bf48ca5e6c14fe2a5390500009966'
b'000066990000000055aa0000000900000020c6501291af72d32c2bdd6323fbadf0cb999b39be028bf48ca5e6c14fe2a5390500009966'

It would produce exactly the exception in the first post.
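The mechanism can be verified standalone: searching for the 55AA prefix bytes in that dump finds a match at offset 6, and unpacking the ">4I" receive-header format from there yields the same (21930, 9, 32, huge-length) shape as the tracebacks above, with ciphertext bytes read as a multi-gigabyte length:

```python
import binascii
import struct

# The 54-byte 6699 message from the log above.
msg = binascii.unhexlify(
    "000066990000000055aa0000000900000020"
    "c6501291af72d32c2bdd6323fbadf0cb999b39be028bf48c"
    "a5e6c14fe2a5390500009966"
)

# find() locates the accidental 55AA sequence inside the 6699 message...
off = msg.find(b"\x00\x00\x55\xaa")
assert off == 6

# ...and parsing ">4I" (prefix, seqno, cmd, length) from there reproduces
# the (21930, 9, 32, <garbage>) tuple seen in the DecodeError logs.
prefix, seqno, cmd, length = struct.unpack_from(">4I", msg, off)
assert (prefix, seqno, cmd) == (0x55AA, 9, 0x20)
assert length == 0xC6501291  # 3,327,136,401 bytes "claimed"
```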

@Lurker00
Author

This is an example of a damaged packet:

2024-04-28 23:54:10.791 WARNING (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Odd data: 52-52
b'dca1dc51e79224565429963025bf2a961571b7549fd87e4f25f57a585e5e9c5388501b021dbe909b2df8155c39f580510000aa55'
b'dca1dc51e79224565429963025bf2a961571b7549fd87e4f25f57a585e5e9c5388501b021dbe909b2df8155c39f580510000aa55'

Obviously, it is the tail of a message, but, unlike the cases in #213, I have no log records of attempts to parse the head.

This damaged packet was produced by a cheap BLE T&H sensor with a display, like this one. They are useless for automation, because they send measurements very rarely, but sometimes they dump everything they've collected over the past hours at once. Here is a small extract from when two such sensors decided to flush their history:

T&H sensors output
2024-04-28 23:54:09.738 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337646,"data":{"dps":{"1":231,"2":38},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:09.753 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337646,"data":{"dps":{"1":226,"2":39},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:09.760 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337646,"data":{"dps":{"1":227,"2":40},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:09.995 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337648,"data":{"dps":{"1":258,"2":35},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:10.184 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337648,"data":{"dps":{"1":260,"2":35},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:10.303 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337648,"data":{"dps":{"1":262,"2":34},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:10.404 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337648,"data":{"dps":{"1":265,"2":34},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:10.483 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337648,"data":{"dps":{"1":265,"2":37},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:10.791 WARNING (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Odd data: 52-52
b'dca1dc51e79224565429963025bf2a961571b7549fd87e4f25f57a585e5e9c5388501b021dbe909b2df8155c39f580510000aa55'
b'dca1dc51e79224565429963025bf2a961571b7549fd87e4f25f57a585e5e9c5388501b021dbe909b2df8155c39f580510000aa55'
2024-04-28 23:54:10.902 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":220,"2":32},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:11.025 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":216,"2":32},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:11.047 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":266,"2":40},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:11.117 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":267,"2":39},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:11.128 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":216,"2":32},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:11.220 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":267,"2":38},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:11.233 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":217,"2":31},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:11.323 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":220,"2":33},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:11.335 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":268,"2":39},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:11.427 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":218,"2":32},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:11.507 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":268,"2":42},"cid":"uuid03a62e05881c"}}'
2024-04-28 23:54:11.526 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf1...una] Deciphered data = '{"protocol":4,"t":1714337650,"data":{"dps":{"1":218,"2":31},"cid":"uuid6e31ed40a28d"}}'
2024-04-28 23:54:11.609 INFO (MainThread) [custom_components.localtuya.core.pytuya] [bf2...38k] Deciphered data = '{"protocol":4,"t":1714337649,"data":{"dps":{"1":273,"2":41},"cid":"uuid03a62e05881c"}}'

Note the data rate: several messages per second! It lasted for many seconds, with that damaged message in between.

So, there are real-life confirmations that the checks I suggested and you've implemented are all required.

@xZetsubou
Owner

So this will be handled with the changes made in bb90383. And we can also add the check you suggested in #213

they dump all they've collected for past hours at once

Now it makes sense, given the amount of payload they throw.

@Lurker00
Author

we can also add the check you suggested in #213

Yes, please do. As I said before, there were lots of presumptions in the original code, but the logs showed they were all wrong.

@xZetsubou
Owner

By the way, I did try to parse the message you got; as expected, it raised an error about a damaged prefix.

DecodeError: Header prefix wrong! 3701595217 is not 21930 or 26265

@Lurker00
Author

Right. But an exception is a performance drop, and it may also lose the next packet already buffered in the socket.


This issue was closed because it was resolved in the release:

@github-actions github-actions bot added stale and removed master/next-release Fixed in master branch, Will be ready in the next release stale labels May 26, 2024