netty - corrupted messages with netty multi-client #5371
Comments
Just some comments and a possible lead. It makes sense that your subsequent messages read fine with a length-field-based encoder/decoder: if one payload gets corrupted, the framing is unaffected. If you were using a variable-length (varint) framer instead, you would probably see complete corruption after the first corrupted byte. Given that, the fact that the following messages still arrive intact leads me to think the corruption is occurring before or after the prepender, in the string encoding/decoding.
The channel read looks pretty broken; if you are dealing with JSON-only payloads, consider JsonObjectDecoder.
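For context, Netty's JsonObjectDecoder splits the inbound byte stream at top-level JSON object boundaries instead of relying on a length prefix. A simplified, self-contained sketch of that brace-matching idea (this is an illustration, not Netty's actual implementation — a real decoder must also skip braces inside string literals):

```java
import java.util.ArrayList;
import java.util.List;

public class JsonSplitter {
    // Split a stream of concatenated top-level JSON objects by brace depth.
    public static List<String> split(String stream) {
        List<String> objects = new ArrayList<>();
        int depth = 0, start = 0;
        for (int i = 0; i < stream.length(); i++) {
            char c = stream.charAt(i);
            if (c == '{') {
                if (depth == 0) start = i;  // start of a new top-level object
                depth++;
            } else if (c == '}') {
                depth--;
                if (depth == 0) objects.add(stream.substring(start, i + 1));
            }
        }
        return objects;
    }

    public static void main(String[] args) {
        System.out.println(split("{\"a\":1}{\"b\":{\"c\":2}}"));
        // [{"a":1}, {"b":{"c":2}}]
    }
}
```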
Let me close this ... If you still think it's an issue, please reopen.
A fairly typical Netty client in a multithreaded environment randomly receives corrupted messages: roughly 20-30 corrupted messages per 30-40k messages transferred.
The communication protocol is simple: each packet contains a message in the following format:
| length | string |
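That length-prefixed framing can be sketched with plain `java.nio` (the class name and the 4-byte length field are assumptions for illustration; the issue does not state the actual field width):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedCodec {
    // Encode a string as | 4-byte length | UTF-8 payload |.
    public static byte[] encode(String msg) {
        byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
        ByteBuffer buf = ByteBuffer.allocate(4 + payload.length);
        buf.putInt(payload.length);
        buf.put(payload);
        return buf.array();
    }

    // Decode one frame starting at the buffer's current position.
    public static String decode(ByteBuffer buf) {
        int len = buf.getInt();
        byte[] payload = new byte[len];
        buf.get(payload);
        return new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) {
        ByteBuffer stream = ByteBuffer.wrap(encode("{\"C\":\"SS\"}"));
        System.out.println(decode(stream)); // prints {"C":"SS"}
    }
}
```

In Netty this is typically handled by `LengthFieldPrepender` on the outbound side and `LengthFieldBasedFrameDecoder` on the inbound side.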
We create ten Netty connections to the same server: ten Fetcher instances are created, each containing its own Netty FeedCommunicatorImpl connection (the FeedCommunicatorImpl class is given below). Each Fetcher sends a command to the server saying "I want to receive all messages with a given ID", and that Fetcher then receives only messages with that ID from the server.
All works well, except that we randomly receive corrupted messages. An example of a corrupted message is given below.
I can confirm that the data the server sends is OK: it is a paid service, and it offers a control application that shows each specific message sent to us, so we can verify that the server sends correct messages.
The Netty FeedCommunicatorImpl puts each message in a queue, and on the other side of the queue the MessageHandler converts the received String messages to JSON.
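A minimal sketch of that handoff, assuming a `BlockingQueue` between the channel handler and the MessageHandler thread (the class name, `looksLikeJson` helper, and sample payloads are illustrative, not from the reporter's code):

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

public class QueueHandoff {
    // Shared queue: the Netty handler would enqueue decoded strings here.
    static final BlockingQueue<String> queue = new LinkedBlockingQueue<>();

    // Cheap sanity check the MessageHandler could run before JSON parsing.
    static boolean looksLikeJson(String msg) {
        return msg.startsWith("{") && msg.endsWith("}");
    }

    public static void main(String[] args) throws InterruptedException {
        // Producer side (in the real setup, the Netty channel handler).
        queue.put("{\"C\":\"SS\",\"bmId\":43}");
        queue.put("?{\"C\":\"SS\",\"bmId\":43,\"tok"); // corrupted, like the logs

        // Consumer side (the MessageHandler thread).
        while (!queue.isEmpty()) {
            String msg = queue.take();
            System.out.println(looksLikeJson(msg) ? "Response is : " + msg
                                                  : "CORRUPT MESSAGE: " + msg);
        }
    }
}
```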
The strange thing is that a corrupted message does not affect the following messages in any way; they arrive correct. In a length-based protocol in an async environment I would expect the first corrupted message to completely derail the data exchange, but that does not happen: the next message is usually correct.
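That observation is actually consistent with fixed-width length-prefix framing: a flipped byte inside one payload garbles only that payload, because the intact length field still points the decoder at the exact start of the next frame. A small stdlib demonstration (a 4-byte length prefix is assumed for illustration):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

public class CorruptionDemo {
    // Build one | 4-byte length | UTF-8 payload | frame.
    static byte[] frame(String s) {
        byte[] p = s.getBytes(StandardCharsets.UTF_8);
        ByteBuffer b = ByteBuffer.allocate(4 + p.length);
        b.putInt(p.length).put(p);
        return b.array();
    }

    public static void main(String[] args) {
        byte[] f1 = frame("{\"a\":1}");
        byte[] f2 = frame("{\"b\":2}");
        ByteBuffer stream = ByteBuffer.allocate(f1.length + f2.length);
        stream.put(f1).put(f2).flip();

        // Corrupt one payload byte of the first frame (offset 4 is the first
        // byte after the length prefix); the length prefix stays intact.
        stream.put(4, (byte) '?');

        int len1 = stream.getInt();
        byte[] p1 = new byte[len1];
        stream.get(p1);
        System.out.println(new String(p1, StandardCharsets.UTF_8)); // ?"a":1}

        // The decoder still lands on the next frame boundary, so the
        // following message decodes cleanly.
        int len2 = stream.getInt();
        byte[] p2 = new byte[len2];
        stream.get(p2);
        System.out.println(new String(p2, StandardCharsets.UTF_8)); // {"b":2}
    }
}
```

A corrupted *length* byte, by contrast, would desynchronize framing for everything that follows, which is why the reporter's "only one message is bad" pattern points at the payload (string encode/decode) path rather than the framer.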
Below are examples of a corrupted and a correct message. A correct message is a JSON string and should start with a "{" character.
Corrupted message
"n":"id: 29539990; name: Odd III vs Kongsberg; ss: 1"}}_______________MESSAGE ERROR DETAILS:
2016-06-07 20:56:17 ERROR MessageHandler:144 -[THREAD ID= MessageHandler-7]- CORRUPT MESSAGE: >{"C":"SS","bmId":43,"tok_______________MESSAGE ERROR DETAILS:
2016-06-07 20:58:26 ERROR MessageHandler:144 -[THREAD ID= MessageHandler-5]- CORRUPT MESSAGE: ?{"C":"SS","bmId":43,"tok_______________MESSAGE ERROR DETAILS:
2016-06-07 23:01:11 ERROR MessageHandler:144 -[THREAD ID= MessageHandler-7]- CORRUPT MESSAGE: �{"C":"SS","bmId":43,"tok_______________MESSAGE ERROR DETAILS:
2016-06-08 03:03:51 ER
Correct message
2016-06-08 09:35:34 INFO MessageHandler:82 -[THREAD ID= MessageHandler-1]- Response is : {"C":"SS","bmId":43,"tok":"987987987516","inst":"exefeed","eId":"29554494","dt":"06-08-2016 07:30:48","cmd":"U","flg":23,"obj":[{"num":283,"sts":"Aces=0:0|Double Faults=4:2|Win % 1st Serve=56:57|Break Point Conversions=50:60|INFO=30:0|","st":"11114","stn":"Valeriya Zeleva PointScore","sc":{"POINTS":[30,0]},"bt":[{"n":"HA","o":[{"n":"1","v":1.166},{"n":"2","v":4.5}]},{"n":"HA_S2","o":[{"n":"1","v":1.666},{"n":"2","v":2.1}]},{"n":"RG_S2","h":3,"o":[{"n":"1","v":1.8},{"n":"2","v":1.909}]},{"n":"RG_S2","h":4,"o":[{"n":"1","v":1.727},{"n":"2","v":2}]},{"n":"RG_S2","h":5,"o":[{"n":"1","v":1.666},{"n":"2","v":2.1}]}],"u":1465371049,"tm":0,"tss":"00:00:00"}]}