
Getting NORM_REMOTE_SENDER_ADDRESS event #46

Closed

AlteroCode opened this issue Nov 9, 2021 · 2 comments

@AlteroCode
AlteroCode commented Nov 9, 2021

Good day NRL team.

I'm running into an issue with the NormFileSend.cpp and NormFileRecv.cpp examples. I'm sending different files from a few containerized NormFileSend instances to a single containerized NormFileRecv instance. The transfers run back to back, not simultaneously. It works mostly fine, except that every few sends the receiver starts getting NORM_REMOTE_SENDER_ADDRESS events continuously. Sometimes this clears up on its own; other times it stays stuck until a timeout, after which send/receive resumes until it gets stuck again. I was also getting NORM_REMOTE_SENDER_RESET events before, but I fixed that with a static instanceID. I was going to try the NormServer and NormClient examples, since they align better with multiple transfers (many-to-one / one-to-many), but I saw the comment that they aren't complete yet.

Is there a way I can alleviate the NORM_REMOTE_SENDER_ADDRESS events? My guess is that the receiver gets this event because multiple senders with different IP addresses are sending, but shouldn't destroying the instance and closing out the session account for this? It almost seems that some information from the previous transfer remains when a new instance is created, and they overlap.
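For context, my receiver is essentially the event loop from the example, something like this (simplified sketch; the session address and port are placeholders):

```cpp
#include "normApi.h"
#include <cstdio>

int main()
{
    NormInstanceHandle instance = NormCreateInstance();
    // Placeholder group address/port; the real example takes these as arguments.
    NormSessionHandle session = NormCreateSession(instance, "224.1.2.3", 6003, NORM_NODE_NONE);
    NormStartReceiver(session, 1024 * 1024);  // 1 MB receive buffer space

    NormEvent event;
    while (NormGetNextEvent(instance, &event))
    {
        switch (event.type)
        {
            case NORM_REMOTE_SENDER_NEW:
                fprintf(stderr, "new remote sender detected\n");
                break;
            case NORM_REMOTE_SENDER_ADDRESS:
                // This is the event that floods in every few transfers.
                fprintf(stderr, "remote sender address changed\n");
                break;
            case NORM_RX_OBJECT_COMPLETED:
                fprintf(stderr, "file received\n");
                break;
            default:
                break;
        }
    }
    NormDestroyInstance(instance);
    return 0;
}
```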
Any advice or examples would be greatly appreciated.
Thank you!

@bebopagogo
Collaborator

FYI - the NormFileSend and NormFileRecv examples are less complete file-transfer apps than norm/examples/normCast.cpp.

My hunch is that with multiple senders you did not set a unique NormNodeId for each sender. That is controlled by the NormCreateSession() call. The default parameter (NORM_NODE_NONE, I think) causes the underlying NORM node to auto-generate a NormNodeId based on the host's IP address table, iterating through the available interfaces and their assigned addresses. In your containerized environment, all of your nodes may end up with the same NormNodeId if that default logic resolves to the same IP address in each container.
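For example, something along these lines gives each sender an explicit id rather than the auto-generated one (sketch; the session address/port values are placeholders):

```cpp
#include "normApi.h"

// Each sender container supplies its own unique id (e.g. from configuration)
// instead of relying on the NORM_NODE_NONE default, which derives the id from
// the host's IP address table and can collide across containers.
NormSessionHandle CreateSenderSession(NormInstanceHandle instance, NormNodeId uniqueId)
{
    return NormCreateSession(instance, "224.1.2.3", 6003, uniqueId);  // placeholder addr/port
}
```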

@AlteroCode
Author

AlteroCode commented Nov 10, 2021

@bebopagogo, indeed, that looks like it was my issue. The default value was set to 1. I'm now taking the container's IP address and using its last two octets as the NormNodeId, which reduced the events to only one occurrence per transfer.
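Roughly like this (sketch; the address is hard-coded here, but in the container it comes from the assigned interface):

```cpp
#include "normApi.h"
#include <arpa/inet.h>
#include <cstdint>

// Build a NormNodeId from the last two octets of the container's IPv4 address.
// Note this only stays unique while no two containers share those two octets.
NormNodeId NodeIdFromAddress(const char* ipv4)  // e.g. "10.0.3.17" -> 0x0311
{
    in_addr addr;
    inet_pton(AF_INET, ipv4, &addr);
    uint32_t host = ntohl(addr.s_addr);
    return (NormNodeId)(host & 0xFFFF);  // low 16 bits = last two octets
}
```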
I'm running into one more issue, if you don't mind helping me with it. Same setup: out of 100 file sends, at least 2-3 transfers fail. By fail I mean the transfer starts, then gets stuck mid-transfer until a timeout. I reduced the timeout by setting NormSetDefaultRxRobustFactor to 1, which quickly times out the failed/stuck transfer and moves on to the next one. Are you aware of what might be causing a transfer to get stuck, or of a mechanism to alleviate this and recover? Interestingly, when one sender sends to many receivers, the success rate is 100% no matter how many transfers are initiated, but when many senders send multiple files to one receiver, the success rate drops to about 97-98%. I'm still looking through the developer guide for a function that might help with this, but I'm hoping you can give me a tip on which one to go with, if such functions exist. I've tried adding flow control and parity and tweaking the buffer sizes and other defaults passed during session creation; so far some changes cause more lost transfers, and I can't seem to find the tweak that eliminates that small failure rate.
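For reference, these are the knobs I've been adjusting, roughly like so (the values here are illustrative, not a working configuration):

```cpp
#include "normApi.h"

void TuneSession(NormSessionHandle session)
{
    // Fewer NACK repeat cycles before an inactive/stuck sender times out.
    NormSetDefaultRxRobustFactor(session, 1);

    // Sender-side knobs: proactive FEC and timer-based flow control.
    NormSetAutoParity(session, 4);     // send 4 parity segments per FEC block proactively
    NormSetFlowControl(session, 2.0);  // scale factor for the flow-control timeout

    // Buffer and FEC block parameters are passed when the sender starts:
    // 1 MB tx buffer, 1400-byte segments, 64 data + 16 parity segments per block.
    // The sessionId argument (1 here) is the static "instanceID" mentioned above.
    NormStartSender(session, 1, 1024 * 1024, 1400, 64, 16);
}
```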

Thank you! I appreciate all the help provided.
