
warn: Cannot forward task (ID) to processing node (IP):3000: Failed sending data to the peer #23

Closed
ChipwizBen opened this issue Jun 25, 2019 · 10 comments

@ChipwizBen commented Jun 25, 2019

I'm experimenting with ClusterODM. I have a two-node test cluster set up, plus a VM running ClusterODM. Everything seems to be set up correctly; both nodes show up correctly on :10000 and via NODE LIST.

When I submit a job with split, it throws the error warn: Cannot forward task (ID) to processing node (IP):3000: Failed sending data to the peer. The error appears with both IPs alternately, across several test datasets ranging from 14 to 986 images. No GCPs on any of these. I tried the following splits on a 986-image dataset:

50 - error above
100 - error above
400 - error above
500 - job splits into FIVE parts and they only make it to one node. I'm raising this issue from my mobile, but hopefully the table below comes out OK:

| Node | Status | Queue | Version | Flags |
|------|--------|-------|---------|-------|
| 1) 192.168.1.172:3000 | Online | 5/4 | 1.5.2 | |
| 2) 192.168.1.173:3000 | Online | 0/4 | 1.5.2 | |

@pierotofy (Member)

It's likely a network configuration error. 5/4 means you're sending all tasks to node 1 but none to node 2. If you lock node 1 with:

```
NODE LOCK 1
```

and then try to send a task, what do you get?

pierotofy added the question label on Jun 25, 2019
@ChipwizBen (Author) commented Jun 25, 2019

The hosts were scripted builds and are exactly the same. All hosts are in the same rack, on the same switch.

Locking shifts the load to the other host, but with only 3 'splits' this time (which is probably normal since the split is set to 400):

| Node | Status | Queue | Version | Flags |
|------|--------|-------|---------|-------|
| 1) 192.168.1.172:3000 | Online | 0/4 | 1.5.2 | L |
| 2) 192.168.1.173:3000 | Online | 3/4 | 1.5.2 | |

@pierotofy (Member)

Mm, so when both nodes are unlocked, one node receives all the tasks.

This is odd. I wonder if it's because the network is really fast and the tasks all arrive at once, before any of them can be forwarded. The node is chosen in this function: https://github.com/OpenDroneMap/ClusterODM/blob/master/libs/nodes.js#L135. But if two tasks are racing, they can both end up on the same node, because neither is aware of the other.
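To illustrate the suspected race, here is a minimal, hypothetical Python sketch of the general problem (not ClusterODM's actual code): if two tasks both read the queue counts before either assignment is recorded, both pick the same "least busy" node.

```python
# Hypothetical queue counts as the scheduler sees them (illustrative only).
nodes = {"192.168.1.172:3000": 0, "192.168.1.173:3000": 0}

def pick_least_busy(snapshot):
    # Choose the node with the shortest queue in the observed snapshot.
    return min(snapshot, key=snapshot.get)

# Two tasks arrive at nearly the same instant: each reads the counts
# before either assignment has been recorded.
snapshot_a = dict(nodes)
snapshot_b = dict(nodes)

node_a = pick_least_busy(snapshot_a)  # 192.168.1.172:3000
nodes[node_a] += 1                    # recorded too late for task B to see

node_b = pick_least_busy(snapshot_b)  # also 192.168.1.172:3000
nodes[node_b] += 1

print(node_a == node_b)  # True: both tasks landed on the same node
```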

I think this is a separate issue and doesn't explain the Failed sending data to the peer error.

@ChipwizBen (Author)

This is still an issue. It keeps stacking the same submodel onto the same node over and over. It complains about read timeouts, yet has no trouble piling on the jobs:

```
[WARNING] LRE: submodel_0002 failed with: HTTPConnectionPool(host='localhost', port=3000): Read timed out. (read timeout=30)
[INFO]    LRE: Re-queueing submodel_0002 (retries: 5)
[INFO]    LRE: About to process submodel_0002 remotely
[INFO]    LRE: Waiting 50 seconds before processing submodel_0002
2019-07-24 20:19:35,719 DEBUG: Found 23389 points in 66.418241024s
[INFO]    LRE: submodel_0001 (da43350f-c792-4511-840e-7672e2298ebb) is still running
```

```
#> !!
1) 192.168.1.172:3000 [online] [0/4] <version 1.5.3>
2) 192.168.1.173:3000 [online] [6/4] <version 1.5.3>

#> !!
1) 192.168.1.172:3000 [online] [0/4] <version 1.5.3>
2) 192.168.1.173:3000 [online] [7/4] <version 1.5.3>
```

@pierotofy (Member)

I still don't know how to replicate this.


If you send two tasks to ClusterODM (from the NodeODM UI, don't use split-merge), are the tasks sent to the two nodes? Or do they queue on the same node (so that one node is not used at all)?

@ChipwizBen (Author)

Yes, it is correctly distributed:

```
#> !!
1) 192.168.1.172:3000 [online] [1/4] <version 1.5.3>
2) 192.168.1.173:3000 [online] [1/4] <version 1.5.3>
```

It seems there's something wrong with the splitting.

@pierotofy (Member)

Maybe it's the number of parallel connections that's tripping something on your network.

I would try patching https://github.com/OpenDroneMap/ODM/blob/09109f33f94b8e9f2ec804fec94c3a53783daf4a/opendm/remote.py#L356 by adding the parallel_uploads parameter: set it to 1, or lower it to 3, and see what happens.

https://pyodm.readthedocs.io/en/latest/#api
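A rough sketch of what that patch could look like (the call and the argument names around it are paraphrased, not copied from remote.py; only the parallel_uploads keyword is the suggested change):

```python
# Hypothetical paraphrase of the create_task() call in opendm/remote.py;
# the surrounding arguments may differ in the real file.
task = node.create_task(
    images,              # list of image file paths for this submodel
    options,             # processing options dict
    parallel_uploads=3,  # suggested change: cap concurrent upload connections
)
```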

@ChipwizBen (Author) commented Jul 25, 2019

Thanks, that helped. I set it to 3. It seemed to upload just as fast to the processing nodes with 3 connections as it did with no limit.

It's still piling everything onto one node though:

```
#> !!
1) 192.168.1.172:3000 [online] [3/4] <version 1.5.3>
2) 192.168.1.173:3000 [online] [0/4] <version 1.5.3>
```

And the console says there are two sub-models, so a rogue one has been dropped in there. This is a dev system; there are no other jobs processing. I can see from the console that it didn't try to upload any sub-model twice, so something else is going on there.

pierotofy added the possible software fault label and removed the question label on Jul 30, 2019
@pierotofy (Member) commented Jul 31, 2019

@ChipwizBen see if the changes in #32 help this issue.

Also try setting --public-address to the appropriate value when launching ClusterODM. Perhaps that's the cause of the read timeouts (why is it using "localhost"?).
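For example, something like the following when starting ClusterODM (the address here is a placeholder; use the IP or hostname that the processing nodes can actually reach):

```
node index.js --public-address http://192.168.1.170:3000
```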

@pierotofy (Member)

Closing this with the assumption that #32 fixed / helped the network issues.
