
GCS_MAVLink: schedule current waypoint rather than immediate send #16659

Merged
merged 1 commit into ArduPilot:master from peterbarker:pr/waypoint-schedule on Feb 23, 2021

Conversation

peterbarker
Contributor

This message may not fit in our outgoing buffer
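
The change sketched below follows the title: rather than writing MISSION_CURRENT to the channel immediately, where a full transmit buffer means the generated MAVLink helper silently drops it, mark it for the GCS message scheduler, which emits it once there is payload space. Names are ArduPilot-style but this should be read as a sketch, not the exact diff:

```cpp
// Before: write MISSION_CURRENT directly to the channel. If the outgoing
// buffer is full, the generated MAVLink send helper silently drops it.
mavlink_msg_mission_current_send(chan, mission.get_current_nav_index());

// After: mark the message for the GCS scheduler instead. It is emitted
// once the channel has payload space, so the update is not lost to a
// momentarily full buffer.
send_message(MSG_CURRENT_WAYPOINT);
```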
@peterbarker
Contributor Author

Tested by graphing MISSION_CURRENT.seq while doing a wp set
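
For reference, a MAVProxy session along those lines might look like this (standard MAVProxy commands; the waypoint number is arbitrary):

```
module load graph
graph MISSION_CURRENT.seq
wp set 3
```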

@tridge tridge merged commit c5e62eb into ArduPilot:master Feb 23, 2021
@peterbarker peterbarker deleted the pr/waypoint-schedule branch February 23, 2021 01:43
@WickedShell
Contributor

This looks like a serious regression, actually. This send was intentionally acting as our ACK. The problem with queuing it like this is that by the time we send it, it may no longer be current; by the time it actually goes out we may already have moved on three items. But it is important to get an ack when you target a DO_LAND_START, for example, so you know the vehicle actually jumped to the item. Without an ack of some form it's impossible to validate that you jumped to any DO item, which means a GCS has to trust the user to manage error detection. Before this change you could at least detect that we definitely reacted to the request. Packet loss and full buffers mean the jump may have happened without an ack, but if you got an ack it definitely happened.

If we can't send here we should queue, but we need some form of ack here.
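
A minimal sketch of that fallback, assuming ArduPilot-style helpers (HAVE_PAYLOAD_SPACE, send_message, MSG_CURRENT_WAYPOINT); the method name and call site are hypothetical, not the actual follow-up patch:

```cpp
void GCS_MAVLINK::ack_mission_set_current(const AP_Mission &mission)
{
    if (HAVE_PAYLOAD_SPACE(chan, MISSION_CURRENT)) {
        // Buffer has room: send immediately, so the GCS receives a timely
        // ack carrying the sequence number we actually jumped to.
        mavlink_msg_mission_current_send(chan, mission.get_current_nav_index());
    } else {
        // Buffer is full: fall back to the scheduled send. Note the seq is
        // then read at transmit time and may no longer match the jump target.
        send_message(MSG_CURRENT_WAYPOINT);
    }
}
```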

@peterbarker
Contributor Author

> This looks like a serious regression actually. This was intentionally acting as our ACK. The problem with queuing it like this is …

Follow-up: #16714
