
Make BinSync Safer for Unstable Internet #142

Closed
mahaloz opened this issue Jun 17, 2022 · 6 comments

Comments

@mahaloz
Member

mahaloz commented Jun 17, 2022

Problem

BinSync currently assumes the user always has an internet connection while connected to a remote repository. If a pull from or push to the remote fails at any point, the BinSync worker thread dies, leaving the user unable to use BinSync at all.

Proposal

A good solution will need to implement error handling in a variety of places. One example location is here:
https://github.com/angr/binsync/blob/a1760e2ce6468efaffe41371163d609ff0cb4064/binsync/common/controller.py#L200

This call should be wrapped in a try-catch for the case where the update fails, but that is only the start of the problem. When the catch is hit, the client needs to be marked as "disconnected" until the next cycle checks the connection again. While a client is disconnected, the UI should not update, but it should also not be wiped clean: whatever is shown should stay there, and the user should still be able to pull local changes from the shown UI. On each subsequent cycle, we check whether the connection has been re-established.
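The failure handling described above could look roughly like the following sketch. The names (`Client`, `update`, `active_remote`, `last_ui_state`) are illustrative stand-ins, not binsync's actual API; the point is that a failed pull flips a connectivity flag without touching the last-known UI state:

```python
# Hypothetical sketch of the proposed error handling. Client, update,
# active_remote, and last_ui_state are illustrative names, not the
# real binsync API.

class Client:
    """Minimal stand-in for binsync's git client."""

    def __init__(self):
        self.active_remote = True   # proposed connectivity flag
        self.last_ui_state = {}     # whatever the UI last rendered

    def _pull(self):
        # Real code would do a git fetch/pull here; this stand-in
        # raises to simulate a dropped connection.
        raise ConnectionError("remote unreachable")

    def update(self):
        try:
            self._pull()
            self.active_remote = True
            return True
        except ConnectionError:
            # Mark disconnected, but leave last_ui_state untouched so
            # the UI keeps showing stale (still pullable) data.
            self.active_remote = False
            return False


client = Client()
client.last_ui_state = {"func_0x400000": "renamed_main"}
client.update()  # fails: client marked disconnected, UI state intact
```

The key design point is that disconnection is recorded as state to be re-checked next cycle, rather than letting the exception kill the worker thread.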

We also need to redesign how remote connectivity interacts with user changes. Regardless of whether the user is connected to a remote, every change they make should produce a commit in the local git repo. That way, when they reconnect, we can finally push all of those queued commits with the next update.
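A minimal sketch of that "commit locally, push later" idea, assuming hypothetical names (`LocalRepo`, `record_change`, `try_push`) rather than binsync's real interface:

```python
# Hypothetical sketch: changes always become local commits; pushing is
# deferred until the remote is reachable. Names are illustrative.

class LocalRepo:
    def __init__(self):
        self.commits = []       # commits made locally, in order
        self.pushed_upto = 0    # index one past the last pushed commit
        self.remote_up = False  # connectivity state

    def record_change(self, message):
        # Always commit locally, connected or not.
        self.commits.append(message)

    def try_push(self):
        # Push any unpushed commits only when the remote is reachable;
        # return how many commits went out.
        if not self.remote_up:
            return 0
        unpushed = len(self.commits) - self.pushed_upto
        self.pushed_upto = len(self.commits)
        return unpushed


repo = LocalRepo()
repo.record_change("rename sub_400000 -> main")
repo.record_change("add comment at 0x401000")
offline_pushed = repo.try_push()  # offline: nothing leaves the machine
repo.remote_up = True             # connection restored
online_pushed = repo.try_push()   # both queued commits go out together
```

This mirrors ordinary git usage: local history accumulates offline, and the push simply transfers whatever backlog exists once connectivity returns.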

@ltfish
Collaborator

ltfish commented Jun 17, 2022

I feel the design philosophy of binsync is very similar to Git: You can totally work locally without pushing to or pulling from a remote. Failing a push or a pull should not matter to the local user at all. I don't see why "this is only the start of the problem."

@mahaloz
Member Author

mahaloz commented Jun 17, 2022

The phrase "the start of the problem" refers to all the other locations where a remote operation can fail; there are many. To fix this, each one needs to be found, made to fail gracefully, and surfaced to the user through our control panel (a color change) so they know they are no longer connected.

@ltfish
Collaborator

ltfish commented Jun 18, 2022

Are there multiple places in the code base that would trigger a pull or a push?

@mahaloz mahaloz self-assigned this Jun 24, 2022
@mahaloz
Member Author

mahaloz commented Jun 29, 2022

Progress

A lot of the internals changed in #146, which updated the sync model to actually store remote data in local branches. Before this update, we accessed tree files by commit over the remote; that is no longer the case.

Now there is only a small number of places left to update. First, update the try-catch structures in the Client around pull/push so that on failure the client records that the remote is no longer active. On each iteration, the Controller should check that same flag and decide whether to turn the UI yellow (for an inactive remote).

As an optimization, set active_remote in pull and skip push entirely while the remote is not active.
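The pull/push guard described above can be sketched as follows. `SyncClient` and its methods are hypothetical names for illustration; the real Client differs:

```python
# Illustrative sketch of the optimization: pull() owns the
# active_remote flag, and push() bails out early while it is False.
# SyncClient is a hypothetical name, not binsync's actual Client.

class SyncClient:
    def __init__(self):
        self.active_remote = True
        self.pushes_attempted = 0

    def pull(self, remote_ok):
        # remote_ok stands in for whether the network call succeeded;
        # real code would set this from the try-catch around git pull.
        self.active_remote = remote_ok
        return remote_ok

    def push(self):
        if not self.active_remote:
            return False  # skip the doomed network call entirely
        self.pushes_attempted += 1
        return True


c = SyncClient()
c.pull(remote_ok=False)
skipped = c.push()  # no network call is attempted while disconnected
```

Centralizing the flag in pull means push never has to probe the network itself, which avoids a second timeout per cycle while offline.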

@mahaloz
Member Author

mahaloz commented Jul 12, 2022

I've done a lot of work in #159 and #146 so that BinSync now has only two network-related points of failure: push and pull. I've solved most of the issues, but on master, if the client gets disconnected while using BinSync, it stalls the entire IDA UI. Need to fix this bug ASAP.
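One generic way to keep a hung network call from stalling a host UI like IDA is to run the git operation in a daemon thread with a timeout. This is only a sketch of that general technique, not the fix that landed in the referenced PRs; `pull_with_timeout` is a hypothetical helper:

```python
# Generic sketch: run a potentially hanging pull off the UI thread and
# give up after a timeout, so the host (e.g. IDA) stays responsive.
# pull_with_timeout is a hypothetical helper, not binsync code.
import threading
import time


def pull_with_timeout(pull_fn, timeout=5.0):
    result = {"ok": False}

    def worker():
        try:
            pull_fn()
            result["ok"] = True
        except Exception:
            result["ok"] = False

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    t.join(timeout)
    # If the thread is still alive after the timeout, the pull is hung;
    # report failure and let the daemon thread die with the process.
    return result["ok"] and not t.is_alive()


ok = pull_with_timeout(lambda: None)                     # fast pull
hung = pull_with_timeout(lambda: time.sleep(5), timeout=0.1)  # hung pull
```

The trade-off is that the abandoned thread may still hold the repo lock until it dies, so real code would also need to serialize subsequent git operations behind it.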

@mahaloz
Member Author

mahaloz commented Aug 8, 2022

Turns out this issue was fixed with #171 from #173.

@mahaloz mahaloz closed this as completed Aug 8, 2022