
Only allow running a single pick instance concurrently #171

Merged
merged 1 commit into bndw:develop from leonklingele:single-instance
Apr 6, 2018

Conversation

leonklingele
Collaborator

To prevent data loss, only a single pick instance should be running
at the same time.
Without this patch, if an active "pick note" editor is open and
new credentials are stored in a pick safe, closing the "pick note"
editor will overwrite all changes made in the meantime.

@@ -75,6 +76,16 @@ func (c *client) Backup() error {
return c.putObject(bytes.NewReader(data), c.Bucket, backupKey)
}

func (c *client) Lock() error {
// TODO: Implement me!
Collaborator Author


@bndw maybe you can find the time to implement this? Please merge this PR asap and fix in a separate PR :)

@bndw
Owner

bndw commented Mar 19, 2018

@leonklingele I think in order to implement locking correctly we'll need to handle a few cases. What happens if a lock is somehow orphaned? We may consider TTLs.

@leonklingele
Collaborator Author

@bndw the lock is automatically released once pick terminates. Try it yourself :)

@bndw
Owner

bndw commented Mar 20, 2018

@leonklingele My only concern is the case where the binary may not terminate, for example running pick on a remote server. In that case I believe the safe could become "permanently" locked.

@leonklingele
Collaborator Author

Updated to only fail if at least two instances of pick try to open the safe in writable mode.

for example running pick on a remote server

What do you mean by "running pick on a remote server"? pick can only be run locally?!
If you intended to say "running pick with a remote safe" (e.g. using the S3 backend), well, then the S3 backend needs to implement some proper way of ensuring the lock really is released once the pick binary terminates.
The file backend as implemented in this PR takes care that the lock doesn't become stale.

In that case I believe the safe could become "permanently" locked

I still don't see how that could work, please give some more insight to your thoughts :)

@leonklingele force-pushed the single-instance branch 3 times, most recently from 7b41995 to 4af7f25 on March 20, 2018 at 11:08
@bndw
Owner

bndw commented Mar 20, 2018

@leonklingele I was imagining running pick on a remote server that's accessed via SSH;

  • SSH to remote server
  • open a note for editing
  • connection fails

In this [edge] case, is it possible that pick may not exit and "permanently" lock the safe?

@leonklingele
Collaborator Author

Well, if your SSH connection breaks for whatever reason while pick is running (e.g. a pick note command), then yes, the pick binary might not terminate and you need to killall it manually.
The exact same issue also occurs when you're in the process of apt upgrade'ing your system and the SSH connection breaks, so I don't really see this as a problem only we face. Just killall pick and you're good to go — or consider running pick inside a tmux or screen session as it's recommended to do with apt.
Do you have a better solution? Your described edge-case is a minor problem compared to losing credentials caused by running multiple pick instances (which happened to me in the past.. sucks)

@bndw
Owner

bndw commented Mar 20, 2018

I agree your use-case is much more likely and significant; however, I just want to ensure I understand the problem space thoroughly.

I'll review this week and get it released.

}

func (c *client) Lock() error {
// TODO: Implement me!
Owner


I need to come up with a solution for s3 before merging this, otherwise it'll break for me.

Collaborator Author


It shouldn't break, starting pick only fails if a backend's Lock() method returns errors.ErrAlreadyRunning, see https://github.com/bndw/pick/pull/171/files#diff-c07200a8f18f2a355ec0bd360f843b40R61
Both s3/client::Lock() and s3/client::Unlock() return a different error.
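For reference, a rough sketch of that startup check (openSafe and the Backend interface are illustrative stand-ins, not pick's actual API; only the ErrAlreadyRunning sentinel matches a name used in this PR):

package pick

import stderrors "errors"

// Illustrative stand-in for the sentinel from pick's errors package.
var ErrAlreadyRunning = stderrors.New("another pick instance is already running")

// Backend is an illustrative stand-in for pick's storage backends.
type Backend interface {
	Lock() error
	Unlock() error
}

// openSafe only aborts when the backend reports that another instance
// holds the safe; any other Lock() error (e.g. "not implemented" from
// the s3 backend) is tolerated so those backends keep working.
func openSafe(b Backend) error {
	if err := b.Lock(); err == ErrAlreadyRunning {
		return err
	}
	// ... decrypt and load the safe as before ...
	return nil
}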

Collaborator Author


.. not to say it shouldn't be implemented asap; s3 support is still dangerous to use.

Owner


Agreed, tracking in #174

if err != nil {
return err
}

if action == "edit" {
Collaborator Author


This moved down a few lines to avoid printing the message in case the safe loader fails to do its job.

@leonklingele
Collaborator Author

Rebased to latest develop.

@bndw bndw merged commit f0e539d into bndw:develop Apr 6, 2018
@leonklingele
Collaborator Author

This should be published to master asap; what about an emergency release?

@bndw
Owner

bndw commented Apr 19, 2018

@leonklingele I'm currently trying to implement locking for the s3 backend before doing a master push/release.

I'm a little confused on the SetWritable and Unlock behavior. I understand that the lock for the file backend is automatically released, however for other backends we need an explicit call to Unlock and the existing implementation doesn't appear to have that hook in there.

In my mind, any edit operation (e.g. add, rm, etc) should call

backend.Lock()
defer backend.Unlock()

Do you have any advice on the current implementation with regard to this?
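Fleshed out, that pattern might look roughly like this (addCredential is hypothetical and reuses the illustrative Backend interface from the sketch further up; pick's actual API may differ):

// addCredential wraps a write operation on the safe in Lock/Unlock.
func addCredential(b Backend, name, username, password string) error {
	if err := b.Lock(); err != nil {
		return err
	}
	// Unlock runs even if the edit below fails.
	defer b.Unlock()

	// ... load the safe, add the credential, write the safe back ...
	return nil
}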

@leonklingele
Collaborator Author

I'm a little confused on the SetWritable and Unlock behavior.

You mean Lock and Unlock, right?

The lock for the file backend is automatically released by the kernel once pick terminates, regardless of the signal it received (SIGINT, SIGKILL, etc.). This ensures the lock will eventually be unlocked again and under no circumstance be held for longer than required. The locking happens on an atomic level (i.e. there will be no lock-acquiring data race between two instances of the app).
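To make that concrete, here is a minimal sketch of such a kernel-released lock (illustrative only, not the exact code from this PR; the package name and helper are assumptions):

package file

import (
	"os"
	"syscall"
)

// lockSafe takes an exclusive, non-blocking advisory lock on the safe
// file. If another pick process already holds it, Flock fails with
// EWOULDBLOCK. The kernel drops the lock automatically when the file
// descriptor is closed or the process terminates, however it dies.
func lockSafe(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_RDWR, 0600)
	if err != nil {
		return nil, err
	}
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		f.Close()
		return nil, err // typically EWOULDBLOCK: another instance is running
	}
	return f, nil
}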

For the S3 backend it will probably get a bit trickier though. Not only do we need to ensure that only a single lock can exist, we also need to ensure it will get removed when no longer required (even if pick is kill -9'ed). Maybe there's some locking mechanism supported by Amazon? I don't know how up-to-date this answer is, but it might help anyway: https://stackoverflow.com/questions/3431418/locking-with-s3/3434952#3434952
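For what it's worth, here is a best-effort sketch of a TTL-based lock object on S3, along the lines of the TTL idea mentioned earlier (assumes aws-sdk-go, that the backend's client keeps its *s3.S3 handle in c.s3, and imports of bytes, time, aws, s3 and pick's errors package; since S3 has no atomic check-and-set, this only narrows the race window rather than closing it):

const lockKey = "pick.lock"
const lockTTL = 5 * time.Minute

func (c *client) Lock() error {
	// Refuse to run if a lock object exists and is younger than the TTL;
	// an older one is treated as orphaned and simply overwritten.
	out, err := c.s3.GetObject(&s3.GetObjectInput{
		Bucket: aws.String(c.Bucket),
		Key:    aws.String(lockKey),
	})
	if err == nil && out.LastModified != nil && time.Since(*out.LastModified) < lockTTL {
		return errors.ErrAlreadyRunning
	}
	_, err = c.s3.PutObject(&s3.PutObjectInput{
		Bucket: aws.String(c.Bucket),
		Key:    aws.String(lockKey),
		Body:   bytes.NewReader([]byte(time.Now().UTC().Format(time.RFC3339))),
	})
	return err
}

func (c *client) Unlock() error {
	_, err := c.s3.DeleteObject(&s3.DeleteObjectInput{
		Bucket: aws.String(c.Bucket),
		Key:    aws.String(lockKey),
	})
	return err
}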

Another (good?) approach to the problem would be to rely on a proper safe-syncing mechanism (instead of locking). Just an idea that came to mind; I haven't really thought it through yet.

@bndw
Owner

bndw commented Apr 20, 2018

I agree, cleaning up locks will be tricky given the OS signals. Maybe the best way forward for the time being is to not support, or no-op, locking in the S3 backend. We can take this on as tech debt until we figure out a solution.
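As a stop-gap, the no-op variant described above could be as simple as the following (matching the Lock/Unlock signatures shown in the diff and returning nil, so the s3 backend simply skips locking for now):

func (c *client) Lock() error   { return nil }
func (c *client) Unlock() error { return nil }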

@leonklingele
Collaborator Author

bump

@bndw
Owner

bndw commented Apr 25, 2018

@leonklingele https://github.com/bndw/pick/releases/tag/v0.7.0
