Only allow running a single pick instance concurrently #171
Conversation
backends/s3/client.go (outdated)

@@ -75,6 +76,16 @@ func (c *client) Backup() error {
	return c.putObject(bytes.NewReader(data), c.Bucket, backupKey)
}

func (c *client) Lock() error {
	// TODO: Implement me!
@bndw maybe you find the time to implement this? Please merge this PR asap and fix in a separate PR :)
@leonklingele I think in order to implement locking correctly we'll need to handle a few cases. What happens if a lock is somehow orphaned? We may consider TTLs.
@bndw the lock is automatically released once pick terminates. Try it yourself :)
@leonklingele My only concern is the case where the binary may not terminate, for example running pick on a remote server. In that case I believe the safe could become "permanently" locked.
Force-pushed f3d8740 to 526463d
Updated to only fail if at least two instances of pick try to open the safe in writable mode.
What do you mean by "running pick on a remote server"? pick can only be run locally?!
I still don't see how that could work, please give some more insight into your thoughts :)
Force-pushed 7b41995 to 4af7f25
@leonklingele I was imagining running pick on a remote server that's accessed via SSH;
In this [edge] case, is it possible that pick may not exit and "permanently" lock the safe?
Well, if your SSH connection breaks for whatever reason while pick is running (e.g. a …)
I agree your use-case is much more likely and significant, however I just want to ensure I understand the problem space thoroughly. I'll review this week and get it released.
}

func (c *client) Lock() error {
	// TODO: Implement me!
I need to come up with a solution for s3 before merging this, otherwise it'll break for me.
It shouldn't break: starting pick only fails if a backend's Lock() method returns errors.ErrAlreadyRunning, see https://github.com/bndw/pick/pull/171/files#diff-c07200a8f18f2a355ec0bd360f843b40R61. Both s3/client::Lock() and s3/client::Unlock() return a different error.
…not to say it shouldn't be implemented asap; s3 support is still dangerous to use.
Agreed, tracking in #174
Force-pushed 4af7f25 to ec8521d
To prevent data loss, only a single pick instance should be running at the same time. Without this patch, if an active "pick note" editor is open and new credentials are stored in a pick safe, closing the "pick note" editor will overwrite all changes made since it was opened.
Force-pushed ec8521d to e501a72
	if err != nil {
		return err
	}

	if action == "edit" {
This moved down a few lines to avoid printing the message in case the safe loader fails to do its job.
Rebased to latest …
This should be published to …
@leonklingele I'm currently trying to implement locking for the s3 backend before doing a master push/release. I'm a little confused on the … In my mind, any edit operation (e.g. …) would be wrapped like:

    backend.Lock()
    defer backend.Unlock()

Do you have any advice on the current implementation with regard to this?
You mean …? The lock for the file backend is automatically released by the kernel once pick terminates, regardless of the signal it received (SIGINT, SIGKILL, etc.). This ensures the lock will eventually be unlocked again and under no circumstance be held for longer than required. The locking happens on an atomic level (i.e. there will be no lock-acquiring data race between two instances of the app).

For the S3 backend it will probably get a bit trickier though. Not only do we need to ensure that only a single lock can exist, we also need to ensure it will get removed when no longer required (even if pick is …).

Another (good?) approach to the problem would be to rely on a proper safe-syncing mechanism (instead of locking). Just an idea which came to my mind, I haven't really thought this through yet.
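The file-backend behaviour described here, atomic acquisition with the kernel dropping the lock on process exit no matter which signal killed it, is what flock(2) with LOCK_EX|LOCK_NB provides on Linux and macOS. A minimal sketch under that assumption (the path and function name are illustrative, not pick's actual code):

```go
package main

import (
	"fmt"
	"os"
	"syscall"
)

// acquireLock takes an exclusive, non-blocking flock on path. The kernel
// releases the lock automatically when the process exits, whatever signal
// killed it, so a crash can never leave the safe permanently locked.
func acquireLock(path string) (*os.File, error) {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return nil, err
	}
	// LOCK_NB makes the call fail immediately instead of blocking,
	// which is how a second pick instance detects the first.
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX|syscall.LOCK_NB); err != nil {
		f.Close()
		return nil, fmt.Errorf("safe is locked by another instance: %w", err)
	}
	return f, nil
}

func main() {
	lock, err := acquireLock("/tmp/pick-demo.lock")
	if err != nil {
		fmt.Println("could not lock:", err)
		return
	}
	defer lock.Close() // closing the fd drops the flock

	// A second open+flock on the same file conflicts even within one
	// process, because flock locks belong to the open file description.
	if _, err := acquireLock("/tmp/pick-demo.lock"); err != nil {
		fmt.Println("second instance rejected:", err)
	}
}
```

This is exactly why the same trick does not transfer to S3: there is no kernel watching the lock object, so release-on-death has to be rebuilt by hand (TTLs, heartbeats, or a sync mechanism as suggested above).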
I agree, cleaning up locks will be tricky given the OS signals. Maybe the best way forward for the time being is to not support, or no-op, locking in the S3 backend. We can take this on as tech debt until we figure out a solution.
bump
To prevent data loss, only a single pick instance should be running
at the same time.
Without this patch, if an active "pick note" editor is open and
new credentials are stored in a pick safe, closing the "pick note"
editor will overwrite all changes made since it was opened.