Send replies #67
Conversation
So, check out commit 41870a7; in there I've shown an example of what I would expect the structure to be in terms of the callbacks / API functionality. If you agree this is reasonable, feel free to cherry-pick into here. Note: the reason in that diff I remove your
+1 on
This looks good. Perhaps when clicking send, the conversation view should scroll down to the bottom so you see the latest entry.
Force-pushed from 2fc3ec9 to 31ac240
Hey @heartsucker, this is looking good so far. I dropped some comments inline (note this review is just from looking at the diff; I haven't tested this).
securedrop_client/logic.py
Outdated
```python
if source.key and source.key.get('type', None) == 'PGP':
    pub_key = source.key.get('public', None)
    if pub_key:
        self.gpg.import_key(source.uuid, pub_key)
```
Since we're doing this each sync, I was curious how fast this is: PGP key imports on my system took 0.16s for a new key, and about half that time to attempt to import a key that was already in the keyring. So if a news organization has 100 sources, that's 8 seconds just for attempting key imports for sources that already have keys in the workstation. This is not a blocker for initial merge, just flagging it for the future when we do a bit more benchmarking of the time to sync.
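One way to cut that per-sync cost is to remember which source keys have already been imported and skip the gpg call for them. A minimal sketch only; the `KeyImportCache` shape and the `import_fn` callable are assumptions, not the client's actual API:

```python
class KeyImportCache:
    """Sketch: avoid re-running a ~0.08s GPG import for sources whose
    keys were already imported this session. `import_fn` is a hypothetical
    callable wrapping the real gpg import."""

    def __init__(self, import_fn):
        self.import_fn = import_fn
        self.imported_uuids = set()

    def ensure_key(self, source_uuid, pub_key):
        # Only hit the gpg binary the first time we see this source.
        if source_uuid in self.imported_uuids:
            return False
        self.import_fn(source_uuid, pub_key)
        self.imported_uuids.add(source_uuid)
        return True
```

This only dedupes within one client run; a fingerprint column in the local database could dedupe across runs.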
More generally, I think that `storage.update_sources` is a more natural place for this GPG key import, so that this source update logic is all together. What do you think? (Indeed this means passing `self.gpg` into `storage.update_sources`, which we're also doing with `self.session`; I think that's a fine pattern.)
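The suggested shape might look like the sketch below; the helper signature is an assumption, mirroring how `session` is already passed in:

```python
def update_sources(remote_sources, session, gpg):
    """Sketch: keep the GPG key import next to the rest of the
    source-update logic. `session` and `gpg` are both passed in,
    following the existing pattern for `session`."""
    for source in remote_sources:
        key = getattr(source, 'key', None) or {}
        if key.get('type') == 'PGP' and key.get('public'):
            gpg.import_key(source.uuid, key['public'])
        # ... existing add/update/delete bookkeeping against `session` ...
```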
I think we shouldn't rely on the client always having the same key as the server, because at some point in the future we could rotate keys. It would be nice to do this prettily and asynchronously, but for now this is fine. I can look into moving the logic to the other function.
securedrop_client/logic.py
Outdated
```python
def on_reply_timeout(self) -> None:
    self.set_status("Connection to server timed out, please try again.")
    self.call_reset()
```
Note this'll need a rebase on latest master (`self.call_reset` is no longer called directly in the callbacks after the threading cleanup in #85).
securedrop_client/logic.py
Outdated
```python
                          source_uuid, message)

def on_reply_to_source(self, result) -> None:
    self.call_reset()
```
Until we have the UUID of the created reply in the response body, we should probably just sync the API here.
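A sketch of that interim behavior; the `ReplyHandler` class and `sync_api` callable are stand-ins, not the client's actual names:

```python
class ReplyHandler:
    """Sketch: with no reply UUID in the response body yet, a successful
    reply simply triggers a full API sync so the new reply shows up.
    `sync_api` is a hypothetical stand-in for the client's sync entry point."""

    def __init__(self, sync_api):
        self.sync_api = sync_api

    def on_reply_to_source(self, result):
        if result:
            self.sync_api()
```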
securedrop_client/gpg.py
Outdated
```python
raise RuntimeError('Local source not found: {}'
                   .format(source_uuid))

out = self.gpg.encrypt(message, local_source.fingerprint)
```
Here we'll also want to encrypt to the submission key of the instance.

In terms of how to actually figure out what key that is in the keyring, there are a couple of approaches (maybe more):

- We have the submission private key in dom0 (from there we use Salt to drop the private key in the GPG VM, which the VM the SVS runs in uses for GPG actions via Qubes' split-gpg). As part of the provisioning process, we could use Salt to drop the submission key fingerprint in the config file (or in a file on disk, e.g. `~/public_key_fingerprint`, that the client can use to populate the config file).
- Use the fact that the public key fingerprint associated with the only private key in the keyring that is not an autogenerated source key is the submission key (a safe assumption for the alpha, but not a great assumption for the beta).

I think the former is a better approach, what do you think? For the purposes of this PR, that would mean adding a submission key fingerprint to the config file (and for the client dev env we should add the key associated with the SecureDrop dev env), then filing a followup in the workstation repo to add the Salt logic.
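The first approach reduces, on the client side, to reading one provisioned file at startup. A sketch only; the path and the one-fingerprint-per-file format are assumptions about what the Salt state would drop:

```python
import os

def submission_key_fingerprint(path=None):
    """Sketch: read the submission key fingerprint that provisioning
    (hypothetically, a Salt state) dropped on disk, to populate the
    client config at startup. Path and file format are assumptions."""
    if path is None:
        path = os.path.expanduser('~/public_key_fingerprint')
    with open(path) as f:
        return f.read().strip()
```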
There was nothing that seemed to reference this key anywhere, so I left that out for now.
Assuming we have a known path for the key, we can always reload it at boot time to store a reference to the fingerprint. Reloading a key will always provide a fingerprint.
```python
>>> res = g.import_keys(k)
>>> res.fingerprints
['0CEC936888A60171461174C5C0A2586F09D77C82']
>>> res = g.import_keys(k)
>>> res.fingerprints
['0CEC936888A60171461174C5C0A2586F09D77C82']
```
I don't want to put it in the config because if a key gets rotated, we have to remember to rotate the config, and that sounds like a place where errors could happen.
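Wrapped up as a startup helper, that idempotent re-import might look like the sketch below; the `gpg` object is assumed to expose a python-gnupg-style `import_keys()`, and the stub in the example is hypothetical:

```python
def journalist_key_fingerprint(gpg, key_path):
    """Sketch: re-importing a key is idempotent and always reports its
    fingerprint, so reload the known key file at boot rather than trusting
    a config value that can go stale when the key is rotated.
    Assumes `gpg` exposes a python-gnupg-style import_keys()."""
    with open(key_path) as f:
        result = gpg.import_keys(f.read())
    return result.fingerprints[0]
```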
Force-pushed from 1c932ef to 4933789
Force-pushed from 4b40a62 to ad7b3f5
Hey @redshiftzero, I'm trying to merge your Also, my idea is that this needs to have a reference to the
Hey, so, we can pass
If by "this" in your first sentence you mean the
Force-pushed from 7efb000 to ec0f5c0
securedrop_client/message_sync.py
Outdated
```diff
 logger = logging.getLogger(__name__)


 class APISyncObject(QObject):

-    def __init__(self, api, home, is_qubes):
+    def __init__(self, api, session, gpg):
```
I think we got into the pattern of creating new SQLite `Session`s in these objects because of threading issues: you can't use a session created in, say, thread A from a different thread B. Since these "sync objects" tend to run in their own threads, I think it makes sense to build their sessions in the objects' `__init__` fns.

In fact, now that I can test this, I think we're already getting bitten by this problem. I can move the session-building back into `__init__`.
Yeah, that was what I was posting about in Gitter. However, this still creates a problem: if we pass a reference to this object across threads, its session could still be tainted. I think I'm going to roll this back for now so that each object has its own session.
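The "each object builds its own session for its own thread" pattern can be sketched generically with thread-local storage; this is an illustration only, and `session_factory` stands in for any zero-argument factory (e.g. a SQLAlchemy `sessionmaker`):

```python
import threading

class ThreadLocalSessions:
    """Sketch: lazily build one session per thread, since a session
    created in thread A must not be used from thread B.
    `session_factory` is any zero-argument callable (assumption:
    something like a SQLAlchemy sessionmaker)."""

    def __init__(self, session_factory):
        self._factory = session_factory
        self._local = threading.local()

    @property
    def session(self):
        # First access from a given thread builds that thread's session.
        if not hasattr(self._local, 'session'):
            self._local.session = self._factory()
        return self._local.session
```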
securedrop_client/crypto.py
Outdated
```python
def __init__(self, gpg_home: str, session, is_qubes: bool) -> None:
    if is_qubes:  # pragma: no cover
        gpg_binary = "qubes-gpg-client"
```
Strangely, upon testing, pretty_bad_protocol doesn't play well with `qubes-gpg-client`, but does work well with `qubes-gpg-client-wrapper`. I'll update the binary name here.
So, unfortunately, we tested this with `qubes-gpg-client-wrapper` and found that python-gnupg actually does not work at all with that as a binary, so we'll need to implement our own (ideally minimal) GPG wrapper. We're going to take a crack at doing this at the hackathon this weekend.
securedrop_client/crypto.py
Outdated
```python
local_source = self.session.query(Source) \
    .filter_by(uuid=source_uuid).one()

out = self.gpg.encrypt(message, local_source.fingerprint)
```
Per IRL conversation with @redshiftzero, we want to encrypt to both the source key and the journalist key. We decided on an interim solution for figuring out what fingerprint to use here: upon startup, we'll try to get the fingerprint of the journalist key from sd-gpg (via `pretty_bad_protocol.client.list_keys()`), then add that fingerprint to the `self.gpg.encrypt()` call here.
securedrop_client/crypto.py
Outdated
```python
local_source.fingerprint = fingerprints.pop()
self.session.add(local_source)
self.session.commit()
```
One thought: I feel like it makes sense for this function to return the fingerprint, and then over where this is called we make the database change (ideally this module would contain only crypto operations and interactions with the gpg binary).
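The suggested split could look like the sketch below; the function name and the stub-style gpg interface are assumptions, and the commented caller is purely illustrative:

```python
def import_key(gpg, key_data):
    """Sketch: crypto code only talks to gpg and returns the fingerprint;
    the caller owns the database write. Assumes a python-gnupg-style
    import_keys() returning a result with a .fingerprints list."""
    result = gpg.import_keys(key_data)
    return result.fingerprints.pop()

# Hypothetical caller, keeping DB work out of the crypto module:
# fingerprint = import_key(gpg, pub_key)
# local_source.fingerprint = fingerprint
# session.add(local_source)
# session.commit()
```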
Yeah can do.
So, I got this to the point where things seem functional in the client, at least running against a local SecureDrop Docker instance. It needs more manual testing, some code cleanup, and new unit tests. I think I might be putting this aside for a bit to do other work in preparation for the audit, but that's the status if other folks want to pick this up.
This has gotten so out of sync that I'm going to close it and open a new one.
Check extract path and ensure minimal perms
Fixes #16
Fixes #73
Doesn't implement DB functionality yet.