
Send replies #67 (Closed)

wants to merge 19 commits from the send-replies branch

Conversation

@heartsucker (Contributor) commented Oct 25, 2018

Fixes #16
Fixes #73

Doesn't implement DB functionality yet.

@redshiftzero (Contributor) commented:

So, check out commit 41870a7; in it I've shown an example of what I would expect the structure to be in terms of the callbacks / API functionality. If you agree this is reasonable, feel free to cherry-pick it into here.

Note: the reason that diff removes your ConversationView.add_reply call in ReplyBoxWidget.send_reply is that, imho, Window.show_conversation_for should just represent the state of the current conversation (I added the files part of this logic in another PR), and we should call it whenever we want to refresh the conversation view. There are other ways to do this, which may be better or worse, so let me know if you disagree with this pattern; a sketch of what I mean is below.
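A minimal sketch of the pattern (widget and attribute names here are illustrative, not the actual client API):

```python
from PyQt5.QtWidgets import QWidget


class ReplyBoxWidget(QWidget):
    def send_reply(self) -> None:
        message = self.text_edit.toPlainText()
        self.controller.send_reply(self.source.uuid, message)
        # No ConversationView.add_reply here: re-render the whole
        # conversation from the current state instead.
        self.window.show_conversation_for(self.source)
```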

@ntoll (Contributor) commented Oct 29, 2018

+1 on the Window.show_conversation_for comment.

@ntoll (Contributor) commented Oct 29, 2018

This looks good. Perhaps when clicking send, the conversation view should scroll down to the bottom so you see the latest entry.
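For instance, something like this (assuming the conversation view is backed by a QScrollArea; just an illustration, not code from this PR):

```python
from PyQt5.QtWidgets import QScrollArea


def scroll_to_bottom(scroll_area: QScrollArea) -> None:
    # Jump the vertical scroll bar to its maximum, i.e. the newest entry.
    bar = scroll_area.verticalScrollBar()
    bar.setValue(bar.maximum())
```

In practice this would likely be connected to the scroll bar's rangeChanged signal, so it fires only after the new reply widget has been laid out.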

@heartsucker force-pushed the send-replies branch 2 times, most recently from 2fc3ec9 to 31ac240, on November 1, 2018
@redshiftzero (Contributor) left a review:

Hey @heartsucker, this is looking good so far. I dropped some comments inline (note this review is just from looking at the diff; I haven't tested it).

```python
if source.key and source.key.get('type', None) == 'PGP':
    pub_key = source.key.get('public', None)
    if pub_key:
        self.gpg.import_key(source.uuid, pub_key)
```
@redshiftzero (Contributor):

Since we're doing this on each sync, I was curious how fast it is: PGP key imports on my system took 0.16s for a new key, and about half that to attempt to import a key already in the keyring. So if a news organization has 100 sources, that's 8 seconds just for attempting key imports for sources whose keys are already on the workstation. This isn't a blocker for initial merge, just flagging it for the future when we do more benchmarking of sync time.

More generally, I think storage.update_sources is a more natural place for this GPG key import, so that all the source-update logic lives together. What do you think? (This means passing self.gpg into storage.update_sources, which we're also doing with self.session; I think that's a fine pattern.) A rough sketch follows.
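Something like this is what I have in mind (the signature is hypothetical; the real update_sources takes different arguments):

```python
def update_sources(remote_sources, local_sources, session, gpg):
    for source in remote_sources:
        # Import the source's public key alongside the record update so
        # all per-source sync logic lives in one place.
        if source.key and source.key.get('type') == 'PGP':
            pub_key = source.key.get('public')
            if pub_key:
                gpg.import_key(source.uuid, pub_key)
        # ... existing add/update/delete logic for local records ...
    session.commit()
```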

@heartsucker (Author):

I think we shouldn't rely on the client always having the same key as the server, because at some point in the future we could rotate keys. It would be nice to do this prettily and async, but for now this is fine. I can look into moving the logic to the other function.


```python
def on_reply_timeout(self) -> None:
    self.set_status("Connection to server timed out, please try again.")
    self.call_reset()
```
@redshiftzero (Contributor):

Note this will need a rebase on latest master (self.call_reset is no longer called directly in the callbacks after the threading cleanup in #85).

```python
                             source_uuid, message)

    def on_reply_to_source(self, result) -> None:
        self.call_reset()
```
@redshiftzero (Contributor):

Until we have the UUID of the created reply in the response body, we should probably just sync with the API here; sketch below.
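Roughly (assuming the client's existing sync entry point is called sync_api; a sketch, not the final code):

```python
def on_reply_to_source(self, result) -> None:
    # The response doesn't include the new reply's UUID yet, so re-sync
    # everything from the server to pick it up.
    self.sync_api()
```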

```python
    raise RuntimeError('Local source not found: {}'
                       .format(source_uuid))

out = self.gpg.encrypt(message, local_source.fingerprint)
```
@redshiftzero (Contributor):

Here we'll also want to encrypt to the submission key of the instance.

In terms of how to actually figure out which key that is in the keyring, there are a couple of approaches (maybe more):

  1. We have the submission private key in dom0 (from there we use salt to drop the private key into the gpg VM, which the VM the SVS runs in uses for gpg actions via qubes' split-gpg). As part of the provisioning process, we could use salt to drop the submission key fingerprint into the config file (or into a file on disk, e.g. ~/public_key_fingerprint, that the client can use to populate the config file).
  2. Rely on the fact that the public key fingerprint associated with the only private key in the keyring that is not an autogenerated source key is the submission key (a safe assumption for the alpha, but not a great one for the beta).

I think the former is the better approach; what do you think? For the purposes of this PR, that would mean adding a submission key fingerprint to the config file (and for the client dev env we should add the key associated with the securedrop dev env), then filing a followup in the workstation repo to add the salt logic. A sketch of reading the fingerprint from the config is below.
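For illustration, reading a fingerprint that salt dropped into the config might look like this (file name and key name are assumptions):

```python
import json


def load_submission_key_fingerprint(config_path: str = 'config.json') -> str:
    # The fingerprint is provisioned by salt, not discovered at runtime.
    with open(config_path) as f:
        config = json.load(f)
    return config['journalist_key_fingerprint']
```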

@heartsucker (Author):

There was nothing that seemed to reference this key anywhere, so I left it out for now.

Assuming we have a known path for the key, we can always reload it at boot time to store a reference to the fingerprint. Re-importing a key always returns its fingerprint:

```python
>>> res = g.import_keys(k)
>>> res.fingerprints
['0CEC936888A60171461174C5C0A2586F09D77C82']
>>> res = g.import_keys(k)
>>> res.fingerprints
['0CEC936888A60171461174C5C0A2586F09D77C82']
```

I don't want to put it in the config, because if a key gets rotated we'd have to remember to update the config too, and that sounds like a place where errors could happen.

@heartsucker force-pushed the send-replies branch 3 times, most recently from 1c932ef to 4933789, on November 5, 2018
@heartsucker changed the title from "GUI functionality for sending replies" to "Send replies" on Nov 5, 2018
@heartsucker force-pushed the send-replies branch 3 times, most recently from 4b40a62 to ad7b3f5, on November 7, 2018
@heartsucker (Author) commented:

Hey @redshiftzero, I'm trying to merge your decrypt_submission function here into the GpgHelper, and I realized we have a MessageSync object that creates its own session. Is there a reason to have multiple sessions?

Also, my idea is that this needs to have a reference to the GpgHelper to do the decryption itself, so these need to be injected / passed to each other via __init__() or setup(). Does that sound reasonable?

@redshiftzero (Contributor) commented:

> I'm trying to merge your decrypt_submission function here into the GpgHelper, and I realized we have a MessageSync object that creates its own session. Is there a reason to have multiple sessions?

Hey, so: we can pass Client.session in logic.py to MessageSync and ReplySync when we create those threads in Client.start_message_thread and Client.start_reply_thread.

> Also, my idea is that this needs to have a reference to the GpgHelper to do the decryption itself, so these need to be injected / passed to each other via __init__() or setup(). Does that sound reasonable?

if by "this" in your first sentence you mean the MessageSync and ReplySync objects then yep, that's a solid plan



```diff
 logger = logging.getLogger(__name__)


 class APISyncObject(QObject):

-    def __init__(self, api, home, is_qubes):
+    def __init__(self, api, session, gpg):
```
Contributor:

I think we got into the pattern of creating new SQLite sessions in these objects because of threading issues: you can't use a session created in, say, thread A from a different thread B. Since these "sync objects" tend to run in their own threads, I think it makes sense to build their sessions in the objects' __init__ functions.

In fact, now that I can test this, I think we're already getting bitten by this problem. I can move the session-building back into __init__ (sketch below).
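A sketch of that per-thread pattern, assuming a sqlite database under the client's home directory (path and names illustrative):

```python
from PyQt5.QtCore import QObject
from sqlalchemy import create_engine
from sqlalchemy.orm import sessionmaker


class APISyncObject(QObject):
    def __init__(self, api, home, gpg):
        super().__init__()
        self.api = api
        self.gpg = gpg
        # Each sync object builds its own session rather than sharing one
        # across threads: a SQLAlchemy session must stay on one thread.
        engine = create_engine('sqlite:///{}/svs.sqlite'.format(home))
        self.session = sessionmaker(bind=engine)()
```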

@heartsucker (Author):

Yeah, that was what I was posting about in gitter. However, this still creates a problem: if we pass a reference to this object around, its session could still be tainted. I think I'm going to roll this back for now so that each has its own session.


```python
def __init__(self, gpg_home: str, session, is_qubes: bool) -> None:
    if is_qubes:  # pragma: no cover
        gpg_binary = "qubes-gpg-client"
```
Contributor:

Strangely, upon testing, pretty_bad_protocol doesn't play well with qubes-gpg-client, but does work with qubes-gpg-client-wrapper. I'll update the binary name here.

Contributor:

So, unfortunately, we tested this with qubes-gpg-client-wrapper and found that python-gnupg actually does not work at all with that binary, so we'll need to implement our own (ideally minimal) gpg wrapper. We're going to take a crack at this at the hackathon this weekend.
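For context, a minimal wrapper of the sort discussed here might just shell out to the split-gpg wrapper directly. A sketch under the assumption that plain gpg options are forwarded, not the eventual implementation:

```python
import subprocess
from typing import List


def gpg_encrypt(plaintext: str, fingerprints: List[str]) -> bytes:
    # qubes-gpg-client-wrapper forwards standard gpg options to sd-gpg.
    cmd = ['qubes-gpg-client-wrapper', '--encrypt', '--armor']
    for fp in fingerprints:
        cmd.extend(['--recipient', fp])
    result = subprocess.run(cmd, input=plaintext.encode(),
                            stdout=subprocess.PIPE, check=True)
    return result.stdout
```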

```python
local_source = self.session.query(Source) \
    .filter_by(uuid=source_uuid).one()

out = self.gpg.encrypt(message, local_source.fingerprint)
```
Contributor:

Per IRL conversation with @redshiftzero: we want to encrypt to both the source key and the journalist key. We decided on an interim solution for figuring out which fingerprint to use here: upon startup, we'll try to get the fingerprint of the journalist key from sd-gpg (via pretty_bad_protocol.client.list_keys()), then add that fingerprint to the self.gpg.encrypt() call here. Sketch below.
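Something like the following (assumes pretty_bad_protocol's list_keys(secret=True) and that encrypt accepts multiple recipient fingerprints; names are illustrative):

```python
def find_journalist_fingerprint(gpg) -> str:
    # Interim assumption: the only secret key in sd-gpg's keyring is the
    # journalist (submission) key.
    secret_keys = gpg.list_keys(secret=True)
    return secret_keys[0]['fingerprint']


def encrypt_reply(gpg, message: str, source_fp: str, journalist_fp: str):
    # Encrypt to both recipients so the workstation can decrypt replies too.
    return gpg.encrypt(message, source_fp, journalist_fp)
```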


```python
local_source.fingerprint = fingerprints.pop()
self.session.add(local_source)
self.session.commit()
```
Contributor:

One thought: I feel like it makes sense for this function to return the fingerprint, and then the caller makes the database change (ideally this function would contain only crypto operations and interactions with the gpg binary). Sketch below.
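Roughly (hypothetical names):

```python
class GpgHelper:
    def import_key(self, pub_key: str) -> str:
        # Only crypto here: talk to gpg and hand the fingerprint back.
        result = self.gpg.import_keys(pub_key)
        return result.fingerprints[0]

# ...and the caller owns the database write:
#
#     fingerprint = gpg_helper.import_key(pub_key)
#     local_source.fingerprint = fingerprint
#     session.add(local_source)
#     session.commit()
```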

@heartsucker (Author):

Yeah can do.

@joshuathayer (Contributor) commented:

So, I got this to the point where things seem functional in the client, at least running against a local SD docker instance. It needs more manual testing, some code cleanup, and new unit tests. I might be putting this aside for a bit to do other work in preparation for the audit, but that's the status if other folks want to pick it up.

@heartsucker (Author) commented:

This has gotten so out of sync that I'm going to close it and open a new one.

legoktm pushed a commit that referenced this pull request on Dec 11, 2023: Check extract path and ensure minimal perms