
Tech Meeting Notes 2020 07 07

Erik Moeller edited this page Jul 7, 2020 · 1 revision

2020-07-07 Tech Meeting

Topic: How do we want to approach, scope and prioritize the consolidation of templates used by SecureDrop Workstation?

Motivation:

We currently use a total of 7 templates:

  • sd-app-buster-template
  • sd-devices-buster-template
  • sd-log-buster-template
  • sd-proxy-buster-template
  • sd-viewer-buster-template
  • securedrop-workstation-buster (used by sd-gpg)
  • whonix-gw-xx (system-provided, used by sd-whonix)

Each of those templates has to be updated/rebooted on every workstation run; via Tor in the case of the Whonix template. In addition, we also update the fedora-xx template, because it is used by system VMs we rely on. This adds up to a 15-20 minute update time in the normal case.

The primary motivation for considering consolidation is therefore update time.

The secondary motivation is to simplify the system, and reduce brittleness due to issues with any given template.

It will become more difficult to make a change of this architectural scope once the workstation is in wider production usage, which is why we are considering it now.

See https://github.com/freedomofpress/securedrop-workstation/issues/471#issuecomment-654418868 for considerations on how this relates to our overall threat model.

Open questions (please add yours):

What is the minimally sufficient number of templates?

  • Previously we had settled on two base templates reflecting different levels of "trust" in terms of which applications are installed. Is there a strong argument to consolidate even further?

Conor: Why not have a single template? Given concerns expressed so far, I do think we get a lot of wins out of going to two, but interested in exploring switch to a single template. MIME type policing could be done in private volume.

Use of private volumes: What we're talking about is the persistent /home directory that's unique to that AppVM, and the persistent /rw/config directory that's also unique. /rw/config can be used for enabling/disabling specific configurations on a per-VM basis without requiring a new template.
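A minimal sketch of the per-VM enable/disable idea described above. It assumes a simple flag-file convention under /rw/config (the convention, function name, and flag names are illustrative, not existing code); the directory is parameterized so the sketch runs outside Qubes:

```python
# Hypothetical sketch: a feature flag stored in the VM's persistent
# /rw/config directory decides behavior at runtime, so a single shared
# template can serve VMs with different configurations.
from pathlib import Path

# config_dir would be /rw/config on a real Qubes AppVM; parameterized
# here so the sketch can run anywhere.
def feature_enabled(name: str, config_dir: str = "/rw/config") -> bool:
    """A feature is 'on' if a flag file named <name> exists in config_dir."""
    return (Path(config_dir) / name).is_file()
```

For example, a log-forwarding service in the shared template could check `feature_enabled("sd-log-forwarding")` at startup and stay dormant in VMs whose /rw/config lacks the flag.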

We're using the home directory for sd-app's data, while other information is typically stashed in system directories.

Kushal: Can we put our secrets into a vault VM and query them via qrexec?

Conor: The easiest lift would be to move them into the private volume.

Mickael: May be difficult to properly isolate access to specific secrets by specific VMs. Could be useful to open a ticket for investigation.
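To make Mickael's isolation concern concrete, here is a hypothetical sketch of the access-control core such a vault service could use. In Qubes, a qrexec service script (e.g. something like /etc/qubes-rpc/sd.GetSecret in the vault VM) can identify the calling VM via the QREXEC_REMOTE_DOMAIN environment variable; the service name, ACL contents, and store path below are all illustrative, not existing code:

```python
# Hypothetical sketch of per-VM secret isolation in a vault-VM qrexec
# service. qrexec identifies the calling VM via QREXEC_REMOTE_DOMAIN,
# which is what makes per-VM access control possible at all.
from pathlib import Path

# Per-VM allowlist: which calling VM may read which named secret (assumed).
ACL = {
    "sd-proxy": {"journalist-interface-auth"},
    "sd-app": {"submission-key-passphrase"},
}

def get_secret(caller: str, secret_name: str, store: str = "/var/lib/sd-secrets") -> str:
    """Return the named secret if the caller is allowed to read it, else refuse."""
    if secret_name not in ACL.get(caller, set()):
        raise PermissionError(f"{caller} may not read {secret_name}")
    return (Path(store) / secret_name).read_text().strip()

# In the real service script, roughly:
#   caller = os.environ["QREXEC_REMOTE_DOMAIN"]
#   print(get_secret(caller, sys.stdin.readline().strip()))
```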

CONSENSUS: Aiming to consolidate towards two templates for now.

Is there an argument for parallelizing template updates in the updater? That would address the primary goal, though not the secondary one.

  • Kushal: Will depend on network speed.

  • Conor: Could probably only go to 2 concurrent due to RAM constraints.

  • Mickael: Running even just 2 in parallel during the cron job era caused reliability issues.

  • Erik: Let's investigate long tail of optimizations after consolidation -- but need to do consolidation first, due to complexity of rollout.
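If parallelization is revisited later, the RAM-constrained cap of 2 that Conor mentions could be expressed as simply as the sketch below. The update_template callable is a hypothetical stand-in for the updater's actual per-template update routine:

```python
# Sketch of capping template-update concurrency at 2 (the RAM-constrained
# limit discussed above). update_template is an assumed helper, not part
# of the real updater.
from concurrent.futures import ThreadPoolExecutor

def update_all(templates, update_template, max_concurrent=2):
    """Run update_template over each template, at most max_concurrent at once."""
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        results = list(pool.map(update_template, templates))
    return dict(zip(templates, results))
```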

How would we handle a future "Signal" template in this schema (networked but trusted to install from a third party repo)?

  • John: Could Signal VM be firewalled off to only connect to Signal servers?

  • Conor: Would still allow exfiltration via Signal

  • Mickael: Not sure if Signal makes available IPs used by all their servers, could impose a maintenance burden

  • Kushal: Trying to do something similar for one of my projects. Maintaining hostname changes is difficult.

  • Erik: May be worth investigating, but as Conor says it would not fully mitigate exfiltration.

  • Mickael: The easy answer is another template. Will come up again with Keybase, other apps.

  • Can there be templates deriving from other templates to minimize need for updates? Unfortunately, no.

  • Erik: Would it be worth pursuing a separate install strategy for something like Signal, so that it can live in /home ?

    • Flatpak/Snap/AppImage style solutions would impose their own significant maintenance burden, unclear if worth the win in reducing updates for other packages.
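To illustrate both the firewalling idea and the maintenance burden Mickael raises, here is a hypothetical sketch that generates qvm-firewall commands from a fixed host list. The host list is an assumption (and almost certainly incomplete), which is exactly the problem: it would need ongoing upkeep as Signal's endpoints change:

```python
# Hypothetical sketch: restrict a VM to a fixed list of Signal hosts by
# generating qvm-firewall command lines. SIGNAL_HOSTS is illustrative and
# incomplete -- maintaining it is the burden discussed above.
SIGNAL_HOSTS = ["chat.signal.org", "storage.signal.org"]  # assumed, not authoritative

def firewall_rules(vm: str, hosts, port: int = 443):
    """Return qvm-firewall commands: allow TCP to each host on `port`, drop the rest."""
    cmds = [
        f"qvm-firewall {vm} add accept dsthost={h} proto=tcp dstports={port}"
        for h in hosts
    ]
    cmds.append(f"qvm-firewall {vm} add drop")
    return cmds
```

Even with hostname-based rules (which Qubes resolves at rule-application time), the exfiltration concern Conor raises remains: an allowed Signal connection is itself a channel.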

Would future server admin support in the workstation affect the number of templates? Probably more relevant in the scope of the Whonix lift?

  • Conor: Not necessarily, secrets injection could be done via private volume + /rw config

How should we scope this overall work?

  • Proposal on the table is to have two separate, potentially parallel lines of work: one for removal of Whonix dependency, the other for consolidation of SD-related templates.
  • Move all secrets and template-specific config to private volumes, can be done even before templates are consolidated
  • That includes the MIME apps list provided via package
  • Allie: Would be good to have a list of the relevant artifacts (config files, packages, etc.)
  • paxctld config, MIME list -- need to survey packaging repo
  • what's being dropped in directories like /usr/share, /etc?
  • We could use packages to ship all available configurations (e.g. for mimetypes), and use /rw/config to select the configuration relevant for each VM. /rw/config changes would be deployed via Salt.
  • Need to be consistent about consolidating all config into packages to make this work.
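The "ship all variants, select per VM" scheme above could look roughly like this sketch: a package installs every config variant under a shared directory, and a one-line selector file in /rw/config (deployed via Salt) names the active one for this VM. All paths and names here are assumptions for illustration:

```python
# Hypothetical sketch of selecting a packaged config variant per VM.
# Variants (e.g. mimeapps lists) ship in the template via a package;
# a selector file in the VM's persistent /rw/config picks one.
from pathlib import Path

def active_config(variants_dir: str, selector_file: str) -> Path:
    """Resolve the config variant named by the one-line selector file."""
    name = Path(selector_file).read_text().strip()
    path = Path(variants_dir) / f"{name}.list"
    if not path.is_file():
        raise FileNotFoundError(f"no packaged variant named {name!r}")
    return path
```

A boot-time script in the shared template could then symlink the resolved variant to its live location (e.g. the VM user's mimeapps.list), keeping templates identical while VMs differ.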

CONSENSUS: Begin by inventorying VM-specific configs and explore consistent management via template-universal packages & /rw/config

What do we get from Whonix and how would we replace it?

  • Access to Tor restart button -> replaceable by simply restarting sd-proxy VM
    • also VMs can add panel widgets IIRC
  • Stream isolation: non-issue because we'd be only using it for SD access
  • Recommendation for updating/using Tor Browser + JI (esp. important for admins)
  • Bridge configuration UI (potentially important for users in some countries)

Should we support updates in place or should we require a reinstall?

  • Conor: We'd have to write custom code to shut down templates, pull in and configure new templates, reboot.

CONSENSUS: We will aim for the securedrop-admin --apply style migration, but may have to timebox it.
