This repository has been archived by the owner on Apr 24, 2024. It is now read-only.

Replace custom upgrade logic with qubes-gui-updater #34

Open
conorsch opened this issue Jan 16, 2019 · 14 comments · Fixed by freedomofpress/securedrop-workstation#396

@conorsch
Contributor

The custom workstation update process described in freedomofpress/securedrop-workstation#24 has now been largely superseded by the qubes-gui-updater tool in dom0. See screenshot:

[screenshot: qubes-gui-updater]

One major problem is that the sd- templates never report any updates. Why is this? Since they were created by cloning the debian-9 TemplateVM, I'd expect the requisite Qubes-specific tooling to be present on disk. More research required.

Even if we can leverage the qubes-gui-updater tool for use in upgrading the SDW VMs, we'll still likely want unattended upgrades to ensure security fixes are applied in a timely manner.

@marmarek

One major problem is that the sd- templates never report any updates. Why is this?

Checking for updates is done by VMs based on that template. If all of them are network-disconnected, nothing can report available updates.

@kushaldas
Contributor

One major problem is that the sd- templates never report any updates. Why is this?

One of the AppVMs based on that TemplateVM needs to run apt commands via the notify script at /usr/lib/qubes/upgrades-status-notify; only then will the source template be marked as having updates in the UI.

One possible solution is to have a daily cron job execute only that script in those templates, and then let the users decide when to upgrade (as they want). @redshiftzero @emkll @conorsch
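That suggestion could be sketched as a dom0 cron entry along the following lines (the filename and VM name are illustrative assumptions, not part of any existing config):

```shell
# /etc/cron.d/sdw-update-check in dom0 (hypothetical filename)
# Once a day, run the Qubes status-notify script inside an AppVM based on
# an SDW template, so the parent TemplateVM gets flagged in the updater UI.
0 6 * * * root qvm-run --no-gui sd-svs /usr/lib/qubes/upgrades-status-notify
```

Note that the script still needs a working apt channel to actually detect updates, which is the crux of the problem discussed below.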

@conorsch
Contributor Author

One possible solution is to have a daily cron job execute only that script in those templates, and then let the users decide when to upgrade (as they want).

Or we can just mark the sd-workstation-tagged TemplateVMs with the feature updates-available=1, which should force their inclusion in the GUI updater. Firing up the template just to check for updates would have roughly the same impact on workstation performance as firing it up to fetch and install updates, so rather than doing both, we can just do the latter.
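As a rough dom0 sketch (assuming the sd-workstation tag is applied to the relevant VMs, as the Salt config does):

```shell
# Force the "updates available" flag on every sd-workstation-tagged VM,
# so they always appear in the Qubes Updater GUI. Sketch only.
for vm in $(qvm-ls --tags sd-workstation --raw-list); do
    qvm-features "$vm" updates-available 1
done
```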

None of this addresses the need for an unattended upgrade strategy, so we'll continue to iterate on the existing cron logic.

@redshiftzero
Contributor

redshiftzero commented Nov 19, 2019

copying my proposed next steps from standup today:

  1. add the cron jobs as kushal found (Replace custom upgrade logic with qubes-gui-updater #34) such that all sd VMs can be updated using the qubes-gui-updater, and then

  2. run our autoupdater (e.g. for only sd-svs-disp) to update the most security-sensitive VMs automatically on boot, and then reduce the frequency of the other updater to allow journalists to choose the best time to run it (this does mean we'll have two autoupdaters running in cron, so increased complexity; hopefully we can keep it at just two)

@eloquence
Member

eloquence commented Nov 20, 2019

For the 11/20-12/4 sprint, we've committed to an investigatory spike to explore option 1 above (make it possible to use the official updater; ideally updates should only be indicated when they are in fact available). @conorsch has offered to take this on, with an 8 hour timebox.

Note that this is an investigation only; it does not need to result in a solution that is ready for integration, but should inform the final implementation choice.

@conorsch
Contributor Author

Overview

We've discussed a few options for how to handle updates:

  1. Install them regularly via cron (daily or weekly).
  2. Rely on the Qubes Updater GUI and user action to apply updates.
    1. Force updates-available=1 on SDW VMs, so they're always in the list.
    2. Configure qubes.UpdatesProxy for SDW VMs, so they can poll for updates.

Note that for the present context, we're mostly concerned about packages inside the AppVMs, meaning we'd update the corresponding TemplateVMs, and the new versions would be present on the next boot of the AppVM (or of the physical workstation). For dom0 packages, particularly the Salt logic managing VM configuration, such as enforcing net-less config and RPC policy whitelisting, we'll identify a separate solution.
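The TemplateVM-then-AppVM flow described above can be sketched as follows (VM names are illustrative; in practice template updates go through the Qubes updates proxy):

```shell
# Update packages in the TemplateVM, then restart the AppVM so it boots
# from the template's updated root filesystem. Dom0 sketch only.
qvm-run --no-gui sd-svs-template 'sudo apt-get update && sudo apt-get -y dist-upgrade'
qvm-shutdown --wait sd-svs-template
qvm-shutdown --wait sd-svs && qvm-start sd-svs
```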

Option 1: Install updates regularly via cron

This is the approach we first started with, in freedomofpress/securedrop-workstation#172. At that time, the Qubes Updater GUI tool did not exist, so it wasn't an option for us. Since freedomofpress/securedrop-workstation#172, we haven't updated the logic at all, so it's starting to show its age. We should consider sprucing up the logic, e.g. by resolving freedomofpress/securedrop-workstation#339.

Since most team members are familiar with this logic, and the bulk of research has gone to the alternatives, I'll stop here.

Option 2i: Force updates-available=1 on SDW VMs

If we manually set qvm-features sd-svs-template updates-available 1, then sd-svs-template will always show in the Qubes Updater GUI. The result is that even after applying updates as recommended, the "Updates available" notification persists, showing the same VM that was just updated. Hardly ideal, and downright dishonest to end users. It does provide a mechanism for users to apply updates to SDW VMs, but at the cost of clarity about both whether updates are actually available and whether they were successfully applied.

Option 2ii: Configure qubes.UpdatesProxy to check for SDW updates

Given the discussion above, this was a promising option. Qubes already provides appropriate tooling for checking for updates, although it naturally assumes that a network connection is available to do so. In the case of e.g. sd-svs, no network connection is available, so apt polls fail, meaning updates are never identified and reported back to dom0 to surface notifications to the user.

A potential solution is to enable the qubes.UpdatesProxy service on net-less AppVMs. With just a few changes, we can permit polling from sd-svs to notify about updates to the underlying TemplateVM:

  1. Add $tag:sd-workstation $default allow,target=sys-net to /etc/qubes-rpc/policy/qubes.UpdatesProxy in dom0.
  2. Run qvm-service sd-svs updates-proxy-setup on in dom0.
  3. Reboot/start sd-svs.
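In dom0, those three steps look roughly like this (a sketch; the policy line uses the Qubes 4.0-era policy format shown above):

```shell
# 1. Allow SDW-tagged VMs to reach the updates proxy via sys-net.
echo '$tag:sd-workstation $default allow,target=sys-net' | \
    sudo tee -a /etc/qubes-rpc/policy/qubes.UpdatesProxy
# 2. Enable the proxy-setup service in the AppVM.
qvm-service sd-svs updates-proxy-setup on
# 3. Restart the VM so /etc/apt/apt.conf.d/01qubes-proxy is (re)written.
qvm-shutdown --wait sd-svs && qvm-start sd-svs
```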

Thereafter, apt calls made from sd-svs, including the Qubes-maintained cron job that checks update availability, will run successfully. The problem, however, is that we've violated a security control on sd-svs by enabling a global TCP proxy:

[user@dom0 ~]$ qvm-service sd-svs netvm
[user@dom0 ~]$ qvm-service sd-svs updates-proxy-setup 
on

user@sd-svs:~$ curl https://ifconfig.co
curl: (6) Could not resolve host: ifconfig.co
user@sd-svs:~$ tail -n2 /etc/apt/apt.conf.d/01qubes-proxy
Acquire::http::Proxy "http://127.0.0.1:8082/";
Acquire::tor::proxy "http://127.0.0.1:8082/";
user@sd-svs:~$ https_proxy=http://127.0.0.1:8082/ curl -s https://ifconfig.co | perl -npE 's/\d/X/g'
XX.XXX.XXX.XXX

In the final line, I chose to redact the IP, as that was my current IP, absent all VPNs and firewall settings, by virtue of the connection running through sys-net. If it's not clear, anything is possible at this stage:

user@sd-svs:~$ https_proxy=http://127.0.0.1:8082/ curl -sf https://raw.githubusercontent.com/speed47/spectre-meltdown-checker/master/spectre-meltdown-checker.sh | sudo bash -s -- --batch
awk: fatal: cannot open file `bash' for reading (No such file or directory)
awk: fatal: cannot open file `bash' for reading (No such file or directory)
CVE-2017-5753: OK (Mitigation: usercopy/swapgs barriers and __user pointer sanitization (complete, automated))
CVE-2017-5715: OK (Full retpoline + IBPB are mitigating the vulnerability)
CVE-2017-5754: OK (Mitigation: PAX_UDEREF (pgd switching))
CVE-2018-3640: OK (your CPU microcode mitigates the vulnerability)
CVE-2018-3639: OK (Mitigation: Speculative Store Bypass disabled via prctl and seccomp)
CVE-2018-3615: OK (your CPU vendor reported your CPU model as not vulnerable)
CVE-2018-3620: OK (Mitigation: PTE Inversion)
CVE-2018-3646: OK (this system is not running a hypervisor)
CVE-2018-12126: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2018-12130: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2018-12127: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2019-11091: OK (Your microcode and kernel are both up to date for this mitigation, and mitigation is enabled)
CVE-2019-11135: OK (your CPU vendor reported your CPU model as not vulnerable)
CVE-2018-12207: OK (this system is not running a hypervisor)

(Yes, I piped curl to bash to make a point.) Clearly this isn't a strategy we can accept.

Recommended next steps

Let's shore up the cron job configuration so it's less disruptive to daily users. We can selectively target certain VMs for updates (#341) to minimize the performance impact (and runtime), as well as reduce the frequency from daily to weekly.

We should also reconsider the use of discrete TemplateVMs for each SDW component. If we instead installed all the SDW-related packages in a single TemplateVM, then reused that template across multiple components, we'd get the benefit of the built-in Qubes tooling to notify about updates. There may be conflicts between certain deb packages, particularly those managing mimetypes, but perhaps there's a homedir-based workaround that would play well with the Qubes isolation. The Whonix VMs would still be separate, but we could conceivably reuse the dist-provided Whonix VMs as well.

The option remains open to establish unused AppVMs with networking enabled, strictly to check for updates, preserving the isolation of the app-specific data in the net-less VMs. The overhead of maintaining and running those dedicated checker VMs seems comparable to that of simply running the update checks in each TemplateVM, however, and is therefore unlikely to be worth the effort.

Comments welcome on the findings above!

@redshiftzero
Contributor

Interesting findings regarding qubes.UpdatesProxy. I wonder if we could customize the target NetVM (to one with restrictive firewall rules). Regardless, agreed on not using it, as even then it would be an exfiltration vector.

Let's pause on the Qubes updater and keep going on freedomofpress/securedrop-workstation#341. If the (minimal) Gtk notifications there work well and we don't discover any major issues, we can add other VMs (see comment here) on boot. Then we can see if we can speed up the updates; using fewer TemplateVMs, as you suggest, is worth considering.

@eloquence
Member

I still think that ultimately, we want to share update logic with the upstream OS, instead of maintaining our own preflight updater. There is a lot of potential for confusion if the user gets notifications through multiple channels asking them to run updates through different updaters.

I would recommend that we re-open this issue, and coordinate after the beta launch with upstream on what's required to have a unified approach. Near as I can tell from the previous discussions, it boils down mainly to ensuring that update checks can run for networkless VMs (which seems like a feature upstream would want anyway), and adding support upstream for an update policy that can be enforced e.g. at login (which is a bit more specific to our use case, but also seems in line with Qubes' goals).

@emkll
Contributor

emkll commented Feb 21, 2020

To completely eliminate the need for a workstation-specific updater, there should also be a way to integrate some form of state management, or a post-upgrade-type task (for example, enforcing dom0 state: freedomofpress/securedrop-workstation#427).

@marmarek

I'm open to suggestions for a pluggable post-upgrade-task feature, if you could write down the functional requirements.

@redshiftzero
Contributor

I think getting rid of our updater entirely will be a bit challenging if we want to maintain the current UI, but I'm reopening this for discussion.

@redshiftzero redshiftzero reopened this Feb 21, 2020
@eloquence
Member

Thanks for re-opening. My sense is that the set of concerns that are shared between us and upstream is far larger than the set of concerns that are not. Even if we can't ultimately use the same GUI for the updater, there may still be other ways to share code in areas like notifications and system policies. We probably won't be able to investigate this further until the pilot is well underway, but appreciate at least keeping this option on the horizon.

@conorsch
Contributor Author

Post template consolidation (#471), it'd be grand to take a closer look at the updater logic and try to simplify it. We'll have significantly fewer SDW custom templates, and our greater familiarity with the Python API (e.g. updates-available=1) should allow us to integrate with the upstream updater. Doing so is rather a requirement for things like freedomofpress/securedrop-workstation#20, so that we ensure all templates are updated in a timely fashion, not just the SDW templates we maintain.

@zenmonkeykstop
Contributor

Related to ongoing work with the updater and planned changes in Qubes 4.2, so moving it to the securedrop-updater repo.

@zenmonkeykstop zenmonkeykstop transferred this issue from freedomofpress/securedrop-workstation Mar 30, 2023