
Avoid manual downloading of dom0 rpm during bootstrapping #945

Open
legoktm opened this issue Feb 9, 2024 · 8 comments

@legoktm
Member

legoktm commented Feb 9, 2024

I want to split out the first step from #942 as its own separate issue:

The rough flow is as follows:

  1. configure a network-attached VM to download the dom0 rpm, copy and install it

https://workstation.securedrop.org/en/stable/admin/install.html#download-and-install-securedrop-workstation describes a lot of steps related to verifying the key, manually writing a .repo file, verifying and moving the RPM around and then finally installing it.

My proposed solution would be to have a securedrop-keyring RPM that is shipped in the Qubes repositories, so you can simply run sudo dnf install securedrop-keyring from dom0 and it provisions our public signing key in /etc/pki/rpm-gpg/ (or elsewhere). Then the user creates the .repo file and can run sudo dnf install securedrop-workstation-dom0-config, and we're all set. Instead of directly trusting the SD release key, we would be trusting that the Qubes-signed package provided us with the correct key (certainly if Qubes is compromised, the whole workstation is compromised anyway).
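A minimal sketch of that flow, assuming a hypothetical securedrop-keyring package and an illustrative repo URL and file paths (none of these names are final):

```shell
# 1. A Qubes-signed keyring package provisions our public key (hypothetical name)
sudo dnf install securedrop-keyring

# 2. The user writes the repo definition by hand (contents illustrative)
sudo tee /etc/yum.repos.d/securedrop-workstation-dom0.repo <<'EOF'
[securedrop-workstation-dom0]
name=SecureDrop Workstation dom0 packages
baseurl=https://yum.securedrop.org/workstation/dom0/
enabled=1
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation
EOF

# 3. Signature verification now succeeds against the packaged key
sudo dnf install securedrop-workstation-dom0-config
```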

This would largely require the Qubes team being willing to ship our key, and figuring out details around update cadence, timelines, etc.

@deeplow
Contributor

deeplow commented Feb 23, 2024

#467 is an alternative solution.

@rocodes
Contributor

rocodes commented Mar 27, 2024

@legoktm I think this is a good idea whether or not we end up working towards an ISO, although I think it's currently semi-blocked on the last sentence you mentioned (a bit more of an internal conversation about update cadence/making sure our upstream ask isn't too onerous).

What I'm wondering, if we do this, is why we wouldn't also ship the .repo file and have it autoconfigured in the postinst. I think we can find some way of visually confirming with the user before they proceed with the installation, and/or we can still support the legacy method of checking and importing the key manually for users who want to go the extra mile.

@deeplow
Contributor

deeplow commented Mar 27, 2024

Yep. I'm happy with that idea as long as the Qubes team is happy as well. I had kind of mentioned this a while ago as an alternative approach to the .iso idea:

Alternative approaches

Having the SecureDrop keys / bootstrapping repo in Qubes-contrib or even the keys in Qubes itself #945

@deeplow
Contributor

deeplow commented Mar 27, 2024

Can someone please tag all of these conversations (#945, #467 and #942) with the provisioning tag or something more appropriate so we can easily find these various solutions to the same problem?

@rocodes
Contributor

rocodes commented Apr 22, 2024

Bumping this for a couple reasons, besides the initial benefits discussed above:

  • rpm --import does not update our release pubkey #953 (/earlier Explicitly manage our additions to the rpm keyring instead of appending #423) would be well-served by a package postinst. We should resolve this before our next key expiry this summer
  • rather than configuring .repo files dynamically with Salt, it would be nice to define static .repo file(s). This would also lend more confidence about system state (e.g. verifying that the securedrop-yum-keyring and not the securedrop-yum-keyring-test package is installed, vs. manually searching for all the places the key could be)
  • we might be able to automate some manual aspects of QA (such as switching between yum and yum-qa), or take advantage of some of DNF's built-in abilities (for example, an orchestration package that depends on a keyring package and reprovisions on keyring package changes if a user flips between prod and test key - but now I'm spitballing a little bit)
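On the second bullet, a packaged keyring would reduce the "which key is this system trusting?" check to a single query (package names hypothetical, following the naming in the bullet above):

```shell
# Prod vs. test provenance becomes a package query instead of a filesystem hunt
rpm -q securedrop-yum-keyring        # should be installed on prod
rpm -q securedrop-yum-keyring-test   # should NOT be installed on prod
```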

@lsd-cat
Member

lsd-cat commented Apr 24, 2024

Just flagging that, depending on the Qubes team's preferences and choices, we could also ask to ship our repository by default but keep it disabled, and then ask users to just issue:

dnf config-manager --set-enabled securedrop-something

Or ship the repo file as a package too, as it seems they are doing here: https://www.qubes-os.org/doc/installing-contributed-packages/
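A disabled-by-default repo file, if Qubes were willing to ship one for us, might look like this (contents illustrative, not an actual proposal):

```ini
[securedrop-workstation-dom0]
name=SecureDrop Workstation dom0 packages
baseurl=https://yum.securedrop.org/workstation/dom0/
enabled=0
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation
```

dnf config-manager --set-enabled would then flip enabled to 1 without the user ever hand-editing the file.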

@zenmonkeykstop
Contributor

I'm leery of having our repo included in dom0 by default (even disabled) because it essentially asks the Qubes team to put an elevated level of trust in us and our package repos: were we compromised, there would be potential impact for Qubes users who never even looked at SD. But the contrib repo has come up as an option before. It's not exactly great either (now SD users need to add both contrib and us, the latter via rpm), but assuming they continue to maintain -contrib, it's worth investigating.

@rocodes
Contributor

rocodes commented May 9, 2024

After our conversation with the Qubes team, it sounds like they're open to having a small rpm with our key and repo file in qubes-contrib.

I made a small repo with an rpm .spec file, the .repo file, the pubkey, and our containerized build stuff (I don't know whether Qubes will use the latter, but TBD); the spec file is as below, just to show how basic it can be (2 files!):

Name:		securedrop-workstation-keyring
Version:	0.1.0
Release:	1%{?dist}
Summary:	SecureDrop Workstation Keyring

# For reproducible builds:
# [snip]

BuildArch:		noarch
BuildRequires:	systemd-rpm-macros

%description
This package contains the SecureDrop Release public key and yum .repo file
used to bootstrap installation of SecureDrop Workstation.

%prep
%setup -q -n files

%build
# No building necessary

%install
install -m 755 -d %{buildroot}/etc/yum.repos.d
install -m 755 -d %{buildroot}/etc/pki/rpm-gpg
install -m 644 %{_builddir}/files/securedrop-workstation-dom0.repo %{buildroot}/etc/yum.repos.d/
install -m 644 %{_builddir}/files/securedrop-release-signing-pubkey-2021.asc %{buildroot}/etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation

%files
/etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation
/etc/yum.repos.d/securedrop-workstation-dom0.repo

%preun
# see https://github.com/rpm-software-management/rpm/issues/2577
# naive idea - this does not work
# (cut defaults to the tab delimiter used in the query format above)
key_id=$(rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n' | grep SecureDrop | cut -f1)
rpm -e $key_id || true

%post
# naive idea - this does not work
# (scriptlets already run as root, so no sudo needed)
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation

%changelog
# TODO

but there's a hiccup, as I've indicated in the comments: we can't use rpm commands in the RPM %post or %preun scriptlets, even if they're for bootstrapping other keys (our key), because rpm holds a transaction lock on the rpm database while it's installing and updating packages, lest the database be corrupted.

AFAICT, the best we can do re: key management is provide a bootstrapping script that the user runs after the initial RPM is installed and the transaction lock on the rpm database is lifted.
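Such a bootstrapping script could be just a few lines, run manually once dnf exits and the lock is released (a sketch; the key path matches the spec file above, the script name is hypothetical):

```shell
#!/bin/bash
# securedrop-import-key.sh (hypothetical): run AFTER
# `dnf install securedrop-workstation-keyring` has completed,
# so the rpm database transaction lock has been released.
set -euo pipefail

KEY=/etc/pki/rpm-gpg/RPM-GPG-KEY-securedrop-workstation

# Import the packaged release key into the rpm keyring
sudo rpm --import "$KEY"

# Sanity check: the key should now appear as a gpg-pubkey pseudo-package
rpm -q gpg-pubkey --qf '%{NAME}-%{VERSION}-%{RELEASE}\t%{SUMMARY}\n' | grep -i securedrop
```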
