Uses local RPM build for "dev" and "staging" scenarios #587

Merged · 14 commits · Sep 4, 2020
62 changes: 22 additions & 40 deletions Makefile
@@ -11,80 +11,65 @@ all: assert-dom0
@echo
@echo "make dev"
@echo "make staging"
@echo "make prod"
@echo
@echo "These targets will set your config.json to the appropriate environment."
@false

dev: assert-dom0 ## Configures and builds a DEVELOPMENT install
./scripts/configure-environment --env dev
dev staging: assert-dom0 ## Configures and builds a dev or staging environment
./scripts/configure-environment --env $@
$(MAKE) validate
$(MAKE) prep-salt
./scripts/provision-all
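
Note: the combined `dev staging` rule relies on Make's automatic variable `$@`, which expands to the name of the target currently being built, so a single recipe serves both environments:

```
make dev       # runs: ./scripts/configure-environment --env dev
make staging   # runs: ./scripts/configure-environment --env staging
```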

prod: assert-dom0 ## Configures and builds a PRODUCTION install for pilot use
./scripts/configure-environment --env prod
$(MAKE) validate
$(MAKE) prep-salt
./scripts/provision-all

staging: assert-dom0 ## Configures and builds a STAGING install. To be used on test hardware ONLY
./scripts/configure-environment --env staging
$(MAKE) validate
$(MAKE) prep-salt
./scripts/provision-all
$(MAKE) prep-dev
sdw-admin --apply

dom0-rpm: ## Builds rpm package to be installed on dom0
@./scripts/build-dom0-rpm

clone: assert-dom0 ## Pulls the latest repo from work VM to dom0
@./scripts/clone-to-dom0

qubes-rpc: prep-salt ## Places default deny qubes-rpc policies for sd-app and sd-gpg
qubes-rpc: prep-dev ## Places default deny qubes-rpc policies for sd-app and sd-gpg
sudo qubesctl --show-output --targets sd-dom0-qvm-rpc state.highstate

add-usb-autoattach: prep-dom0 ## Adds udev rules and scripts to sys-usb
sudo qubesctl --show-output --skip-dom0 --targets sys-usb state.highstate

remove-usb-autoattach: prep-salt ## Removes udev rules and scripts from sys-usb
remove-usb-autoattach: prep-dev ## Removes udev rules and scripts from sys-usb
sudo qubesctl --show-output state.sls sd-usb-autoattach-remove

sd-workstation-template: prep-salt ## Provisions base template for SDW AppVMs
sd-workstation-template: prep-dev ## Provisions base template for SDW AppVMs
sudo qubesctl --show-output state.sls sd-workstation-buster-template
sudo qubesctl --show-output --skip-dom0 --targets sd-workstation-buster-template state.highstate

sd-proxy: prep-salt ## Provisions SD Proxy VM
sd-proxy: prep-dev ## Provisions SD Proxy VM
Contributor commented:

The changes to the dev environment may have repercussions on these individual Makefile targets:

The prep-salt Makefile target would copy the local Salt files into /srv/salt/, whereas this updated prep-dev target will install the dom0 RPM. This means that any changes to the local files in the securedrop-workstation folder in dom0 will not be used.

If this is the case, adding a note to this effect in the dev docs could be helpful.

Contributor (author) replied:

> If this is the case, adding a note to this effect in the dev docs could be helpful

That's definitely true. Editing the files in e.g. /srv/salt/ still works fine, but I wouldn't recommend either approach, given how easy it is to lose changes that way. Will clarify in the docs!
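
One way to check where a given dom0 file came from is to ask RPM about ownership and local modifications (a sketch; the file name is illustrative):

```
# Files installed by the RPM are owned by a package; local copies are not.
rpm -qf /srv/salt/sd-workstation.top || echo "not owned by any package (local copy)"

# List RPM-owned files that have been modified on disk since installation.
rpm -V securedrop-workstation-dom0-config
```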

sudo qubesctl --show-output state.sls sd-proxy
sudo qubesctl --show-output --skip-dom0 --targets sd-proxy-buster-template,sd-proxy state.highstate

sd-gpg: prep-salt ## Provisions SD GPG keystore VM
sd-gpg: prep-dev ## Provisions SD GPG keystore VM
sudo qubesctl --show-output state.sls sd-gpg
sudo qubesctl --show-output --skip-dom0 --targets sd-workstation-buster-template,sd-gpg state.highstate

sd-app: prep-salt ## Provisions SD APP VM
sd-app: prep-dev ## Provisions SD APP VM
sudo qubesctl --show-output state.sls sd-app
sudo qubesctl --show-output --skip-dom0 --targets sd-app-buster-template,sd-app state.highstate

sd-whonix: prep-salt ## Provisions SD Whonix VM
sd-whonix: prep-dev ## Provisions SD Whonix VM
sudo qubesctl --show-output state.sls sd-whonix
sudo qubesctl --show-output --skip-dom0 --targets whonix-gw-15,sd-whonix state.highstate

sd-viewer: prep-salt ## Provisions SD Submission Viewing VM
sd-viewer: prep-dev ## Provisions SD Submission Viewing VM
sudo qubesctl --show-output state.sls sd-viewer
sudo qubesctl --show-output --skip-dom0 --targets sd-viewer-buster-template,sd-viewer state.highstate

sd-devices: prep-salt ## Provisions SD Export VM
sd-devices: prep-dev ## Provisions SD Export VM
sudo qubesctl --show-output state.sls sd-devices
sudo qubesctl --show-output --skip-dom0 --targets sd-devices-buster-template,sd-devices,sd-devices-dvm state.highstate

sd-log: prep-salt ## Provisions SD logging VM
sd-log: prep-dev ## Provisions SD logging VM
sudo qubesctl --show-output state.sls sd-log
sudo qubesctl --show-output --skip-dom0 --targets sd-log-buster-template,sd-log state.highstate

clean-salt: assert-dom0 ## Purges SD Salt configuration from dom0
@./scripts/clean-salt

prep-salt: assert-dom0 ## Configures Salt layout for SD workstation VMs
@./scripts/prep-salt
prep-dev: assert-dom0 ## Configures Salt layout for SD workstation VMs
@./scripts/prep-dev
@./scripts/validate_config.py

remove-sd-whonix: assert-dom0 ## Destroys SD Whonix VM
@@ -109,13 +94,10 @@ remove-sd-devices: assert-dom0 ## Destroys SD EXPORT VMs
remove-sd-log: assert-dom0 ## Destroys SD logging VM
@./scripts/destroy-vm sd-log

clean: assert-dom0 prep-salt ## Destroys all SD VMs
sudo qubesctl --show-output state.sls sd-clean-default-dispvm
$(MAKE) destroy-all
sudo qubesctl --show-output --skip-dom0 --targets whonix-gw-15 state.sls sd-clean-whonix
sudo qubesctl --show-output state.sls sd-clean-all
sudo dnf -y -q remove securedrop-workstation-dom0-config 2>/dev/null || true
$(MAKE) clean-salt
clean: assert-dom0 prep-dev ## Destroys all SD VMs
# Use the local script path, since system PATH location will be absent
# if clean has already been run.
./scripts/sdw-admin.py --uninstall --keep-template-rpm --force

test: assert-dom0 ## Runs all application tests (no integration tests yet)
python3 -m unittest discover -v tests
@@ -150,7 +132,7 @@ flake8: ## Lints all Python files with flake8
# available only in the developer environment, i.e. Work VM.
@./scripts/lint-all "flake8"

prep-dom0: prep-salt # Copies dom0 config files
prep-dom0: prep-dev # Copies dom0 config files
sudo qubesctl --show-output --targets dom0 state.highstate

destroy-all: ## Destroys all VMs managed by Workstation salt config
20 changes: 11 additions & 9 deletions README.md
@@ -116,7 +116,9 @@ Select all VMs marked as **updates available**, then click **Next**. Once all up

#### Download, Configure, Copy to `dom0`

Decide on a VM to use for development. We suggest creating a standalone VM called `sd-dev`. Clone this repo to your preferred location on that VM.
Decide on a VM to use for development. We recommend creating a standalone VM called `sd-dev` by following [these instructions](https://docs.securedrop.org/en/stable/development/setup_development.html#qubes). You must install Docker in that VM in order to build a development environment using the standard workflow.
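
How you install Docker depends on your template; a minimal sketch, assuming a Debian-based standalone VM (not official setup instructions):

```
# In sd-dev:
sudo apt update && sudo apt install -y docker.io
sudo usermod -aG docker "$USER"   # log out and back in afterwards
```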

Clone this repo to your preferred location on that VM.

Next we need to do some SecureDrop-specific configuration:

@@ -139,7 +141,7 @@ After that initial manual step, the code in your development VM may be copied in
[dom0]$ export SECUREDROP_DEV_VM=sd-dev # set to your dev VM
[dom0]$ export SECUREDROP_DEV_DIR=/home/user/projects/securedrop-workstation # set to your working directory
[dom0]$ cd ~/securedrop-workstation/
[dom0]$ make clone # copy repo to dom0
[dom0]$ make clone # build RPM package (requires Docker) and copy repo to dom0
```

If you plan to work on the [SecureDrop Client](https://github.com/freedomofpress/securedrop-client) code, also run this command in `dom0`:
@@ -173,13 +175,13 @@ qfile-agent : Fatal error: File copy: Disk quota exceeded; Last file: <...> (err

When the installation process completes, a number of new VMs will be available on your machine, all prefixed with `sd-`.

#### Editing the configuration
When developing on the Workstation, make sure to edit files in `sd-dev`, then copy them to dom0 via `make clone && make dev` to reinstall them. Any changes that you make to the `~/securedrop-workstation` folder in dom0 will be overwritten during `make clone`. Similarly, any changes you make to e.g. `/srv/salt/` in dom0 will be overwritten by `make dev`.
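
A typical edit-test loop therefore looks like this (file path illustrative):

```
[sd-dev]$ $EDITOR dom0/sd-app.sls   # edit in the development VM
[dom0]$ make clone                  # rebuild the RPM and copy it to dom0
[dom0]$ make dev                    # reprovision using the new files
```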

### Staging Environment

The staging environment is intended to provide an experience closer to a production environment. For example, it will alter power management settings on your laptop to prevent suspending it to disk, and make other changes that may not be desired during day-to-day development in Qubes.

**IMPORTANT: THE STAGING ENVIRONMENT SHOULD NEVER BE USED FOR PRODUCTION PURPOSES. IT SHOULD ALSO NOT BE USED ON DEVELOPER MACHINES, BUT ONLY ON TEST MACHINES THAT HOLD NO SENSITIVE DATA.**

#### Update `dom0`, `fedora-31`, `whonix-gw-15` and `whonix-ws-15` templates

Updates to these VMs will be provided by the installer and updater, but ensuring they are up to date prior to install will make it easier to debug, should something go wrong.
@@ -192,9 +194,9 @@ In the Qubes Menu, navigate to `System Tools` and click on `Qubes Update`. Click

You can install the staging environment in two ways:

- If you have an up-to-date clone of this repo with a valid configuration in `dom0`, you can use the `make staging` target to provision a staging environment. Prior to provisioning, `make staging` will set your `config.json` environment to `staging`. As part of the provisioning, your package repository configuration will be updated to use the latest test release of the RPM package, and the latest nightlies of the Debian packages.
- If you have an up-to-date clone of this repo with a valid configuration in `dom0`, you can use the `make staging` target to provision a staging environment. Prior to provisioning, `make staging` will set your `config.json` environment to `staging`. As part of the provisioning, a locally built RPM will be installed in dom0. The dom0 package repository configuration will be updated to install future test-only versions of the RPM package from the https://yum-test.securedrop.org repository, and Workstation VMs will receive the latest nightlies of the Debian packages (same as `make dev`).

- If you want to install a staging environment from scratch in a manner similar to a production install (starting from an RPM, and using `sdw-admin` for the installation), follow the process in the following sections.
- If you want to download a specific version of the RPM, and follow a verification procedure similar to that used in a production install, use the process described in the following sections.

#### Download and install securedrop-workstation-dom0-config package

@@ -274,7 +276,7 @@ In a terminal in `dom0`, run the following commands:

This project's development requires different workflows for working on provisioning components and working on submission-handling scripts.

For developing salt states and other provisioning components, work is done in a development VM and changes are made to individual state and top files there. In the `dom0` copy of this project, `make clone` is used to copy over the updated files; `make <vm-name>` to rebuild an individual VM; and `make dev` to rebuild the full installation. Current valid target VM names are `sd-proxy`, `sd-gpg`, `sd-whonix`, and `disp-vm`. Note that `make clone` requires two environment variables to be set: `SECUREDROP_DEV_VM` must be set to the name of the VM where you've been working on the code, the `SECUREDROP_DEV_DIR` should be set to the directory where the code is checked out on your development VM.
For developing salt states and other provisioning components, work is done in a development VM and changes are made to individual state and top files there. In the `dom0` copy of this project, `make clone` is used to package and copy over the updated files; `make <vm-name>` to rebuild an individual VM; and `make dev` to rebuild the full installation. Current valid target VM names are `sd-proxy`, `sd-gpg`, `sd-whonix`, and `disp-vm`. Note that `make clone` requires two environment variables to be set: `SECUREDROP_DEV_VM` must be set to the name of the VM where you've been working on the code, and `SECUREDROP_DEV_DIR` must be set to the directory where the code is checked out on your development VM.

For developing submission processing scripts, work is done directly in the virtual machine running the component. To commit, copy the updated files to a development VM with `qvm-copy-to-vm` and move the copied files into place in the repo. (This process is a little awkward, and it would be nice to make it better.)
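
For example, from inside the component VM (file path hypothetical):

```
# The file arrives in sd-dev under ~/QubesIncoming/<source-vm>/.
qvm-copy-to-vm sd-dev /usr/sbin/send-to-usb
```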

@@ -298,7 +300,7 @@ Be aware that running tests *will* power down running SecureDrop VMs, and may re

Double-clicking the "SecureDrop" desktop icon will launch a preflight updater that applies any necessary updates to VMs, and may prompt a reboot.

To update workstation provisioning logic, one must use the `sd-dev` AppVM that was created during the install. From your checkout directory, run the following commands (replace `<tag>` with the tag of the release you are working with):
To update workstation provisioning logic in a development environment, one must use the `sd-dev` AppVM that was created during the install. From your checkout directory, run the following commands (replace `<tag>` with the tag of the release you are working with):

```
git fetch --tags
@@ -313,7 +315,7 @@
make clone
make dev
```

In the future, we plan on shipping a *SecureDrop Workstation* installer package as an RPM package in `dom0` to automatically update the salt provisioning logic.
The `make clone` command will build a new version of the RPM package that contains the provisioning logic in your development VM (e.g., `sd-dev`) and copy it to `dom0`. The RPM is built using a Docker container, so Docker must be installed in your development VM.

### Building the Templates

16 changes: 0 additions & 16 deletions dom0/sd-clean-all.sls
@@ -52,22 +52,6 @@ remove-rpc-policy-tags:
cmd.script:
- name: salt://remove-tags

# Removes files that are provisioned by the dom0 RPM, only for the development
# environment, since dnf takes care of those provisioned in the RPM
{% if d.environment == "dev" %}
remove-dom0-sdw-config-files-dev:
file.absent:
- names:
- /opt/securedrop
- /srv/salt/remove-tags
- /srv/salt/securedrop-update
- /srv/salt/update-xfce-settings
# Do not remove these scripts before they have done their cleanup duties
- require:
- cmd: dom0-reset-icon-size-xfce
- cmd: remove-rpc-policy-tags
{% endif %}

sd-cleanup-etc-changes:
file.replace:
- names:
12 changes: 3 additions & 9 deletions dom0/sd-dom0-files.sls
@@ -198,21 +198,15 @@ dom0-securedrop-launcher-desktop-shortcut:
- mode: 755

{% import_json "sd/config.json" as d %}
{% if d.environment == "dev" %}
dom0-remove-securedrop-workstation-dom0-config:
pkg.removed:
- pkgs:
- securedrop-workstation-dom0-config

{% else %}

{% if d.environment != "dev" %}
Contributor commented:

If I understand correctly, here we are explicitly excluding dev from installing the RPM, to avoid installing the latest version from the yum repos should the locally built version be lower than the one on the server. If that's the case, it may be worth adding a comment here for future maintainers, as it is somewhat counter-intuitive.

# In the dev environment, we've already installed the rpm from
# local sources, so don't also pull in from the yum-test repo.
dom0-install-securedrop-workstation-dom0-config:
pkg.installed:
- pkgs:
- securedrop-workstation-dom0-config
- require:
- file: dom0-workstation-rpm-repo

{% endif %}

# Hide suspend/hibernate options in menus in prod systems
6 changes: 3 additions & 3 deletions rpm-build/SPECS/securedrop-workstation-dom0-config.spec
@@ -59,11 +59,11 @@ install -m 644 dom0/*.top %{buildroot}/srv/salt/
install -m 644 dom0/*.j2 %{buildroot}/srv/salt/
install -m 644 dom0/*.yml %{buildroot}/srv/salt/
install -m 644 dom0/*.conf %{buildroot}/srv/salt/
install -m 655 dom0/remove-tags %{buildroot}/srv/salt/
install -m 755 dom0/remove-tags %{buildroot}/srv/salt/
install -m 644 dom0/securedrop-login %{buildroot}/srv/salt/
install -m 644 dom0/securedrop-launcher.desktop %{buildroot}/srv/salt/
install -m 655 dom0/securedrop-handle-upgrade %{buildroot}/srv/salt/
install -m 655 dom0/update-xfce-settings %{buildroot}/srv/salt/
install -m 755 dom0/securedrop-handle-upgrade %{buildroot}/srv/salt/
install -m 755 dom0/update-xfce-settings %{buildroot}/srv/salt/
install -m 755 scripts/sdw-admin.py %{buildroot}/%{_bindir}/sdw-admin
install -m 644 sd-app/* %{buildroot}/srv/salt/sd/sd-app/
install -m 644 sd-proxy/* %{buildroot}/srv/salt/sd/sd-proxy/
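
(Context on the mode fix above: 655 is rw-r-xr-x, meaning group and other can execute but the owner cannot, almost certainly a typo; 755 is the conventional rwxr-xr-x for installed scripts. An illustrative post-install check:)

```
stat -c '%a %n' /srv/salt/remove-tags   # expect: 755 /srv/salt/remove-tags
```
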
2 changes: 1 addition & 1 deletion scripts/build-dom0-rpm
@@ -24,7 +24,7 @@ function build_local_base {
}

function docker_cmd_wrapper() {
docker run -it --rm \
docker run -t --rm \
--network=none \
-v "${ROOT_DIR}:/sd" \
-v "${ROOT_DIR}/rpm-build:${USER_RPMDIR}" \
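
(Note on the `-it` to `-t` change above: dropping `-i` means the container no longer expects an attached stdin, which matters when the build runs non-interactively, e.g. via `qvm-run` from `make clone`; `-t` alone still allocates a pseudo-terminal for readable output. A minimal reproduction of the failure mode:)

```
# With -i and piped stdin, Docker refuses to allocate a TTY:
echo | docker run -it --rm alpine true   # "the input device is not a TTY"
docker run -t --rm alpine true           # succeeds without attached stdin
```
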
27 changes: 0 additions & 27 deletions scripts/clean-salt

This file was deleted.

10 changes: 9 additions & 1 deletion scripts/clone-to-dom0
@@ -20,12 +20,19 @@ dev_dir="${SECUREDROP_DEV_DIR:-/home/user/securedrop-workstation}"
# The dest directory in dom0 is not customizable.
dom0_dev_dir="$HOME/securedrop-workstation"

# Call out to target AppVM, to build an RPM containing
# the latest Salt config for dom0. The RPM will be included
# in the subsequent tarball, which is fetched to dom0.
function build-dom0-rpm() {
printf "Building RPM on %s ...\n" "${dev_vm}"
qvm-run -q "$dev_vm" "make -C $dev_dir dom0-rpm"
Contributor commented:

Would it make sense here to bump the RPM version to make sure that the version is always higher than anything available?

Contributor (author) replied:

I'm not aware of any RPM tooling like dch for programmatically bumping the version. Ideally it'd be something like <current_version><alpha1><git_short_hash>, but absent tooling, simply defaulting to <current_version> makes the most sense to me.
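
For illustration only (not something this PR implements), one way to derive such a version string in the dev VM, assuming `rpmspec` and `git` are available:

```
spec=rpm-build/SPECS/securedrop-workstation-dom0-config.spec
base="$(rpmspec -q --qf '%{version}\n' "$spec" | head -n1)"
echo "${base}~alpha1+$(git rev-parse --short HEAD)"   # e.g. 0.3.0~alpha1+abc1234
```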

}

# Call out to target AppVM to create a tarball in dom0
function create-tarball() {
printf "Cloning code from %s:%s ...\n" "${dev_vm}" "${dev_dir}"
qvm-run --pass-io "$dev_vm" \
"tar -c --exclude-vcs \
--exclude='*.rpm' \
-C '$(dirname "$dev_dir")' \
'$(basename "$dev_dir")'" > /tmp/sd-proj.tar
}
@@ -35,5 +42,6 @@ function unpack-tarball() {
tar xf /tmp/sd-proj.tar -C "${dom0_dev_dir}" --strip-components=1
}

build-dom0-rpm
create-tarball
unpack-tarball
32 changes: 5 additions & 27 deletions scripts/configure-environment
@@ -1,15 +1,12 @@
#!/usr/bin/env python3
"""
Helper script to permit developers to select deployment
strategies for the dom0-based SecureDrop Workstation config.

Updates the config.json in-place in dom0 in order to modify.
Updates the config.json in-place in dom0 to set the environment to 'dev' or
'staging'.
"""
import json
import sys
import argparse
import os
from distutils.util import strtobool


def parse_args():
@@ -23,10 +20,10 @@
)
parser.add_argument(
"--environment",
default="prod",
default="dev",
required=False,
action="store",
help="Target deploy strategy, e.g. 'prod', 'dev', or 'staging'",
help="Target deploy strategy, i.e. 'dev', or 'staging'",
)
args = parser.parse_args()
if not os.path.exists(args.config):
@@ -35,28 +32,12 @@
parser.print_help(sys.stderr)
sys.exit(1)

if args.environment not in ("prod", "dev", "staging"):
if args.environment not in ("dev", "staging"):
parser.print_help(sys.stderr)
sys.exit(2)
return args


def confirm_staging():
"""Prompt for confirmation if staging selected.
We only want to use staging on test machines.
"""
print("WARNING: Config environment 'staging' was requested.")
print("WARNING: The staging env should only be used on TEST HARDWARE.")
print("WARNING: If you are on a primary laptop for work/production use, ")
print("WARNING: please update your config.json with environment=prod.")
confirmation = input("WARNING: Are you sure you wish to continue? [y/N] ")
try:
assert strtobool(confirmation)
except (AssertionError, ValueError):
print("Confirmation declined, exiting...")
sys.exit(1)


def set_env_in_config(args):
with open(args.config, "r") as f:
old_config = json.load(f)
@@ -75,7 +56,4 @@
if __name__ == "__main__":
args = parse_args()

if args.environment == "staging":
confirm_staging()

set_env_in_config(args)
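
With the `prod` option and the staging confirmation prompt removed, the script's invocation (as wired up in the Makefile) is simply:

```
[dom0]$ ./scripts/configure-environment --env staging   # or: --env dev
```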