
cli: demo experience #1096

Open · 7 of 9 tasks
tgeoghegan opened this issue Jun 13, 2024 · 8 comments
@tgeoghegan (Contributor) commented Jun 13, 2024

We want it to be easy for prospective users of Divvi Up to give it a whirl and do some simple aggregations. To enable that, we want to make it easy to use the included `compose.yaml` to spin up a control plane, web app, and a pair of aggregators, and then use `divviup dap-client` to upload and collect.
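Concretely, the end-to-end flow we're aiming for looks something like the sketch below. The `dap-client` upload/collect subcommands come from this issue; the flags, environment variables, and `$TASK_ID` placeholder are assumptions, not a committed interface:

```sh
# Bring everything up and wait for healthchecks to pass.
docker compose up --detach --wait

# Point the CLI at the local control plane. Variable names and values
# here are placeholders; the demo docs carry the real ones.
export DIVVIUP_API_URL=http://localhost:8080
export DIVVIUP_TOKEN=demo-token

# Sanity check, then upload a few reports and collect an aggregate.
divviup account list
divviup dap-client upload --task-id "$TASK_ID" --measurement 1
divviup dap-client collect --task-id "$TASK_ID"
```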

Missing pieces are:

@tgeoghegan tgeoghegan self-assigned this Jun 13, 2024
tgeoghegan added a commit that referenced this issue Jun 13, 2024
Adds a simple healthcheck on the `divviup-api` service in `compose.yaml`
that tickles the health endpoint. Also adds a check to the Docker CI job
that runs Docker compose and waits 120 seconds for it to become healthy.

Part of #1096
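For reference, a minimal sketch of such a healthcheck stanza, assuming the health endpoint lives at `/health` on port 8080 and that `curl` is present in the image (all three are assumptions; the committed `compose.yaml` is authoritative):

```yaml
services:
  divviup_api:
    # ...
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]
      interval: 5s
      timeout: 3s
      retries: 24   # roughly matches the CI job's 120-second budget
```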
@inahga (Contributor) commented Jun 13, 2024

We may want to clarify the stability policy of `divviup-api`, now that we would effectively be shipping it to end users.

I think we still want to be empowered to make incompatible CLI and configuration changes to `divviup-api` (the server) and the docker-compose file without having to worry about compatibility. Maybe the policy ought to be "always download the `docker-compose.yml` file corresponding to the Janus release"?
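Under that policy, demo instructions would pin the compose file to a release rather than tracking `main`; the URL shape below is illustrative only (the tag format and asset name are assumptions):

```sh
# Fetch the compose file published alongside a specific release
# (hypothetical tag and asset name).
RELEASE=0.1.0
curl --fail --location --output compose.yaml \
  "https://github.com/divviup/divviup-api/releases/download/${RELEASE}/compose.yaml"
```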

@inahga (Contributor) commented Jun 13, 2024

It makes me somewhat uncomfortable that local dev and the demo rely on the `integration-testing` feature to work.

Its purpose is to remove the authentication requirement and to allow plain-http URLs for aggregators, so that Janus integration tests can run without provisioning users or certificates.

Not sure if there's any action to take here, really. If anything, we could break `integration-testing` into two features or runtime flags, `no-auth` and `allow-http`, for clarity.

Long term, I was hoping to replace the "no authentication required" part of `integration-testing` with a locally-running authentication provider (e.g. Keycloak) programmed with some fixture users, so we can still exercise authentication code paths in testing and dev. That of course won't work for the demo: I don't think we want to bother users with signing in with fake accounts.
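If the feature were split, builds could opt into each relaxation separately; a sketch assuming hypothetical Cargo feature names `no-auth` and `allow-http`:

```sh
# Demo/local-dev build: skip authentication and allow plain-http
# aggregator URLs.
cargo build --features no-auth,allow-http

# A future test setup with a local Keycloak could keep allow-http but
# drop no-auth, so authentication code paths still get exercised.
cargo build --features allow-http
```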

@tgeoghegan (Contributor, Author)

In #1098, David notes:

> This looks good, but the `compose.yaml` file still has one dependency on the surrounding repo, so it won't work standalone just yet. The `divviup_api_vite` service has a build stanza and mounts the webapp source code. We'll need to either use the existing `divviup_api` image's bundled assets, or bake a new image with just a static web server, the webapp assets, and a way to mount an appropriate `api_url` file in as well. (The former may be trickier to set up since it uses host-based routing, but maybe we can make it work by setting the right environment variables to localhost with specific ports.)

and also:

> One other consideration: how will we phase updating the image version numbers in the compose file, tagging commits, and uploading release artifacts? For now, the compose file attached to a release will trail behind the version of the release itself. If we updated tags in the compose file before creating the release, then the compose file itself would have to go untested to some degree, since container images are generated after the release is created.

tgeoghegan added a commit that referenced this issue Jun 14, 2024
To enable the demo workflow, users will need to pair two aggregators
with the control plane. Further, one of them must be first-party, which
can only be done using a CLI built with feature `admin`. But the one we
distribute via GitHub releases isn't, and I argue it shouldn't be, which
means that demo users can't appropriately pair aggregators.

We could provide a `curl` or `wget` command that'd do it, but as it
turns out, we already bundle the `divviup` CLI in the `divviup_api`
container, so let's just use that, since the API URL and token for the
aggregators are static!

I'm thinking we should have the demo user pair the other aggregator
themselves with `divviup` to simulate what they might have to do in a
realistic use case, if they bring their own helper.

This commit:

- adds feature `admin` to builds of `divviup_api_integration_test` in
  `docker-release.yaml` so that the bundled CLI will have
  `--first-party` and `--shared`
- adds a service `pair_aggregator` to `compose.yaml` that uses the
  `/divviup` entrypoint to pair `janus_1_aggregator` as a first-party,
  shared aggregator
- adds a stanza for `pair_aggregator` to `compose.dev.override.yaml`
- makes `compose.dev.override.yaml` build all images with features
  `integration_test,admin`

Part of #1096
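A sketch of the `pair_aggregator` service this describes. The `/divviup` entrypoint and the `--first-party`/`--shared` flags are from the commit message; the subcommand, remaining flags, image reference, and URLs are assumptions:

```yaml
services:
  pair_aggregator:
    image: divviup_api_integration_test  # illustrative image reference
    entrypoint: /divviup
    command:
      - aggregator
      - create
      - --first-party
      - --shared
      - --name=janus_1_aggregator
      # The aggregator API URL and token are static inside the compose
      # network, which is what makes baking them in here workable.
      - --api-url=http://janus_1_aggregator:8080/aggregator-api
      - --bearer-token=demo-token
    depends_on:
      divviup_api:
        condition: service_healthy
```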
tgeoghegan added a commit that referenced this issue Jun 14, 2024
We want the `divviup` CLI packaged into the
`divviup_api_integration_test` container to be built with feature
`admin` for reasons discussed in #1099. In this commit:

- add feature `admin` to builds of `divviup_api_integration_test` in
  `docker-release.yaml` so that the bundled CLI will have
  `--first-party` and `--shared`
- adds a service `pair_aggregator` to `compose.dev.override.yaml` that
  uses the `/divviup` entrypoint to pair `janus_1_aggregator` as a
  first-party, shared aggregator
- factors the `build` stanza out of services in
  `compose.dev.override.yaml` to apply features to them uniformly
- tunes the health checks on `divviup-api` and the aggregators so they
  will succeed sooner and speed up startup

Part of #1096
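"Tuning" here mostly means probing sooner and more often so that `docker compose up --wait` returns quickly; a sketch with assumed numbers and probe command:

```yaml
healthcheck:
  test: ["CMD", "curl", "--fail", "http://localhost:8080/health"]  # assumed probe
  interval: 1s       # check frequently so readiness is noticed quickly
  timeout: 1s
  retries: 60
  start_period: 5s   # early failures don't count against the service
```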
tgeoghegan added a commit that referenced this issue Jun 14, 2024
In #1100, we started building `divviup_api_integration_test` with
feature `admin`. This breaks `docker-release.yml`, because the Rust
features are used to construct GH Actions cache keys, and commas are
illegal there (at the very least, `docker buildx build --cache-from`
doesn't like it). Construct the cache key by joining with `-` to work
around this.

Additionally, this _should_ have broken `docker.yml`, but I forgot to
add the `admin` feature there, which this PR corrects.

Part of #1096
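A sketch of the fix, assuming the feature list arrives as a comma-separated string (job, step, and variable names are illustrative):

```yaml
jobs:
  docker-release:
    runs-on: ubuntu-latest
    env:
      FEATURES: "integration-testing,admin"
    steps:
      - name: Derive a comma-free cache key
        # buildx cache references reject commas, so join with '-' instead.
        run: echo "CACHE_KEY=${FEATURES//,/-}" >> "$GITHUB_ENV"
      - name: Build
        run: |
          docker buildx build \
            --build-arg "FEATURES=${FEATURES}" \
            --cache-from "type=gha,scope=${CACHE_KEY}" \
            --cache-to "type=gha,scope=${CACHE_KEY}" \
            .
```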
tgeoghegan added a commit that referenced this issue Jun 14, 2024

Building on previous changes, this commit moves the `pair_aggregator`
service into the main `compose.yaml`, leaving `build` and `develop`
stanzas for it in `compose.dev.override.yaml`. Additionally, the
`divviup_api_integration_test` image version is bumped to one built with
feature `admin`, which should get the tests working.

Part of #1096
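With `pair_aggregator` promoted into `compose.yaml`, the override file only needs to layer dev concerns on top; a sketch (the build-arg name and watch path are assumptions):

```yaml
# compose.dev.override.yaml (illustrative excerpt)
services:
  pair_aggregator:
    build:
      context: .
      args:
        FEATURES: integration-testing,admin  # hypothetical build arg
    develop:
      watch:
        - path: ./cli
          action: rebuild
```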
tgeoghegan added a commit that referenced this issue Jun 14, 2024
To enable the demo workflow, users will need to pair two aggregators
with the control plane. Further, one of them must be first-party, which
can only be done using a CLI built with feature `admin`. But the one we
distribute via GitHub releases isn't, and I argue it shouldn't be, which
means that demo users can't appropriately pair aggregators.

We could provide a `curl` or `wget` command that'd do it, but as it
turns out, we already bundle the `divviup` CLI in the `divviup_api`
container, so let's just use that, since the API URL and token for the
aggregators are static!

I'm thinking we should have the demo user pair the other aggregator
themselves with `divviup` to simulate what they might have to do in a
realistic use case, if they bring their own helper.

Building on previous changes, this commit moves the `pair_aggregator`
service into the main `compose.yaml`, leaving `build` and `develop`
stanzas for it in `compose.dev.override.yaml`. Additionally, the
`divviup_api_integration_test` image version is bumped to one built with
feature `admin`, which should get the tests working.

Part of #1096
tgeoghegan added a commit that referenced this issue Jun 17, 2024
- Autopair the second aggregator in `compose.yaml`
- Rewrite demo script to guide users through bringing up the `docker compose`
  environment and then doing aggregations against it

Part of #1096
@tgeoghegan (Contributor, Author)

There's a race between the various DB migrate jobs (well, services; Docker Compose doesn't have jobs) and the database: their `depends_on` condition is `service_started`, but the DB might not be up yet. I suspect we could rig up a simple healthcheck for the DB services that only passes once the DB is ready, and then change the migrator services' `depends_on` condition to `service_healthy`.
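A minimal sketch of that arrangement, using the stock `postgres` image's `pg_isready` (service names and versions are illustrative):

```yaml
services:
  postgres:
    image: postgres:15
    environment:
      POSTGRES_PASSWORD: postgres
    healthcheck:
      # pg_isready exits 0 once the server is accepting connections.
      test: ["CMD-SHELL", "pg_isready --username=postgres"]
      interval: 2s
      retries: 15

  migrator:
    image: divviup_api_migrator  # illustrative
    depends_on:
      postgres:
        condition: service_healthy  # instead of service_started
```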

tgeoghegan added a commit that referenced this issue Jun 17, 2024
We want demo users to be able to get started with nothing but a
`divviup` binary, a working Docker Compose install and `compose.yaml`.
`divviup_api_vite` relies on having a local checkout of the static
assets to serve. We now use the `divviup_api_integration_test` image to
serve static assets from service `static_assets`. The assets in question
are already present in the image being run in service `divviup_api`, but
I did it this way for the following reasons:

- `divviup-api` routes requests to the static asset handler based on
  hostname, making it difficult to serve alongside the API. I tried
  creating some aliases ([1]) in Docker Compose, but those names are
  only visible inside the compose network, meaning you have to set the
  `Host` header from outside the netns, which is a hassle I don't want
  to inflict on demo users.
- Having a distinct service for assets is convenient because we can make
  it depend on `pair_aggregator`. If that ran last, it could cause
  `docker compose up --wait` to fail (see comment in `compose.yaml`).

[1]: https://docs.docker.com/compose/compose-file/05-services/#aliases

Part of #1096
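A sketch of the resulting `static_assets` service (the serving command and port are placeholders; only the image reuse and the `depends_on` relationship come from the commit message):

```yaml
services:
  static_assets:
    image: divviup_api_integration_test  # already contains the webapp assets
    command: ["serve-static", "--listen", "0.0.0.0:8081"]  # placeholder command
    ports:
      - "8081:8081"
    depends_on:
      # pair_aggregator is a one-shot service; gating on its completion
      # keeps `docker compose up --wait` from failing (see compose.yaml).
      pair_aggregator:
        condition: service_completed_successfully
```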
tgeoghegan added a commit that referenced this issue Jun 17, 2024
Addresses some rough edges in compose.yaml:

- Adds a healthcheck on the postgres service to make startup of
  dependent schema migrator services more reliable.
- `pair_aggregator` would re-run every time `docker compose up` is run,
  and fail after the first time. It now drops a file `/tmp/done` to
  indicate it has already run to completion.

Part of #1096
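The sentinel mechanism, sketched below with `pair_command` standing in for the real pairing invocation; the grouping parentheses it needs are the subject of a follow-up commit further down:

```sh
# One-shot guard: if a previous run completed, skip pairing entirely;
# otherwise pair and drop the sentinel on success.
test -f /tmp/done || (pair_command && touch /tmp/done)
```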
tgeoghegan added a commit that referenced this issue Jun 18, 2024
We need parentheses around the right-hand side of the `||` so that
everything gets skipped if `/tmp/done` exists. Also tests in CI that
bringing the deployment up, down, and up again works.

Part of #1096
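The precedence issue, concretely: `&&` and `||` bind equally tightly in shell and associate left, so without parentheses the trailing command escapes the guard. A sketch, again with `pair_command` as a stand-in:

```sh
# Buggy: parses as (test -f /tmp/done || pair_command) && touch /tmp/done,
# so `touch` still runs when the sentinel already exists.
test -f /tmp/done || pair_command && touch /tmp/done

# Fixed: the parentheses group the right-hand side, so nothing runs
# once /tmp/done is present.
test -f /tmp/done || (pair_command && touch /tmp/done)
```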
@philips (Contributor) commented Jun 25, 2024

You may know about this, but I am getting `dependency failed to start: container divviup-api-docker2-divviup_api-1 is unhealthy` on the `compose.yaml` that is checked in. Full log: https://gist.github.com/philips/c4d57b280a8e6a9fc80dd5372576120b

Also, I get an empty account list when I try to go through the README instructions.

```
$ ./divviup-aarch64-apple-darwin account list
[]
```

@divergentdave (Contributor)

I installed podman version 4.9.3 on Linux, and it's using version 2.27.1 of the docker-compose plugin as an external compose provider. With this setup, `podman compose up` just works for me, and all services report as healthy. The `divviup-api` health check just runs `SELECT 1` on the database, so whatever's going on, it's probably an issue with the health check command itself, or more weird database permissions problems. It might be worth trying a different version of podman, as they've made a long series of Docker-compatibility fixes over recent years.
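When a health check is the suspect, Docker's recorded probe output is the quickest way to see what the command actually returned; a sketch using the container name from the report above:

```sh
# Overall status per service.
docker compose ps

# Full probe history for the failing container, including each
# attempt's exit code and captured output.
docker inspect --format '{{json .State.Health}}' \
  divviup-api-docker2-divviup_api-1
```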

@philips (Contributor) commented Jun 25, 2024

@divergentdave Yeah, I installed the latest, 5.1.1, and I'm still encountering the same thing. I installed Docker Desktop and it is fine; I will just proceed from there.

@tgeoghegan (Contributor, Author)

We concluded that managing collector credentials in a config file isn't a good idea right now:

- we're reluctant to commit to a config file format
- the use case we have in mind right now is enabling prospective adopters to run demos against ephemeral, local instances of Divvi Up. We're not really doing them a favour by accumulating credentials in `~/.config` that match up to a database that only ever existed in a long-destroyed Docker container.
