
Let's get our definitions straight :-) #174

Closed
jeff-mccoy opened this issue Nov 15, 2021 · 12 comments
@jeff-mccoy
Contributor

jeff-mccoy commented Nov 15, 2021

As the project has matured, a lot of words and definitions have surfaced that need to be sorted out so the CLI and docs can clearly communicate to users what exactly is happening and how to use Zarf.

Background:

  • Zarf deploys a K3s cluster with some pre-configured technologies in a simple, low-resource, declarative way we have been calling Appliance Mode.
    • As the Multi-distro support #153 work finishes, Zarf will also support deploying to non-k3s clusters, but I believe we will always lean on K3s as the de facto cluster tech used for this mode.
  • The Utility Cluster is actually an Appliance Mode deployment done by the zarf init command today. This is where it gets interesting: the native apply work will allow for "HA Utility Clusters", so the Utility Cluster might not be an actual Appliance Mode deployment but some HA system (pick your K8s distro).
  • We've determined we're not going to advertise/broadly support a 3rd use-case for K8s here, where there is no Appliance Mode or Utility Cluster (not possible anyway until the native apply work is done). The Zarf solutions will always be one of:
    • Moving simple files for things like IaC (terraform and friends)
    • Appliance Mode
    • Utility Cluster
  • I started using the term Gitops Service as we moved the registry into every zarf appliance; that's actually going to change a little with native apply. There will always be an embedded registry for Appliance Mode (a system service outside K8s) and a second registry for the Utility Cluster / Gitops Service, which will run on K8s and also support HA as an option.
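
For the "moving simple files" case above, a package can be expressed declaratively with no cluster involved at all. This is a rough sketch only, assuming the current zarf.yaml component schema; the package name and file paths are made up for illustration:

```yaml
# Hypothetical Zarf package that only moves IaC files into the airgap --
# no K8s cluster, registry, or git server required.
kind: ZarfPackageConfig
metadata:
  name: terraform-bundle   # placeholder name
components:
  - name: iac-files
    required: true
    files:
      # source is resolved at package-create time; target at deploy time
      - source: main.tf
        target: /opt/iac/main.tf
```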

Other Notes:

  • I find the words embedded / edge far too abused to use for naming anything in Zarf. Not everyone loves Appliance Mode, looking at you @unicornbunny182. But we need a word / phrase that communicates something long-lived, lightweight, single-purpose, easy to recreate / redeploy, and fully declarative for the airgap environment.

Maybe??:

  • One phrase we could use instead of Appliance Mode could be Zarf Agent. The thinking here is that it's a special thing that Zarf deploys, and that it continuously runs (i.e. an agent). We could describe variations, like the Utility Cluster, as "agent features". So the Zarf Agent could have custom apps/features/thingys (e.g. Doom) or otherwise the Gitops Service or similar.

When this is complete, revisit the language in #186

@unicornbunny182
Contributor

I just hate "special words". If we can avoid making up our own words, it would be easier to explain. I know there isn't a "standard", but clearly others have looked at "tiny edge" before and come up with some words to describe it.

@jeff-mccoy
Contributor Author

Who besides P1 has used tiny edge?

@btlghrants
Contributor

btlghrants commented Nov 15, 2021

For context, over in #172 I've been using:

  • Appliance Mode to indicate the Zarf-installed k3s cluster directly serving apps to the end users
  • Utility Mode to indicate the Zarf-installed k3s cluster serving images/git repos to downstream clusters
  • the Zarf cluster as a blanket term to cover both Appliance Mode & Utility Mode usages of the k3s cluster

So, saying something like this makes sense:

The Zarf cluster can be run in either Appliance or Utility mode.

That can all be changed to use whatever we decide here though, for sure, cuz I love clarity in terms.

@btlghrants
Contributor

One phrase we could use instead of Appliance Mode could be Zarf Agent instead. Thinking here would be it's a special thing that Zarf deploys, and that it continuously runs (i.e an agent). We could describe variations, Utility Cluster, as "agent features". So the Zarf Agent could have custom apps/features/thingys (e.g. Doom) or otherwise Gitops Service or similar.

I like how agent describes the way we install & always-on the Zarf k3s cluster. Might make a decent substitute for "appliance", as in: Appliance Mode --> Agent Mode maybe?

@btlghrants
Contributor

btlghrants commented Nov 15, 2021

One phrase we could use instead of Appliance Mode could be Zarf Agent instead. Thinking here would be it's a special thing that Zarf deploys, and that it continuously runs (i.e an agent). We could describe variations, Utility Cluster, as "agent features". So the Zarf Agent could have custom apps/features/thingys (e.g. Doom) or otherwise Gitops Service or similar.

It feels really awkward to me to try to have a term like the Zarf Agent describe the post-native-apply "HA Utility Cluster" situation though. I get hella cognitive dissonance from trying to map something like the following onto machines & processes in my mind:

the Zarf Agent with the "HA Utility Cluster" feature enabled

@Madeline-UX
Contributor

Fascinating conversation. I agree that appliance mode could be confusing, and I think finding words that explicitly describe the action or service provided is key.

Designer brain: would a Miro board where we can all brain-dump ideas async help? Then we could find a time to converge. I would be happy to document the options already listed out here.

@runyontr
Contributor

It feels like we're missing the name for a cluster that pulls something from the central utility cluster. In my head this is how I was thinking of things:

Appliance Mode - Any cluster/application that is deployed by zarf that does not depend on an external "datasource" for git repos or registries. This would necessitate that those artifacts are co-deployed as part of the application installation. This could be done via the k3d bootstrapping done currently, or by k8s-native-applying a git repo and registry as part of the installation process onto an existing cluster. I always pictured this as a single-use cluster, i.e. no other applications would be run co-located, but that doesn't necessarily need to be true.

Hub and Spoke - I don't necessarily love these names, but the Hub would be an appliance mode deployment of a git repo and/or registry that's used to centrally host artifacts. The Spoke is deployed via Zarf on an existing or bootstrapped cluster, but all artifacts are pulled from the Hub rather than being co-located on the cluster. Zarf would then be used a second time to "fill"/"seed"/"populate"/etc. the artifacts for a particular spoke deployment. Ideally Zarf could use the same bundle to deploy via appliance mode or to fill the hub and deploy on the spoke.

Other names: upstream and downstream, mothership and satellite

There are deployment paradigms where an end user doesn't want a Hub and Spoke, e.g. if they want to pull from upstream in connected environments. We would/could/should argue against having a spoke/downstream/satellite by itself in production, but nothing would prevent it, and this vocabulary gives us clear language for what that architecture looks like. It also doesn't prevent us from having a hub be a non-Zarf cluster that just uses existing managed services, which Zarf would still be responsible for filling/seeding with the required artifacts for the spoke.

@btlghrants
Contributor

btlghrants commented Nov 17, 2021

In an attempt to make the various/expected usage scenarios more explicit & scope the conversation here, I've roughed out a diagram that:

  • lays out basic usage scenarios across the x-axis, and
  • flows actors & actions for each scenario down the y-axis

The intention is to make clear that (based upon the chosen scenario) differently-named actors can be performing similar roles... which seems relevant for how & where we use specific terms to label certain actions / actors.

Perhaps this helps us come up with "good names" for any of the nouns on the graph that I have intentionally given functional-but-not-currently-used placeholder names (anything inside "{{ blah blah }}" brackets)?

Any of those boxes look funny? Are there any "big ones" missing?

Anyone feel like taking a wag at translating those "{{ --- }}" terms into something real(-ish)?

@jeff-mccoy
Contributor Author

Except for the datacenter feeder vs bootstrap, I am hoping these can be the same if possible, or at least from a user's perspective. Ideally a user could easily decide later to scale the gitops service to HA rather than completely changing everything to go HA. Whether they are on K3s or some larger distro, that should be easy so long as we configure the manifests to support it out of the box. The biggest issues I see are better persistent storage and the Gitea DB using SQLite.
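
On the SQLite point: scaling Gitea past a single replica would mean pointing it at an external database. A rough sketch of what that override might look like, assuming the upstream Gitea helm chart's `gitea.config` passthrough to app.ini; the host and credential values here are placeholders, not anything Zarf ships:

```yaml
# Hypothetical values override for the Gitea chart -- moves the DB off SQLite
# so the gitops service can run HA. All names/hosts below are placeholders.
gitea:
  config:
    database:
      DB_TYPE: postgres
      HOST: gitea-db.zarf.svc.cluster.local:5432
      NAME: gitea
      USER: gitea
      PASSWD: example-password   # use a real secret in practice
```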

@JasonvanBrackel
Contributor

@jeff-mccoy @Racer159 @YrrepNoj @Madeline-UX I'd like to close this if it no longer needs discussion, or get this closed soon. Let's get this done next week.

@jeff-mccoy
Contributor Author

I think it's mostly already covered by our extensive docs work, no issue closing.

@JasonvanBrackel
Contributor

Closing per @jeff-mccoy

6 participants