Consider inverting build/serving control #25
Number 1 was something I was wondering about too... while the current design is OK for local dev, or even the first stage of a pipeline, I wouldn't want the build step to be defined/re-run as my app moved through testing, staging, and prod. I would end up having to define a new service.yaml without the build step for those 2nd+ stages, thus requiring me to keep the two in sync, which is error-prone and cumbersome.

Number 2 is interesting, and I would include in there the notion of a "build" doing more than just creating an image. As you mentioned, generating swagger docs (or docs in general), release zip files, or other artifacts could be needed. The question I wonder about is whether people would more naturally want to define these things in a "workflow/build" system or in something simpler like a Makefile. If the steps are as simple as `docker build ... && docker push ...` then I think it's easier to claim we can do it in Knative "build", but when it's more complex and people need multiple/non-trivial steps, I think they're going to feel constrained without something more like a Makefile or bash scripts. I also believe people will want to version these build steps along with their source code, meaning they'll want them placed in the same git repo as the source. Keeping the build steps in Knative means keeping two systems in sync.

I do agree that build and serving might be better if they were more loosely coupled, and I wonder if eventing should be the bridge between them rather than any hard-coded link. For example, what if the workflow was something more like: Whether step 2 is a github function, travis, or kn build is an impl choice. How the Kn Service is linked via this event could be due to someone setting up the dockerhub event source, which calls a "rebuild function" that pokes the Kn Service to do a redeploy. Or, we could make it easier with something like:
or to support a rolling upgrade:
where "imageWatch" sets up all of the eventing infrastructure for them.
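For illustration, the "imageWatch" idea above might look something like the following sketch. Every field name here is hypothetical: nothing like `imageWatch` exists in the Serving API, and the surrounding shape only loosely follows the v1alpha1 Service of the time.

```yaml
# Hypothetical sketch only -- the "imageWatch" stanza does not exist.
# The idea: declare the registry watch on the Service itself, and let
# the controller stand up the eventing plumbing (registry event source
# -> redeploy) instead of the user wiring it by hand.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: my-app
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: docker.io/example/my-app:latest
      # Hypothetical: redeploy whenever a new digest appears for this
      # tag, rather than referencing an in-cluster Build resource.
      imageWatch:
        image: docker.io/example/my-app:latest
        strategy: rollingUpgrade   # also hypothetical, per "rolling upgrade" above
```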
@duglin Linking the two with eventing, especially eventing mediated through the image repo, seems like it introduces a lot of indirection for little benefit if we want to use this as a PaaS, since we're left with no way other than an event for user intent to travel from build/pipeline to serving.
Point 1 above is really interesting and a challenge. Point 2 seems easier to work with, and I'm not sure it's easier if we invert the flow: either way, there's some API point of contact that needs to be made; it's just a question of whose API expands to include some part of the other group's API. Point 3, I'm not sure. Does your answer change if we can't push code and configuration at the same time? I have another idea about how to do the orchestration between source and serving that I'm working out on paper. Stay tuned.
What I like about using eventing for this is that we're then no longer dependent on build at all, meaning people can continue to use Kn build, Jenkins, Travis, or anything else. But in the end, IMO, what serving should be doing is looking for a new image to appear, which may or may not be related to some "source"/github repo. That's an impl detail/choice of how the user chooses to define that part of the pipeline; we're just concerned with how to serve/host the output of it. Having said that, we could choose to optionally keep a reference to the source/build in the Service, and I wouldn't be against it. But even then, I'm not sure watching for an event would be a bad idea, since the build process could result in no new image being uploaded despite there being a modification to the build section of the Service.
OK, the thing having Serving orchestrate buys us is a single place to put configuration and code information, down to what source code we want to be serving. We can still kick off arbitrary CI flows from Serving, but it gives us an API where you describe what you want running and how, all in one place, which is valuable. I keep trying to figure out, for example, what the flow is when you want to change an environment variable from a build-triggers-serving standpoint, and it doesn't seem quite right. There are ways to have Serving orchestrate that are more in line with the Pipelines project, for example by borrowing some of their API.
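For concreteness, here is a sketch of the env-var-only change this comment worries about, with the field shape loosely following the v1alpha1 Service (details may differ). With Serving as the single API surface, this one patch is the entire user action; with build-triggers-serving, it's unclear what, if anything, should re-run.

```yaml
# Sketch only: an env-only update against a Serving-owned Service.
# The open question: should this patch re-run the build, or just
# stamp out a new Revision from the existing image?
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/example/my-app:latest   # unchanged
            env:
              - name: LOG_LEVEL
                value: debug                      # the only change
```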
Looking at this in purely declarative terms, just because the user has only patched an environment variable doesn't mean that they don't want a Build. This is a declaration of intent, and the user's intention is that the given source is deployed with the new environment variable. Eliding the build is really an optimization that assumes (with debatable correctness) that we're further optimizing a hermetic/reproducible build that would be fully cached if executed. Honestly, I feel like:
... are more correct ways of expressing these two intents, respectively. It occurs to me writing this that ...
Directly altering the service on env-only changes loses you any benefit you were getting from your CD system. Dispatching a Pipeline that culminates in a Service mutation on source changes ends up working, but only for source changes that don't need to be accompanied by simultaneous env changes.
A more detailed proposal is coming; so far this is a thought experiment.
The current Knative Serving stack sometimes performs a build as a side effect of deploying a new Revision. When this happens, the Revision is given a reference to the build (which must be a resource in the same cluster), which blocks Revision activation until the build reaches the `Succeeded=True` condition. This has a number of unfortunate side effects:

1. The initial "getting started" usage suggests starting with Service and having the build be orchestrated by Serving. When applications reach higher levels of maturity and begin using CI systems, this ordering becomes reversed: the build system generates an image and then applies it to one or more clusters (the rollout could include deployment to both a staging and a production cluster, or to multiple production clusters for redundancy). This creates a "jump" where users throw away their old knowledge rather than building on it.
2. When Serving orchestrates the build, it is more difficult to feed insights from the build steps (e.g. OpenAPI specifications, resource requirements, etc.) into the Serving deployment. This has a few possible mitigations, but reversing this control would make it easier for builds to contribute resource information to Serving.
3. I posit that most users know whether or not they want to deploy new source code, and it might be okay to have different commands for "push new code and configuration" vs "only update configuration". With the current Serving-orchestrates-build, this occasionally means we need client conventions like updating an annotation to kick off a build where it might not otherwise be known that one is needed (e.g. if the same zipfile is used but has new contents). Separating these into "update via build" and "direct update" might simplify things for both client and server.
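The "update via build" vs "direct update" split in point 3 might look roughly like the sketch below, against the v1alpha1 shapes of the time. The annotation key is invented purely for illustration.

```yaml
# "Update via build": the Service carries a build stanza. A client that
# wants to force a rebuild of the same source (e.g. same zipfile URL,
# new contents) has nothing semantic to change, so by convention it
# bumps a nonce annotation (key invented here):
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: my-app
  annotations:
    client.knative.dev/build-nonce: "20180914-1"   # hypothetical convention
spec:
  runLatest:
    configuration:
      build:
        source:
          git:
            url: https://github.com/example/my-app.git
            revision: master
      revisionTemplate:
        spec:
          container:
            image: gcr.io/example/my-app:latest
---
# "Direct update": no build stanza at all. The client (or a CI system)
# builds and pushes an image, then points the Service at the finished
# digest; no client-side conventions are needed to suppress a build.
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: my-app
spec:
  runLatest:
    configuration:
      revisionTemplate:
        spec:
          container:
            image: gcr.io/example/my-app@sha256:0123abcd...
```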
I prototyped this a bit in https://github.com/evankanderson/pyfun/blob/build-experiment/packaging/build-template.yaml#L33, but that's not a very "production" solution.
A benefit of either Serving-orchestrates-build or build-orchestrates-Serving is that it is possible to deploy new code with a single API call, which reduces the total amount of client workflow and the chances of partially-created changes compared with a "do a build, then do a deploy" client-driven workflow.
/cc @duglin