Commits on Feb 1, 2019
… breaking the pipeline
Commits on Nov 23, 2018
* Implemented a workaround for off-the-shelf charts (#32). This commit introduces a few changes to the installer behavior:
  * If there is only one Service object in a chart and it carries no shipper-lb label, the installer passes it through as is: there is nothing to disambiguate. If there is more than one Service, it raises an error.
  * If a Service selector contains the `release` label, shipper raises an error because that label breaks the traffic shifting logic. The user can tell the system to remove this label, if present, by setting the `enable-helm-release-workaround` application metadata label. Its value is never checked; it can be, for example, "true" or "enabled".
* installer: comment & error message tweak. It is no longer a `panic()`, and probably should not be, since it boils down to whether the Release has `shipper-app` as a label. The TODO and error message are pointers to a broader story about making it possible to use Release objects independently of the Application controller setup.
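The selector workaround above can be sketched as follows. This is a hedged illustration, not shipper's actual code: a plain map stands in for a Service's `spec.selector`, and `sanitizeSelector` is a hypothetical helper name.

```go
package main

import (
	"errors"
	"fmt"
)

// workaroundLabel matches the application metadata label named in the
// commit message; its value is never checked.
const workaroundLabel = "enable-helm-release-workaround"

// sanitizeSelector strips the `release` key from a Service selector, but
// only when the application has opted in via the workaround label.
// Otherwise, a selector containing `release` is an error, because that
// label breaks traffic shifting.
func sanitizeSelector(selector, appLabels map[string]string) (map[string]string, error) {
	if _, ok := selector["release"]; !ok {
		return selector, nil
	}
	if _, ok := appLabels[workaroundLabel]; !ok {
		return nil, errors.New("service selector contains the `release` label; set the " +
			workaroundLabel + " application label to have it removed")
	}
	cleaned := map[string]string{}
	for k, v := range selector {
		if k != "release" {
			cleaned[k] = v
		}
	}
	return cleaned, nil
}

func main() {
	sel := map[string]string{"app": "my-app", "release": "my-app-0"}

	fixed, err := sanitizeSelector(sel, map[string]string{workaroundLabel: "true"})
	fmt.Println(fixed, err) // `release` removed, no error

	_, err = sanitizeSelector(sel, map[string]string{})
	fmt.Println(err != nil) // true: workaround not enabled
}
```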
Commits on Nov 14, 2018
Commits on Oct 31, 2018
This commit is a follow-up on @kanatohodets' comments regarding the direct fetches of installation, capacity, and traffic targets. It replaces those calls with lister API fetches.
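The design choice here is that a lister answers reads from a local informer cache instead of making a round trip to the API server on every fetch. A simplified, runnable sketch of that idea, with illustrative stand-in types rather than shipper's or client-go's real ones:

```go
package main

import "fmt"

// InstallationTarget is a minimal stand-in for the real target type.
type InstallationTarget struct {
	Namespace, Name string
}

// installationTargetLister serves Get calls from an in-memory cache; in
// real client-go machinery this cache is kept up to date by an informer.
type installationTargetLister struct {
	cache map[string]*InstallationTarget
}

func key(ns, name string) string { return ns + "/" + name }

func (l *installationTargetLister) Get(ns, name string) (*InstallationTarget, error) {
	if it, ok := l.cache[key(ns, name)]; ok {
		return it, nil
	}
	return nil, fmt.Errorf("installationtarget %s/%s not found in cache", ns, name)
}

func main() {
	lister := &installationTargetLister{cache: map[string]*InstallationTarget{
		key("default", "my-app-0"): {Namespace: "default", Name: "my-app-0"},
	}}
	it, err := lister.Get("default", "my-app-0")
	fmt.Println(it.Name, err) // cache hit: no API-server round trip
}
```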
Commits on Oct 29, 2018
This commit fixes some failing tests. We changed the way the scheduler controller reacts to existing duplicate traffic / capacity / installation targets: it checks whether the duplicate really belongs to the release object (by inspecting the target's owner references). If it does, scheduling proceeds normally; if it does not, it is an error and we bail out right away. This behavior protects us from a situation where an orphan object, soon to be garbage-collected but unrelated to the current release, is mistaken for a legitimate target.
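The owner-reference check described above can be sketched like this. The types are simplified stand-ins for `metav1.OwnerReference` and the target objects, and `belongsToRelease` is a hypothetical helper name:

```go
package main

import "fmt"

// OwnerReference is a minimal stand-in for metav1.OwnerReference.
type OwnerReference struct {
	Kind string
	UID  string
}

// TrafficTarget stands in for any of the three target kinds.
type TrafficTarget struct {
	Name            string
	OwnerReferences []OwnerReference
}

// belongsToRelease reports whether the existing (duplicate) target is owned
// by the release being scheduled, as opposed to an orphan that is still
// awaiting garbage collection.
func belongsToRelease(t *TrafficTarget, releaseUID string) bool {
	for _, ref := range t.OwnerReferences {
		if ref.Kind == "Release" && ref.UID == releaseUID {
			return true
		}
	}
	return false
}

func main() {
	tt := &TrafficTarget{
		Name:            "my-app-0",
		OwnerReferences: []OwnerReference{{Kind: "Release", UID: "uid-123"}},
	}
	fmt.Println(belongsToRelease(tt, "uid-123")) // true: proceed normally
	fmt.Println(belongsToRelease(tt, "uid-456")) // false: bail out with an error
}
```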
Commits on Oct 26, 2018
This commit changes the behavior of schedulercontroller.scheduler so that it treats already-exists cases (installation target, capacity target, and traffic target) as errors. The reason for this change is the recent investigation into stalled tests from #11. What was discovered is: deleting a release triggers Kubernetes to garbage-collect all of the release's child objects. This happens asynchronously, and sometimes a new application installation finds these orphan objects, cannot distinguish them from normal ones, and immediately returns. With this commit, if we detect that an object already exists where we were planning to create a new one, we return an error immediately.
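An illustrative sketch of the new behavior, assuming a toy in-memory store rather than shipper's actual scheduler code: a creation attempt that finds an existing object returns an error instead of silently adopting it.

```go
package main

import "fmt"

// store is a toy stand-in for the cluster's object store.
type store struct {
	objects map[string]bool
}

// createInstallationTarget refuses to proceed when the object already
// exists: it may be an orphan still awaiting asynchronous garbage
// collection, so adopting it silently would be unsafe.
func (s *store) createInstallationTarget(name string) error {
	if s.objects[name] {
		return fmt.Errorf("installationtarget %q already exists", name)
	}
	s.objects[name] = true
	return nil
}

func main() {
	s := &store{objects: map[string]bool{}}
	fmt.Println(s.createInstallationTarget("my-app-0")) // first create succeeds
	fmt.Println(s.createInstallationTarget("my-app-0")) // duplicate is an error
}
```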
This commit extends the end-to-end test suite with two new test cases: one for a new application and one for an existing application. In both cases, we remove the current active release object and watch what happens. For a new application, we expect the system to revert everything to step 0 and wait for a command, whereas an existing application being rolled forward should pick up the previous release and move full-on to the last step of that release.
Commits on Oct 15, 2018
Commits on Oct 9, 2018
This commit cleans up some bits that were commented out in previous changes. In particular, expectedCapacity used to be calculated with a locally defined capacityInPods function; now we use the replicas.CalculateDesiredReplicaCount method.
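A hedged sketch of what such a desired-replica calculation might look like; the real logic lives in shipper's replicas.CalculateDesiredReplicaCount, whose exact signature and rounding rules may differ. Rounding up here ensures that a non-zero capacity percentage never yields zero pods.

```go
package main

import (
	"fmt"
	"math"
)

// calculateDesiredReplicaCount converts a total replica count and a target
// capacity percentage into a concrete replica count, rounding up so any
// non-zero percentage schedules at least one pod.
func calculateDesiredReplicaCount(totalReplicas uint, percent float64) uint {
	return uint(math.Ceil(float64(totalReplicas) * percent / 100.0))
}

func main() {
	fmt.Println(calculateDesiredReplicaCount(10, 50)) // 5
	fmt.Println(calculateDesiredReplicaCount(3, 50))  // 2 (1.5 rounded up)
	fmt.Println(calculateDesiredReplicaCount(10, 0))  // 0
}
```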
This commit adds two new e2e test cases in which a rollout strategy is tested against a revert scenario. We test two distinct scenarios: a brand-new application and an existing one rolling forward. In both cases we use the existing vanguard strategy: 0 | 50/50 | full-on. The new tests move the release from step 0 towards 50/50 and then revert immediately.