
Knative Serving release v0.6.0

Pre-release

Meta

New API Shape

We have approved a proposal for the “v1beta1” API shape for knative/serving. These changes will make the Serving resources much more familiar to experienced Kubernetes users, unlock the power of Route for users of Service, and enable GitOps scenarios with features like “bring-your-own-Revision-name”. We will be working towards this over the next few releases.

In this release we have backported the new API surface to the v1alpha1 API as the first part of the transition to v1beta1 (aka “lemonade”). The changes that will become breaking in 0.7+ are:

  • Service and Configuration will no longer support “just-in-time” Builds.
  • Service will no longer support “manual” mode.

You can see the new API surface in use throughout our samples in knative/docs, but we will continue to support the majority of the legacy surface via v1alpha1 until we turn it down.
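
As a concrete sketch of the new shape backported to v1alpha1, a minimal Service now looks like the example below; the names are hypothetical, and the Revision name illustrates “bring-your-own-Revision-name”:

apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: helloworld
spec:
  template:
    metadata:
      name: helloworld-v1          # bring-your-own-Revision-name
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
  traffic:
    - latestRevision: true
      percent: 100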

Overhauled Scale-to-Zero

We have radically changed the mechanism by which we scale to zero. The new architecture creates a better separation of concerns throughout the Serving resource model with fewer moving parts, and enables us to address a number of long-standing issues (some in this release, some to come). See below for more details.

Auto-TLS (alpha, opt-in)

We have added support for auto-TLS integration! The default implementation builds on cert-manager to provision certificates (e.g. via Let’s Encrypt), but similar to how we have made Istio pluggable, you can swap out cert-manager for other certificate provisioning systems. Currently certificates are provisioned per-Route, but stay tuned for wildcard support in a future release. This feature requires Istio 1.1, and must be explicitly enabled.
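
As a sketch of the default path, you would install cert-manager and give it an issuer to provision certificates with. The names and ACME details below are illustrative assumptions (using the certmanager.k8s.io/v1alpha1 API current at the time), not a prescribed configuration:

apiVersion: certmanager.k8s.io/v1alpha1
kind: ClusterIssuer
metadata:
  name: letsencrypt-issuer
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com              # hypothetical contact address
    privateKeySecretRef:
      name: letsencrypt-account-key     # Secret holding the ACME account key
    dns01:
      providers:
        - name: cloud-dns-provider      # hypothetical DNS-01 provider entry
          clouddns:
            project: my-gcp-project     # hypothetical GCP project
            serviceAccountSecretRef:
              name: cloud-dns-key
              key: key.json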

Moar Controller Decoupling

We have started to split the “pluggable” controllers in Knative into their own controller processes so that folks looking to replace Knative sub-systems can more readily remove the bundled default implementation. For example, to install Knative Serving without the Istio layer run:

kubectl apply -f serving.yaml \
  -l networking.knative.dev/ingress-provider!=istio

Note that you may see some errors due to kubectl not understanding the YAML for Istio objects (even though they are filtered out by the label selector). It is safe to ignore errors of the form: no matches for kind "Gateway" in version "networking.istio.io/v1alpha3".

You can also use this to omit the optional Auto-TLS controller based on cert-manager with:

kubectl apply -f serving.yaml \
  -l networking.knative.dev/certificate-provider!=cert-manager

Autoscaling

Moved the Knative PodAutoscaler (aka “KPA”) from scaling via the /scale sub-resource to a PodScalable “duck type”. This enables us to leverage informer caching, and the expanded contract will enable the ServerlessService (aka “SKS”) to leverage the PodSpec for neat optimizations in future releases. (Thanks @mattmoor)

We now ensure that our “activator” component has been successfully wired in before scaling a Revision down to zero (aka “positive hand-off”, #2949). This work was enabled by the Revision-managed activation work below. (Thanks @vagababov)

New annotations autoscaling.knative.dev/window, autoscaling.knative.dev/panicWindowPercentage, and autoscaling.knative.dev/panicThresholdPercentage allow customizing the sensitivity of KPA-class PodAutoscalers (#3103). (Thanks @josephburnett)
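
For example, these annotations might be set on a Revision template to widen the averaging window and make panic mode less sensitive (the values shown are arbitrary):

metadata:
  annotations:
    autoscaling.knative.dev/window: "60s"
    autoscaling.knative.dev/panicWindowPercentage: "10.0"
    autoscaling.knative.dev/panicThresholdPercentage: "200.0"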

Added tracing to activator to get more detailed and persistently measured performance data (#2726). This fixes #1276 and will enable us to troubleshoot performance issues, such as cold start. (Thanks @greghaynes).

Fixed a scale-to-zero issue with the Istio 1.1 lean installation (#3987) by reducing the idle timeouts in default transports (#3996), which resolves connections to a Kubernetes Service not being terminated when its endpoints change. (Thanks @vagababov)

Resolved an issue that prevented disabling scale-to-zero (#3629) with a fix (#3688) that takes enable-scale-to-zero from the autoscaler configmap into account in the KPA reconciler when scaling: if the minScale annotation is not set or is set to 0 and enable-scale-to-zero is false, a minimum of 1 pod is kept. (Thanks @yanweiguo)
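
For reference, disabling scale-to-zero is a one-line change to the autoscaler configmap:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-autoscaler
  namespace: knative-serving
data:
  enable-scale-to-zero: "false"   # each active Revision keeps at least one pod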

Fixed an autoscaler bug that caused rash scaling decisions when the autoscaler restarts (#3771). This fixes issues #2705 and #2859. (Thanks @hohaichi)

Core API

We have an approved v1beta1 API shape! As above, we have started down the path to v1beta1 over the next several milestones. This milestone landed the v1beta1 API surface as a supported subset of v1alpha1. See above for more details. (Thanks to the v1beta1 task force for many hours of hard work on this).

We changed the way we perform validation to be based on a “fieldmask” of supported fields. We now create a copy of each Kubernetes object limited to the fields we support, and then compare it against the original object; this ensures we are deliberate about which resource fields we leverage as the Kubernetes API evolves. (#3424, #3779) (Thanks @dgerd). This was extended to clean up our internal API validations (#3789, #3911). (Thanks @mattmoor)

status.domain has been deprecated in favor of status.url (#3970), which uses apis.URL for our URL status fields and resolves the issue “Unable to get the service URL” (#1590). (Thanks @mattmoor)
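
For example, assuming a Service named “helloworld”, the new field can be read with:

kubectl get ksvc helloworld \
  -o jsonpath='{.status.url}'
# e.g. http://helloworld.default.example.com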

Added the ability to specify default values for the matrix of {cpu, mem} x {request, limit} via our configmap for defaults. This also removes the previous CPU limit default so that we fallback on the configured Kubernetes defaults unless this is specifically specified by the operator. (#3550, #3912) (Thanks @mattmoor)
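
A sketch of what this looks like in the defaults configmap; the values below are arbitrary examples:

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: knative-serving
data:
  revision-cpu-request: "400m"      # default CPU request per container
  revision-memory-request: "100M"   # default memory request per container
  revision-cpu-limit: "1000m"       # default CPU limit (no longer defaulted unless set here)
  revision-memory-limit: "200M"     # default memory limit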

Dropped the use of the configurationMetadataGeneration label (#4012) (thanks @dprotaso), and wrapped up the last of the changes transitioning us to CRD sub-resources (#643).

Networking

Overhauled the way we scale-to-zero! (Thanks @vagababov) This enables us to have Revisions managing their own activation semantics, implement positive hand-off when scaling to zero, and increase the autoscaling controller’s resync period to be consistent with our other controllers.

Added support for automatically configuring TLS certificates! (Thanks @ZhiminXiang) See above for more details.

We have stopped releasing Istio yamls. It was never our intention for knative/serving to redistribute Istio, and prior releases exposed our “dev”-optimized Istio yamls. Users should consult either the Istio or vendor-specific documentation for how to get a “supported” Istio distribution. (Thanks @mattmoor)

We have started to adopt a flat naming scheme for the named sub-routes within a Service or Route. The old URLs will still work for now, but the new URLs will appear in the status.traffic[*].url fields. (Thanks @andrew-su)
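
For example, given a named traffic target like the one below (names hypothetical, field names per the new API surface):

traffic:
  - tag: staging                    # named sub-route
    revisionName: helloworld-00002
    percent: 0

the old URL staging.helloworld.default.example.com becomes staging-helloworld.default.example.com in status.traffic[*].url.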

Added support for installing Istio 1.1 (#3515, #3353). (Thanks @tcnghia)

Fixed readiness probes with Istio mTLS enabled (#4017) (Thanks @mattmoor)

Monitoring

The activator now reports request logs (#3781, landed in #3927). (Thanks @mdemirhan)

Test and Release

Assorted Fixes

  • The serving.knative.dev/release label now carries the release name/number instead of devel (#3626), fixed by exporting TAG in our annotation manipulation (#3995). (Thanks @mattmoor)

  • Upgrade tests now always install Istio from HEAD (#3522), fixing errors with the upgrade/downgrade testing of Knative (#3506). (Thanks @jonjohnsonjr)

  • Additional runtime conformance test coverage (9 new tests), improvements to existing conformance tests, and v1beta1 coverage. (Thanks @andrew-su, @dgerd, @yt3liu, @mattmoor, @tzununbekov)