
knative-prow-releaser-robot released this Jun 25, 2019

Meta (requires K8s 1.14+ due to #4533)

  • In 0.6 we expanded our v1alpha1 API to include our v1beta1 fields. In this release, we are contracting the set of fields we store for v1alpha1 to that subset (and disallowing those that don’t fit). With this, we can leverage the “same schema” CRD-conversion supported by Kubernetes 1.11+ to ship v1beta1.

HPA-based scaling on concurrent requests

  • We previously supported using the HPA “class” autoscaler to enable Knative services to be scaled on CPU and Memory. In this release, we are adding support for using the HPA to scale them on the same “concurrent requests” metrics used by our default autoscaler.
  • HPA still does not yet support scaling to zero, and more work is needed to expose these metrics to arbitrary autoscaler plugins, but this is exciting progress!

Non-root containers

  • This release, all of the containers we ship run as a “nonroot” user. This includes the queue-proxy sidecar injected into the user pod. This enables the use of stricter “Pod Security Policies” with knative/serving.
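With every shipped container running as non-root, an operator could now bind Knative workloads to a restrictive policy such as the following sketch (the policy name is hypothetical):

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: knative-restricted   # hypothetical name
spec:
  privileged: false
  runAsUser:
    rule: MustRunAsNonRoot   # all Knative-shipped containers now satisfy this
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:
    - configMap
    - secret
    - projected
    - emptyDir
```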

Breaking Changes

  • Previously deprecated status fields are no longer populated.
  • Build and Manual modes (deprecated in 0.6) are now unsupported.
  • The default URLs generated for Route tags have changed; see the tagTemplate section below for how to avoid this break.

Autoscaling

Support concurrency-based scaling on the HPA (thanks @markusthoemmes).

Metric scraping and decision-making have been separated out of the Knative-internal autoscaler (KPA), and the metrics are now also available to the HPA.
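As a sketch, a Service can opt into HPA-class autoscaling on concurrency via the autoscaling annotations (the service name and image below are placeholders):

```yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: hpa-concurrency-example   # hypothetical
spec:
  template:
    metadata:
      annotations:
        # Use the HPA autoscaler class instead of the default KPA
        autoscaling.knative.dev/class: hpa.autoscaling.knative.dev
        # Scale on concurrent requests instead of CPU
        autoscaling.knative.dev/metric: concurrency
        # Target number of in-flight requests per pod
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
```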

Dynamically change autoscaling metrics sample size based on pod population (thanks @yanweiguo).

Depending on how many pods the specific revision has, the autoscaler now scrapes a computed number of pods to gain more confidence in the reported metrics while maintaining scalability.


  • Added readiness probes to the autoscaler #4456 (thanks @vagababov)
  • Adjust activator’s throttling behavior based on activator scale (thanks @shashwathi and @andrew-su).
  • Revisions wait until they have reached “minScale” before they are reported “Ready” (thanks @joshrider).
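The "minScale" gate mentioned above is expressed with the standard autoscaling annotation on the revision template; a minimal sketch:

```yaml
spec:
  template:
    metadata:
      annotations:
        # The Revision is not reported "Ready" until 3 pods are running
        autoscaling.knative.dev/minScale: "3"
```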

Core API

Expose v1beta1 API #4199 (thanks @mattmoor)

This release exposes our resources under the serving.knative.dev/v1beta1 API.

Non-root containers #3237 (thanks @bradhoekstra and @dprotaso)

This release, all of the containers we ship run as a “nonroot” user. This includes the queue-proxy sidecar injected into the user pod. This enables the use of stricter “Pod Security Policies” with knative/serving.

Allow users to specify their container name #4289 (thanks @mattmoor)

This will default to user-container, which is what we use today; that default may be changed via config-defaults to a Go template with access to the parent resource’s (e.g. Service, Configuration) ObjectMeta fields.
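A minimal sketch of an explicit container name (the service name and image are placeholders):

```yaml
apiVersion: serving.knative.dev/v1beta1
kind: Service
metadata:
  name: named-container-example   # hypothetical
spec:
  template:
    spec:
      containers:
        - name: app   # previously always defaulted to "user-container"
          image: gcr.io/knative-samples/helloworld-go
```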

Projected volume support #4079 (thanks @mattmoor)

Based on community feedback, we have added support for mounting ConfigMaps and Secrets via the projected volume type.
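A sketch of a revision template combining a ConfigMap and a Secret into one mount (the ConfigMap and Secret names are hypothetical):

```yaml
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          volumeMounts:
            - name: merged-config
              mountPath: /etc/app-config
              readOnly: true
      volumes:
        - name: merged-config
          projected:
            sources:
              - configMap:
                  name: app-config        # hypothetical ConfigMap
              - secret:
                  name: app-credentials   # hypothetical Secret
```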

Drop legacy status fields #4197 (thanks @mattmoor)

A variety of legacy fields from our v1alpha1 have been dropped in preparation to serve these same objects over v1beta1.

Build is unsupported #4099 (thanks @mattmoor)

As mentioned in the 0.6 release notes, support for just-in-time builds has been removed, and requests containing a build will now be rejected.

Manual is unsupported #4188 (thanks @mattmoor)

As mentioned in the 0.6 release notes, support for manual mode has been removed, and requests containing it will now be rejected.

V1beta1 clients and conformance testing #4369 (thanks @mattmoor)

We have generated client libraries for v1beta1 and have a v1beta1 version of the API conformance test suite under ./test/conformance/api/v1beta1.

Defaulting based conversion #4080 (thanks @mattmoor)

Objects submitted with the old v1alpha1 schema will be upgraded via our “defaulting” logic in a mutating admission webhook.

New annotations for queue-proxy resource limits #4151 (thanks @raushan2016)

The new annotation allows setting the percentage of the user container’s resources to be used for the queue-proxy.
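Assuming the annotation key introduced by #4151 is queue.sidecar.serving.knative.dev/resourcePercentage (an assumption; check the PR for the exact key), usage on the revision template would look like:

```yaml
spec:
  template:
    metadata:
      annotations:
        # Assumed key per #4151: give the queue-proxy 20% of the
        # user container's resource requests/limits
        queue.sidecar.serving.knative.dev/resourcePercentage: "20"
```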

Annotation propagation #4363, #4367 (thanks @vagababov)

Annotations now propagate from the Knative Service object to Route and Configuration.


Networking

Reconcile annotations from Route to ClusterIngress #4087 (thanks @vagababov)

This allows the ClusterIngress class annotation to be specified per Route instead of cluster-wide through a config-network setting.

Introduce tagTemplate configuration #4292 (thanks @mattmoor)

This allows operators to configure the names that are given to the services created for tags in Route.
This also changes the default to transpose the tag and route name, which is a breaking change to the URLs these services received in 0.6. To avoid this break, you can set tagTemplate: {{.Name}}-{{.Tag}} in config-network.
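The 0.6-compatible setting would look like this in the config-network ConfigMap:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Restore the 0.6-era naming for tag-based services
  tagTemplate: "{{.Name}}-{{.Tag}}"
```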

Enable use of annotations in domainTemplate #4210 (thanks @raushan2016)

Users can now provide a custom subdomain via a label.
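As an illustrative sketch only (the "sub" annotation name and the template expression are assumptions, not the documented interface), a domainTemplate could read an annotation value:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
data:
  # Hypothetical template: a "sub" annotation supplies the subdomain
  domainTemplate: '{{index .Annotations "sub"}}.{{.Domain}}'
```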

Allow customizing max allowed request timeout #4172 (thanks @mdemirhan)

This introduces a new config entry max-revision-timeout-seconds in config-defaults to set the max allowed request timeout.
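A sketch of the new entry in config-defaults (the 600-second value is only an example):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: config-defaults
  namespace: knative-serving
data:
  # Revisions may specify request timeouts up to this many seconds
  max-revision-timeout-seconds: "600"
```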

Set Forwarded header on request #4376 (thanks @tanzeeb)

The Forwarded header is constructed and appended to the request headers by the queue-proxy when only legacy x-forwarded-* headers are set.
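For illustration (the address is made up; the Forwarded syntax follows RFC 7239), a request carrying only legacy headers:

```
# Incoming (legacy headers only):
X-Forwarded-For: 203.0.113.7
X-Forwarded-Proto: https

# Appended by the queue-proxy (RFC 7239 syntax):
Forwarded: for=203.0.113.7;proto=https
```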


  • Enable short names for cluster-local Service without relying on sidecars #3824 (thanks @tcnghia)
  • Better surfacing of ClusterIngress Status #4288 #4144 (thanks @tanzeeb, @nak3)
  • SKS private service uses random names to avoid length limitation #4250 (thanks @vagababov)

Monitoring

Set memory request for zipkin pods #4353 (thanks @sebgoa)

This lowers the memory necessary to schedule the zipkin pod.

Collect /var/log without fluentd sidecar #4156 (thanks @JRBANCEL)

This allows /var/log collection without the need to load fluentd sidecar, which is large and significantly increases pod startup time.

Enable queue-proxy metrics scraping by Prometheus #4111 (thanks @mdemirhan)

The new queue-proxy metrics are now exposed as part of the pod spec, so Prometheus can scrape them.


  • Fix 'Revision CPU and Memory Usage' Grafana dashboard #4106 (thanks @JRBANCEL)
  • Fix 'Scaling Debugging' Grafana dashboard. #4096 (thanks @JRBANCEL)
  • Remove embedded jaeger-operator and include as dependency instead #3938 (thanks @objectiser)
  • Fix HTTP request dashboards #4418 (thanks @mdemirhan)