Release notes

v1.2

Admiral 1.2 is finally out after a year-long wait! Thanks to the Admiral community for the constant feedback and all the bug reports.

Features

Support for latest versions of Istio

Admiral now supports the latest Istio version, 1.12.2, and the minimum supported version is now 1.8.6. Istio introduced the concept of an east-west gateway for multi-cluster traffic in the 1.8.x release. This ingressgateway is special (it runs in sni-dnat mode) and allows cross-cluster mTLS traffic. Admiral now has a parameter to configure the ingressgateway app label (defaulting to istio-ingressgateway for backward compatibility) so that this new gateway is used when generating a ServiceEntry.
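A minimal sketch of pointing Admiral at an east-west gateway, shown as a fragment of the Admiral controller's container spec. The flag name and image tag below are assumptions for illustration only; check the flags supported by your Admiral version for the exact parameter name.

```yaml
# Fragment of the Admiral controller container spec (illustrative).
containers:
- name: admiral
  image: admiralproj/admiral:v1.2.0          # illustrative image/tag
  args:
  # Hypothetical flag: points Admiral at the east-west gateway's app label
  # instead of the default istio-ingressgateway.
  - --gateway_app=istio-eastwestgateway
```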

Added a comparison of Admiral with the ongoing development of MCS (Multi-Cluster Services) in K8s.

Admiral now has APIs!

Admiral now supports APIs to get ServiceEntries (endpoints) by cluster or identity. This is an alpha feature, so feedback is appreciated. A health check endpoint is also available for use as a readiness probe.
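The health check endpoint can back a Kubernetes readiness probe on the Admiral deployment. A minimal sketch as a fragment of the Admiral container spec; the path and port are assumptions, so substitute the actual values exposed by your Admiral build.

```yaml
# Readiness probe sketch for the Admiral container.
# The path and port below are assumptions for illustration.
readinessProbe:
  httpGet:
    path: /health/ready
    port: 8080
  initialDelaySeconds: 10
  periodSeconds: 15
```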

Argo Rollouts has a Canary strategy that supports traffic routing using Istio. Admiral-generated endpoints for Argo Rollouts now honor the Canary strategy defined using host-based Istio traffic routing. Admiral watches for changes as the Argo Rollouts controller updates the traffic routing and configures the global mesh endpoints accordingly.
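For reference, this is the general shape of a Rollout that uses a Canary strategy with host-based Istio traffic routing. The Service, VirtualService, and image names are illustrative, not taken from the Admiral examples.

```yaml
# Illustrative Argo Rollouts Canary strategy using Istio traffic routing.
# Admiral-generated global endpoints follow the stable/canary weights that
# the Rollouts controller writes into the referenced VirtualService.
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: greeting
spec:
  replicas: 2
  selector:
    matchLabels:
      app: greeting
  template:
    metadata:
      labels:
        app: greeting
    spec:
      containers:
      - name: greeting
        image: nginx:1.21                  # placeholder image
  strategy:
    canary:
      stableService: greeting-stable       # illustrative Service names
      canaryService: greeting-canary
      trafficRouting:
        istio:
          virtualService:
            name: greeting-vs              # VirtualService managed by Rollouts
      steps:
      - setWeight: 20
      - pause: {}
```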

Performance improvements

  • Removed the pod controller, which was never used (this drastically reduces Admiral's memory footprint)
  • Endpoint generation is no longer triggered for dependency record updates (this is taken care of by the regular deployment/rollout syncs, every 5 minutes by default)

v1.1

This is a minor release that addresses some usability and clean-up aspects.

Features

Version 1.1 adds support for the gRPC and http2 protocols, in addition to the default http, for Admiral-generated endpoints. Try the example to explore this further.
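A sketch of a Service exposing a gRPC port, assuming Admiral derives the protocol from Istio-style port naming (grpc/http2/http); that convention is an assumption here, so verify against the linked example for the exact mechanism your Admiral version expects. All names and ports are illustrative.

```yaml
# Illustrative Service for a gRPC workload.
apiVersion: v1
kind: Service
metadata:
  name: greeting
  namespace: sample
spec:
  selector:
    app: greeting
  ports:
  - name: grpc        # protocol assumed to be inferred from the port name
    port: 7070
    targetPort: 7070
```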

Admiral now cleans up the CNAMEs and the associated configurations when a k8s deployment is deleted.

Added documentation and guidance on how to deploy Admiral in a production setting. The documentation is available here.

Project improvements

Added automated integration tests for Admiral that simulate real use cases.

v1.0

Admiral has graduated to Generally Available! Version 1.0 includes a series of bug fixes and adds support for Argo Rollouts.

Once again, many thanks to everyone who has tried out Admiral and provided their valuable feedback throughout the development process. And a special thanks to everyone who has contributed to the project. This would not be possible without you.

Features

Version 1.0 adds support for using Argo Rollouts rather than Deployments, allowing you to leverage their advanced deployment capabilities without giving up Admiral's functionality. Try the example to get started.

Version 1.0 also properly handles secret updates, allowing cluster secrets to be ignored or changed without requiring an Admiral restart for the change to take effect.

Services have been added to the set of resources that Admiral can be configured to ignore using the admiral.io/ignore: "true" annotation.
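For example, a Service can be excluded from Admiral processing by annotating it as below; the Service name, namespace, and ports are illustrative.

```yaml
# Admiral skips this Service because of the ignore annotation.
apiVersion: v1
kind: Service
metadata:
  name: greeting
  namespace: sample
  annotations:
    admiral.io/ignore: "true"
spec:
  selector:
    app: greeting
  ports:
  - name: http
    port: 80
    targetPort: 8080
```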

Assorted quality of life improvements

These include an added linter, Dockerfile improvements, parameter overrides, and a flag-driven log level.

Bug Fixes

  • Fixed a bug preventing 100/0 load balancing with Global Traffic Policies.
  • Put Argo Rollouts behind a feature flag to prevent excessive error logging in clusters without Argo CRDs installed.
  • Fixed a bug where virtual services in destination namespaces weren't being imported to the sidecars in the client namespace.
  • Admiral now correctly updates service entries in response to a previously watched resource being ignored.
  • Mitigated a memory leak related to the recreation of cache controllers.

v0.9

We are excited to announce the release of Admiral v0.9, with lots of cool new functionality. This version is ready for production usage and addresses some of the biggest requests from our users.

We would like to thank all the contributors and everyone who played a role in testing the alpha and beta releases of Admiral.

Global traffic policies allow defining custom traffic routing behaviour for an Admiral-generated CNAME, for example routing all traffic for a service to a specific region or AZ. This feature relies on Istio's locality load balancing.

Try out this example
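A minimal sketch of a GlobalTrafficPolicy that pins traffic for an Admiral-generated CNAME to one region. The DNS name, regions, weights, and API version are illustrative assumptions and may differ across Admiral releases; treat the linked example as authoritative.

```yaml
# Illustrative GlobalTrafficPolicy; field values are not from the docs.
apiVersion: admiral.io/v1alpha1   # API version may vary by Admiral release
kind: GlobalTrafficPolicy
metadata:
  name: greeting-gtp
spec:
  policy:
  - dns: default.greeting.global  # Admiral-generated CNAME
    lbType: 1                     # illustrative: weighted/failover routing
    target:
    - region: us-west-2
      weight: 100                 # route all traffic to us-west-2
    - region: us-east-2
      weight: 0
```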

Lots of improvements to usability

  • Only Istio resources with exportTo: '*' or with the exportTo field missing are synced across clusters, in order to obey the spec.
  • Added a feature to update the Istio Sidecar resource in the client's namespace. This allows Admiral-based automation to filter which endpoint configuration is loaded by an istio-proxy, keeping the footprint minimal yet still manageable (see the sketch after this list).
  • Added an annotation (admiral.io/ignore) to exempt k8s Deployments/Namespaces from Admiral processing. This is useful when migrating k8s Deployments to other clusters.
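To illustrate the Sidecar handling mentioned above, this is the general shape of a namespace-scoped Istio Sidecar resource that limits which hosts an istio-proxy imports. Admiral manages the actual contents automatically; the namespace and host entries below are illustrative assumptions.

```yaml
# Illustrative Sidecar resource of the kind Admiral maintains per client namespace.
apiVersion: networking.istio.io/v1alpha3
kind: Sidecar
metadata:
  name: default
  namespace: client-namespace            # illustrative client namespace
spec:
  egress:
  - hosts:
    - "./*"                               # services in the client's own namespace
    - "istio-system/*"                    # control-plane / shared services
    - "admiral-sync/greeting.global"      # illustrative Admiral-synced host
```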

Simplified installing the examples and organized them by use case

Bug fixes

  • Fixed Admiral crashes in special scenarios such as:
    • resource deletions
    • missing resource permissions
    • missing k8s Service for a k8s Deployment

Summary

Complete list of issues fixed in v0.9

Report issues and/or post your questions via:

Stay tuned for the v1.0 release!