
[kiali] use kiali operator to install and manage kiali #20131

Closed
wants to merge 1 commit

Conversation

jmazzitelli
Member

@jmazzitelli jmazzitelli commented Jan 13, 2020

The gist of what this PR is trying to do is:

If --set Values.kiali.operator.enabled=true, the operator is to be deployed, in which case the following should happen:

  • Operator will be installed in the .Release.Namespace (e.g. istio-system)
  • All the necessary operator resources (Deployment, ClusterRole, ServiceAccount, etc.) are created.
  • This includes the 2 CRDs necessary for Kiali operator.
  • The operator Deployment will have a WATCH_NAMESPACE equal to the namespace where it is deployed - thus allowing this one operator to listen for Kiali CRs only in that namespace. This allows you to create multiple operators each in their own istio namespace, thus allowing you to install Kiali in those different namespaces.
  • NOTE: to support multi-tenancy, there really should only be a single operator installed in the entire cluster. That one operator should be installed in a separate namespace within the cluster (which istioctl should create) and that one operator's Deployment env var WATCH_NAMESPACE should be set to "". This allows a single cluster-wide operator to install multiple Kiali instances anywhere in the cluster. This PR does NOT implement things this way, but it could be changed to do that. But since this feature is considered for demo purposes only, to keep it simple I did not implement it that way.
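The per-namespace watch described above can be sketched as a Helm-templated Deployment fragment. This is an illustration only, not this PR's actual template: the labels and image tag are assumptions (the service account name and image coordinates follow values seen elsewhere in this discussion).

```yaml
# Sketch of the operator Deployment described above (hypothetical names).
# WATCH_NAMESPACE pinned to the release namespace makes this operator
# manage Kiali CRs only in that namespace; an empty string ("") would
# instead make a single operator cluster-wide.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kiali-operator
  namespace: {{ .Release.Namespace }}
spec:
  selector:
    matchLabels:
      app: kiali-operator
  template:
    metadata:
      labels:
        app: kiali-operator
    spec:
      serviceAccountName: kiali-operator-service-account
      containers:
      - name: operator
        image: quay.io/kiali/kiali-operator:v1.14
        env:
        - name: WATCH_NAMESPACE
          value: "{{ .Release.Namespace }}"
```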

If --set Values.kiali.enabled=true, Kiali should be deployed, in which case the following should happen:

  • Create a demo secret in istio-system namespace, if configured to do so
  • Create a Kiali CR in istio-system namespace (this will trigger the operator to install Kiali in istio-system).

The purpose of having the two enabled flags (kiali.operator.enabled and kiali.enabled) is to allow you to install Kiali if you already have a Kiali Operator running (say, installed via OLM or through some other external mechanism). In that case, you just pass in --set Values.kiali.operator.enabled=false so istioctl doesn't try to install another Kiali operator. Indeed, in the future we could rip this entire operator helm chart out of istioctl and just document that the user must install the Kiali Operator beforehand (outside of istioctl) - at that point, istioctl would only need to create a Kiali CR and let the externally-installed Kiali Operator install Kiali based on the configuration found in that CR.

Notice that this implementation allows a user to wholly configure the Kiali CR by specifying the kiali.fullSpec value. This lets a user configure any setting within Kiali - see https://github.com/kiali/kiali/blob/master/operator/deploy/kiali/kiali_cr.yaml for all the available settings.
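For example, a values fragment along these lines could drive the CR. This is a sketch: the nested field names (view_only_mode, web_root) come from the upstream kiali_cr.yaml referenced above and are illustrative, not a specification of this PR.

```yaml
# Hypothetical values.yaml fragment: everything under kiali.fullSpec is
# passed through into the Kiali CR's spec, so any upstream setting can
# be supplied without istioctl modeling it explicitly.
kiali:
  enabled: true
  fullSpec:
    deployment:
      view_only_mode: true
    server:
      web_root: /kiali
```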

This combines the old PRs istio/installer#556 and istio/operator#622, which existed prior to the monorepo PR being merged. Now that we have a monorepo, those two PRs were closed in favor of this one.

@istio-testing istio-testing added the do-not-merge/work-in-progress Block merging of a PR because it isn't ready yet. label Jan 13, 2020
@googlebot googlebot added the cla: yes Set by the Google CLA bot to indicate the author of a PR has signed the Google CLA. label Jan 13, 2020
@istio-testing istio-testing added the size/XXL Denotes a PR that changes 1000+ lines, ignoring generated files. label Jan 13, 2020
@jmazzitelli
Member Author

This first commit just combines the two old PRs that existed prior to the monorepo work. Now that we have a monorepo, this PR can pick up where those two left off and the work can hopefully be carried to completion.

@jmazzitelli
Member Author

Might need this PR (#20128) before being able to test and complete this.

Member

@howardjohn left a comment

name: kiali-service-account
namespace: {{ .Release.Namespace }}
name: kiali-operator-service-account
namespace: {{ .Values.kialiOperator.namespace }}
Member

I think .Release.Namespace here refers specifically to the kiali release, I could be wrong though

Member Author

Note: We want this (the Kiali Operator) to go into its own namespace, not where Kiali itself is to be deployed. This is analogous to how Operator Lifecycle Manager (OLM) will install the operator. This allows you to have a single operator install multiple Kialis (for multi-tenant scenarios, for example).

priorityClassName: "{{ .Values.global.priorityClassName }}"
{{- end }}
{{- if .Values.kialiOperator.tolerations }}
tolerations:
Member

is this required? I think users can just use the k8s overlay to set advanced settings like this and we avoid the complexity. Same for most optional fields here like annotations, etc

@richardwxn can you confirm?

Member Author

I was following the examples of how things were previously done. If there are other ways to define tolerations and resources and the like outside of defining it directly in the deployment yaml here, we can do that instead.
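For reference, the values-driven pattern under discussion might look like this. The guard matches the {{- if .Values.kialiOperator.tolerations }} line in the diff; the toleration itself is an illustrative assumption, not from this PR.

```yaml
# Illustrative: a user supplies tolerations in values.yaml ...
#
#   kialiOperator:
#     tolerations:
#     - key: node-role.kubernetes.io/master
#       operator: Exists
#       effect: NoSchedule
#
# ... and the operator Deployment template renders them verbatim:
{{- if .Values.kialiOperator.tolerations }}
      tolerations:
{{ toYaml .Values.kialiOperator.tolerations | indent 6 }}
{{- end }}
```

The alternative raised in the review (a k8s overlay) would move this customization out of the chart entirely, at the cost of users editing rendered manifests instead of values.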

image: kiali
contextPath: /kiali # The root context path to access the Kiali UI.

enabled: false # If using demo profile, this will be set to `true`.
Member

I am a bit confused why we have a kiali/ and kiali-operator/

I think that means users would need to do something like kialiOperator.enabled=true kiali.enabled=true which doesn't make too much sense to me -- you always want them together, right?

Member Author

You do not necessarily always want both. For example, someone could be installing operators external to istioctl (people using OLM / OperatorHub.io will be installing operators with that mechanism, not directly via istioctl). If someone already installed the Kiali operator, then they need only tell istioctl to install the Kiali CR (kiali.enabled=true).

Even if someone installed the Kiali Operator via istioctl, there could be (if not now, then in the future) a multi-tenant scenario where I want to install multiple instances of Kiali. In this case, you do not want to install the Kiali Operator multiple times (you only need a single operator per cluster). This is another use case where istioctl is told kialiOperator.enabled=false but kiali.enabled=true.
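The two independent flags described here would be exercised roughly like this (illustrative values fragment; the flag nesting follows the kiali.operator layout used in this PR's description):

```yaml
# Scenario: the Kiali Operator is already installed externally (e.g. via
# OLM), so istioctl skips the operator chart and only creates the Kiali
# CR for the existing operator to reconcile.
kiali:
  enabled: true
  operator:
    enabled: false
```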

@istio-testing istio-testing added the needs-rebase Indicates a PR needs to be rebased before being merged label Feb 13, 2020
@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Feb 15, 2020
@istio-testing istio-testing removed the needs-rebase Indicates a PR needs to be rebased before being merged label Feb 15, 2020
@jmazzitelli
Member Author

jmazzitelli commented Feb 15, 2020

Just in case we might need this, I rebased. I'm still unsure if we want to close this or test/review/merge it for the short term. This introduces the Kiali operator rather than relying on the Kiali helm charts to install Kiali. We could do a hybrid approach: do NOT install the Kiali operator, BUT still provide a small helm chart that allows istioctl to install the Kiali CR. This would require us to document that the user must have the Kiali operator already installed, say, via bash <(curl -L https://git.io/getLatestKialiOperator).

(Note: did not do a make gen yet - an error occurred that I am not sure how to fix: file /work/manifests/istio-telemetry/kiali/values.yaml failed validation with: unknown field "fullSpec" in v1alpha1.KialiConfig -- this didn't happen the last time I ran "make gen" before the rebase.)

@istio-policy-bot istio-policy-bot removed the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label Feb 15, 2020
@jmazzitelli
Member Author

The Env WG meeting has discussed this, and the current thinking is that Istio is not going to be in the business of installing operators for all the third-party components, and won't even create CRs that trigger operators to install the components (at least that is what I heard at the env WG meeting on Feb 19, 2020). At best it will edit existing CRs for components already installed.

Not sure how valid a use-case that will be for Kiali (it makes more sense for Prometheus) because Kiali isn't like Prometheus, which can be installed BEFORE Istio is installed. Kiali is paired with an existing Istio installation (or is installed at the same time), so I think istioctl should at minimum be able to create the Kiali CR.

But in either case (creating the CR or assuming it already exists at install time) it sounds like we are not going to be installing operators. So this PR should not be merged as is. It could change so it only creates the CR.

I'll leave this PR open in case we want to change it to create the CR only. Others can close this if they feel none of this is useful.

@voroniys
Contributor

Nice approach - close a lot of PRs in favor of this one, and then declare this one out of scope and close it as well.

@jmazzitelli
Member Author

Nice approach - close a lot of PRs in favor of this one, and then declare this one out of scope and close it as well.

@voroniys not my call - I'm just doing and reporting what I'm hearing at the environment WG meetings. Feel free to attend those meetings to discuss your thoughts.

@jmazzitelli
Member Author

jmazzitelli commented Feb 25, 2020

@sdake See my latest error (I can't figure out what I need to do to fix this): #20131 (comment)
I figured it out

@voroniys
Contributor

(Note: did not do a make gen yet - an error occurred that I am not sure how to fix: file /work/manifests/istio-telemetry/kiali/values.yaml failed validation with: unknown field "fullSpec" in v1alpha1.KialiConfig -- this didn't happen the last time I ran "make gen" before the rebase.)

I had a similar issue when I merged mainstream into my fork. It looks like make gen somehow uses previously generated files. I fixed it by excluding all generated files from the merge.

@jmazzitelli
Member Author

jmazzitelli commented Feb 26, 2020

I just rebased on the latest master and once again tried (and failed) to run BUILD_WITH_CONTAINER=1 make gen (it failed without the env var, too).

I've no idea what other files I need to touch to get "make gen" to work. excludedWorkloads is new, added to the kiali values in this PR (so are accessibleNamespaces and fullSpec), and those settings also cause errors sometimes - the error differs between runs of "make gen" on my box, but all the errors relate to the new settings added to the kiali values.yaml in this PR.

I figured it out

@jmazzitelli
Member Author

/retest

@jmazzitelli jmazzitelli force-pushed the kiali-operator branch 2 times, most recently from dd8d441 to 9a85f74 Compare March 19, 2020 17:45
@istio-testing istio-testing added the needs-rebase Indicates a PR needs to be rebased before being merged label Mar 19, 2020
@istio-testing istio-testing removed the needs-rebase Indicates a PR needs to be rebased before being merged label Mar 19, 2020
@sdake
Member

sdake commented Mar 20, 2020

@jmazzitelli given the state of life, I've been a little distracted for the last month. I'm good to go now. Thank you for your patience in my review of this PR.

I think a few things went wrong here:

  • The environments WG is not responsible for defining what SHOULD be in scope for installation for extensions such as Kiali, but rather HOW to install what is in scope.
  • The environments WG did not properly consult @istio/wg-policies-and-telemetry-maintainers, who should be driving any changes to the telemetry features, before @istio/wg-environments-maintainers declared this out of scope.
  • I had a misstep as a leader here in not escalating this problem to the @istio/technical-oversight-committee. Thank you, again, for your patience as I dealt with my life.

To correct where we stand today:

  1. I will engage with @istio/wg-policies-and-telemetry-maintainers to help determine the division of decision making on the topic of Telemetry (and addons or extensions).
  2. I will add this to the next @istio/working-group-leads meeting agenda. Unfortunately, this PR predates the new organizational structure, which provides another opportunity for cross-project working groups to collaborate.
  3. And, if need be, I will escalate to the @istio/technical-oversight-committee should we fail to gain sufficient cross-project collaboration.

and I will review/approve this PR once it is in an operational state. If I'm slow to review or approve this PR, please ping me on slack in #environments.

Cheers
-steve

@sdake
Member

sdake commented Mar 20, 2020

Nice approach - close a lot of PRs in favor of this one, and then declare this one out of scope and close it as well.

@voroniys that is not what happened here. Several PRs were closed as a result of the mono-repo merge, which needed to happen to unblock developers. This work will not have been declared out of scope until the @istio/technical-oversight-committee says it is out of scope. Until then, we will proceed with addons/extensions/integrations, though perhaps with a different approach. @dgn is leading an effort to propose a plan for moving from the old model to the new, whatever the new model is. @istio/wg-environments-maintainers were a bit ahead of the headlights in declaring this out of scope.

Member

@sdake left a comment

My main concern with this review is the number of options needed to properly configure the system, as well as the namespace the extensions are installed in. I think it would make a lot more sense, for dual control plane enablement, for this work to be placed in a unique namespace of its own or with other extensions.

kind: Kiali
metadata:
name: kiali
namespace: {{ .Release.Namespace }}
Member

I'd like to decouple the release namespace (istio-system) from where this CR is installed. Is that viable? For example, a default of istio-extensions makes sense to me. Another option is istio-integrations.

Member Author

Yes. In fact, that is how it works when you install Kiali/the Kiali Operator via its own installer (either via OLM or the installer from https://git.io/getLatestKialiOperator - see https://kiali.io/documentation/getting-started/). But for simplicity, it was implemented this way here. See the description of this PR for an alternative, specifically where it says:

NOTE: to support multi-tenancy, there really should only be a
single operator installed in the entire cluster. That one operator
should be installed in a separate namespace within the cluster
(which istioctl should create) and that one operator's Deployment
env var WATCH_NAMESPACE should be set to "". This allows a
single cluster-wide operator to install multiple Kiali instances
anywhere in the cluster. This PR does NOT implement things this
way, but it could be changed to do that. But since this feature is
considered for demo purposes only, to keep it simple I
did not implement it that way.

# see https://github.com/kiali/kiali/blob/master/operator/deploy/kiali/kiali_cr.yaml"
istio_component_namespaces:
grafana: "{{ .Values.global.telemetryNamespace }}"
pilot: "{{ .Values.global.configNamespace }}"
Member

Can this be renamed istiod, or can pilot and istiod be set leading to the same result?

Member Author

The "pilot" setting is necessary specifically so Kiali can have a reasonable default URL to use when determining what version of Istio is installed in the control plane (i.e. 'http://istio-pilot.' + <this istio_component_namespaces.pilot setting> + ':8080/version'). This setting is used in the Kiali operator here.

For backwards compat with Istio 1.4 and prior, this setting will be needed. There is still pilot in Istio 1.5 as I understand it (it's just rolled into istiod) - so this could still be a valid name. Not sure if we need to explicitly define "istiod" here, as Kiali doesn't need or use it.
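A sketch of the rendered CR fragment in question, assuming istio-system and istio-telemetry as the configured namespaces (the namespace values are assumptions; the grafana/pilot keys come from the template in the diff):

```yaml
# Illustrative rendered Kiali CR fragment: the pilot entry gives Kiali
# the namespace for its Istio version check, i.e. Kiali queries
#   http://istio-pilot.istio-system:8080/version
spec:
  istio_component_namespaces:
    pilot: istio-system
    grafana: istio-telemetry
```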

{{- else }}
###################################################################
# For details about the supported settings,
# see https://github.com/kiali/kiali/blob/master/operator/deploy/kiali/kiali_cr.yaml"
Member

it would be pretty cool if you could document this (possibly as we do in Istio, via a protobuf - and then generate the CR from that). We have tools to generate protobuf docs on istio.io.

Member Author

We do not have any protobuf integrations anywhere in Kiali, but we are not opposed to it if it makes sense and makes life easier. We would need some community contributions for this; I'm not sure what implementing it would entail -- note that this Kiali CR is NOT consumed anywhere in Kiali via Go code - it is consumed by Ansible playbooks within the Operator.

Member

Good to hear the Ansible Operator work is being used here. From a docs perspective, we have tooling to process protobufs which is the only reason I mentioned it at all.

Processing yaml is harder, because no yaml parser I've been able to find round trips comments (or if they claim this feature, it doesn't work). It isn't Istio's mission to write a yaml round-trip parser :) It really isn't our mission to develop protobuf documentation tooling, but that has been done. I have heard there is an influx of interns coming - perhaps we can let them tackle this problem? @ericvn mentioned the interns to me a few days ago.

web_root: {{ .Values.kiali.contextPath }}
{{- end }}
{{- end }}
{{- end }}
Member

the major problem I see with this CR is the consumption of a bunch of key/value pairs. We just had a long discussion in the last env WG meeting that we essentially want to "start over" with a new API with minimal options and choice.

This does not mean that Kiali must also have minimal choice, as Kiali is an upstream of Istio. Instead it means that as a downstream, we may choose to limit config option choices in our direct integration of this code. Perhaps extensions and telemetry should have been called IET 😁

One of the comments I left above in the review is salient here - the protobuf idea (or, as discussed in the last env WG meeting, perhaps a JSON blob to support all the models). TBD. This should not block this review.

Member Author

Obviously, istioctl can install Kiali CR configured however it wants. Feel free to reduce the amount of config it adds to its own Kiali CR here, thus relying on the Kiali defaults for the rest. As for Kiali itself, it will retain these settings in case people do need to do "backdoor" customization to Kiali's config.

Member

We should not dictate to an upstream what config options they support or don't support. I wasn't suggesting such a hardline approach and ruled that out in my second paragraph above.

What I did say, and perhaps this was unclear, was that this PR should be a little more focused on the config options that are absolutely mandatory for Kiali to function. Not being a Kiali expert, this is hard for me to judge as a reviewer.

- patch
- update
- watch
- apiGroups: ["route.openshift.io"]
Member

Can you confirm that adding these api groups works for upstream K8s 1.17?

Member Author

I can confirm these work fine on upstream K8s. We have people using this same yaml in upstream Kiali on upstream k8s and it works. Kubernetes still adds these API group permissions, but if the API groups do not exist it is just a no-op. However, they are required for things to work properly when people install on OpenShift.

Without something like this, OpenShift users are required (as they are today) to perform additional steps after istioctl installs everything in order to get Kiali to work properly - see https://istio.io/docs/tasks/observability/kiali/#running-on-openshift
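A sketch of the RBAC rule under discussion; only the apiGroups line is from the diff, and the resource and verb lists are illustrative assumptions:

```yaml
# On upstream Kubernetes this rule is a harmless no-op because the
# route.openshift.io API group does not exist; on OpenShift it lets the
# operator manage Routes for exposing the Kiali UI.
- apiGroups: ["route.openshift.io"]
  resources: ["routes"]
  verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
```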

Member

SGTM.

- mountPath: /tmp/ansible-operator/runner
name: runner
env:
# For now, we assume this Kiali operator installed by Istio will only watch the namespace the operator is in.
Member

seems like a bad assumption. Is it possible, in the next 4 weeks, to remove this assumption?

Member Author

Yes.

In fact, how this PR is currently implemented isn't how Kubernetes Operators are typically supposed to work, nor how the Kiali Operator works in other contexts (e.g. Maistra). But the reason this assumption is here was to make the PR implementation easier. The alternative would be for this PR to touch more global files so that istioctl creates a separate namespace into which the Kiali Operator is installed, and then the Operator can watch all namespaces for Kiali CRs - this is how multi-tenancy support is handled by Kiali, and it is the typical way Kubernetes Operators work.

Again, we could opt to rip out this Kiali Operator installation helm chart entirely in istioctl and require the user to install operators (such as Kiali Operator) as a pre-install step for istioctl.

Member

I realize we could opt to rip out the install and provide a pre-install step. This PR seems to indicate you wish to maintain the Kiali install within istioctl. I am good with this approach - although, again, you will need to limit the config options to those most mandatory. Another approach would be to limit to those absolutely mandatory and provide a CR override, as you have suggested. I think this code could be modified to do that.

hub: quay.io/kiali
tag: v1.14
contextPath: /kiali
operator:
Member

I would like others' opinions on whether telemetry should be in the default profile. I think it would be great if Istio's operator code could handle telemetry installation (using CRs) independently of istiod installation.

If the answer is no to the above question, then a different profile - perhaps "full" or something of the like - would make more sense.

@@ -137,4 +137,40 @@ spec:
targetPort: 15443
name: tls
kiali:
enabled: true
Member

@richardwxn can you answer this question please.

@@ -2,7 +2,7 @@
title: v1alpha1
Member

@ostromart is this the proper place to do API review of the operator?

@@ -2125,21 +2125,17 @@ message KialiServiceConfig {
message KialiDashboardConfig {
string secretName = 1;

string usernameKey = 2;
google.protobuf.BoolValue viewOnlyMode = 2;
Member

Well - I guess the ship has sailed on the protobuf for KialiDashboardConfig. Make sure to keep compatibility with the old protobuf.

Member Author

This "usernameKey" (and a couple of others in here - "passwordKey" is another) hasn't been used by Kiali for a long time and hasn't been needed for at least several Istio releases. This is cruft that should have been removed long ago. I didn't think we needed it in this .proto since it isn't used in the Kiali values.yaml (in fact, it has been removed from Kiali's values.yaml since at least Istio 1.1).

Member

Sounds fair. I think we need to consider, as a 1.7 work item, where the Istio installation and operations API heads next. For 1.6, I think we should be considerate of the amount of change we are pushing on our operators.

@sdake
Member

sdake commented Apr 1, 2020

@jmazzitelli feel free to continue rebasing as needed, although I believe the make gen problems are coming front and center, and we should (hopefully) have a working solution shortly to the constant make gen churn. In the meantime, I'd like a little more input from other reviewers so we can get this work moving.

PING:

@istio/wg-policies-and-telemetry-maintainers
@istio/wg-environments-maintainers

@howardjohn
Member

To me, it seems like we should await the resolution of https://docs.google.com/document/d/1jJ4kkALCUPWIlz_Ldu6jpeelMNxyY_SZDMaZmeSpfkM (should be this week) before moving forward

@istio-testing istio-testing added the needs-rebase Indicates a PR needs to be rebased before being merged label Apr 7, 2020
@istio-testing
Collaborator

@jmazzitelli: PR needs rebase.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@sdake
Member

sdake commented Apr 29, 2020

@jmazzitelli have you had an opportunity to review the latest consensus plan John linked? While you may not be happy about the wasted work, we do plan to offer straight YAML with zero configuration options in 1.6. Additionally, there is a very strong desire to offer integration documentation on istio.io. We have used istio.io to host extensions documentation in the past and I don't see a good argument against continuing that. It does require the various interested parties to participate.

I am hopeful you will participate in the transition process, as Observability is Istio's strongest capability. Kiali is (imo) Istio's best option for observability for 2020 and beyond.

Cheers
Steve

@jmazzitelli
Member Author

@sdake We can close this PR if it is not needed. I haven't been keeping pace with the rebase/conflicts for several weeks now as I assumed this work would not be needed.

Just let me know what you need from the Kiali team moving forward, even if it means just updating istio.io docs. I don't know what the straight-yaml approach will look like; feel free to ask me questions about Kiali configuration to help build that out.

@istio-policy-bot istio-policy-bot added the lifecycle/stale Indicates a PR or issue hasn't been manipulated by an Istio team member for a while label May 30, 2020
@istio-policy-bot

🚧 This issue or pull request has been closed due to not having had activity from an Istio team member since 2020-04-29. If you feel this issue or pull request deserves attention, please reopen the issue. Please see this wiki page for more information. Thank you for your contributions.

Created by the issue and PR lifecycle manager.

@istio-policy-bot istio-policy-bot added the lifecycle/automatically-closed Indicates a PR or issue that has been closed automatically. label Jun 14, 2020
@jmazzitelli jmazzitelli deleted the kiali-operator branch January 19, 2022 22:07
Labels: area/environments, area/extensions and telemetry, cla: yes, lifecycle/automatically-closed, lifecycle/stale, needs-rebase, size/XXL