
[Proposal] Each Kubeflow distribution should have its GitHub team #451

Closed
Bobgy opened this issue Nov 11, 2020 · 16 comments

Comments

Bobgy (Contributor) commented Nov 11, 2020

Hi community,

I think each Kubeflow distribution should have its POCs listed on www.kubeflow.org. When people report issues specific to a particular distribution, other WGs should be able to know whom to assign the issue to.

Because we have decided that distributions should live outside of Kubeflow (#434), the POCs won't be WGs. They will be individuals or a GitHub team alias representing the organization that supports each distribution.

I think we should make the POC info clear on our website pages, so that people know who to ask about each distribution.
/cc @kubeflow/project-steering-group
/cc @kubeflow/wg-manifests-leads

What do you think about this?

Bobgy (Contributor, Author) commented Nov 11, 2020

| Distribution | POC |
| --- | --- |
| Existing Kubernetes cluster using a standard Kubeflow installation | @swiftdiaries said he doesn't have bandwidth; who can we put here? |
| Existing Kubernetes cluster using Dex for authentication | @kubeflow/arrikto |
| Amazon Web Services (AWS) using the standard setup | @kubeflow/aws |
| Amazon Web Services (AWS) with authentication | @kubeflow/aws |
| Microsoft Azure | @kubeflow/azure |
| Google Cloud Platform (GCP) with Cloud Identity-Aware Proxy (Cloud IAP) | @kubeflow/google |
| IBM Cloud (IKS) | @animeshsingh |
| OpenShift | Who should be here? |

I put up a rough list; please correct me if I'm wrong.
also
/cc @joeliedtke

Bobgy (Contributor, Author) commented Nov 11, 2020

also
/cc @jbottum @yanniszark

PatrickXYS (Member) commented:

> Existing Kubernetes cluster using a standard Kubeflow installation
>
> Existing Kubernetes cluster using Dex for authentication

I think "Existing" should be replaced with "On-prem", because "Existing Kubernetes cluster" is not clear to users.

Besides mentioning individual names in the POC column, how about creating one dedicated team per distribution?

E.g., for AWS we would specify the @kubeflow/aws team; this is more maintainable and scalable if someone leaves or is no longer responsive.

cvenets commented Nov 11, 2020

@Bobgy

For the Istio+Dex configuration, @yanniszark from our team is responsible, but I agree with @PatrickXYS: I think assigning teams will be easier to handle and maintain in the future. We can use @kubeflow/arrikto from our side for the Istio+Dex config for now. Note that this will change with the new Manifests WG after 1.2.

Also note that this page most probably needs to be updated altogether, since there are quite a few external Kubeflow distros at this point, in addition to the cloud providers, which are currently not included in the list. We should somehow unify it going forward. We will try to propose something in the next release.

Jeffwan (Member) commented Nov 11, 2020

In the past, we assigned labels to those issues. However, this needs some manual triage and won't /cc the right POC. We probably need a mechanism to improve that. Creating teams like @kubeflow/aws is one option, but we still need someone to triage issues.
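As a rough illustration of the kind of mechanism being described here, a triage helper could map a platform label to the team to /cc. The sketch below is hypothetical: the label names, the mapping, and the helper itself are assumptions for illustration, not an existing Kubeflow bot or label scheme.

```python
from __future__ import annotations

# Hypothetical mapping from a platform label to the GitHub team to /cc.
# Neither these label names nor this helper exist in Kubeflow; they only
# illustrate how labels applied during triage could be routed to a team.
LABEL_TO_TEAM = {
    "platform/aws": "kubeflow/aws",
    "platform/azure": "kubeflow/azure",
    "platform/gcp": "kubeflow/google",
}


def cc_comment_for(labels: list[str]) -> str | None:
    """Return a /cc comment for the first platform label found, or None."""
    for label in labels:
        team = LABEL_TO_TEAM.get(label)
        if team:
            return f"/cc @{team}"
    return None


print(cc_comment_for(["kind/bug", "platform/aws"]))  # -> /cc @kubeflow/aws
```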

thesuperzapper (Member) commented:

> I think "Existing" should be replaced with "On-prem", because "Existing Kubernetes cluster" is not clear to users.
>
> Besides mentioning individual names in the POC column, how about creating one dedicated team per distribution?

@PatrickXYS I think we could call it something like "Base Kubernetes", "Minimal Kubernetes", or "Compliant Kubernetes", because there is nothing about that distro that is specific to "on-prem".

Bobgy changed the title from "[Proposal] Each Kubeflow distribution should have its POCs (point of contact)" to "[Proposal] Each Kubeflow distribution should have its GitHub team" on Nov 17, 2020
Bobgy (Contributor, Author) commented Nov 17, 2020

Thanks, I updated the contacts. That makes sense to me.

I didn't find a GitHub team for the IBM folks. @animeshsingh, is there an existing one, or can you create one?

Bobgy (Contributor, Author) commented Nov 17, 2020

> Note that this will change with the new Manifests WG after 1.2.

@cvenets For clarification: right now the Manifests WG does not own the distribution itself; it only supports the catalog that helps every distribution.
And in the spirit of #434, we should keep the existing Kubernetes distribution outside the Kubeflow org, like all other distributions.

cvenets commented Nov 17, 2020

@Bobgy I agree. What I meant is that the new manifests will be free of kfctl. Currently this config contains kfctl, as does the AWS one if I'm not mistaken. The manifests will be pure kustomize, as KFP does now for example, and then everyone can build whatever they want on top of them, including distros, independently.

BenTheElder commented:
Aside: in the future, please consider escaping team mentions with backticks (e.g. `@kubeflow/aws`) when you're discussing a team rather than intending to notify it; the escaped form will NOT subscribe everyone on the team to the thread.

PatrickXYS (Member) commented:

> escaping team mentions with backticks (e.g. `@kubeflow/aws`) ... will NOT subscribe everyone on the team to the thread.

@BenTheElder What's the reason for this desired behavior? I can see that it might be annoying for a GitHub team with 1000+ members to get notified by a single `@kubeflow/aws` mention, but what about small GitHub teams that want to be notified as a whole? It would be nice if you could elaborate a little.

However, I think this should be fine given the current proposal: what @Bobgy suggested is to create a GitHub team for each distribution and then reference it in this documentation.

Bobgy (Contributor, Author) commented Nov 17, 2020

I think Ben is making a good point: not everyone in @kubeflow/google is actively maintaining Kubeflow on GCP.

So for us, I think making another team called @kubeflow/oncall-gcp specifically would be better.

But it's purely for us. You can decide what team scope works for you.

BenTheElder commented:
I mean, for this very thread right here, I don't think it was intended to subscribe everyone in all of the teams in the table above to this discussion, but they are now.

That looks unintentional to me, and it can be avoided by putting backticks around the team name if you just want to reference a team, e.g. for the purposes of creating a table of teams, without actually subscribing everyone to the thread 🤷‍♂️
Not all of those teams are 3 people.

In my case I've actually been reminded to remove myself from the team.

This is clearly off-topic now, I'll try to leave it at that.

PatrickXYS (Member) commented Dec 12, 2020

@Bobgy When I was cleaning up kubeflow/kubeflow issues, I found that users keep reporting issues for different platforms, including AWS, AKS, IKS, and GCP.

But it might be a bother to @-mention a whole GitHub team per issue, especially when the team grows to 10+ members.

So, for each team/platform: if they are releasing their manifests in the kubeflow/manifests repo, they need to establish an on-call team as described below; otherwise, they need to release manifests in their own repo. This helps avoid the pattern of releasing manifests but not fixing issues.

Can we have each existing distribution establish a team with the format kubeflow/xxx-oncall? That would be:

@kubeflow/aws-oncall
@kubeflow/gcp-oncall
@kubeflow/ibm-oncall
@kubeflow/azure-oncall

This also avoids adding too many users to those on-call teams, because of the responsibilities they need to take on.

Meanwhile, we need to document this in the kubeflow/manifests and kubeflow/internal-acls READMEs to make the community aware of it.
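For illustration, the proposed kubeflow/xxx-oncall convention is regular enough that the mention can be derived mechanically from the platform name. This is only a sketch of the naming scheme proposed above; the helper and the platform list are assumptions, not part of the proposal.

```python
# Sketch of the kubeflow/xxx-oncall naming convention proposed above.
# The helper and the platform list are illustrative assumptions only.
def oncall_team(platform: str) -> str:
    """Derive the on-call team slug, e.g. 'aws' -> 'kubeflow/aws-oncall'."""
    return f"kubeflow/{platform.lower()}-oncall"


for platform in ("aws", "gcp", "ibm", "azure"):
    print(f"@{oncall_team(platform)}")
# -> @kubeflow/aws-oncall, @kubeflow/gcp-oncall, @kubeflow/ibm-oncall, @kubeflow/azure-oncall
```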

Bobgy (Contributor, Author) commented Dec 15, 2020

@PatrickXYS Totally agree. To add to that, I'd say moving manifests into each distribution's own repo is even more important, so that each platform has its own issue board and backlog.

stale bot commented Jun 2, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the lifecycle/stale label on Jun 2, 2021
stale bot closed this as completed on Jun 11, 2021