This repository has been archived by the owner on May 6, 2022. It is now read-only.

Metadata for tracking instance to deployed artifacts to enable service graphs #36

Closed
judkowitz-zz opened this issue Nov 7, 2016 · 8 comments
Labels
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.
Milestone

Comments

@judkowitz-zz
Contributor

At the offsite, we discussed how to make service graphs. We have almost everything we need in the current design except a way to map instances to instances. Namespaces/label-selectors bind to instances so if we want to figure out the instance to instance binding, we need to have the instance to namespace/label-selector mapping so that we can traverse the chain (permissions allowing).

This is probably just a matter of storing one more piece of metadata somewhere on the binding operation. This issue is to figure out the best implementation and to add that to the design.
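
To make the idea concrete, here is a minimal Go sketch of the kind of metadata the binding operation could record; the package, type, field, and JSON-tag names are hypothetical illustrations, not part of the current design:

```go
// Hypothetical sketch only: the extra piece of metadata a binding could record so
// the instance -> namespace/label-selector -> instance chain can be traversed.
package servicegraph

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// InstanceBindingRef (hypothetical name) captures which namespace and label
// selector a service instance was bound through.
type InstanceBindingRef struct {
	// InstanceName is the consumer-visible service instance the binding targets.
	InstanceName string `json:"instanceName"`
	// Namespace is the namespace the binding was created in.
	Namespace string `json:"namespace"`
	// Selector is the label selector used to bind to the instance, if any.
	Selector *metav1.LabelSelector `json:"selector,omitempty"`
}
```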

@duglin
Contributor

duglin commented Nov 8, 2016

To restate this w/o using the word "instance" on both sides.... I think what's being asked for is:
how do we go from a consumer's 'service instance' resource to the K8s deployment artifacts that make up that instance?

For example, when a consumer asks for an instance of a mongoDB, the consumer can see a "serviceInstance" resource, but at best it holds a reference to that instance's access coordinates; there's nothing in that service instance resource that allows the consumer to traverse over to the environment where the mongoDB instance is actually deployed and then query the K8s platform for the deployment/replicaSet/PetSet/whatever... that makes up that instance.

This could be as simple as adding an "object reference" (assuming that can be used for remote k8s clusters) to be returned in the createServiceInstance() response message - and then store it in the "serviceInstance" resource that the consumer can see. But we'll see...
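
A rough sketch of what that could look like is below, assuming corev1.ObjectReference can carry the link; the type and field names (ProvisionResponse, DeployedArtifact, ClusterURL) are illustrative, not the actual service-catalog API:

```go
// Hypothetical sketch of the "object reference" idea, not the actual API: the
// provision response carries a reference to the deployed artifact, and the
// consumer-visible serviceInstance resource would store it.
package servicegraph

import (
	corev1 "k8s.io/api/core/v1"
)

// ProvisionResponse is an illustrative stand-in for the createServiceInstance()
// reply message.
type ProvisionResponse struct {
	// DeployedArtifact points at the deployment/replicaSet/PetSet/whatever that
	// realizes the instance.
	DeployedArtifact *corev1.ObjectReference `json:"deployedArtifact,omitempty"`
	// ClusterURL is a hypothetical extra field: ObjectReference has no notion of
	// a remote cluster, so cross-cluster traversal would need something like this.
	ClusterURL string `json:"clusterURL,omitempty"`
}
```

Where exactly the reference lives on the serviceInstance resource, and how permissions gate the traversal, would still need to be decided.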

@pmorie pmorie added this to the Later milestone Jan 11, 2017
@arschles arschles removed this from the Later milestone Apr 3, 2017
@duglin duglin added this to the Post-1.0.0 milestone Jul 9, 2017
jboyd01 pushed a commit to jboyd01/service-catalog that referenced this issue Dec 20, 2018
support for running e2e with Catalog deployed from CSV
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Apr 21, 2019
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 21, 2019
@jberkhahn jberkhahn removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label May 23, 2019
mszostok referenced this issue in mszostok/service-catalog Aug 13, 2019
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 21, 2019
@fejta-bot
Copy link

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Sep 20, 2019
@mszostok
Contributor

mszostok commented Oct 3, 2019

/remove-lifecycle rotten
/lifecycle frozen

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. labels Oct 3, 2019
@mrbobbytables

This project is being archived, closing open issues and PRs.
Please see this PR for more information: kubernetes/community#6632

/close

@k8s-ci-robot
Contributor

@mrbobbytables: Closing this issue.

In response to this:

This project is being archived, closing open issues and PRs.
Please see this PR for more information: kubernetes/community#6632

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
