
Create service binding through topology connector #3065

Merged
merged 6 commits into from
Oct 29, 2019

Conversation

Contributor

@divyanshiGupta divyanshiGupta commented Oct 23, 2019

Jira Story - https://jira.coreos.com/browse/ODC-1978

This PR adds the following functionality -

  • Allow service binding only when the Service Binding Operator is installed.
  • Create a service binding between nodes if the target node is an operator-backed service; otherwise create a pseudo binding.
  • Remove a service binding.

[screencast: create-binding]

[screencast: delete-binding]

To Do -

  • Add visual changes to distinguish between service binding and pseudo binding connector
  • Add visual changes to depict service binding failure
    cc: @serenamarie125

Note -
Removing the connects-to label from the source app and deleting the SBR doesn't trigger a re-deployment of the source app. @sbose78 is there a way we can do that?

@openshift-ci-robot openshift-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. component/dev-console Related to dev-console component/shared Related to console-shared labels Oct 23, 2019
@divyanshiGupta
Contributor Author

/cc @christianvogt

@divyanshiGupta
Contributor Author

/cc @rohitkrai03

@divyanshiGupta
Contributor Author

/test e2e-gcp-console

@divyanshiGupta
Contributor Author

/cc @serenamarie125

@divyanshiGupta
Contributor Author

/test e2e-gcp-console

@sbose78

sbose78 commented Oct 23, 2019

Removing connects-to label from source app and deleting the SBR doesn't trigger re-deployment of the source app. @sbose78 is there a way we can do that?

Sure, we can. CC @otaviof @akashshinde

isList: true,
kind: referenceForModel(ClusterServiceVersionModel),
namespace,
prop: 'clusterServiceVersion',
Contributor

Why is this in console-shared? Since this is only used by topology, we shouldn't be fetching this for the overview.

What happens if this resource isn't available? We probably want this optional or feature-flagged.

Cc @invincibleJai

return {
  id: dcUID,
  name:
    _.get(deploymentConfig, 'metadata.name') || deploymentsLabels['app.kubernetes.io/instance'],
  type: 'workload',
  resources: { ...dc },
  pods: dc.pods,
  operatorBackedService: !!_.find(operatorBackedServiceKinds, (k) => {
Contributor

Use includes
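A minimal sketch of the suggested change, with hypothetical data (the real kinds list is derived from the installed CSVs):

```typescript
// Hypothetical data: the real list comes from the installed ClusterServiceVersions.
const operatorBackedServiceKinds: string[] = ['Database', 'EtcdCluster'];
const kind = 'Database';

// Array#includes is a direct membership test, replacing the !!_.find(...) idiom.
const operatorBackedService = operatorBackedServiceKinds.includes(kind);
```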

apiVersion: 'apps.openshift.io/v1alpha1',
kind: 'ServiceBindingRequest',
metadata: {
name: `${sourceName}-${targetName}-sbr`,
Contributor

Should probably use a random string suffix to lessen the chance of failure.
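A sketch of what the suggested suffix might look like; `randomSuffix` is a hypothetical helper, not part of the PR:

```typescript
// Hypothetical helper: a short random suffix of fixed length, so two bindings
// between identically named resources are unlikely to collide.
const randomSuffix = (len = 5): string =>
  Array.from(
    { length: len },
    () => 'abcdefghijklmnopqrstuvwxyz0123456789'[Math.floor(Math.random() * 36)],
  ).join('');

const sourceName = 'nodejs-app'; // example names
const targetName = 'db-demo';
const sbrName = `${sourceName}-${targetName}-${randomSuffix()}-sbr`;
```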

matchLabels: { ...applicationSelectorLabels },
group: 'apps.openshift.io',
version: 'v1',
resource: 'deploymentconfigs',
Contributor

What about the other workload types? E.g. Deployment?


Note: any Deployment, DeploymentConfig, and Knative Service is supported.

Contributor

This section must be generated based on the source resource backing the topology node.

Contributor Author

Thanks for pointing this out. Almost forgot about making the resource type generic.
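One way to make the selector generic per the comments above could look like this; the kind-to-GVR table and `getAppSelector` are hypothetical, and the Knative group/version shown is illustrative:

```typescript
// Hypothetical kind-to-GVR table for spec.applicationSelector; covers the
// workload kinds the operator supports per the discussion above.
const appSelectorGVR: Record<string, { group: string; version: string; resource: string }> = {
  DeploymentConfig: { group: 'apps.openshift.io', version: 'v1', resource: 'deploymentconfigs' },
  Deployment: { group: 'apps', version: 'v1', resource: 'deployments' },
  Service: { group: 'serving.knative.dev', version: 'v1alpha1', resource: 'services' },
};

// Build the applicationSelector from the source workload's kind.
const getAppSelector = (kind: string, matchLabels: Record<string, string>) => {
  const gvr = appSelectorGVR[kind];
  if (!gvr) {
    throw new Error(`Unsupported workload kind: ${kind}`);
  }
  return { ...gvr, matchLabels };
};
```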

},
spec: {
applicationSelector: {
matchLabels: { ...applicationSelectorLabels },
Contributor

Since this service binding is named specifically for binding the given source and target, shouldn't the selector be very specific to the target? I think the connects-to convention was intended to create a single service binding that can bind multiple apps to the service.

What happens when we have many service binding request resources that bind to the same service using the same selector?

If we intend to create a specific binding per source and target, it may be better to create a selector against the recommended labels of the source.

Thoughts @sbose78 ?
Is there only a selector? Or also a resource ref?


You could use resourceRef too.

apiVersion: apps.openshift.io/v1alpha1
kind: ServiceBindingRequest
metadata:
  name: binding-request
  namespace: service-binding-demo
spec:
  applicationSelector:
    resourceRef: nodejs-app
    group: apps.openshift.io
    version: v1
    resource: deploymentconfigs
  backingServiceSelector:
    resourceRef: db-demo
    group: postgresql.baiju.dev
    version: v1alpha1
    kind: Database

Contributor

Awesome!
I think it makes more sense to create very targeted SBR in this case.

@@ -44,6 +47,20 @@ export const edgesFromAnnotations = (annotations): string[] => {
return edges;
};

export const edgesFromLabels = (labels): string[] => {
Contributor

This assumes that having a connects-to label means the app is bound to a service, which isn't always true. We should be getting the service binding requests and matching their selectors to workloads. The connects-to label selector is a convention, but any selector can be used, in which case our topology wouldn't show the app binding.

Contributor

Further to this, we need a resource backing the edge itself so that we can select the edge and display the SBR in the side bar.
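A sketch of the selector matching suggested above, under standard matchLabels semantics; the types are hypothetical minimal shapes:

```typescript
// Hypothetical minimal shape of a ServiceBindingRequest for this check.
interface SBR {
  spec: { applicationSelector: { matchLabels?: Record<string, string> } };
}

// A workload matches an SBR if every matchLabels entry is present on the
// workload with the same value (standard label-selector AND semantics).
const matchesSelector = (workloadLabels: Record<string, string>, sbr: SBR): boolean => {
  const matchLabels = sbr.spec.applicationSelector.matchLabels || {};
  return Object.entries(matchLabels).every(([key, value]) => workloadLabels[key] === value);
};
```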

},
backingServiceSelector: {
group: targetResourceGroup[0],
version: 'v1alpha1',
Contributor

This cannot be hard coded.


@@ -50,6 +51,13 @@ const plugin: Plugin<ConsumedExtensions> = [
flag: SHOW_PIPELINE,
},
},
{
type: 'FeatureFlag/Model',
Contributor

I don't see the flag being used, but it probably should be.
We'll need feature checks to enable the SBR feature, as well as RBAC checks.

Contributor Author

Oops, I missed adding a check for the Service Binding Operator before allowing the binding. My bad :(

Comment on lines 63 to 76
resources.push(
  {
    isList: true,
    kind: referenceForModel(ClusterServiceVersionModel),
    namespace,
    prop: 'clusterServiceVersion',
  },
  {
    isList: true,
    kind: referenceForModel(ServiceBindingRequestModel),
    namespace,
    prop: 'servicebindingrequests',
  },
);
Contributor

You're fetching these resources even if they don't exist. This can cause an error just like we encountered with knative. We need to include these resources only if they are installed and we have read access, so using optional: true here seems like it would solve the issue. Additionally, don't include them if serviceBinding === false.
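A sketch of that guard; the plain kind strings stand in for the `referenceForModel(...)` calls, and `WatchResource` is a hypothetical minimal type:

```typescript
// Sketch: only include the CSV/SBR watches when service binding is enabled,
// and mark them optional so a missing CRD or a denied read doesn't fail the
// whole watch.
interface WatchResource {
  isList: boolean;
  kind: string;
  namespace: string;
  prop: string;
  optional?: boolean;
}

const getResources = (namespace: string, serviceBinding: boolean): WatchResource[] => {
  if (!serviceBinding) {
    return [];
  }
  return [
    { isList: true, kind: 'ClusterServiceVersion', namespace, prop: 'clusterServiceVersion', optional: true },
    { isList: true, kind: 'ServiceBindingRequest', namespace, prop: 'servicebindingrequests', optional: true },
  ];
};
```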

const { obj: deploymentConfig, current, previous, isRollingOut } = dc;
const dcUID = _.get(deploymentConfig, 'metadata.uid');
const deploymentsLabels = _.get(deploymentConfig, 'metadata.labels', {});
const deploymentsAnnotations = _.get(deploymentConfig, 'metadata.annotations', {});
const { buildConfigs } = dc;
const nodeResourceKind = _.get(deploymentConfig, 'metadata.ownerReferences[0].kind');
Contributor

Is the first ownerReference sufficient?


const edges: string[] = [];
_.forEach(sbrs, (sbr) => {
let edgeExists = false;
if (_.get(sbr, 'spec.applicationSelector.resourceRef') === source.metadata.name) {
Contributor

You also need to match the other properties like resource to ensure it's the right kind.
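A sketch of the fuller match this asks for; `matchesSource` and its types are hypothetical:

```typescript
// Compare the resourceRef name together with group/version/resource, so a
// Deployment named 'foo' is not confused with a DeploymentConfig named 'foo'.
interface AppSelector {
  resourceRef?: string;
  group?: string;
  version?: string;
  resource?: string;
}

interface SourceGVR {
  name: string;
  group: string;
  version: string;
  resource: string;
}

const matchesSource = (sel: AppSelector, source: SourceGVR): boolean =>
  sel.resourceRef === source.name &&
  sel.group === source.group &&
  sel.version === source.version &&
  sel.resource === source.resource;
```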

}
}
}
edgeExists && edges.push(sbr.spec.backingServiceSelector.resourceRef);
Contributor

You need more than the name here. Again you need the kind.

},
};

return checkServiceBindingPermissions(namespace, 'delete', sbrName)
Contributor

The check here isn't really useful because the operation would have failed anyway if they didn't have permission. We shouldn't allow the user to perform the operation in the first place. I say remove the RBAC checks here for now; when we move to the new topology it'll be easier to add RBAC to prevent the user from seeing the delete icon at all.

if (targetNode) {
edges.push({
id: `${uid}_${targetNode}`,
type: 'connects-to',
Contributor

This should be of type service-binding.
You need to identify the edge type being deleted to call the right delete function.
For now you can contribute the same edge shape inside shape-providers.ts for this new type.

Comment on lines 378 to 394
if (target.operatorBackedService) {
return removeServiceBinding(sourceObj, targetObj);
}

return removeResourceConnection(sourceObj, targetObj);
Contributor

You need to check which edge.type is being deleted here.
Update the following inside D3ForceDirectedRenderer to also include the edge type: if connects-to, do the old delete; if service-binding, do the new delete.
onRemove={() => onRemoveConnection(viewEdge.source.id, viewEdge.target.id)}
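A sketch of the type-based dispatch being suggested; the function names mirror the discussion, but the wiring is hypothetical and the delete routines are passed in only to keep the sketch self-contained:

```typescript
type EdgeType = 'connects-to' | 'service-binding';

// Hypothetical dispatch: pick the delete routine by edge type, as suggested
// for D3ForceDirectedRenderer's remove handler.
const removeEdge = <T>(
  edgeType: EdgeType,
  removeResourceConnection: () => T,
  removeServiceBinding: () => T,
): T => (edgeType === 'service-binding' ? removeServiceBinding() : removeResourceConnection());
```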

@openshift-ci-robot openshift-ci-robot added needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. and removed needs-rebase Indicates a PR cannot be merged because it has merge conflicts with HEAD. labels Oct 26, 2019
);
const uid = _.get(dc, ['metadata', 'uid']);
if (targetNode) {
edges.push({
Contributor

Save the trouble of lookup when you delete the edge by storing the SBR in the data prop of the edge.
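A sketch of storing the SBR on the edge as suggested; the edge and SBR shapes are hypothetical:

```typescript
// Hypothetical shapes: data.sbr carries the matched SBR so delete can use it
// directly instead of re-searching all SBRs for a selector match.
interface SbrRef {
  metadata: { name: string };
}

interface TopologyEdge {
  id: string;
  type: 'connects-to' | 'service-binding';
  source: string;
  target: string;
  data?: { sbr: SbrRef };
}

const makeBindingEdge = (uid: string, targetNode: string, sbr: SbrRef): TopologyEdge => ({
  id: `${uid}_${targetNode}`,
  type: 'service-binding',
  source: uid,
  target: targetNode,
  data: { sbr },
});
```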

Contributor Author

Yes, initially I also thought of doing that, but I wasn't sure it made sense to store the SBR in the edge data itself.

Contributor Author

Done.

const targetNode = _.get(
_.find(resources, (deployment) => {
const targetFound =
_.get(deployment, 'metadata.ownerReferences[0].name') === edge.resourceRef &&
Contributor

Why is this not a comparison against the name and kind of the resource? Why the ownerReference?

Contributor Author

Looking at the examples, it seems they identify which service to bind to based on the service's name and kind (e.g. Database), not based on its Deployment.

Contributor Author

Service

kind: Database
metadata:
  name: db-demo
  namespace: service-binding-demo
spec:
  image: docker.io/postgres
  imageName: postgres
  dbName: db-demo

SBR

apiVersion: apps.openshift.io/v1alpha1
kind: ServiceBindingRequest
metadata:
  name: binding-request
  namespace: service-binding-demo
spec:
  applicationSelector:
    matchLabels:
      connects-to: postgres
      environment: demo
    group: apps.openshift.io
    version: v1
    resource: deploymentconfigs
  backingServiceSelector:
    group: postgresql.baiju.dev
    version: v1alpha1
    kind: Database
    resourceRef: db-demo

spec: {
applicationSelector: {
resourceRef: sourceName,
group: 'apps.openshift.io',
Contributor

This is the wrong group for a deployment.
This field needs to be dynamic based on the source resource.

Contributor Author

Done


const targetResourceKind = _.get(target, 'metadata.ownerReferences[0].kind');
const targetResourceRefName = _.get(target, 'metadata.ownerReferences[0].name');
const removeSbr = _.find(sbrs, (sbr) => {
Contributor

This can be avoided if you store the SBR in the edge data.

Contributor Author

Done.

Comment on lines 273 to 275
if (!source || !target || source === target) {
return Promise.reject();
}
Contributor

I don't think this matters really. If you got the sbr, go ahead and delete it

@christianvogt
Contributor

/approve

@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 28, 2019
@christianvogt
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Oct 28, 2019
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

1 similar comment
@openshift-bot
Contributor

/retest

Please review the full test history for this PR and help us cut down flakes.

@openshift-ci-robot openshift-ci-robot removed the lgtm Indicates that a PR is ready to be merged. label Oct 29, 2019
@rohitkrai03
Contributor

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Oct 29, 2019
@openshift-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: christianvogt, divyanshiGupta, rohitkrai03

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@rohitkrai03
Contributor

/test e2e-gcp-console

@openshift-merge-robot openshift-merge-robot merged commit 197b678 into openshift:master Oct 29, 2019
@spadgett spadgett added this to the v4.3 milestone Oct 31, 2019