This repository has been archived by the owner on Jun 8, 2022. It is now read-only.

HealthScope #61

Closed
artursouza opened this issue Jun 13, 2020 · 12 comments

Comments

@artursouza
Member

Context

Parent Issue: crossplane/crossplane#1480

HealthScope aggregates the health status of multiple components via a web API.

In Rudr, it was implemented as described here: https://github.com/oam-dev/rudr/tree/master/healthscope

Proposal

Re-implement Rudr's approach but for the new v1alpha2 spec.

  1. Deploy OAM's health scope controller:
helm install healthscope ./charts/healthscope
export POD_NAME=$(kubectl get pods -l "app.kubernetes.io/name=healthscope,app.kubernetes.io/instance=healthscope" -o jsonpath="{.items[0].metadata.name}")
kubectl port-forward $POD_NAME 8080:80

The health scope controller (as in Rudr) does two things:
1. Periodically checks the health status of components and updates the HealthScope resource status.
2. Serves as an HTTP server that outputs aggregated health information.

Optionally, it can be extended to offer a /metrics API so the health status can be exported to Prometheus.
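
A minimal sketch of that behavior (package layout, names, and the 30-second interval are assumptions for illustration, not the actual implementation):

package main

import (
	"encoding/json"
	"net/http"
	"strings"
	"sync"
	"time"
)

// ScopeHealth is an assumed shape for the aggregated result of one scope.
type ScopeHealth struct {
	Scope      string            `json:"scope"`
	Healthy    bool              `json:"healthy"`
	Components map[string]string `json:"components"`
}

var (
	mu      sync.RWMutex
	results = map[string]ScopeHealth{}
)

// refresh would probe every component in every tracked scope and also update
// each HealthScope resource status; the probing itself is elided here.
func refresh() {
	mu.Lock()
	defer mu.Unlock()
	// ... probe components, write results[scopeName], update HealthScope status ...
}

func main() {
	// 1. Periodic health check.
	go func() {
		for range time.Tick(30 * time.Second) {
			refresh()
		}
	}()

	// 2. HTTP server exposing the aggregated health, e.g. GET /scopes/example-health-scope.
	http.HandleFunc("/scopes/", func(w http.ResponseWriter, r *http.Request) {
		name := strings.TrimPrefix(r.URL.Path, "/scopes/")
		mu.RLock()
		res, ok := results[name]
		mu.RUnlock()
		if !ok {
			http.NotFound(w, r)
			return
		}
		_ = json.NewEncoder(w).Encode(res)
	})
	_ = http.ListenAndServe(":80", nil)
}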

  2. Instantiate a HealthScope as per the v1alpha2 spec:
apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
  name: example-health-scope
spec:
  probe-method: GET
  probe-endpoint: /health

$ kubectl get healthscope
NAME                   AGE
example-health-scope   31s

The HealthScope controller keeps track of the HealthScope instances created.

  3. Declare an ApplicationConfiguration that uses the health scope:
apiVersion: core.oam.dev/v1alpha2
kind: ApplicationConfiguration
metadata:
  name: first-app
spec:
  components:
    - componentName: helloworld-python-v1
      parameterValues:
        - name: target
          value: Rudr
        - name: port
          value: "9999"
      scopes:
        - apiVersion: core.oam.dev/v1alpha2
          kind: HealthScope
          name: example-health-scope

Services generated as part of the ApplicationConfig have the following additional label:

healthscope.core.oam.dev/example-health-scope: true

Example:

$ k describe service helloworld-python-v1
Name:                     helloworld-python-v1
Namespace:                default
Labels:                   workload.oam.crossplane.io=2df4035f-8019-4600-9317-ffbf2ac409b0
                          healthscope.core.oam.dev/example-health-scope=true
Annotations:              <none>
Selector:                 containerizedworkload.oam.crossplane.io=2df4035f-8019-4600-9317-ffbf2ac409b0
Type:                     LoadBalancer
IP:                       10.97.248.231
Port:                     helloworld-python-v1  3003/TCP
TargetPort:               3003/TCP
NodePort:                 helloworld-python-v1  31919/TCP
Endpoints:                172.17.0.12:3003,172.17.0.13:3003
Session Affinity:         None
External Traffic Policy:  Cluster
Events:                   <none>

The HealthScope controller can now query all services in a health scope by selecting them on that label.
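
A hedged sketch of that query, assuming the controller uses controller-runtime's client (the package and function names are illustrative):

package healthscope

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// servicesInScope lists every Service carrying the scope label described above.
func servicesInScope(ctx context.Context, c client.Client, scopeName string) (*corev1.ServiceList, error) {
	var services corev1.ServiceList
	err := c.List(ctx, &services, client.MatchingLabels{
		"healthscope.core.oam.dev/" + scopeName: "true",
	})
	if err != nil {
		return nil, err
	}
	return &services, nil
}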

Finally, the health of the scope is reported at:

http://localhost:8080/scopes/example-health-scope

The spec for HealthScope might need to change to incorporate other components, like a database. That work is out of scope (pun intended) for this issue.

@artursouza artursouza self-assigned this Jun 13, 2020
@hongchaodeng
Member

That's a great start!

Just one nit:

healthscope.core.oam.dev/example-health-scope: true

In k8s, we usually use the following format instead:

 healthscope.core.oam.dev/name: example-health-scope

The label key is an indicator of the type, not the instance.

@artursouza
Member Author

artursouza commented Jun 13, 2020 via email

@hongchaodeng
Member

I see. In this case, why not add a status to indicate all the scopes in a component:

kind: Component
status:
  scopes: ... list of scopes ...

They are all OAM concepts and we should have a standardized status like this.

@artursouza
Member Author

The health scope is not declared on the Component, only on the ApplicationConfiguration. In the PR, I am adding Scopes to WorkloadStatus: https://github.com/crossplane/oam-kubernetes-runtime/pull/44/files#diff-3a6b6a3a711288e20dd54b6bbf14e369R303
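
Purely as an illustration of that shape (field names here are made up; the linked PR is authoritative):

package v1alpha2

// WorkloadStatus gains a list of the scopes the workload participates in.
type WorkloadStatus struct {
	ComponentName string           `json:"componentName,omitempty"`
	Scopes        []ScopeReference `json:"scopes,omitempty"`
}

// ScopeReference points at a scope object such as a HealthScope.
type ScopeReference struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Name       string `json:"name"`
}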

@hongchaodeng
Member

I think the focus on "services" is ambiguous and is diverging our discussion.
Could you write more details on how the HealthScope controller checks Components' health status?

@wonderflow
Member

Instead of using labels, what about aggregating all the information into the Scope instance?

apiVersion: core.oam.dev/v1alpha2
kind: HealthScope
metadata:
  name: foo
spec:
  workloadRefs:
    - apiVersion: core.oam.dev/v1alpha2
      kind: ContainerizedWorkload
      name: bar
    - apiVersion: alibaba.com/v1
      kind: RDS
      name: database

We can assume every OAM AppScope will have a fixed field such as workloadRefs, so oam-kubernetes-runtime can insert workloads into this fixed field without understanding the AppScope.

This mechanism is a little like the workload and trait interaction mechanism.
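
A rough sketch of how the runtime could do that generically, assuming scopes are handled as unstructured objects and only the agreed spec.workloadRefs field is touched (package and function names are illustrative):

package scopes

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// appendWorkloadRef adds a workload reference to any scope object without the
// runtime needing to know the scope's concrete schema.
func appendWorkloadRef(scope *unstructured.Unstructured, apiVersion, kind, name string) error {
	refs, _, err := unstructured.NestedSlice(scope.Object, "spec", "workloadRefs")
	if err != nil {
		return err
	}
	refs = append(refs, map[string]interface{}{
		"apiVersion": apiVersion,
		"kind":       kind,
		"name":       name,
	})
	return unstructured.SetNestedSlice(scope.Object, refs, "spec", "workloadRefs")
}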

@hongchaodeng
Member

I agree with @wonderflow .
As mentioned in the offline meeting, we should aggregate the components within the scope into the status of the Scope object.

@artursouza
Member Author

BTW, I also propose we report the health status directly on the scope status; this way, there is no need for an HTTP API. Users can just use:

$ k get healthscope
NAME                   HEALTH
example-health-scope   healthy
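
A sketch of what that could look like in the Go types, assuming kubebuilder markers generate the extra printer column (type and field names are illustrative):

package v1alpha2

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// HealthScopeStatus carries the aggregated result, e.g. "healthy" or "unhealthy".
type HealthScopeStatus struct {
	Health string `json:"health,omitempty"`
}

// +kubebuilder:object:root=true
// +kubebuilder:printcolumn:name="HEALTH",type=string,JSONPath=`.status.health`

// HealthScope surfaces its aggregated health directly in `kubectl get healthscope`.
type HealthScope struct {
	metav1.TypeMeta   `json:",inline"`
	metav1.ObjectMeta `json:"metadata,omitempty"`

	Status HealthScopeStatus `json:"status,omitempty"`
}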

@wonderflow
Member

@artursouza Yeah, agreed. A RESTful API could be an advanced feature; we can add it later.

@negz
Member

negz commented Jun 30, 2020

Per #55 we're centralising all things OAM into one repo, and oam-kubernetes-runtime seems like the most likely place. I'm going to move this issue there.

@negz negz transferred this issue from crossplane/crossplane Jun 30, 2020
@wonderflow
Member

I think this issue is almost finished; we can close it, right? /cc @artursouza

@artursouza
Member Author

Closing it.
