
Feat: vela adopt command #5197

Merged: 3 commits into kubevela:master on Dec 20, 2022
Conversation

@Somefive (Collaborator) commented Dec 15, 2022


Description of your changes

Fixes #3552
Fixes #2823

Support vela adopt command for adopting resources into KubeVela application.

Adopt resources into applications

 Adopt resources into a KubeVela application. This command is useful when you already have resources applied in your
Kubernetes cluster. These resources may have been applied natively or with other tools, such as Helm. The command
automatically discovers the resources to be adopted and assembles them into a new application, without causing any
disruption (such as restarts) during the adoption.

 Two types of adoption are supported so far: 'native' Kubernetes resources (the default) and 'helm' releases.
1. For the 'native' type, you can specify a list of resources you want to adopt into the application. Only resources in
the local cluster are supported for now.
2. For the 'helm' type, you can specify a helm release name. This helm release must already be installed in the local
cluster. The command will find the resources managed by the helm release and convert them into an adoption application.

 Two working mechanisms (called 'modes' here) are supported so far: 'read-only' mode (the default) and 'take-over' mode.
1. In 'read-only' mode, adopted resources are not touched. You can leverage vela tools (like the Vela CLI or VelaUX) to
observe those resources and attach traits to add new capabilities. The adopted resources will not be recycled or
updated. This mode is recommended if you still want other tools, such as Helm, to manage resource updates and deletion.
2. In 'take-over' mode, adopted resources are completely managed by the application, which means they can be modified.
You can use traits or directly modify the component to edit those resources. This mode is helpful if you want to migrate
existing resources into the KubeVela system and let KubeVela handle the life-cycle of the target resources.
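
 The chosen mode is attached to the generated application as a policy. A minimal sketch of what 'read-only' mode renders
to, mirroring the policy shown in the example application later in this thread ('my-component' is a placeholder; the
command fills in the real component names):

```yaml
# Sketch of the 'read-only' policy that vela adopt attaches to the
# generated application; 'my-component' is a placeholder component name.
policies:
- name: read-only
  type: read-only
  properties:
    rules:
    - selector:
        componentNames:
        - my-component
```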

 The adoption application can be customized. You can provide a CUE template file to the command and define your own
assembly rules for the adoption application. See
https://github.com/kubevela/kubevela/blob/master/references/cli/adopt-templates/default.cue for the default
implementation of the adoption rules.
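
 As a rough illustration only of what such a template can look like — this is NOT the real contract; the actual input
fields and expected output are defined by default.cue at the link above, and every name below ("resources", "appName",
"mode") is an assumption made for this sketch:

```cue
// Hypothetical sketch of an adopt template. All input field names are
// assumptions for illustration; consult the linked default.cue for the
// real input/output contract used by vela adopt.
package main

resources: [...{...}]                  // resources discovered by the command (assumed input)
appName:   string                      // target application name (assumed input)
mode:      *"read-only" | "take-over"  // adoption mode, defaulting to read-only

output: {
	apiVersion: "core.oam.dev/v1beta1"
	kind:       "Application"
	metadata: name: appName
	spec: {
		// Wrap each discovered resource in a k8s-objects component.
		components: [for r in resources {
			name: "\(appName).\(r.kind)"
			type: "k8s-objects"
			properties: objects: [r]
		}]
		// Attach the policy matching the chosen mode.
		policies: [{name: mode, type: mode}]
	}
}
```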

Usage:
  vela adopt [flags]

Examples:
  # Native Resources Adoption
  
  ## Adopt resources into new application
  ## Use: vela adopt <resources-type>[/<resource-namespace>]/<resource-name> [<resources-type>[/<resource-namespace>]/<resource-name> ...]
  vela adopt deployment/my-app configmap/my-app
  
  ## Adopt resources into new application with specified app name
  vela adopt deployment/my-deploy configmap/my-config --app-name my-app
  
  ## Adopt resources into new application in specified namespace
  vela adopt deployment/my-app configmap/my-app -n demo
  
  ## Adopt resources into new application across multiple namespaces
  vela adopt deployment/ns-1/my-app configmap/ns-2/my-app
  
  ## Adopt resources into new application with take-over mode
  vela adopt deployment/my-app configmap/my-app --mode take-over
  
  ## Adopt resources into new application and apply it into cluster
  vela adopt deployment/my-app configmap/my-app --apply
  
  -----------------------------------------------------------
  
  # Helm Chart Adoption
  
  ## Adopt resources in a deployed helm chart
  vela adopt my-chart -n my-namespace --type helm
  
  ## Adopt resources in a deployed helm chart with take-over mode
  vela adopt my-chart --type helm --mode take-over
  
  ## Adopt resources in a deployed helm chart in an application and apply it into cluster
  vela adopt my-chart --type helm --apply
  
  ## Adopt resources in a deployed helm chart in an application, apply it into cluster, and recycle the old helm release after the adoption application successfully runs
  vela adopt my-chart --type helm --apply --recycle
  
  -----------------------------------------------------------
  
  ## Customize your adoption rules
  vela adopt my-chart -n my-namespace --type helm --adopt-template my-rules.cue

Flags:
      --adopt-template string   The CUE template for adoption. If not provided, the default template will be used when --auto is switched on.
      --app-name string         The name of the application for adoption. If empty for helm type adoption, it will inherit the helm chart's name.
      --apply                   If true, the application for adoption will be applied. Otherwise, it will only be printed.
  -d, --driver string           The storage backend of helm adoption. Only takes effect when --type=helm.
  -e, --env string              The environment name for the CLI request
  -h, --help                    help for adopt
  -m, --mode string             The mode of adoption. Available values: [read-only, take-over] (default "read-only")
  -n, --namespace string        If present, the namespace scope for this CLI request
      --recycle                 If true, when the adoption application is successfully applied, the old storage (like Helm secret) will be recycled.
  -t, --type string             The type of adoption. Available values: [native, helm] (default "native")

I have:

  • Read and followed KubeVela's contribution process.
  • Related docs updated properly. For a new feature or configuration option, an update to the documentation is necessary.
  • Run make reviewable to ensure this PR is ready for review.

How has this code been tested?

Special notes for your reviewer

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
codecov bot commented Dec 15, 2022

Codecov Report

Base: 61.12% // Head: 61.07% // Decreases project coverage by 0.05% ⚠️

Coverage data is based on head (2439eaa) compared to base (2b3da03).
Patch has no changes to coverable lines.

Additional details and impacted files
@@            Coverage Diff             @@
##           master    #5197      +/-   ##
==========================================
- Coverage   61.12%   61.07%   -0.06%     
==========================================
  Files         305      305              
  Lines       45377    45445      +68     
==========================================
+ Hits        27738    27754      +16     
- Misses      14788    14826      +38     
- Partials     2851     2865      +14     
Flag Coverage Δ
apiserver-e2etests 35.00% <ø> (-0.03%) ⬇️
apiserver-unittests 36.87% <ø> (-0.03%) ⬇️
core-unittests 55.10% <ø> (-0.04%) ⬇️
e2e-multicluster-test 18.86% <ø> (+<0.01%) ⬆️
e2e-rollout-tests 20.53% <ø> (+0.06%) ⬆️
e2etests 26.05% <ø> (+0.09%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files Coverage Δ
...kg/apiserver/infrastructure/datastore/datastore.go 64.28% <0.00%> (-14.29%) ⬇️
...es/policydefinition/policydefinition_controller.go 69.51% <0.00%> (-7.32%) ⬇️
pkg/apiserver/interfaces/api/oam_application.go 59.18% <0.00%> (-5.11%) ⬇️
pkg/apiserver/event/sync/cr2ux.go 41.17% <0.00%> (-4.71%) ⬇️
pkg/apiserver/event/sync/worker.go 67.69% <0.00%> (-4.62%) ⬇️
...tepdefinition/workflowstepdefinition_controller.go 70.58% <0.00%> (-3.53%) ⬇️
pkg/apiserver/domain/service/target.go 57.95% <0.00%> (-3.41%) ⬇️
pkg/apiserver/domain/service/oam_application.go 84.61% <0.00%> (-3.30%) ⬇️
pkg/apiserver/domain/service/workflow.go 51.98% <0.00%> (-3.18%) ⬇️
pkg/appfile/dryrun/dryrun.go 44.24% <0.00%> (-2.66%) ⬇️
... and 20 more


@barnettZQG (Collaborator) commented:
@Somefive Could you print an example Application YAML?

@Somefive (Collaborator, Author) commented Dec 16, 2022

@Somefive Could you print an example Application YAML?

Sure. Below is an example for kruise helm chart.

apiVersion: core.oam.dev/v1beta1
kind: Application
metadata:
  creationTimestamp: null
  labels:
    app.oam.dev/adopt: helm
  name: kruise
  namespace: default
spec:
  components:
  - name: kruise.crds
    properties:
      objects:
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: advancedcronjobs.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: broadcastjobs.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: nodepodprobes.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: persistentpodstates.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: imagepulljobs.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: resourcedistributions.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: uniteddeployments.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: daemonsets.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: statefulsets.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: workloadspreads.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: podunavailablebudgets.policy.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: clonesets.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: nodeimages.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: containerrecreaterequests.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: podprobemarkers.apps.kruise.io
      - apiVersion: apiextensions.k8s.io/v1
        kind: CustomResourceDefinition
        metadata:
          name: sidecarsets.apps.kruise.io
    type: k8s-objects
  - name: kruise.ns.kruise-system
    properties:
      objects:
      - apiVersion: v1
        kind: Namespace
        metadata:
          name: kruise-system
    type: k8s-objects
  - name: kruise.DaemonSet.kruise-daemon
    properties:
      objects:
      - apiVersion: apps/v1
        kind: DaemonSet
        metadata:
          name: kruise-daemon
          namespace: kruise-system
        spec:
          minReadySeconds: 3
          selector:
            matchLabels:
              control-plane: daemon
          template:
            metadata:
              labels:
                control-plane: daemon
            spec:
              containers:
              - args:
                - --logtostderr=true
                - --v=4
                - --addr=:10221
                - --feature-gates=
                - --socket-file=
                command:
                - /kruise-daemon
                env:
                - name: KUBE_CACHE_MUTATION_DETECTOR
                  value: "true"
                - name: NODE_NAME
                  valueFrom:
                    fieldRef:
                      apiVersion: v1
                      fieldPath: spec.nodeName
                image: openkruise/kruise-manager:v1.3.0
                imagePullPolicy: Always
                livenessProbe:
                  failureThreshold: 3
                  httpGet:
                    path: /healthz
                    port: 10221
                    scheme: HTTP
                  initialDelaySeconds: 60
                  periodSeconds: 10
                  successThreshold: 1
                  timeoutSeconds: 1
                name: daemon
                resources:
                  limits:
                    cpu: 50m
                    memory: 128Mi
                  requests:
                    cpu: "0"
                    memory: "0"
                volumeMounts:
                - mountPath: /hostvarrun
                  name: runtime-socket
                  readOnly: true
              hostNetwork: true
              serviceAccountName: kruise-daemon
              terminationGracePeriodSeconds: 10
              tolerations:
              - operator: Exists
              volumes:
              - hostPath:
                  path: /var/run
                  type: ""
                name: runtime-socket
          updateStrategy:
            rollingUpdate:
              maxUnavailable: 10%
            type: RollingUpdate
    type: k8s-objects
  - name: kruise.Deployment.kruise-controller-manager
    properties:
      objects:
      - apiVersion: apps/v1
        kind: Deployment
        metadata:
          name: kruise-controller-manager
          namespace: kruise-system
        spec:
          minReadySeconds: 3
          replicas: 2
          selector:
            matchLabels:
              control-plane: controller-manager
          strategy:
            rollingUpdate:
              maxSurge: 100%
              maxUnavailable: 0
            type: RollingUpdate
          template:
            metadata:
              labels:
                control-plane: controller-manager
            spec:
              affinity:
                podAntiAffinity:
                  preferredDuringSchedulingIgnoredDuringExecution:
                  - podAffinityTerm:
                      labelSelector:
                        matchExpressions:
                        - key: control-plane
                          operator: In
                          values:
                          - controller-manager
                      topologyKey: kubernetes.io/hostname
                    weight: 100
              containers:
              - args:
                - --enable-leader-election
                - --metrics-addr=:8080
                - --health-probe-addr=:8000
                - --logtostderr=true
                - --leader-election-namespace=kruise-system
                - --v=4
                - --feature-gates=
                - --sync-period=0
                command:
                - /manager
                env:
                - name: KUBE_CACHE_MUTATION_DETECTOR
                  value: "true"
                - name: POD_NAMESPACE
                  valueFrom:
                    fieldRef:
                      fieldPath: metadata.namespace
                - name: WEBHOOK_PORT
                  value: "9876"
                - name: WEBHOOK_CONFIGURATION_FAILURE_POLICY_PODS
                  value: Ignore
                image: openkruise/kruise-manager:v1.3.0
                imagePullPolicy: Always
                name: manager
                ports:
                - containerPort: 9876
                  name: webhook-server
                  protocol: TCP
                - containerPort: 8080
                  name: metrics
                  protocol: TCP
                - containerPort: 8000
                  name: health
                  protocol: TCP
                readinessProbe:
                  httpGet:
                    path: readyz
                    port: 8000
                resources:
                  limits:
                    cpu: 200m
                    memory: 512Mi
                  requests:
                    cpu: 100m
                    memory: 256Mi
              hostNetwork: false
              serviceAccountName: kruise-manager
              terminationGracePeriodSeconds: 10
    type: k8s-objects
  - name: kruise.Service.kruise-webhook-service
    properties:
      objects:
      - apiVersion: v1
        kind: Service
        metadata:
          name: kruise-webhook-service
          namespace: kruise-system
        spec:
          ports:
          - port: 443
            targetPort: 9876
          selector:
            control-plane: controller-manager
    type: k8s-objects
  - name: kruise.config
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: kruise-webhook-certs
          namespace: kruise-system
    type: k8s-objects
  - name: kruise.sa
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: kruise-webhook-certs
          namespace: kruise-system
    type: k8s-objects
  - name: kruise.operator
    properties:
      objects:
      - apiVersion: v1
        kind: Secret
        metadata:
          name: kruise-webhook-certs
          namespace: kruise-system
    type: k8s-objects
  policies:
  - name: read-only
    properties:
      rules:
      - selector:
          componentNames:
          - kruise.crds
          - kruise.ns.kruise-system
          - kruise.DaemonSet.kruise-daemon
          - kruise.Deployment.kruise-controller-manager
          - kruise.Service.kruise-webhook-service
          - kruise.config
          - kruise.sa
          - kruise.operator
    type: read-only

@Somefive force-pushed the feat/vela-adopt branch 2 times, most recently from 7742b1f to 773590e on December 19, 2022 07:38
references/cli/adopt.go — several review threads (outdated, resolved)

The adopted application can be customized. You can provide a CUE template file to
the command and make your own assemble rules for the adoption application. You can
refer to https://github.com/kubevela/kubevela/blob/master/references/cli/adopt.cue to
Collaborator:

We should write a doc for the rules instead of just giving one example.

}
cmd.Flags().StringVarP(&o.Type, "type", "t", o.Type, fmt.Sprintf("The type of adoption. Available values: [%s]", strings.Join(adoptTypes, ", ")))
cmd.Flags().StringVarP(&o.Mode, "mode", "m", o.Mode, fmt.Sprintf("The mode of adoption. Available values: [%s]", strings.Join(adoptModes, ", ")))
cmd.Flags().StringVarP(&o.AppName, "app-name", "", o.AppName, "The name of application for adoption. If empty for helm type adoption, it will inherit the helm chart's name.")
Collaborator:

Why not just use `name`?

Collaborator (Author):

Because `name` could be ambiguous: it might mean the resource name, the helm chart name, the app name, ...

@wonderflow (Collaborator) left a comment:

This Implementation is elegant! 👍

@wonderflow wonderflow merged commit c98d0d5 into kubevela:master Dec 20, 2022
barnettZQG pushed a commit to barnettZQG/kubevela that referenced this pull request Jan 30, 2023
* Feat: vela adopt

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Feat: support adopt native resources

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

* Test: add test for vela adopt

Signed-off-by: Somefive <yd219913@alibaba-inc.com>

Signed-off-by: Somefive <yd219913@alibaba-inc.com>
zhaohuiweixiao pushed a commit to zhaohuiweixiao/kubevela that referenced this pull request Mar 7, 2023
@Somefive Somefive deleted the feat/vela-adopt branch June 20, 2023 13:32