The vm-based Application Runtime Plugin Architecture #111

Closed
chilianyi opened this Issue Jan 10, 2018 · 13 comments

chilianyi commented Jan 10, 2018

To deploy a VM-based application onto a cloud platform such as QingCloud, AWS, or OpenStack, we need to deploy two kinds of VM-based clusters:

  • a metadata cluster with metad installed (etcd as the backend store)
  • an application cluster with confd installed

The metadata cluster provides the metadata service for the user's applications. Confd is the auto-configuration daemon that runs on the application cluster instances and updates application configurations based on information from the metadata service.


Things we need to design

  1. Where to deploy the metadata cluster: per cloud, per user, or per cloud and per user?
  2. How do the OpenPitrix system, metad, and confd communicate?
  3. Multi-tenancy support

There are two possible solutions for the communication among the components.

  1. Wrap confd and metad in a REST-based service so they can send requests back and forth. For security, the REST service requires a certificate to authorize requests (see the TLS sketch after this list).
  2. Generate an SSH key for the OpenPitrix runtime subsystem and create the metadata VM and application cluster VMs with the key pair, so the runtime can execute commands over SSH without a password.
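
A minimal sketch of the certificate requirement in option 1, assuming the wrapped service is a plain net/http server; the port, file paths, and handler below are illustrative, not a decided API:

package main

import (
    "crypto/tls"
    "crypto/x509"
    "io/ioutil"
    "log"
    "net/http"
)

func main() {
    // CA that signs the client certificates issued to OpenPitrix and to the
    // wrapped confd/metad services (illustrative path).
    caCert, err := ioutil.ReadFile("/etc/openpitrix/ca.crt")
    if err != nil {
        log.Fatal(err)
    }
    caPool := x509.NewCertPool()
    caPool.AppendCertsFromPEM(caCert)

    server := &http.Server{
        Addr: ":9611",
        TLSConfig: &tls.Config{
            // Reject any request that does not present a certificate signed by the CA above.
            ClientAuth: tls.RequireAndVerifyClientCert,
            ClientCAs:  caPool,
        },
        Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            // Hypothetical endpoint; the real metadata API is not defined yet.
            w.Write([]byte("ok"))
        }),
    }
    // Server certificate and key paths are also illustrative.
    log.Fatal(server.ListenAndServeTLS("/etc/openpitrix/server.crt", "/etc/openpitrix/server.key"))
}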

A case: steps to create an application cluster:

  1. The user starts to deploy an application.
  2. The runtime service first checks whether the metadata service has been created; if not, it creates the metadata service in the background.
  3. The runtime service creates the application cluster and starts the confd daemon on each instance.
  4. The runtime service registers the application cluster info with the metadata service.
  5. Confd on each cluster instance watches for changes to the metadata, refreshes its configuration, and executes the reload command if appropriate.
  6. The runtime service registers the application init and start commands (the commands that make the application work) with the metadata service; the cluster then executes these commands from the metadata service (a rough client-side sketch of steps 4 and 6 follows this list). After everything finishes successfully, OpenPitrix transitions the application cluster to the "active" status.
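
A rough client-side sketch of steps 4 and 6, assuming the metadata service exposes a REST endpoint for registering cluster info and commands; the base URL, paths, and payload shapes are placeholders, not a decided API:

package main

import (
    "bytes"
    "encoding/json"
    "log"
    "net/http"
)

// registerToMetadata posts a JSON document to the metadata service.
// The base URL and paths are hypothetical placeholders.
func registerToMetadata(path string, payload interface{}) error {
    body, err := json.Marshal(payload)
    if err != nil {
        return err
    }
    resp, err := http.Post("https://metadata.openpitrix.local:9611"+path, "application/json", bytes.NewReader(body))
    if err != nil {
        return err
    }
    defer resp.Body.Close()
    return nil
}

func main() {
    // Step 4: register the application cluster info.
    if err := registerToMetadata("/clusters/cl-abcdefgh/hosts", map[string]string{"node1": "192.168.0.2"}); err != nil {
        log.Fatal(err)
    }
    // Step 6: register the init and start commands for confd to pick up and execute.
    if err := registerToMetadata("/clusters/cl-abcdefgh/cmd", map[string]string{"init": "init.sh", "start": "start.sh"}); err != nil {
        log.Fatal(err)
    }
}
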
zheng1 commented Jan 10, 2018

The architecture of the SSH-based solution:

(image: architecture)

chilianyi commented Jan 11, 2018

After discussion, we prefer approach one:

Wrap confd and metad in a REST-based service so they can send requests back and forth. For security, the REST service requires a certificate to authorize requests.

For the first version, we use a single central metadata service for all users and multiple clouds, deployed within the OpenPitrix platform.


Refined case: steps to create a cluster:

Prerequisite: the metadata service is installed (metad and etcd are auto-started by supervisord or systemd).

(image: openpitrix_vm1)

  1. The user starts to deploy an application by calling the CreateCluster API.
  2. The ClusterManager module in the Runtime Service parses the cluster config, generates a create-cluster job, and sends the job to the JobManager module in the Runtime Service. When the job is done, it transitions the application cluster status.
  3. The JobManager module in the Runtime Service divides the job into multiple tasks and sends them to the TaskManager module in the Runtime Service. When all tasks are done, it transitions the job status.
  4. The TaskManager module in the Runtime Service checks whether the Metadata VM has been created; if not, it launches the Metadata VM. The Metadata VM has two auto-started modules: Wrapped Metad and Etcd. Wrapped Metad is made up of AgentManager and Metad; Etcd is the backend of Metad. When the task is done, it transitions the task status.
  5. The TaskManager module in the Runtime Service launches the Application Cluster VMs, each of which automatically starts Wrapped Confd containing Agent and Confd (the developer needs to install it into the application image). Wrapped Confd starts the Agent process, while the Confd process stays pending. When the task is done, it transitions the task status.
  6. The Runtime Service registers the application cluster info with Wrapped Metad through the REST API (authentication may be needed in version 2).
  7. The AgentManager of Wrapped Metad in the Metadata VM sends a request to the Agent of Wrapped Confd in the Application Cluster VM, asking the Agent to configure and start Confd.
  8. The Agent of Wrapped Confd in the Application Cluster VM completes the configuration of Confd and starts it.
  9. The Confd of Wrapped Confd in the Application Cluster VM watches for changes in the Metad of Wrapped Metad in the Metadata VM, refreshes its configuration, and executes the reload command if appropriate.
  10. The Runtime Service registers the application init and start commands (the commands that make the application work) into the Metad of Wrapped Metad in the Metadata VM.
  11. The Confd of Wrapped Confd in the Application Cluster VM picks up the commands through its watch and executes them.
  12. The TaskManager module in the Runtime Service periodically sends requests to the AgentManager of Wrapped Metad in the Metadata VM to check the execution status of the commands (see the polling sketch after this list).
  13. The AgentManager of Wrapped Metad in the Metadata VM synchronously sends a tail cmd.log request to the Agent of Wrapped Confd in the Application Cluster VM.
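
Step 12 could look roughly like the loop below; the status endpoint, query parameter, and response fields on the AgentManager side are assumptions:

package main

import (
    "encoding/json"
    "fmt"
    "net/http"
    "time"
)

// cmdStatus mirrors a hypothetical AgentManager response.
type cmdStatus struct {
    TaskID string `json:"taskId"`
    Status string `json:"status"` // e.g. "running", "successful", "failed"
}

// waitForCmd polls the AgentManager until the command finishes or the timeout expires.
func waitForCmd(agentManagerAddr, taskID string, timeout time.Duration) (string, error) {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        resp, err := http.Get(fmt.Sprintf("http://%s/v1/cmd_status?task_id=%s", agentManagerAddr, taskID))
        if err == nil {
            var st cmdStatus
            if json.NewDecoder(resp.Body).Decode(&st) == nil && st.Status != "running" {
                resp.Body.Close()
                return st.Status, nil
            }
            resp.Body.Close()
        }
        time.Sleep(10 * time.Second)
    }
    return "", fmt.Errorf("task %s timed out", taskID)
}
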
zheng1 commented Jan 11, 2018

Multi-tenancy architecture:

(image: vm)

  1. Metadata & Agent Manager is a set of k8s Pods containing metad and the agent manager;
  2. the user creates a VM in a private network, running the Proxy service;
  3. the Proxy uses a gRPC stream to communicate with the Metadata & Agent Manager, like an etcd watch (see the watch sketch after this list);
  4. when the user creates a cluster:
    1. when the VM is created, the Agent starts automatically;
    2. calling chain: Runtime => Metadata => Proxy => Agent. The Agent configures and starts Confd;
    3. Confd gets metadata from the Proxy;
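
The "like an etcd watch" comparison can be made concrete with the clientv3 snippet below; the Proxy <-> Metadata & Agent Manager gRPC stream is meant to push changes the same way, instead of the Proxy polling. The endpoint and key prefix are illustrative:

package main

import (
    "context"
    "log"
    "time"

    "github.com/coreos/etcd/clientv3"
)

func main() {
    cli, err := clientv3.New(clientv3.Config{
        Endpoints:   []string{"127.0.0.1:2379"},
        DialTimeout: 5 * time.Second,
    })
    if err != nil {
        log.Fatal(err)
    }
    defer cli.Close()

    // Every change under the cluster prefix is pushed to us over the stream;
    // the Proxy would consume the Metadata & Agent Manager stream the same way.
    for wresp := range cli.Watch(context.Background(), "/clusters/cl-abcdefgh/", clientv3.WithPrefix()) {
        for _, ev := range wresp.Events {
            log.Printf("%s %q -> %q", ev.Type, ev.Kv.Key, ev.Kv.Value)
        }
    }
}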

Update:
(image: diagram)

Update 2:
(image: diagram)

@rayzhou2017 rayzhou2017 changed the title The vm architecture The vm-based application architecture Jan 12, 2018

chilianyi commented Jan 16, 2018

(image: openpitrix_vm)

rayzhou2017 commented Jan 16, 2018

@chilianyi check the above "Refined case: steps to create a cluster:

Prerequisite: the metadata service is installed (metad and etcd are auto-started by supervisord or systemd)."

chilianyi commented Jan 16, 2018

OK, updated.

rayzhou2017 commented Jan 17, 2018

A couple of things we need to clarify:

  1. We are mixing up the concepts of runtime service and cluster service.
  2. We need to think about multi-tenancy within the architecture.
  3. A k8s app also needs a metadata service, i.e., the architecture should be the same for both kinds of applications. The only differences are the deployment method and the absence of confd for a k8s app (a k8s app might still have configuration info stored in the metadata service).
  4. We need to think about the naming of AgentManager (or AgentController).
  5. Metadata may be a sub-service of OpenPitrix, like Runtime.
  6. Helm Tiller should be deployed in the k8s cluster where OpenPitrix itself is deployed, not in the user's k8s cluster.
  7. The Agent automatically registers the cmd status back to Metad when the cmd finishes (authentication and authorization needed). Metad pushes the status to the TaskManager through a websocket (see the websocket sketch after this list).
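
For point 7, a minimal sketch of the TaskManager side receiving status pushes over a websocket, using gorilla/websocket; the URL and message shape are assumptions, not an agreed protocol:

package main

import (
    "log"

    "github.com/gorilla/websocket"
)

// cmdStatusEvent is a hypothetical message pushed by Metad when a cmd finishes.
type cmdStatusEvent struct {
    TaskID string `json:"taskId"`
    Status string `json:"status"`
}

func main() {
    // Hypothetical Metad websocket endpoint streaming cmd status updates.
    conn, _, err := websocket.DefaultDialer.Dial("ws://metad.openpitrix.local:9611/v1/cmd_status/stream", nil)
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    for {
        var ev cmdStatusEvent
        if err := conn.ReadJSON(&ev); err != nil {
            log.Fatal(err)
        }
        log.Printf("task %s finished with status %s", ev.TaskID, ev.Status)
    }
}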

@rayzhou2017 rayzhou2017 added cluster and removed runtime labels Jan 17, 2018

@rayzhou2017 rayzhou2017 changed the title The vm-based application architecture The vm-based application cluster management architecture Jan 17, 2018

@rayzhou2017 rayzhou2017 changed the title The vm-based application cluster management architecture The vm-based application runtime plugin architecture Jan 18, 2018

@rayzhou2017 rayzhou2017 changed the title The vm-based application runtime plugin architecture The vm-based Application Runtime Plugin Architecture Jan 18, 2018

chilianyi commented Jan 19, 2018

Services in OpenPitrix:

  • Api
  • Cluster(ClusterManager, JobController, TaskController)
  • RuntimeEnv(RuntimeEnvManager)
  • App(AppManager)
  • Repo(RepoManager, RepoIndexer)
  • Tiller: needed when k8s runtime is supported
  • Pilot: needed when vm runtime is supported

Services in a Specific Runtime Env (VM)

  • Frontgate: runs in the Frontgate VM (Etcd, Proxy)
  • Drone: runs in the App VM (Confd, Agent)

CreateCluster Workflow

  1. The user registers a Runtime Env through the GUI, with the following input:
    • Runtime name and description
    • Runtime labels such as QingCloud, K8s
    • User credentials such as an API key
    • Runtime API server address
  2. The RuntimeEnvManager module in the Runtime Env Service saves this information into the DB and manages the CRUD of the Runtime Env info afterwards.
  3. The user sends an API request to create a cluster in a runtime env of their choice.
    • The user can choose a vxnet that is already bound to a router with an EIP already bound to that router.
    • If none is chosen or none is available, the system creates the router, vxnet, and EIP automatically.
  4. The ClusterManager module in the Cluster Service parses the files in the config package and the env params changed by the user, extracts the common info, and writes it into the DB. It then generates a CreateCluster job and sends it to a job queue. When the job is done, it transitions the application cluster status.
    • Pending job queue: jobs that are waiting to execute.
    • Running job queue: jobs that are running.
  5. The JobController module in the Runtime Service picks up the job and parses it into an execution plan with multiple tasks, then sends the tasks to a task queue. When all tasks in the execution plan are done, it transitions the job status.
    • Pending task queue: tasks that are waiting to execute.
    • Running task queue: tasks that are running. There are two kinds of tasks (see the dispatcher sketch after this list):
      • Cloud-specific API tasks, such as RunInstance, CreateRouter, InstallRelease (Helm), etc. The TaskController calls the corresponding Describe API periodically to check whether the task has finished.
      • Metadata tasks, such as RegisterMetadata, RegisterCmd. The TaskController waits and checks the cmd status from Pilot.
  6. The TaskController module checks whether the Frontgate VM has been created; if not, it launches the Frontgate VM with Pilot's IP as the VM's userdata. The Frontgate service is auto-started.
    • DB table metadata: cluster_id, router_id, user_id, runtime_env_id.
      If a (router_id, user_id, runtime_env_id) record exists, the Frontgate VM has already been created;
      otherwise, add a new DB record and create the Frontgate VM with (cluster_id, router_id, user_id, runtime_env_id) as userdata.
  7. The TaskController module launches the Application Cluster VMs, each of which automatically starts the Drone containing Agent and Confd (the developer needs to install it into the application image). The AppAgent starts the Slave process, while the Confd process stays pending.
    • The user should be able to run an instance with the developer's image, so the image should be a generic one with Docker in it and should be public in the image market. When confd is launched, the Docker image will be pulled.
  8. The Proxy in Frontgate connects to Pilot through the IP written at VM-creation time and establishes TCP connections, so that Pilot is able to send requests to Frontgate.
  9. The RegisterMetadata task sends a request to Pilot.
    • Register under /user_id/runtime_env_id/router_id/cluster_id/(specific metadata).
  10. Pilot sends a request to the Proxy of Frontgate to register the cluster metadata info through the TCP connection established in step 8.
  11. The Proxy module of Frontgate sends a request to the Agent of the Drone, which configures and starts Confd.
  12. The Confd of the Drone watches for changes in the info served by the Proxy of Frontgate, refreshes its configuration, and executes reload_cmd if appropriate.
  13. The RegisterCmd task sends a request to Pilot with the init and start commands (the commands that make the application work).
  14. Pilot sends a request to the Proxy to register the cmd.
  15. The Confd of the Drone picks up the cmd through its watch and executes it.
  16. The Drone keeps tailing cmd.log; when a cmd finishes, it reports the status to the Proxy.
  17. The Proxy checks the cmd; if it exists in the Proxy, it registers the cmd status for persistence and reports the cmd status to Pilot with retries (no ack is sent until Pilot has stored the cmd status in the DB).
  18. Pilot stores the cmd status in its own cmd DB (so that after a Pilot upgrade and restart, the task can still get the cmd status).
  19. The TaskController checks the cmd status from Pilot periodically. When the cmd is done, it changes the status of the task.
  20. When the task is finished, the TaskController notifies the JobController to check the job the task belongs to; if the task is the last one in the job, it updates the status of the job and the status of the cluster, otherwise it moves on to the next task of the job.
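
A rough sketch of how the TaskController could branch on the two kinds of running tasks from step 5; the function signatures and status strings are placeholders, not the real implementation:

package main

import (
    "fmt"
    "time"
)

// Task is a simplified view of a task record; the fields are illustrative.
type Task struct {
    TaskID string
    Target string // e.g. "qingcloud", "kubernetes", "pilot"
    Action string // e.g. "RunInstances", "RegisterCmd"
}

// waitTask polls until the task finishes, using the check that matches the task kind.
func waitTask(t Task, describeCloudJob, describePilotCmd func(taskID string) (string, error)) error {
    for {
        var status string
        var err error
        if t.Target == "pilot" {
            // Metadata task: ask Pilot for the cmd status.
            status, err = describePilotCmd(t.TaskID)
        } else {
            // Cloud-specific API task: call the provider's Describe API.
            status, err = describeCloudJob(t.TaskID)
        }
        if err != nil {
            return err
        }
        switch status {
        case "successful":
            return nil
        case "failed":
            return fmt.Errorf("task %s failed", t.TaskID)
        }
        time.Sleep(5 * time.Second)
    }
}
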
martinyunify commented Feb 2, 2018

I've got two quick questions: how is the kubernetes plugin invoked (is there any interface definition available)? And how can I publish a k8s service to the metadata service?

chilianyi commented Feb 2, 2018

  1. Yes, here is the interface; it is not the final version (a rough placeholder sketch follows).
  2. When deploying a k8s app, tiller will be used; nothing is published to the metadata service.
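
As a rough placeholder sketch of a runtime plugin interface along these lines; the method set is a guess, not the interface referenced above:

package plugins

// ProviderInterface is a hypothetical sketch of a runtime plugin;
// the real interface linked above may differ.
type ProviderInterface interface {
    // RunInstances creates the VMs (or installs the Helm release for a k8s runtime) for a task.
    RunInstances(taskDirective string) error
    // DescribeTask reports whether an earlier call has finished.
    DescribeTask(taskID string) (status string, err error)
    // DeleteCluster tears the cluster down.
    DeleteCluster(clusterID string) error
}
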
chilianyi commented Feb 5, 2018

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster` (
  `cluster_id` VARCHAR(50) NOT NULL,
  `name` VARCHAR(50) NULL,
  `description` VARCHAR(1000) NULL,
  `app_id` VARCHAR(50) NOT NULL,
  `app_version` VARCHAR(50) NOT NULL,
  `frontgate_id` VARCHAR(50) NULL,
  `cluster_type` INT(11) NOT NULL,
  `endpoints` VARCHAR(1000) NULL,
  `status` VARCHAR(50) NOT NULL,
  `transition_status` VARCHAR(50) NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `status_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `owner` VARCHAR(255) NOT NULL,
  `metadata_root_access` TINYINT(1) NULL,
  `global_uuid` MEDIUMTEXT NOT NULL,
  `upgrade_status` VARCHAR(50) NOT NULL,
  `upgrade_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `auto_backup_time` INT(11) NOT NULL,
  `security_group_id` VARCHAR(50) NOT NULL,
  `runtime_env_id` VARCHAR(50) NOT NULL,
  INDEX `cluster_status_index` (`status` ASC),
  INDEX `cluster_create_time_index` (`create_time` ASC),
  INDEX `cluster_owner_index` (`owner` ASC),
  PRIMARY KEY (`cluster_id`));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster_node` (
  `node_id` INT NOT NULL,
  `name` VARCHAR(50) NULL,
  `cluster_id` VARCHAR(50) NOT NULL,
  `instance_id` VARCHAR(50) NOT NULL,
  `volume_id` VARCHAR(50) NOT NULL,
  `vxnet_id` VARCHAR(50) NOT NULL,
  `private_ip` VARCHAR(50) NOT NULL,
  `server_id` INT(11) NULL,
  `role` VARCHAR(50) NOT NULL,
  `status` VARCHAR(50) NOT NULL,
  `transition_status` VARCHAR(50) NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `status_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `owner` VARCHAR(255) NOT NULL,
  `group_id` INT(11) NOT NULL,
  `global_server_id` MEDIUMTEXT NOT NULL,
  `custom_metadata` TEXT NULL,
  `is_backup` TINYINT(1) NOT NULL DEFAULT 0,
  `auto_backup` TINYINT(1) NOT NULL DEFAULT 0,
  `reserved_ips` TEXT NULL,
  `pub_key` TEXT NULL,
  `health_status` VARCHAR(50) NOT NULL,
  PRIMARY KEY (`node_id`),
  INDEX `cluster_node_cluster_id_index` (`cluster_id` ASC),
  INDEX `cluster_node_status_index` (`status` ASC),
  INDEX `cluster_node_create_time_index` (`create_time` ASC),
  INDEX `cluster_node_owner_index` (`owner` ASC));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster_common` (
  `app_id` VARCHAR(50) NOT NULL,
  `app_version` VARCHAR(50) NOT NULL,
  `role` VARCHAR(50) NOT NULL,
  `server_id_upper_bound` INT(11) NOT NULL,
  `advanced_actions` TEXT NULL,
  `init_service` TEXT NULL,
  `start_service` TEXT NULL,
  `stop_service` TEXT NULL,
  `scale_out_service` TEXT NULL,
  `scale_in_service` TEXT NULL,
  `restart_service` TEXT NULL,
  `destroy_service` TEXT NULL,
  `upgrade_service` TEXT NULL,
  `custom_service` TEXT NULL,
  `health_check` TEXT NULL,
  `monitor` TEXT NULL,
  `display_tabs` TEXT NULL,
  `passphraseless` TEXT NULL,
  `vertical_scaling_policy` VARCHAR(50) NOT NULL DEFAULT 'parallel',
  `agent_installed` TINYINT(1) NOT NULL,
  `custom_metadata_script` TEXT NULL,
  `image_id` TEXT NULL,
  `backup_service` TEXT NULL,
  `backup_policy` VARCHAR(50) NULL,
  `restore_service` TEXT NULL,
  `delete_snapshot_service` TEXT NULL,
  `incremental_backup_supported` TINYINT(1) NOT NULL DEFAULT 0,
  `hypervisor` VARCHAR(50) NOT NULL DEFAULT 'kvm',
  PRIMARY KEY (`app_id`, `app_version`, `role`));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster_snapshot` (
  `snapshot_id` INT NOT NULL,
  `role` VARCHAR(50) NOT NULL,
  `server_id` TEXT NOT NULL,
  `count` INT(11) NOT NULL,
  `app_id` VARCHAR(50) NOT NULL,
  `app_version` VARCHAR(50) NOT NULL,
  `child_snapshot_ids` TEXT NOT NULL,
  `size` INT(11) NOT NULL,
  PRIMARY KEY (`snapshot_id`, `role`, `server_id`));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster_upgrade_audit` (
  `cluster_upgrade_audit_id` VARCHAR(50) NOT NULL,
  `cluster_id` VARCHAR(50) NOT NULL,
  `from_app_version` VARCHAR(50) NOT NULL,
  `to_app_version` VARCHAR(50) NOT NULL,
  `service_params` TEXT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `upgrade_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `status` VARCHAR(50) NOT NULL,
  `owner` VARCHAR(255) NOT NULL,
  PRIMARY KEY (`cluster_upgrade_audit_id`),
  INDEX `cluster_upgrade_audit_cluster_index` (`cluster_id` ASC),
  INDEX `cluster_upgrade_audit_owner_index` (`owner` ASC));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster_link` (
  `cluster_id` VARCHAR(50) NOT NULL,
  `name` VARCHAR(50) NOT NULL,
  `external_cluster_id` VARCHAR(50) NOT NULL,
  `owner` VARCHAR(255) NOT NULL,
  PRIMARY KEY (`cluster_id`, `name`),
  INDEX `cluster_link_name_index` (`name` ASC),
  INDEX `cluster_link_owner_index` (`owner` ASC));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster_role` (
  `cluster_id` INT NOT NULL,
  `role` VARCHAR(50) NOT NULL,
  `cpu` INT(11) NOT NULL,
  `gpu` INT(11) NOT NULL,
  `memory` INT(11) NOT NULL,
  `instance_size` INT(11) NOT NULL,
  `storage_size` INT(11) NOT NULL,
  `env` TEXT NULL,
  PRIMARY KEY (`cluster_id`, `role`));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cluster_loadbalancer` (
  `cluster_id` INT NOT NULL,
  `role` VARCHAR(50) NOT NULL,
  `loadbalancer_listener_id` VARCHAR(50) NOT NULL,
  `loadbalancer_port` INT(11) NOT NULL,
  `loadbalancer_policy_id` VARCHAR(50) NOT NULL,
  PRIMARY KEY (`cluster_id`, `role`, `loadbalancer_listener_id`),
  INDEX `cluster_loadbalancer_loadbalancer_listener_id_index` (`loadbalancer_listener_id` ASC),
  INDEX `cluster_loadbalancer_loadbalancer_policy_id_index` (`loadbalancer_policy_id` ASC));

CREATE TABLE IF NOT EXISTS `openpitrix`.`job` (
  `job_id` VARCHAR(50) NOT NULL,
  `cluster_id` VARCHAR(50) NOT NULL,
  `app_id` VARCHAR(50) NOT NULL,
  `app_version` VARCHAR(50) NOT NULL,
  `job_action` VARCHAR(50) NOT NULL,
  `status` VARCHAR(50) NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `status_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `error_codes` TEXT NOT NULL,
  `owner` VARCHAR(255) NOT NULL,
  `directive` TEXT NOT NULL,
  `executor` VARCHAR(50) NOT NULL,
  `task_count` INT(11) NOT NULL,
  PRIMARY KEY (`job_id`),
  INDEX `job_create_time_index` (`create_time` ASC),
  INDEX `job_job_action_index` (`job_action` ASC),
  INDEX `job_owner_index` (`owner` ASC),
  INDEX `job_status_index` (`status` ASC));

CREATE TABLE IF NOT EXISTS `openpitrix`.`task` (
  `task_id` VARCHAR(50) NOT NULL,
  `job_id` VARCHAR(50) NOT NULL,
  `status` VARCHAR(50) NOT NULL,
  `directive` TEXT NOT NULL,
  `create_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `status_time` TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
  `error_code` INT(11) NOT NULL,
  `owner` VARCHAR(255) NOT NULL,
  `executor` VARCHAR(50) NOT NULL,
  `node_id` VARCHAR(50) NOT NULL,
  `runtime_env_id` VARCHAR(50) NOT NULL,
  PRIMARY KEY (`task_id`));

CREATE TABLE IF NOT EXISTS `openpitrix`.`cmd` (
  `task_id` INT NOT NULL,
  `frontgate_id` VARCHAR(50) NOT NULL,
  `cmd` VARCHAR(50) NOT NULL,
  `cmd_status` VARCHAR(50) NOT NULL,
  `node_id` VARCHAR(50) NOT NULL,
  PRIMARY KEY (`task_id`));

(image: cluster)

@martinyunify martinyunify added this to In progress in Repo Feb 5, 2018

@martinyunify martinyunify moved this from In progress to To Do in Repo Feb 5, 2018

@martinyunify martinyunify removed this from To Do in Repo Feb 5, 2018

chilianyi commented Feb 13, 2018

Task

{
  "action": "RunInstances",
  "taskId": "t-abcdefgh",
  "target": "qingcloud",
  "directive": {"image_id": "i-xxxxxxxx", cpu": 2, "memory": 2048, "instance_name":"cl-abcdefgh.cln-abcdefgh.role", "timeout": 600}
}
{
  "action": "InstallRelease",
  "taskId": "t-abcdefgh",
  "target": "kubernetes",
  "directive": {"values": "", "timeout": 600}
}
{
  "action": "StartConfd",
  "taskId": "t-abcdefgh",
  "target": "pilot",
  "directive": {"frontgateId": "cl-abcdefgh", "ip": "192.168.0.1", "timeout": 600}
}
{
  "action": "RegisterMetadata",
  "taskId": "t-abcdefgh",
  "target": "pilot",
  "directive": {"frontgateId": "cl-abcdefgh", "cmd": "mkdir -p /data/cnodes/cl-abcdefgh;echo {\"key\", \"value\"} > /data/cnodes/cl-abcdefgh/cnodes.json_tmp; mv /data/cnodes/cl-abcdefgh/cnodes.json_tmp /data/cnodes/cl-abcdefgh/cnodes.json;/opt/metad/register-data.sh 9611 /data/cnodes/cl-abcdefgh/cnodes.json", "timeout": 600, "retry": 5}
}
{
  "action": "RegisterCmd"/"DeregisterCmd"/"DeregisterMetadata",
  "taskId": "t-abcdefgh",
  "target": "pilot",
  "directive": {"frontgateId": "cl-abcdefgh", "ip": "192.168.0.1", cmd": "mkdir -p /data/cnodes/cl-abcdefgh;echo {\"key\", \"value\"} > /data/cnodes/cl-abcdefgh/cnodes.json_tmp; mv /data/cnodes/cl-abcdefgh/cnodes.json_tmp /data/cnodes/cl-abcdefgh/cnodes.json;/opt/metad/register-data.sh 9611 /data/cnodes/cl-abcdefgh/cnodes.json", "timeout": 600, "retry": 5}
}
switch action:
case "RunInstances":
  RunInstances
  DescribeJobs
  DescribeInstances
  UpdateClusterNodesIp
case "InstallRelease":
  Describe
  UpdateCluster
  UpdateClusterNodes
case "RegisterMetadata"/"RegisterCmd"/"DeregisterCmd"/"DeregisterMetadata":
  DescribeTaskStatus
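
The task payloads above deserialize naturally into one struct, and the action switch can be written as a small dispatcher; a Go sketch, with the handler bodies left as comments matching the cases above:

package main

import (
    "encoding/json"
    "fmt"
    "log"
)

// Task mirrors the JSON task payloads shown above.
type Task struct {
    Action    string          `json:"action"`
    TaskID    string          `json:"taskId"`
    Target    string          `json:"target"`
    Directive json.RawMessage `json:"directive"` // parsed per action by the handler
}

func handleTask(raw []byte) error {
    var t Task
    if err := json.Unmarshal(raw, &t); err != nil {
        return err
    }
    switch t.Action {
    case "RunInstances":
        // RunInstances -> DescribeJobs -> DescribeInstances -> UpdateClusterNodesIp
    case "InstallRelease":
        // Describe -> UpdateCluster -> UpdateClusterNodes
    case "RegisterMetadata", "RegisterCmd", "DeregisterCmd", "DeregisterMetadata":
        // DescribeTaskStatus against Pilot
    default:
        return fmt.Errorf("unknown action %q", t.Action)
    }
    return nil
}

func main() {
    raw := []byte(`{"action": "RunInstances", "taskId": "t-abcdefgh", "target": "qingcloud",
        "directive": {"image_id": "i-xxxxxxxx", "cpu": 2, "memory": 2048, "timeout": 600}}`)
    if err := handleTask(raw); err != nil {
        log.Fatal(err)
    }
}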

@zheng1 zheng1 closed this Aug 23, 2018
