
[Proposal] apollo-operator design details #4708

Closed
JaredTan95 opened this issue Jan 19, 2023 · 12 comments · Fixed by apolloconfig/apollo-operator#1

Comments

@JaredTan95
Member

JaredTan95 commented Jan 19, 2023

This proposal follows #4696

Summary

With the rise of Cloud Native and container orchestration, Kubernetes offers new ways to manage a service's lifecycle, making deployment, upgrade, and uninstallation much easier than before. So for the Apollo Configuration Management System ecosystem, it is time to change and follow suit.

Motivation

Many users manage microservices and related systems, including Apollo, in Kubernetes, and the traditional way of managing a system's lifecycle is complicated in the Cloud Native world. We need to find a better way to do lifecycle management in Kubernetes.

Proposal

An Operator extends Kubernetes to automate the management of the entire life cycle of a particular application. Operators serve as a packaging mechanism for distributing applications on Kubernetes, and they monitor, maintain, recover, and upgrade the software they deploy.

Production Design

Backend Operator

  • For an external MySQL DB, you need to create a Secret so that apollo-operator can use it.
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: myapollodbsecret
type: Opaque
data:
  # each value is a base64-encoded string
  HOST: bXktYXBvbGxvbWV0YXNlcnZlci1kYg== # my-apollometaserver-db
  PORT: MzMwNg== # 3306
  USER_NAME: YWRtaW4= # admin
  PASSWORD: MWYyZDFlMmU2N2Rm
EOF
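
Equivalently, the same Secret can be created without hand-encoding each value (the literals below mirror the decoded example values above):

kubectl create secret generic myapollodbsecret \
  --from-literal=HOST=my-apollometaserver-db \
  --from-literal=PORT=3306 \
  --from-literal=USER_NAME=admin \
  --from-literal=PASSWORD=1f2d1e2e67df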

ApolloEnvironment CRD

This CRD is responsible for the LCM of Apollo AdminService and Apollo ConfigService. Its spec also declares which database it uses and which ApolloPortal CR it belongs to.
So, when the user applies this CR, the operator deploys a ConfigService/MetaService and an AdminService and binds them together as a whole.

Besides, the operator also provides an embedded MySQL.

cat <<EOF | kubectl apply -f -
apiVersion: k8s.apolloconfig.com/v1
kind: ApolloEnvironment
metadata:
  name: quickstart-apollo-env
spec:
  version: 1.2
  #ingress: need more investigations
  http:
    service:
      spec:
        type: LoadBalancer # NodePort, ClusterIP (default)
  configServiceCount: 1
  adminServiceCount: 1
  # the embedded mysql is only for POC or demo; otherwise, use the external db way.
  # suggestion: do not support an embedded db in this CR; for a demo, use the all-in-one CR
  mysqlEmbedded: true
  # one of LOCAL, DEV, BETA, FWS, FAT, UAT, LPT, PRO, TOOLS, UNKNOWN; see
  # https://github.com/apolloconfig/apollo/blob/master/apollo-core/src/main/java/com/ctrip/framework/apollo/core/enums/Env.java
  env: DEV
  #mysql:
  #  # the secret holding the external db url and username/password; you should create the secret before applying ApolloEnvironment CRs.
  #  secretRef: myapollodbsecret
EOF

Of course, we also need to detect whether the user is installing for the first time or upgrading to a new version. Since Apollo services are stateless, an upgrade mainly involves database changes, so we need to handle the SQL scripts of different versions. We can create a Job to modify the database schemas or tables.
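
As a minimal sketch of that idea (the image name is an assumption, not an existing artifact; in practice it would wrap the versioned Apollo SQL scripts, e.g. via a tool like Flyway):

# A Job the operator could create on upgrade to apply schema changes.
cat <<EOF | kubectl apply -f -
apiVersion: batch/v1
kind: Job
metadata:
  name: apollo-db-migration
spec:
  backoffLimit: 3
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: apolloconfig/apollo-db-migration:1.2 # hypothetical migration image
          envFrom:
            - secretRef:
                name: myapollodbsecret # DB credentials from the Secret created earlier
EOF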

ApolloPortal CRD

This CRD is responsible for the LCM of the Apollo Portal, and the portal can be used across Apollo environments. Its spec also declares which database it uses, and it allows one ApolloPortal to reference multiple ApolloEnvironment CR instances.

cat <<EOF | kubectl apply -f -
apiVersion: k8s.apolloconfig.com/v1
kind: ApolloPortal
metadata:
  name: quickstart
spec:
  version: 0.0.1
  count: 1
  #ingress: need more investigations
  http:
    service:
      spec:
        type: LoadBalancer # NodePort, ClusterIP (default)
  #mysqlEmbedded: true
  mysql:
    # the secret holding the external db url and username/password; you should create the secret before applying this CR.
    secretRef: myapollodbsecret
  apolloEnvRef:
    - name: quickstart-apollo-env
    - name: another-quickstart-apollo-env
EOF

Apollo CRD

This CRD is responsible for the LCM of an all-in-one Apollo demo, which includes the Portal, AdminService, ConfigService, and a database server; it is recommended for demonstration usage only. So we can also name it All-In-One (ConfigService + AdminService + Portal + embedded MySQL) for demos:

cat <<EOF | kubectl apply -f -
apiVersion: k8s.apolloconfig.com/v1
kind: Apollo
metadata:
  name: quickstart
spec:
  version: 0.0.1
  count: 1
  #ingress: need more investigations
  http:
    service:
      spec:
        type: LoadBalancer # NodePort, ClusterIP (default)
EOF

Technology Selection

  • Development language: Go
  • Operator dev tool: kubebuilder (see the scaffolding sketch below)
  • Build tools: Make & Docker
  • Installation: Helm3 chart
  • Repository: github.com/apolloconfig/apollo-operator
  • CI: GitHub Actions
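
One possible scaffolding of that repository with kubebuilder (the group and domain values are inferred from the k8s.apolloconfig.com/v1 apiVersion used in the sample CRs above):

kubebuilder init --domain apolloconfig.com --repo github.com/apolloconfig/apollo-operator
kubebuilder create api --group k8s --version v1 --kind ApolloEnvironment
kubebuilder create api --group k8s --version v1 --kind ApolloPortal
kubebuilder create api --group k8s --version v1 --kind Apollo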

More details of this design can be found in the design doc: https://docs.google.com/document/d/1p9OJMpLiZvkeFvTBE8GaFjyPD3RhZf1vxNbr6d_gVqo/edit#

@apolloconfig/pmc Looking forward to your comments, and if there are no objections, I will proceed to the next step of creating the github.com/apolloconfig/apollo-operator repo as the next explicit action item.

@kezhenxu94
Member

Some doubts:

  • You bundle the config service and admin service into the ApolloEnvironment CRD but lift ApolloPortal to an individual CRD; what makes the portal different enough to be an individual CRD?
  • You add an Apollo CRD that includes everything in one. From my understanding, "all-in-one" setups are usually for demo purposes, simplifying the steps to set up an environment, but with a Kubernetes operator we already minimize those steps. What's the point of adding the all-in-one CRD? I mean, it's of (nearly) equivalent complexity to set up the all-in-one env and the default demo env (with embedded database), just a couple of commands.

@JaredTan95
Member Author

> You bundle the config service and admin service into the ApolloEnvironment CRD but lift ApolloPortal to an individual CRD; what makes the portal different enough to be an individual CRD?

@kezhenxu94 By Apollo's design, an Apollo portal can manage multiple Apollo environments. And, also by design, every Apollo environment is composed of a config service and an admin service.

> You add an Apollo CRD that includes everything in one. [...] What's the point of adding the all-in-one CRD?

The all-in-one CRD is just a nice-to-have design; we prefer to implement the separate (ApolloEnvironment, ApolloPortal) design first. If users cannot run Apollo with a couple of commands, we might need to consider this implementation.

@nobodyiam
Member

Is it possible to customize the configuration in the CR? e.g.

  1. the meta server address of each environment
  2. the ingress setup of the apollo portal

@JaredTan95
Member Author

> 1. the meta server address of each environment

Do you mean customizing the meta server address in the ApolloPortal CR? In our design, we recommend binding an Apollo environment by referencing the name of its ApolloEnvironment CR, without requiring the user to explicitly specify a meta server address. That means the operator is responsible for generating the meta server address: it uses Kubernetes native service discovery and splices together a fully qualified K8s Service address for the portal to use. If we really need it, we can also add it in the future.
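
As an illustration only (the Service name pattern and the ConfigMap below are assumptions about how this could be implemented, reusing the portal's standard apollo-env.properties format):

# Sketch: the operator could render apollo-env.properties for the portal,
# splicing each referenced environment's in-cluster Service DNS name.
apiVersion: v1
kind: ConfigMap
metadata:
  name: quickstart-portal-config # hypothetical name
data:
  apollo-env.properties: |
    # generated from apolloEnvRef: quickstart-apollo-env (env: DEV)
    dev.meta=http://quickstart-apollo-env-configservice.default.svc.cluster.local:8080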

> 2. the ingress setup of the apollo portal

Ingress we missed; we will add it.

@nobodyiam
Member

> Do you mean customizing the meta server address in the ApolloPortal CR? In our design, we recommend binding an Apollo environment by referencing the name of its ApolloEnvironment CR [...] If we really need it, we can also add it in the future.
>
> Ingress we missed; we will add it.

Sounds good to me :-)

@stale

stale bot commented Mar 5, 2023

This issue has been automatically marked as stale because it has not had activity in the last 30 days. It will be closed in 7 days unless it is tagged "help wanted" or other activity occurs. Thank you for your contributions.

@CSWYF3634076

I am very interested in this OSPP project. If I want to learn about it, is this issue the main reference?

@JaredTan95
Member Author

JaredTan95 commented Apr 19, 2023

> I am very interested in this OSPP project. If I want to learn about it, is this issue the main reference?

You are very welcome, and yes, this issue describes the details of the operator design and the proposal.
But if you really want to get started on this project, you need to learn about Apollo first, and you also need some basic skills such as Golang, Docker, Kubernetes, Helm3 charts, etc.

@CSWYF3634076

Okay, I am proficient in Docker, K8s CRDs, and Helm3, but I am not very familiar with Apollo and have only used it. Let me first learn about Apollo's architecture.

@iyear

iyear commented Apr 19, 2023

Hi everyone here! I'm also interested in the program and want to work on this project under OSPP2023. I have experience with Golang, Docker, Kubernetes, etc. I will delve into Apollo and related materials in a few days. How should we get in touch?

@JaredTan95
Member Author

> How should we get in touch?

If you register for OSPP 2023 and apply for this task on 4/29, we will start the formal process to guide you through finishing it.

@iyear @CSWYF3634076 FYI

@iyear

iyear commented May 3, 2023

What can we do about the ingress part? Two possible structs:

  1. Use the ingress spec directly and expose all fields (like the grafana operator):

ingress:
  annotations:
  labels:
  spec:
    ...

  2. Only expose some fields (like the harbor and jaeger operators):

ingress:
  enabled:
  metadata: # or inline
    annotations:
    labels:
  ingressClassName:
  tls:
    - hosts: []
      secretName:
  hosts:
    - host:
      path: []

I prefer the second one and it is also consistent with the helm chart. WDYT?
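
For instance, option 2 filled in for the portal could look like this (the host and secret names here are made up for illustration):

ingress:
  enabled: true
  metadata:
    annotations: {}
    labels: {}
  ingressClassName: nginx
  tls:
    - hosts: [apollo.example.com]
      secretName: apollo-portal-tls
  hosts:
    - host: apollo.example.com
      path: [/]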
