
build(deps): Bump azure/setup-helm from 3 to 4 #86

Open · wants to merge 1 commit into base: main
Conversation

dependabot[bot] (Contributor) commented on behalf of GitHub on Mar 3, 2024

Bumps azure/setup-helm from 3 to 4.

Release notes

Sourced from azure/setup-helm's releases.

v4

Latest v4 release

v4.0.0

  • #121 update to node20 as node16 is deprecated

v3.5 release

Bump @actions/core version to remove output warning.

v3.4 release

Improves the querying method to find the latest Helm release. Takes advantage of new GitHub API changes.

v3.3 release

Add token input, needed for fetching the latest Helm release.

v3.1 release

Swap to GraphQL GitHub API
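
For reference, the only change this bump requires in a workflow is the action's major version tag. A minimal sketch of an affected step, assuming a hypothetical workflow that installs Helm before working with the charts in this repository (the file name, job layout, and pinned Helm version below are illustrative, not taken from this repo):

# .github/workflows/helm.yml (hypothetical example)
jobs:
  helm:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Previously: azure/setup-helm@v3 (runs on the deprecated node16 runtime)
      - uses: azure/setup-helm@v4   # v4 runs on node20
        with:
          version: v3.14.0          # optional: pin a Helm version instead of the default "latest"
      - run: helm version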


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options

You can trigger Dependabot actions by commenting on this PR:

  • @dependabot rebase will rebase this PR
  • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
  • @dependabot merge will merge this PR after your CI passes on it
  • @dependabot squash and merge will squash and merge this PR after your CI passes on it
  • @dependabot cancel merge will cancel a previously requested merge and block automerging
  • @dependabot reopen will reopen this PR if it is closed
  • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • @dependabot show <dependency name> ignore conditions will show all of the ignore conditions of the specified dependency
  • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)

Bumps [azure/setup-helm](https://github.com/azure/setup-helm) from 3 to 4.
- [Release notes](https://github.com/azure/setup-helm/releases)
- [Commits](Azure/setup-helm@v3...v4)

---
updated-dependencies:
- dependency-name: azure/setup-helm
  dependency-type: direct:production
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>
dependabot[bot] requested review from cubxxw and a team as code owners on March 3, 2024 at 19:52
dependabot[bot] added the dependencies label (Pull requests that update a dependency file) on Mar 3, 2024
pull-request-size bot added the size/XS label (Denotes a PR that changes 0-9 lines, ignoring generated files) on Mar 3, 2024
kubbot commented Mar 3, 2024

Kubernetes Templates in openim Namespace

openim templates get ./charts/openim-server -f k8s-open-im-server-config.yaml -f config-imserver.yaml
---
# Source: openim-api/templates/app-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: openim-cm
data:
  config.yaml: |+
    api:
      listenIP: 0.0.0.0
      openImApiPort:
      - 80
    callback:
      afterSendGroupMsg:
        enable: false
        timeout: 5
      afterSendSingleMsg:
        enable: false
        timeout: 5
      beforeAddFriend:
        enable: false
        failedContinue: true
        timeout: 5
      beforeCreateGroup:
        enable: false
        failedContinue: true
        timeout: 5
      beforeMemberJoinGroup:
        enable: false
        failedContinue: true
        timeout: 5
      beforeSendGroupMsg:
        enable: false
        failedContinue: true
        timeout: 5
      beforeSendSingleMsg:
        enable: false
        failedContinue: true
        timeout: 5
      beforeSetGroupMemberInfo:
        enable: false
        failedContinue: true
        timeout: 5
      msgModify:
        enable: false
        failedContinue: true
        timeout: 5
      offlinePush:
        enable: false
        failedContinue: true
        timeout: 5
      onlinePush:
        enable: false
        failedContinue: true
        timeout: 5
      setMessageReactionExtensions:
        enable: false
        failedContinue: true
        timeout: 5
      superGroupOnlinePush:
        enable: false
        failedContinue: true
        timeout: 5
      url: null
      userKickOff:
        enable: false
        timeout: 5
      userOffline:
        enable: false
        timeout: 5
      userOnline:
        enable: false
        timeout: 5
    chatPersistenceMysql: true
    chatRecordsClearTime: 0 2 * * 3
    envs:
      discovery: k8s
    groupMessageHasReadReceiptEnable: true
    iosPush:
      badgeCount: true
      production: false
      pushSound: xxx
    kafka:
      addr:
      - im-kafka:9092
      consumerGroupID:
        msgToMongo: mongo
        msgToMySql: mysql
        msgToPush: push
        msgToRedis: redis
      latestMsgToRedis:
        topic: latestMsgToRedis
      msgToPush:
        topic: msgToPush
      offlineMsgToMongo:
        topic: offlineMsgToMongoMysql
      password: proot
      username: root
    log:
      isJson: false
      isStdout: true
      remainLogLevel: 6
      remainRotationCount: 2
      rotationTime: 24
      storageLocation: ../logs/
      withStack: false
    longConnSvr:
      openImMessageGatewayPort:
      - 88
      openImWsPort:
      - 80
      websocketMaxConnNum: 100000
      websocketMaxMsgLen: 4096
      websocketTimeout: 10
    manager:
      nickname:
      - system1
      - system2
      - system3
      userID:
      - openIM123456
      - openIM654321
      - openIMAdmin
    messageVerify:
      friendVerify: false
    mongo:
      address:
      - im-mongodb:27017
      database: openIM_v3
      maxPoolSize: 100
      password: openIM123
      uri: ""
      username: root
    msgCacheTimeout: 86400
    msgDestructTime: 0 2 * * *
    multiLoginPolicy: 1
    mysql:
      address:
      - im-mysql:3306
      database: openIM_v3
      logLevel: 4
      maxIdleConn: 100
      maxLifeTime: 60
      maxOpenConn: 1000
      password: openIM123
      slowThreshold: 500
      username: root
    object:
      apiURL: https://openim1.server.top/api
      cos:
        bucketURL: https://temp-1252357374.cos.ap-chengdu.myqcloud.com
        secretID: ""
        secretKey: ""
        sessionToken: ""
      enable: minio
      minio:
        accessKeyID: root
        bucket: openim
        endpoint: http://im-minio:9000
        secretAccessKey: openIM123
        sessionToken: ""
        signEndpoint: https://openim1.server.top/im-minio-api
      oss:
        accessKeyID: ""
        accessKeySecret: ""
        bucket: demo-9999999
        bucketURL: https://demo-9999999.oss-cn-chengdu.aliyuncs.com
        endpoint: https://oss-cn-chengdu.aliyuncs.com
        sessionToken: ""
    prometheus:
      apiPrometheusPort:
      - 90
      authPrometheusPort:
      - 90
      conversationPrometheusPort:
      - 90
      enable: false
      friendPrometheusPort:
      - 90
      grafanaUrl: https://openim2.server.top/
      groupPrometheusPort:
      - 90
      messageGatewayPrometheusPort:
      - 90
      messagePrometheusPort:
      - 90
      messageTransferPrometheusPort:
      - 90
      - 90
      - 90
      - 90
      pushPrometheusPort:
      - 90
      rtcPrometheusPort:
      - 90
      thirdPrometheusPort:
      - 90
      userPrometheusPort:
      - 90
    push:
      enable: getui
      fcm:
        serviceAccount: x.json
      geTui:
        appKey: ""
        channelID: ""
        channelName: ""
        intent: ""
        masterSecret: ""
        pushUrl: https://restapi.getui.com/v2/$appId
      jpns:
        appKey: null
        masterSecret: null
        pushIntent: null
        pushUrl: null
    redis:
      address:
      - im-redis-master:6379
      password: openIM123
      username: ""
    retainChatRecords: 365
    rpc:
      listenIP: 0.0.0.0
      registerIP: ""
    rpcPort:
      openImAuthPort:
      - 80
      openImConversationPort:
      - 80
      openImFriendPort:
      - 80
      openImGroupPort:
      - 80
      openImMessageGatewayPort:
      - 88
      openImMessagePort:
      - 80
      openImPushPort:
      - 80
      openImThirdPort:
      - 80
      openImUserPort:
      - 80
    rpcRegisterName:
      openImAuthName: openimserver-openim-rpc-auth:80
      openImConversationName: openimserver-openim-rpc-conversation:80
      openImFriendName: openimserver-openim-rpc-friend:80
      openImGroupName: openimserver-openim-rpc-group:80
      openImMessageGatewayName: openimserver-openim-msggateway:88
      openImMsgName: openimserver-openim-rpc-msg:80
      openImPushName: openimserver-openim-push:80
      openImThirdName: openimserver-openim-rpc-third:80
      openImUserName: openimserver-openim-rpc-user:80
    secret: openIM123
    singleMessageHasReadReceiptEnable: true
    tokenPolicy:
      expire: 90
    zookeeper:
      address:
      - 172.28.0.1:12181
      password: ""
      schema: openim
      username: ""
  notification.yaml: |+
---
# Source: openim-api/charts/openim-msggateway-proxy/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-msggateway-proxy
  labels:
    helm.sh/chart: openim-msggateway-proxy-0.1.0
    app.kubernetes.io/name: openim-msggateway-proxy
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 88
      targetPort: rpc
      protocol: TCP
      name: rpc
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-msggateway-proxy
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-msggateway/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-msggateway
  labels:
    helm.sh/chart: openim-msggateway-0.1.0
    app.kubernetes.io/name: openim-msggateway
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 88
      targetPort: rpc
      protocol: TCP
      name: rpc
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-msggateway
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-msggateway/templates/serviceheadless.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-msggateway-headless
  labels:
    helm.sh/chart: openim-msggateway-0.1.0
    app.kubernetes.io/name: openim-msggateway
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 88
      targetPort: rpc
      protocol: TCP
      name: rpc
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-msggateway
    app.kubernetes.io/instance: release-name
  clusterIP: None
---
# Source: openim-api/charts/openim-msgtransfer/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-msgtransfer
  labels:
    helm.sh/chart: openim-msgtransfer-0.1.0
    app.kubernetes.io/name: openim-msgtransfer
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-msgtransfer
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-push/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-push
  labels:
    helm.sh/chart: openim-push-0.1.0
    app.kubernetes.io/name: openim-push
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-push
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-rpc-auth/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-rpc-auth
  labels:
    helm.sh/chart: openim-rpc-auth-0.1.0
    app.kubernetes.io/name: openim-rpc-auth
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-rpc-auth
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-rpc-conversation/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-rpc-conversation
  labels:
    helm.sh/chart: openim-rpc-conversation-0.1.0
    app.kubernetes.io/name: openim-rpc-conversation
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-rpc-conversation
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-rpc-friend/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-rpc-friend
  labels:
    helm.sh/chart: openim-rpc-friend-0.1.0
    app.kubernetes.io/name: openim-rpc-friend
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-rpc-friend
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-rpc-group/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-rpc-group
  labels:
    helm.sh/chart: openim-rpc-group-0.1.0
    app.kubernetes.io/name: openim-rpc-group
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-rpc-group
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-rpc-msg/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-rpc-msg
  labels:
    helm.sh/chart: openim-rpc-msg-0.1.0
    app.kubernetes.io/name: openim-rpc-msg
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-rpc-msg
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-rpc-third/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-rpc-third
  labels:
    helm.sh/chart: openim-rpc-third-0.1.0
    app.kubernetes.io/name: openim-rpc-third
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-rpc-third
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-rpc-user/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-rpc-user
  labels:
    helm.sh/chart: openim-rpc-user-0.1.0
    app.kubernetes.io/name: openim-rpc-user
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-rpc-user
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-openim-api
  labels:
    helm.sh/chart: openim-api-0.1.16
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
    - port: 90
      targetPort: 90
      protocol: TCP
      name: metrics-port
  selector:
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
---
# Source: openim-api/charts/openim-msggateway-proxy/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-msggateway-proxy
  labels:
    helm.sh/chart: openim-msggateway-proxy-0.1.0
    app.kubernetes.io/name: openim-msggateway-proxy
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-msggateway-proxy
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-msggateway-proxy
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-msggateway-proxy
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway-proxy:v3.5.0"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: rpc
              containerPort: 88
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-msgtransfer/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-msgtransfer
  labels:
    helm.sh/chart: openim-msgtransfer-0.1.0
    app.kubernetes.io/name: openim-msgtransfer
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-msgtransfer
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-msgtransfer
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-msgtransfer
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msgtransfer:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-push/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-push
  labels:
    helm.sh/chart: openim-push-0.1.0
    app.kubernetes.io/name: openim-push
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-push
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-push
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-push
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-push:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-auth/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-auth
  labels:
    helm.sh/chart: openim-rpc-auth-0.1.0
    app.kubernetes.io/name: openim-rpc-auth
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-auth
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-auth
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-rpc-auth
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-auth:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-conversation/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-conversation
  labels:
    helm.sh/chart: openim-rpc-conversation-0.1.0
    app.kubernetes.io/name: openim-rpc-conversation
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-conversation
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-conversation
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-rpc-conversation
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-conversation:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-friend/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-friend
  labels:
    helm.sh/chart: openim-rpc-friend-0.1.0
    app.kubernetes.io/name: openim-rpc-friend
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-friend
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-friend
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-rpc-friend
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-friend:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-group/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-group
  labels:
    helm.sh/chart: openim-rpc-group-0.1.0
    app.kubernetes.io/name: openim-rpc-group
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-group
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-group
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-rpc-group
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-group:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-msg/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-msg
  labels:
    helm.sh/chart: openim-rpc-msg-0.1.0
    app.kubernetes.io/name: openim-rpc-msg
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-msg
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-msg
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-rpc-msg
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-msg:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-third/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-third
  labels:
    helm.sh/chart: openim-rpc-third-0.1.0
    app.kubernetes.io/name: openim-rpc-third
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-third
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-third
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-rpc-third
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-third:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-rpc-user/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-rpc-user
  labels:
    helm.sh/chart: openim-rpc-user-0.1.0
    app.kubernetes.io/name: openim-rpc-user
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-rpc-user
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-rpc-user
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-rpc-user
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-rpc-user:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-openim-api
  labels:
    helm.sh/chart: openim-api-0.1.16
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-api
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-api:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-msggateway/templates/deployment.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: release-name-openim-msggateway
  labels:
    helm.sh/chart: openim-msggateway-0.1.0
    app.kubernetes.io/name: openim-msggateway
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  serviceName: release-name-openim-msggateway-headless
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: openim-msggateway
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: openim-msggateway
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: openim-msggateway
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/openim-msggateway:release-v3.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
            - name: rpc
              containerPort: 88
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          env:
            - name: MY_MSGGATEWAY_REPLICACOUNT
              value: "1"
            - name: MY_POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: MY_POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-server/config/config.yaml
              name: config
              subPath: config.yaml
            - mountPath: /openim/openim-server/config/notification.yaml
              name: config
              subPath: notification.yaml
      volumes:
        - name: config
          configMap:
            name: openim-cm
---
# Source: openim-api/charts/openim-msggateway-proxy/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-openim-msggateway-proxy
  labels:
    helm.sh/chart: openim-msggateway-proxy-0.1.0
    app.kubernetes.io/name: openim-msggateway-proxy
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /msg_gateway(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-openim-msggateway-proxy
                port:
                  number: 80
---
# Source: openim-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-openim-api
  labels:
    helm.sh/chart: openim-api-0.1.16
    app.kubernetes.io/name: openim-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /api(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-openim-api
                port:
                  number: 80
openim templates get ./charts/openim-chat -f k8s-chat-server-config.yaml -f config-chatserver.yaml
---
# Source: admin-api/templates/app-cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: imchat-cm
data:
  config.yaml: |+
    adminApi:
      listenIP: null
      openImAdminApiPort:
      - 80
    adminList:
    - adminID: admin1
      imAdmin: openIM123456
      nickname: chat1
    - adminID: admin2
      imAdmin: openIM654321
      nickname: chat2
    - adminID: admin3
      imAdmin: openIMAdmin
      nickname: chat3
    chatApi:
      listenIP: null
      openImChatApiPort:
      - 80
    envs:
      discovery: k8s
    log:
      isJson: false
      isStdout: true
      remainLogLevel: 6
      remainRotationCount: 2
      rotationTime: 24
      storageLocation: ../logs/
      withStack: false
    mysql:
      address:
      - im-mysql:3306
      database: openim_enterprise
      logLevel: 4
      maxIdleConn: 100
      maxLifeTime: 60
      maxOpenConn: 1000
      password: openIM123
      slowThreshold: 500
      username: root
    openIMUrl: http://openimserver-openim-api
    redis:
      address:
      - im-redis-master:6379
      password: openIM123
      username: ""
    rpc:
      listenIP: null
      registerIP: null
    rpcPort:
      openImAdminPort:
      - 80
      openImChatPort:
      - 80
    rpcRegisterName:
      openImAdminName: openimchat-admin-rpc:80
      openImChatName: openimchat-chat-rpc:80
    secret: openIM123
    tokenPolicy:
      expire: 86400
    verifyCode:
      ali:
        accessKeyId: ""
        accessKeySecret: ""
        endpoint: dysmsapi.aliyuncs.com
        signName: ""
        verificationCodeTemplateCode: ""
      len: 6
      maxCount: 10
      superCode: "666666"
      uintTime: 86400
      use: ""
      validCount: 5
      validTime: 300
    zookeeper:
      password: ""
      schema: openim
      username: ""
      zkAddr:
      - 127.0.0.1:12181
---
# Source: admin-api/charts/admin-rpc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-admin-rpc
  labels:
    helm.sh/chart: admin-rpc-0.1.0
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/chat-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/chat-rpc/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-chat-rpc
  labels:
    helm.sh/chart: chat-rpc-0.1.0
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/templates/service.yaml
apiVersion: v1
kind: Service
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.16
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  type: ClusterIP
  ports:
    - port: 80
      targetPort: http
      protocol: TCP
      name: http
  selector:
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
---
# Source: admin-api/charts/admin-rpc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-admin-rpc
  labels:
    helm.sh/chart: admin-rpc-0.1.0
    app.kubernetes.io/name: admin-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin-rpc
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin-rpc
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: admin-rpc
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-admin:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: chat-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chat-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: chat-api
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-chat:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-rpc/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-chat-rpc
  labels:
    helm.sh/chart: chat-rpc-0.1.0
    app.kubernetes.io/name: chat-rpc
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: chat-rpc
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: chat-rpc
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: chat-rpc
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-rpc-chat:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/templates/deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.16
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
spec:
  replicas: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: admin-api
      app.kubernetes.io/instance: release-name
  template:
    metadata:
      labels:
        app.kubernetes.io/name: admin-api
        app.kubernetes.io/instance: release-name
    spec:
      serviceAccountName: default
      securityContext:
        {}
      containers:
        - name: admin-api
          securityContext:
            {}
          image: "registry.cn-hangzhou.aliyuncs.com/openimsdk/chat-api-admin:release-v1.5"
          imagePullPolicy: Always
          ports:
            - name: http
              containerPort: 80
              protocol: TCP
          #livenessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          #readinessProbe:
          #  httpGet:
          #    path: /
          #    port: http
          resources:
            {}
          volumeMounts:
            - mountPath: /openim/openim-chat/config/config.yaml
              name: config
              subPath: config.yaml
      volumes:
        - name: config
          configMap:
            name: imchat-cm
---
# Source: admin-api/charts/chat-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-chat-api
  labels:
    helm.sh/chart: chat-api-0.1.0
    app.kubernetes.io/name: chat-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /chat(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-chat-api
                port:
                  number: 80
---
# Source: admin-api/templates/ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: release-name-admin-api
  labels:
    helm.sh/chart: admin-api-0.1.16
    app.kubernetes.io/name: admin-api
    app.kubernetes.io/instance: release-name
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
    nginx.ingress.kubernetes.io/use-regex: "true"
spec:
  ingressClassName: nginx
  tls:
    - hosts:
        - "openim1.server.top"
      secretName: webapitls
  rules:
    - host: "openim1.server.top"
      http:
        paths:
          - path: /complete_admin(/|$)(.*)
            pathType: ImplementationSpecific
            backend:
              service:
                name: release-name-admin-api
                port:
                  number: 80
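
The two dumps above can be regenerated with plain Helm as well, since they come from the ./charts/openim-server and ./charts/openim-chat charts with the listed values files. A rough sketch of a CI job using the action version this PR bumps to (the job name, trigger, and output file names are illustrative; the chart paths and values files are taken from the commands above):

# Hypothetical workflow sketch; omitting a release name gives the "release-name-" prefix seen above
name: render-openim-templates
on: [pull_request]
jobs:
  render:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: azure/setup-helm@v4   # the version this PR bumps to
      - name: Render openim-server chart
        run: |
          helm template ./charts/openim-server \
            -f k8s-open-im-server-config.yaml \
            -f config-imserver.yaml > rendered-openim-server.yaml
      - name: Render openim-chat chart
        run: |
          helm template ./charts/openim-chat \
            -f k8s-chat-server-config.yaml \
            -f config-chatserver.yaml > rendered-openim-chat.yaml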
