Setting Up a Production Environment on Google Cloud

Required Services

  • Google Kubernetes Engine (GKE)
  • Google Cloud SQL for PostgreSQL
  • Google Memorystore for Redis
  • Google Cloud Storage
  • Google Cloud Build & Container Registry

Enable the Database

Enable the Cluster

Configure the Network

digraph G {
    "node0" [
        label = "User"
        shape = "record"
        gradientangle="90"
    ];
    "node1" [
        label = "<f0>Cloud CDN | <f1>Cloud \lLoad Balancing"
        shape = "record"
        gradientangle="90"
    ];
    
    subgraph cluster_gke {
        label="GKE Cluster"
        labelloc="b"

        sensemap[label="Sensemap\nport/30600"]
        fileserver[label="File Server\nport/30480"]
        via[label="VIA\nport/30100"]
    }
    
    "node0" -> "node1":f0 [
        id = 0
    ];
    
    "node1":f1 -> sensemap,via,fileserver
}

Network Architecture

  • GKE Cluster
    • Refer to Firewall Rules and allow ports 30600, 30480, and 30100 (these are the Services' NodePorts; see the sketch after this list)
    • Refer to Instance Groups and add a port name mapping for ports 30600, 30480, and 30100
  • Cloud Load Balancing / Cloud CDN
  • Point DNS at the Cloud Load Balancing IP address
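
These three ports are the NodePorts that the cluster's Services expose on every node, which is why both the firewall rule and the instance group's port name mapping have to cover them. Below is a minimal sketch of such a Service for SenseMap; the metadata name and the selector label are assumptions, not values taken from the real config.

# Hypothetical NodePort Service for SenseMap. file-server (30480 -> 4000)
# and viaserver (30100 -> 19080) would follow the same pattern.
apiVersion: v1
kind: Service
metadata:
  name: sensemap-release     # assumed name, after the label in the diagrams below
spec:
  type: NodePort
  selector:
    app: sensemap            # assumed pod label
  ports:
    - port: 6000             # in-cluster port (assumed to match the pod port)
      targetPort: 6000       # nginx port inside the SenseMap pod
      nodePort: 30600        # the port the firewall rule above must allow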

Set Up the Automated Build Environment and Deployment

  • The automated build environment watches Git tags to decide whether to run a build
  • Set it up by following the Google Cloud Build documentation
  • SenseTW
    • Triggered by tags matching v.*
    • The Cloud Build config is at builder/cloudbuild/sensemap-release.yaml
    • Environment variables
  • Client
    • Triggered by tags matching v.*
    • The Cloud Build config is at gcloud/cloudbuild.release.yaml
  • via
    • Triggered by tags matching v.*
    • The Cloud Build config is at gcloud/cloudbuild.release.yaml

Note: Cloud Build Deployment Flow

digraph G {
  "Docker Build SenseTW" -> "Commit SenseTW Docker"
  "Commit SenseTW Docker" -> "Docker Build SMO"
  "Docker Build SMO" -> "Commit SMO Docker"
  "Commit SMO Docker" -> "Generate K8s Config"
  "Generate K8s Config" -> "Deploy to GKE"
}

Cloud Build Procedures
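
The release cloudbuild.yaml files roughly encode the flow above. A hedged sketch follows; the image names, the config-generation step, and the zone/cluster values are placeholders rather than the project's real settings.

# Hypothetical release pipeline mirroring the flow above; all names are placeholders.
steps:
  # Docker Build SenseTW / Commit SenseTW Docker
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/sensemap:$TAG_NAME', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/sensemap:$TAG_NAME']
  # Docker Build SMO / Commit SMO Docker
  - name: 'gcr.io/cloud-builders/docker'
    args: ['build', '-t', 'gcr.io/$PROJECT_ID/smo:$TAG_NAME', '-f', 'Dockerfile.smo', '.']
  - name: 'gcr.io/cloud-builders/docker'
    args: ['push', 'gcr.io/$PROJECT_ID/smo:$TAG_NAME']
  # Generate K8s Config (placeholder script name)
  - name: 'ubuntu'
    args: ['bash', 'builder/generate-k8s-config.sh', '$TAG_NAME']
  # Deploy to GKE (zone and cluster name are placeholders)
  - name: 'gcr.io/cloud-builders/kubectl'
    args: ['apply', '-f', 'k8s/']
    env:
      - 'CLOUDSDK_COMPUTE_ZONE=asia-east1-a'
      - 'CLOUDSDK_CONTAINER_CLUSTER=sense-cluster'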

Note: Kubernetes Architecture

digraph G {
    "node0" [
        label = "Cloud \lLoad Balancing"
        shape = "record"
        gradientangle="90"
    ];
    subgraph cluster_kubernetes {
        label="GKE Cluster"
        labelloc="b"
    
        subgraph cluster_gkeservice {
            label="Service"
            labelloc="b"
    
            sensemapService[label="sensemap-release\nport/30600"]
            fileserverService[label="file-server\nport/30480"]
            viaService[label="viaserver\nport/30100"]
        }
        
        subgraph cluster_sensemap {
            label="Sensemap Workload (Port/6000)"
            labelloc="b"
            
            sensemapPods0
            sensemapPods1
        }
        
        subgraph cluster_via {
            label="via Workload (Port/19080)"
            labelloc="b"
            
            viaPods0
        }
    
        subgraph cluster_fileserver {
            label="File Server Workload (Port/4000)"
            labelloc="b"
            
            fileserverPods0
        }
    }

    
    node0 -> {sensemapService viaService fileserverService}
    viaService -> viaPods0
    sensemapService -> {sensemapPods0 sensemapPods1}
    fileserverService -> fileserverPods0
}

Inside Production Architecture
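
A minimal Deployment sketch for the Sensemap workload, matching the two replicas and the in-pod port 6000 shown above. The resource name, labels, and image are assumptions; the remaining containers in the pod are sketched in the next section.

# Hypothetical Deployment for the Sensemap workload (two pods in the diagram).
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sensemap-release            # assumed name
spec:
  replicas: 2                       # sensemapPods0 / sensemapPods1
  selector:
    matchLabels:
      app: sensemap                 # must match the Service selector (assumed label)
  template:
    metadata:
      labels:
        app: sensemap
    spec:
      containers:
        - name: nginx               # pod entry point on port 6000; the other containers
          image: nginx:stable       # and the shared volume appear in the next section
          ports:
            - containerPort: 6000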

Note: SenseMap Internal Architecture

digraph G {
    graph [
        rankdir = "LR"
        gradientangle = 270
    ];
    
    nginx[
        label="<f0>nginx\nport/6000 | <f1>sensemap-release-\nweb-config | <f2>front-static"
        shape = "record"
        gradientangle="90"
    ]
    sensemap[
        label="<f0>SenseMap\nport/8000 | <f1>sensemap-release-env | <f2>front-static"
        shape = "record"
        gradientangle="90"
    ]
    smo[
        label="<f0>SMO\nport/8080 | <f1>sensemap-smo-\nrelease-env"
        shape = "record"
        gradientangle="90"
    ]
    
    outside -> nginx:f0
    
    nginx:f0 -> sensemap:f0
    nginx:f0 -> smo:f0
    nginx:f2 -> sensemap:f2[
        style=dashed
        dir=both
        label="shared volumn"
    ]
}

Inside SenseMap Pod
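
Expressed as a pod spec, the three containers and the shared static-file volume might be wired up roughly as below. The ConfigMap names come from the diagram labels, but their kind (ConfigMap vs. Secret), the mount paths, the images, and the emptyDir volume type are assumptions.

# Hypothetical pod template body for the sensemap-release Deployment sketched earlier.
spec:
  volumes:
    - name: front-static
      emptyDir: {}                              # assumed type for the shared "front-static" volume
    - name: web-config
      configMap:
        name: sensemap-release-web-config       # nginx config from the diagram (assumed ConfigMap)
  containers:
    - name: nginx                               # pod entry point, port 6000
      image: nginx:stable                       # placeholder image
      ports:
        - containerPort: 6000
      volumeMounts:
        - name: web-config
          mountPath: /etc/nginx/conf.d          # assumed mount path
        - name: front-static
          mountPath: /usr/share/nginx/html      # assumed; nginx serves the shared static files
    - name: sensemap                            # SenseMap, port 8000, proxied by nginx
      image: gcr.io/PROJECT_ID/sensemap:TAG     # placeholder image
      envFrom:
        - configMapRef:
            name: sensemap-release-env          # assumed ConfigMap
      ports:
        - containerPort: 8000
      volumeMounts:
        - name: front-static
          mountPath: /app/front-static          # assumed; where the front-end build is written
    - name: smo                                 # SMO, port 8080, proxied by nginx
      image: gcr.io/PROJECT_ID/smo:TAG          # placeholder image
      envFrom:
        - configMapRef:
            name: sensemap-smo-release-env      # assumed ConfigMap
      ports:
        - containerPort: 8080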
