
Note: this fork contains custom logic in Application.java, built for the microservice game run as part of the Spring I/O Bridge on 15 May 2020 (https://2020.springio.net/bridge/). For context, see the Discord channel: https://discord.com/channels/707998329677545535/709492048352510033. Game info: https://codelabs.developers.google.com/codelabs/battle-jamon/index.h…


Cloud Bowl

A game where microservices battle each other in a giant real-time bowl.

Run Locally:

  1. Make sure you have Docker installed and running

  2. Start Kafka

    ./sbt "runMain apps.dev.KafkaApp"
    
  3. Start the Battle

    TODO: player backends

    ./sbt "runMain apps.Battle"
    
  4. Start the apps.dev Kafka event viewer

    ./sbt "runMain apps.dev.KafkaConsumerApp"
    
  5. Start the sample service

    cd samples/scala-play
    ./sbt run
    
  6. Start the apps.dev Kafka event producer

    ./sbt "runMain apps.dev.KafkaProducerApp"
    

    You can send commands like:

    ARENA/viewerjoin
    ARENA/playersrefresh
    ARENA/scoresreset
    
  7. Start the Viewer web app

    ./sbt run
    

    Check out the foo arena: http://localhost:9000/foo
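
A player backend like the sample service is just an HTTP endpoint: per the codelab linked above, the arena repeatedly POSTs the current arena state as JSON to each player's URL and expects a single-character move back. As a rough sketch only (the exact request schema is defined by the codelab, and the port and random-move strategy here are illustrative, not how the samples are written), a minimal player backend in Python could look like:

```python
import json
import random
from http.server import BaseHTTPRequestHandler, HTTPServer

# Moves per the codelab: F = forward, R = turn right, L = turn left, T = throw
MOVES = ["F", "R", "L", "T"]

class PlayerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # The arena POSTs its current state as JSON; a real player would
        # inspect it (positions, dimensions, scores) to choose a move.
        length = int(self.headers.get("Content-Length", 0))
        state = json.loads(self.rfile.read(length) or b"{}")
        body = random.choice(MOVES).encode("utf-8")
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, fmt, *args):
        pass  # keep the console quiet

def serve(port=8080):
    # 8080 is a hypothetical local port; register whatever URL you actually serve.
    HTTPServer(("", port), PlayerHandler).serve_forever()
```

A real player would parse the posted state instead of moving at random; this only shows the request/response shape.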

Web UI Notes:

Pause the Arena refresh (from the browser's JavaScript console):

document.body.dataset.paused = true;

Testing:

For the GitHub Player backend:

  1. Create a GitHub App with the Contents: Read-only permission
  2. Generate a private key
  3. export GITHUB_APP_PRIVATE_KEY=$(cat ~/somewhere/your-integration.2017-02-07.private-key.pem)
  4. export GITHUB_APP_ID=YOUR_NUMERIC_GITHUB_APP_ID
  5. export GITHUB_ORGREPO=cloudbowl/arenas
  6. Run the tests:
    ./sbt test
    

For the Google Sheets Player backend:

  1. TODO

Run on Google Cloud

  1. Create a GKE cluster with Cloud Run
    gcloud config set core/project YOUR_PROJECT
    gcloud config set compute/region us-central1
    gcloud config set container/cluster cloudbowl
    gcloud container clusters create \
      --region=$(gcloud config get-value compute/region) \
      --addons=HorizontalPodAutoscaling,HttpLoadBalancing,CloudRun \
      --machine-type=n1-standard-4 \
      --enable-stackdriver-kubernetes \
      --enable-ip-alias \
      --enable-autoscaling --num-nodes=3 --min-nodes=0 --max-nodes=20 \
      --enable-autorepair \
      --cluster-version=1.15 \
      --scopes cloud-platform \
      $(gcloud config get-value container/cluster)
    
  2. Install Strimzi Kafka Operator
    kubectl create namespace kafka
    curl -L https://github.com/strimzi/strimzi-kafka-operator/releases/download/0.17.0/strimzi-cluster-operator-0.17.0.yaml \
      | sed 's/namespace: .*/namespace: kafka/' \
      | kubectl apply -f - -n kafka
    
  3. Set up the Kafka Cluster
    kubectl apply -n kafka -f .infra/kafka.yaml
    kubectl wait -n kafka kafka/cloudbowl --for=condition=Ready --timeout=300s
    
  4. Get your IP Address:
    export IP_ADDRESS=$(kubectl get svc istio-ingress -n gke-system -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')
    echo $IP_ADDRESS
    
  5. Create a GitHub App with a push-event webhook pointing at your web app (e.g. https://IP_ADDRESS.nip.io/playersrefresh) and a preshared key you have made up. For permissions, select Contents: Read-only; for Events, select Push.
  6. Generate a Private Key for the GitHub App
  7. Install the GitHub App on the repo that will hold the players
  8. Create a ConfigMap named cloudbowl-config:
    cat <<EOF | kubectl apply -f -
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: cloudbowl-config
    data:
      GITHUB_ORGREPO: # Your GitHub Org/Repo
      GITHUB_APP_ID: # Your GitHub App ID
      GITHUB_PSK: # Your GitHub WebHook's preshared key
      WEBJARS_USE_CDN: 'true'
      APPLICATION_SECRET: # Generated secret key (e.g. `head -c 32 /dev/urandom | base64`)
    EOF
    
    kubectl create configmap cloudbowl-config-github-app --from-file=GITHUB_APP_PRIVATE_KEY=FULLPATH_TO_YOUR_GITHUB_APP.private-key.pem
    
  9. Set up Cloud Build with a trigger on master, excluding samples/**, and with substitution variables _CLOUDSDK_COMPUTE_REGION and _CLOUDSDK_CONTAINER_CLUSTER. Running the trigger will create the Kafka topics and deploy the battle service and the web app.
  10. Once the service is deployed, set up the domain name:
    export IP_ADDRESS=$(kubectl get svc istio-ingress -n gke-system -o 'jsonpath={.status.loadBalancer.ingress[0].ip}')
    echo "IP_ADDRESS=$IP_ADDRESS"
    
    gcloud beta run domain-mappings create --service cloudbowl-web --domain $IP_ADDRESS.nip.io --platform=gke --project=$(gcloud config get-value core/project) \
      --cluster=$(gcloud config get-value container/cluster) --cluster-location=$(gcloud config get-value compute/region)
    gcloud compute addresses create cloudbowl-ip --addresses=$IP_ADDRESS --region=$(gcloud config get-value compute/region)
    
  11. Turn on TLS support:
    kubectl patch cm config-domainmapping -n knative-serving -p '{"data":{"autoTLS":"Enabled"}}'
    kubectl get kcert
    
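The preshared key from step 5 (GITHUB_PSK) is the secret the webhook is configured with. How cloudbowl itself validates deliveries lives in the web app's code; as a general sketch of how GitHub webhook secrets are normally checked, GitHub signs each delivery's raw body with HMAC-SHA256 and sends the digest in the X-Hub-Signature-256 header:

```python
import hashlib
import hmac

def verify_github_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Check a GitHub webhook's X-Hub-Signature-256 header.

    GitHub sends 'sha256=<hex digest>', where the digest is the HMAC-SHA256
    of the raw request body keyed with the webhook's preshared secret.
    """
    expected = "sha256=" + hmac.new(
        secret.encode("utf-8"), payload, hashlib.sha256
    ).hexdigest()
    # compare_digest avoids leaking where the strings diverge (timing attacks)
    return hmac.compare_digest(expected, signature_header)
```

Verifying against the raw, unmodified request body matters: re-serializing the parsed JSON before hashing will usually produce a different digest.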

Troubleshooting

# Pick a topic (choose one):
export TOPIC=viewer-ping
export TOPIC=players-refresh
export TOPIC=arena-update

# Describe a topic:
kubectl -n kafka run kafka-consumer -ti --image=strimzi/kafka:0.17.0-kafka-2.4.0 --rm=true --restart=Never -- bin/kafka-topics.sh --describe --bootstrap-server cloudbowl-kafka-bootstrap.kafka:9092 --topic $TOPIC

# Consume messages on a topic:
kubectl -n kafka run kafka-consumer -ti --image=strimzi/kafka:0.17.0-kafka-2.4.0 --rm=true --restart=Never -- bin/kafka-console-consumer.sh --bootstrap-server cloudbowl-kafka-bootstrap.kafka:9092 --topic $TOPIC --from-beginning --property print.key=true --property key.separator=":"