Make sure that you provide "Read+Write" access to storage, as shown in the figure below.
Once the cluster has been created, log in using either Cloud Shell or your local terminal, following the instructions for connecting to the cluster. Start the kubectl proxy in the background by running "kubectl proxy &". Clone the DeepVideoAnalytics repo and go to deploy/kube.
1. Create config.py (copy and edit config_example.py); it contains the values used to fill in secrets_template.yml
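The exact contents of config_example.py are not shown here; as a rough sketch, config.py holds the secret values later injected into secrets_template.yml. The variable names below are placeholders, not the repo's actual names:

```python
# Hypothetical sketch of config.py -- the real variable names in
# config_example.py may differ; treat all of these as placeholders.
SECRET_KEY = "change-me"             # application secret key (placeholder)
DB_PASSWORD = "change-me"            # Postgres password (placeholder)
RABBIT_PASSWORD = "change-me"        # RabbitMQ password (placeholder)
MEDIA_BUCKET = "your-unique-bucket"  # must be globally unique across GCS
```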
2. Run "create_bucket.py" to create a Google Cloud Storage bucket and make it public.
The above command creates a bucket to store media files (images, videos, indexes, etc.) and makes it public. You might encounter an error if the bucket name is already taken, since bucket names are globally unique.
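The implementation of create_bucket.py is not reproduced here; as a sketch, it amounts to something like the following gsutil commands (the use of gsutil and the exact ACL flags are assumptions):

```python
def bucket_setup_commands(bucket_name):
    """Sketch of shell commands roughly equivalent to create_bucket.py.

    `gsutil mb` makes the bucket; `acl set public-read` makes existing
    objects public and `defacl set public-read` makes future uploads
    public by default. Bucket names are globally unique, which is why
    a name collision produces an error.
    """
    return [
        f"gsutil mb gs://{bucket_name}",
        f"gsutil acl set public-read gs://{bucket_name}",
        f"gsutil defacl set public-read gs://{bucket_name}",
    ]
```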
3. Run "create_secrets.py" to create secrets.yml
The above command creates secrets.yml, which contains the base64-encoded secrets.
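Kubernetes Secret manifests require each value under `data:` to be base64-encoded, which is presumably all create_secrets.py does with the values from config.py. A minimal sketch (the key names and the manifest's metadata name are placeholders):

```python
import base64

def encode_secrets(plain):
    """Base64-encode each value, as Kubernetes Secret manifests require."""
    return {k: base64.b64encode(v.encode()).decode() for k, v in plain.items()}

def render_secrets_yml(encoded):
    # Minimal Secret manifest; the metadata name below is a placeholder.
    lines = ["apiVersion: v1", "kind: Secret", "metadata:",
             "  name: dva-secrets", "data:"]
    lines += [f"  {k}: {v}" for k, v in sorted(encoded.items())]
    return "\n".join(lines)
```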
4. Run "launch.sh" to launch containers.
This will create the secrets and persistent volume claims and launch all replication controllers.
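launch.sh's contents are not shown here; a plausible sketch is that it runs `kubectl create` on each manifest, applying secrets first so the controllers can mount them (the ordering and manifest names are assumptions):

```python
def launch_commands(manifests):
    """Sketch: generate kubectl create commands for a list of manifests,
    ordering secrets.yml first so dependent pods can reference it."""
    ordered = sorted(manifests, key=lambda m: m != "secrets.yml")
    return [f"kubectl create -f {m}" for m in ordered]
```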
You can also get the IP address of the webserver load balancer by running
kubectl get svc
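If you want to grab the external IP programmatically, a small sketch of parsing the `kubectl get svc` output follows. The column layout (NAME, CLUSTER-IP, EXTERNAL-IP, PORT(S), AGE) and the service name "webserver" are assumptions and may differ with your kubectl version:

```python
def external_ip(svc_output, service="webserver"):
    """Extract the EXTERNAL-IP column for one service from `kubectl get svc`
    output. Assumes columns: NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE.
    Returns None while the load balancer is still <pending>.
    """
    for line in svc_output.splitlines()[1:]:  # skip the header row
        cols = line.split()
        if cols and cols[0] == service:
            ip = cols[2]
            return None if ip == "<pending>" else ip
    return None
```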
./delete.sh && python erase_bucket.py
The above command deletes all controllers and secrets and empties the bucket. Note that the bucket itself is not deleted; you can delete it manually using gsutil.
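As a sketch, emptying the bucket (what erase_bucket.py does) and then removing it manually could look like the commands below; the specific gsutil invocations are assumptions, not the repo's actual implementation:

```python
def teardown_commands(bucket_name):
    """Sketch: `gsutil rm -r` deletes all objects in the bucket (the
    erase_bucket.py step), and `gsutil rb` removes the now-empty bucket
    (the manual step); -m runs the deletions in parallel."""
    return [
        f"gsutil -m rm -r gs://{bucket_name}/*",
        f"gsutil rb gs://{bucket_name}",
    ]
```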
Ensure that the cluster is shut down so that you don't end up getting charged for the GCE nodes.
[ ] Ensure that Postgres and RabbitMQ are StatefulSets / consider reusing a Helm chart.
[ ] Enable GPU containers.
[ ] Enable / add an example for HTTP/HTTPS ingress, and create a separate multi-region bucket to serve static files.