Commit: Frontend changes (#65)
* [secrets][m]: script to update secrets automatically - fixes #59

* better way to split

* get rid of extra print

* automatic update of dhq-frontend

* [autodeploy][s]: script to trigger travis builds - refs datahubio/datahub-v2-pm#160
anuveyatsu committed May 25, 2018
1 parent da12c23 commit 5bb6953
Showing 6 changed files with 320 additions and 1 deletion.
82 changes: 82 additions & 0 deletions .env.template
@@ -0,0 +1,82 @@
# Base domain to which prefixes such as STAGE and specific service names are added
DOMAIN_BASE=datahub.io
# stage e.g. testing, production
STAGE=testing
# FQDN
DOMAIN=${STAGE}.${DOMAIN_BASE}
# API DOMAIN
# NB: we have api-${STAGE}.${DOMAIN_BASE} rather than api.${STAGE}.${DOMAIN_BASE}
# because Cloudflare HTTPS certs only work with one level of nesting, not two.
# So api-staging.datahub.io will work but api.staging.datahub.io won't.
DOMAIN_API=api-${STAGE}.${DOMAIN_BASE}
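# For example, with the defaults above (STAGE=testing, DOMAIN_BASE=datahub.io)
# these resolve to DOMAIN=testing.datahub.io and DOMAIN_API=api-testing.datahub.io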

# ======================
# Object Storage e.g. S3
# ======================

# AWS Credentials - common across buckets
AWS_ACCESS_KEY=
AWS_SECRET_KEY=

# Bucket locations (used by various services)
PKGSTORE_BUCKET=pkgstore-${STAGE}.${DOMAIN_BASE}
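# e.g. with the defaults above this resolves to pkgstore-testing.datahub.io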


# ============
# RDS service
# ============

# AWS Postgres Database URI. It should follow the general form for a postgresql connection URI:
# postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]
# Will be generated and displayed when you create the RDS instance for the first time
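# Hypothetical example (placeholder values, not real credentials):
# RDS_URI=postgresql://datahub:CHANGEME@datahub-testing.abc123.us-east-1.rds.amazonaws.com:5432/datahub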
RDS_URI=

# ============
# ElasticSearch Service
# ============

# ES URL. If on AWS it *must* include https and :443 or it does not work ...
# https://....:443
# Will be generated and displayed when you create the ES instance for the first time
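# Hypothetical example (placeholder endpoint):
# https://search-datahub-testing-abc123.us-east-1.es.amazonaws.com:443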
ELASTICEARCH_URI=

# ============
# auth service
# ============

# signing keys for JWT - tools for generating them are in the auth service repo
PRIVATE_KEY=
PUBLIC_KEY=
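# For example, an RSA keypair could be generated with openssl (hypothetical
# approach - prefer the tooling in the auth service repo if it differs):
#   openssl genrsa -out jwt_private.pem 2048
#   openssl rsa -in jwt_private.pem -pubout -out jwt_public.pem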

# OAUTH keys for social signin
GOOGLE_KEY=
GOOGLE_SECRET=
GITHUB_KEY=
GITHUB_SECRET=

# Email Marketing

INSTALLED_EXTENSIONS=
MAILCHIMP_USER=
MAILCHIMP_SECRET=
MAILCHIMP_LIST_ID=

# ============
# rawstore service
# ============

# NOTE: storage credentials are above in Object Storage
RAWSTORE_BUCKET=rawstore-${STAGE}.${DOMAIN_BASE}

# ============
# Plans admin secrets
# ============
PLANS_ADMIN_USERNAME=
PLANS_ADMIN_PASSWORD=
PLANS_SESSION_SECRET_KEY=

# ============
# Stripe payment system
# ============
STRIPE_PUBLISHABLE_KEY=
STRIPE_SECRET_KEY=
19 changes: 19 additions & 0 deletions README.md
@@ -128,6 +128,25 @@ Finally, automation scripts write values to `values.auto-updated.yaml`

Secrets are stored and managed directly in kubernetes; they are not managed via Helm.

You can modify them manually or automatically.

### Update automatically

- You will need a `.env` file placed in the root directory (see `.env.template`).
- Install dotenv: `pip install python-dotenv`

```
# Update secrets for all services
python update_secrets.py update
# Or update secrets for a specific service
python update_secrets.py update auth
```

Note: You may need to switch environments before updating; use `switch_environment.sh`.

### Update manually

To update an existing secret, delete it first: `kubectl delete secret SECRET_NAME`

After updating a secret you should update the affected deployments; you can use `./force_update.sh` to do that.
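
For example, a minimal manual flow for one secret might look like this (`SECRET_NAME` and the key/value pair are placeholders):

```
kubectl delete secret SECRET_NAME
kubectl create secret generic SECRET_NAME --from-literal=SOME_KEY="some-value"
./force_update.sh
```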
13 changes: 13 additions & 0 deletions apps_travis_script.sh
@@ -23,6 +23,19 @@ elif [ "${1}" == "deploy" ]; then
orihoch/github_yaml_updater
[ "$?" != "0" ] && echo failed github yaml update && exit 1

elif [ "${1}" == "trigger" ]; then
body='{
"request": {
"message": "rebuild because '${TRAVIS_REPO_SLUG}'updated with commit '${TRAVIS_COMMIT}'",
"branch":"master"
}}'
curl -s -X POST \
-H "Content-Type: application/json" \
-H "Accept: application/json" \
-H "Travis-API-Version: 3" \
-H "Authorization: token ${TRAVIS_TOKEN}" \
-d "$body" \
https://api.travis-ci.com/repo/datahq%2F${TRIGGER_REPO}/requests
fi
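# Example usage (hypothetical values; TRAVIS_TOKEN and TRIGGER_REPO must be set):
#   TRAVIS_TOKEN=xxxx TRIGGER_REPO=some-repo ./apps_travis_script.sh trigger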

echo Great Success
71 changes: 71 additions & 0 deletions kubernetes-envs.yml
@@ -0,0 +1,71 @@
frontend:
  environment:
    SITE_URL: https://${DOMAIN}
    API_URL: https://${DOMAIN_API}
    BITSTORE_URL: https://${PKGSTORE_BUCKET}
    AUTH_URL: http://auth:8000/
    FLOWMANAGER_URL: http://specstore:8000/
    METASTORE_URL: http://metastore:8000/
    RESOLVER_URL: http://resolver:8000/
    FILEMANAGER_URL: http://filemanager:8000
    STRIPE_PUBLISHABLE_KEY: ${STRIPE_PUBLISHABLE_KEY}
    STRIPE_SECRET_KEY: ${STRIPE_SECRET_KEY}
auth:
  environment:
    VIRTUAL_HOST: ${DOMAIN_API}/auth/*
    GUNICORN_PORT: 8000
    DATABASE_URL: ${RDS_URI}
    EXTERNAL_ADDRESS: ${DOMAIN_API}
    PRIVATE_KEY:
    PUBLIC_KEY:
    GOOGLE_KEY: ${GOOGLE_KEY}
    GOOGLE_SECRET: ${GOOGLE_SECRET}
    GITHUB_KEY:
    GITHUB_SECRET:
    MAILCHIMP_USER: ${MAILCHIMP_USER}
    MAILCHIMP_SECRET: ${MAILCHIMP_SECRET}
    MAILCHIMP_LIST_ID: ${MAILCHIMP_LIST_ID}
    ALLOWED_SERVICES: ${ALLOWED_SERVICES}
    INSTALLED_EXTENSIONS: ${INSTALLED_EXTENSIONS}
plans:
  environment:
    DATABASE_URL: ${RDS_URI}
    GUNICORN_PORT: 8000
    # Pick something secret for production!
    BASIC_AUTH_USERNAME: ${PLANS_ADMIN_USERNAME}
    BASIC_AUTH_PASSWORD: ${PLANS_ADMIN_PASSWORD}
    SESSION_SECRET_KEY: ${PLANS_SESSION_SECRET_KEY}
rawstore:
  environment:
    VIRTUAL_HOST: ${DOMAIN_API}/rawstore/*
    AUTH_SERVER: http://auth:8000
    STORAGE_ACCESS_KEY_ID: ${AWS_ACCESS_KEY}
    STORAGE_SECRET_ACCESS_KEY: ${AWS_SECRET_KEY}
    STORAGE_BUCKET_NAME: ${RAWSTORE_BUCKET}
    STORAGE_PATH_PATTERN: '{md5_hex}{extension}'
    DATABASE_URL: ${RDS_URI}
specstore:
  environment:
    VIRTUAL_HOST: ${DOMAIN_API}/source/*
    DATASETS_INDEX_NAME: datahub
    EVENTS_INDEX_NAME: events
    AUTH_SERVER: auth:8000
    DATABASE_URL: ${RDS_URI}
    EVENTS_ELASTICSEARCH_HOST: ${ELASTICEARCH_URI}
    FILEMANAGER_DATABASE_URL: ${RDS_URI}
    DPP_ELASTICSEARCH: ${ELASTICEARCH_URI}
    PKGSTORE_BUCKET: ${PKGSTORE_BUCKET}
    AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY}
    AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_KEY}
resolver:
  environment:
    VIRTUAL_HOST: ${DOMAIN_API}/resolver/*
    AUTH_SERVER: http://auth:8000
metastore:
  environment:
    VIRTUAL_HOST: ${DOMAIN_API}/metastore/*
    DATAHUB_ELASTICSEARCH_ADDRESS: ${ELASTICEARCH_URI}
    PRIVATE_KEY:
filemanager:
  environment:
    DATABASE_URL: ${RDS_URI}
134 changes: 134 additions & 0 deletions update_secrets.py
@@ -0,0 +1,134 @@
import copy
import inspect
import os
import shlex
import subprocess
import optparse
import sys
from time import sleep
import dotenv
import yaml
import re

class Updater(object):

    def __init__(self, configfile='.env', envfile='kubernetes-envs.yml'):
        '''Initialize.
        @param: configfile a .env style config file. See README for more.
        '''
        if os.path.exists(configfile):
            # We load the values ourselves because load_dotenv lets system env
            # variables take precedence, which in our experience is confusing:
            # a user changes a var, re-runs, and nothing happens.
            # dotenv.load_dotenv('.env')
            out = dotenv.main.dotenv_values(configfile)
            # we need stuff in the environment for docker
            os.environ.update(out)
        self.envs = os.environ
        self.configs = yaml.safe_load(open(envfile).read())
        self.configs = self._update_with_envs(self.configs, self.envs)

    def _update_with_envs(self, configfile, env):
        # Fill empty values from the environment and expand ${VAR} references
        # (e.g. ${RDS_URI}) with the values loaded from .env / os.environ.
        for service in configfile:
            for env_ in configfile[service]['environment']:
                if not configfile[service]['environment'].get(env_):
                    configfile[service]['environment'][env_] = env.get(env_, '')
                else:
                    pattern = re.compile(r'\${.*}')
                    env_value = configfile[service]['environment'][env_]
                    to_replace = pattern.findall(str(env_value))
                    for rpl in to_replace:
                        configfile[service]['environment'][env_] = env.get(rpl[2:-1], '')
        return configfile


    def _run_commands(self, cmd, options=''):
        output = ''
        # Split the command into argv items; shlex keeps the quoted values in
        # the --from-literal options (which may contain spaces) intact.
        cmd = cmd.split() + shlex.split(options)
        try:
            output = subprocess.check_output(cmd, stderr=subprocess.STDOUT)
        except subprocess.CalledProcessError as process_error:
            all_output = filter(None, process_error.output.decode().split('\n'))
            for output in all_output:
                if 'Error:' not in output:
                    output = 'Error: ' + str(output.strip())
                print(output)
        return output

    def update(self, service=None):
        '''Update kubernetes secrets from .env values (optionally for a single service).'''
        stage = self.envs.get('STAGE', 'testing')
        # e.g. for the auth service with STAGE=testing this expands to
        # "kubectl create secret generic auth-testing --from-literal=..."
        cmd = 'kubectl create secret generic %s-%s '
        cmd_del = 'kubectl delete secret %s-%s'
        if service:
            if not self.configs.get(service):
                print('Error: Service ["%s"] Not found' % service)
                return
            envs = self.configs[service]['environment']
            cmd = cmd % (service, stage)
            options = ''
            for env in envs:
                options += '--from-literal=%s="%s" ' % (env, envs.get(env, ''))

            print('Deleting old secrets for %s' % service)
            out = self._run_commands(cmd_del % (service, stage))
            print('Creating new secrets for %s' % service)
            out = self._run_commands(cmd, options)
            return

        for serv in self.configs:
            cmd = cmd % (serv, stage)
            options = ''
            for env in self.configs[serv]['environment']:
                options += '--from-literal=%s="%s" ' % (env, self.configs[serv]['environment'].get(env, ''))

            print('Deleting old secrets for %s' % serv)
            self._run_commands(cmd_del % (serv, stage))
            print('Creating new secrets for %s' % serv)
            self._run_commands(cmd, options)
            # reset the unformatted template for the next service
            cmd = 'kubectl create secret generic %s-%s '


# ==============================================
# CLI

def _object_methods(obj):
    # isroutine matches plain functions as well as bound/unbound methods, so
    # this works whether obj is a class or an instance
    methods = inspect.getmembers(obj, inspect.isroutine)
    # public methods only - these become the CLI actions
    methods = filter(lambda member: not member[0].startswith('_'), methods)
    methods = dict(methods)
    return methods

def _main(functions_or_object):
    is_object = inspect.isclass(functions_or_object)

    _methods = _object_methods(functions_or_object)
    ## this is not working if some options are passed to Deployer
    # if is_object:
    #     _methods = _object_methods(functions_or_object)
    # else:
    #     _methods = _module_functions(functions_or_object)

    usage = '''%prog {action}
Actions:
    '''
    usage += '\n    '.join(
        ['%s %s: %s' % (name, (' ' * (25-len(name))), m.__doc__.split('\n')[0] if m.__doc__ else '') for (name, m)
         in sorted(_methods.items())])
    parser = optparse.OptionParser(usage)
    # Optional: for a config file
    # parser.add_option('-c', '--config', dest='config',
    #                   help='Config file to use.')
    options, args = parser.parse_args()

    if not args or args[0] not in _methods:
        parser.print_help()
        sys.exit(1)

    method = args[0]
    if is_object:
        getattr(functions_or_object(), method)(*args[1:])
    else:
        _methods[method](*args[1:])


if __name__ == '__main__':
    _main(Updater)
2 changes: 1 addition & 1 deletion values.auto-updated.yaml
@@ -3,7 +3,7 @@ auth:
 filemanager:
   image: datopian/filemanager:1de4b8505ca810e69f3f1efd70c2fe0c2620de61
 frontend:
-  image: datopian/datahub-frontend:e7b3d6b0aa2c5e9a49bf3cb24f194250ae3d86e6
+  image: datopian/datahub-frontend:5c56eca505b0ca276d59d797d174eb195a1bee18
 metastore:
   image: datopian/metastore:9bb3611fb1ed1053ee21cb4187ca15e6407054fb
 nginx:
