Opwen cloudserver

What's this?

This repository contains the source code for the Lokole cloud server. Its purpose is to connect the application running on the Lokole devices to the rest of the world. Lokole is a project by the Canadian-Congolese non-profit Ascoderu.

The server is implemented using Connexion and has two main responsibilities:

  1. Receive emails from the internet that are addressed to Lokole users and forward them to the appropriate Lokole device.
  2. Send new emails created by Lokole users to the rest of the internet.
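
As a rough illustration of how Connexion ties these responsibilities to the code, the sketch below shows a minimal Connexion application. The spec file name and the handler are hypothetical placeholders; the real specifications and entry points live in this repository (see the "*-spec.yaml" files and runserver.py).

# Minimal Connexion sketch; the spec file and handler names are hypothetical.
import connexion

def receive(email):
    # A function referenced by an operationId in the OpenAPI spec receives the
    # parsed request body and returns a response plus an HTTP status code.
    return {"status": "received"}, 200

app = connexion.App(__name__, specification_dir=".")
app.add_api("email-receive-spec.yaml")  # maps spec operations to handlers like receive()

if __name__ == "__main__":
    app.run(port=8080)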

More background information can be found in the opwen-webapp README.

System overview

(Diagram: overview of the Lokole system)

Data exchange format

The Lokole cloud server and the Lokole email application communicate via a protocol based on gzipped jsonl files uploaded to Azure Blob Storage. Each file contains one JSON object per line, and each JSON object describes an email using the following schema:

{
  "sent_at": "yyyy-mm-dd HH:MM",
  "to": ["email"],
  "cc": ["email"],
  "bcc": ["email"],
  "from": "email",
  "subject": "string",
  "body": "html",
  "attachments": [{"filename": "string", "content": "base64"}]
}
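
For illustration, the sketch below (not part of the repository) writes and reads such an exchange file with Python's standard library; the file name is arbitrary and the upload to and download from Azure Blob Storage are omitted.

import gzip
import json

emails = [{
    "sent_at": "2018-11-12 08:30",
    "to": ["recipient@example.com"],
    "cc": [],
    "bcc": [],
    "from": "sender@example.com",
    "subject": "Hello",
    "body": "<p>Hi there</p>",
    "attachments": [],
}]

# Write one JSON object per line into a gzipped jsonl file.
with gzip.open("emails.jsonl.gz", "wt", encoding="utf-8") as fobj:
    for email in emails:
        fobj.write(json.dumps(email) + "\n")

# Read the file back, one email per line.
with gzip.open("emails.jsonl.gz", "rt", encoding="utf-8") as fobj:
    received = [json.loads(line) for line in fobj]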

Development setup

First, get the source code.

git clone git@github.com:ascoderu/opwen-cloudserver.git
cd opwen-cloudserver

Second, install the system-level dependencies using your package manager, e.g. on Ubuntu:

sudo apt-get install -y make python3-venv shellcheck

You can use the makefile to verify your checkout by running the tests and other CI steps such as linting. The makefile will automatically install all required dependencies into a virtual environment.

make tests
make lint

This project consists of a number of microservices and background jobs. You can run all the pieces via the makefile, but it's easiest to run and manage them via Docker, so install Docker on your machine by following the Docker setup instructions for your platform.

After installing Docker, you can run the application stack with one command:

docker-compose up --build

There are OpenAPI specifications that document the functionality of the application and provide references to the entry points into the code (look for "some-api-name-spec.yaml" files in the repository). Each API can also be called interactively via a testing console that is available by appending /ui to the API's URL.
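
For example, assuming one of the APIs were exposed locally at the hypothetical URL below (the real ports and base paths are defined in docker-compose.yml and the spec files), its testing console could be opened like this:

import webbrowser

# Hypothetical base URL; check docker-compose.yml and the corresponding
# *-spec.yaml file for the real port and base path.
api_url = "http://localhost:8080/api/email/upload"

# The interactive testing console is the API's URL plus /ui.
webbrowser.open(api_url + "/ui")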

Note that by default the application runs in a fully local mode, without leveraging any cloud services. For most development purposes this is fine, but if you wish to set up the full end-to-end stack that leverages the same services as we use in production, keep reading.

The project uses Sendgrid, so to emulate a full production environment, follow the Sendgrid setup instructions to create a free account and take note of your API key for sending emails.

The project also makes use of a number of Azure services such as Blobs, Tables, Queues, Application Insights, and so forth. To set up all the required cloud resources programmatically, you'll need to create a service principal by following the Service Principal instructions. After you have created the service principal, you can run the Docker setup script to initialize the required cloud resources.

docker build -t setup -f docker/setup/Dockerfile .

docker run \
  -e SP_APPID={appId field of your service principal} \
  -e SP_PASSWORD={password field of your service principal} \
  -e SP_TENANT={tenant field of your service principal} \
  -e SUBSCRIPTION_ID={subscription id of your service principal} \
  -e LOCATION={an azure location like eastus} \
  -e RESOURCE_GROUP_NAME={the name of the resource group to create or reuse} \
  -e SENDGRID_KEY={the sendgrid key you created earlier} \
  -v ${PWD}/secrets:/secrets \
  setup

The secrets to access the Azure resources created by the setup script will be stored in files in the secrets directory. Other parts of the project's tooling (e.g. docker-compose) depend on these files, so make sure not to delete them.

To run the project using the Azure resources created by the setup, use the following command:

docker-compose -f docker-compose.yml -f docker-compose.secrets.yml up --build

Production setup

To set up a production-ready deployment of the system, follow the development setup steps described above, but additionally pass the following environment variables to the Docker setup script:

  • KUBERNETES_RESOURCE_GROUP_NAME: The resource group into which to provision the Azure Kubernetes Service cluster.
  • KUBERNETES_NODE_COUNT: The number of VMs to provision into the cluster. This should be an odd number and can be dynamically changed later via the Azure CLI.
  • KUBERNETES_NODE_SKU: The type of VMs to provision into the cluster. This should be one of the supported Linux VM sizes.

The script will then provision a cluster in Azure Kubernetes Service and install the project via Helm. The secrets to connect to the provisioned cluster will be stored in the secrets directory.