Conferences and conventions are hotspots for making connections. Professionals in attendance often share the same interests and can make valuable business and personal connections with one another. At the same time, these events draw large crowds, and it is often hard to make connections amid all of the excitement and energy. To help attendees make connections, we are building the infrastructure for a service that can inform attendees when they have attended the same booths and presentations at an event.
You work for a company that is building an app that uses location data from mobile devices. Your company has built a POC application named UdaTracker to ingest this location data. The POC was built with the core functionality of ingesting location data and identifying individuals who have shared a close geographic proximity.
Management loved the POC, so now that there is buy-in, we want to enhance this application. You have been tasked with enhancing the POC application into an MVP that can handle the large volume of location data that will be ingested.
To do so, you will refactor this application into a microservice architecture using the message passing techniques that you have learned in this course. It's easy to get lost in the countless optimizations and changes that can be made: your priority should be to approach the task as an architect and refactor the application into microservices. File organization, code linting -- these are important but don't affect the core functionality and can be tagged as TODOs for now!
- Flask - API webserver
- SQLAlchemy - Database ORM
- PostgreSQL - Relational database
- PostGIS - Spatial plug-in for PostgreSQL enabling geographic queries
- Vagrant - Tool for managing virtual deployed environments
- VirtualBox - Hypervisor allowing you to run multiple operating systems
- K3s - Lightweight distribution of K8s to easily develop against a local cluster
The project has been set up such that you should be able to get it up and running with Kubernetes.
We will install the tools we need to get our environment set up properly:
- Install Docker
- Set up a DockerHub account
- Set up `kubectl`
- Install VirtualBox with at least version 6.0
- Install Vagrant with at least version 2.0
- Install Helm with at least version 3.2.1
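Once installed, you can sanity-check the tool versions from your terminal (binary names may differ slightly per OS):
$ docker --version
$ kubectl version --client
$ VBoxManage --version   # should report 6.0 or newer
$ vagrant --version      # should report 2.0 or newer
$ helm version --short   # should report 3.2.1 or newer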
To run the application, you will need a K8s cluster running locally and to interface with it via `kubectl`. We will be using Vagrant with VirtualBox to run K3s.
In this project's root, run `vagrant up`.
$ vagrant up
The command will take a while and will leverage VirtualBox to load an openSUSE OS and automatically install K3s. When we are taking a break from development, we can run `vagrant suspend` to conserve some of our system's resources and `vagrant resume` when we want to bring our resources back up. Some useful Vagrant commands can be found in this cheatsheet.
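For quick reference, the lifecycle commands look like this:
$ vagrant suspend   # pause the VM and free up system resources
$ vagrant resume    # bring the suspended VM back up
$ vagrant status    # check the current state of the VM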
After `vagrant up` is done, you will SSH into the Vagrant environment and retrieve the Kubernetes config file used by `kubectl`. We want to copy the contents of this file into our local environment so that `kubectl` knows how to communicate with the K3s cluster.
$ vagrant ssh
You will now be connected inside of the virtual OS. Run `sudo cat /etc/rancher/k3s/k3s.yaml` to print out the contents of the file. You should see output similar to the one shown below. Note that the output below is just for your reference: every configuration is unique and you should NOT copy the output I have below.
Copy the contents from the output of your own command into your clipboard -- we will be pasting it somewhere soon!
$ sudo cat /etc/rancher/k3s/k3s.yaml
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJWekNCL3FBREFnRUNBZ0VBTUFvR0NDcUdTTTQ5QkFNQ01DTXhJVEFmQmdOVkJBTU1HR3N6Y3kxelpYSjIKWlhJdFkyRkFNVFU1T1RrNE9EYzFNekFlRncweU1EQTVNVE13T1RFNU1UTmFGdzB6TURBNU1URXdPVEU1TVROYQpNQ014SVRBZkJnTlZCQU1NR0dzemN5MXpaWEoyWlhJdFkyRkFNVFU1T1RrNE9EYzFNekJaTUJNR0J5cUdTTTQ5CkFnRUdDQ3FHU000OUF3RUhBMElBQk9rc2IvV1FEVVVXczJacUlJWlF4alN2MHFseE9rZXdvRWdBMGtSN2gzZHEKUzFhRjN3L3pnZ0FNNEZNOU1jbFBSMW1sNXZINUVsZUFOV0VTQWRZUnhJeWpJekFoTUE0R0ExVWREd0VCL3dRRQpBd0lDcERBUEJnTlZIUk1CQWY4RUJUQURBUUgvTUFvR0NDcUdTTTQ5QkFNQ0EwZ0FNRVVDSVFERjczbWZ4YXBwCmZNS2RnMTF1dCswd3BXcWQvMk5pWE9HL0RvZUo0SnpOYlFJZ1JPcnlvRXMrMnFKUkZ5WC8xQmIydnoyZXpwOHkKZ1dKMkxNYUxrMGJzNXcwPQotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0tCg==
    server: https://127.0.0.1:6443
  name: default
contexts:
- context:
    cluster: default
    user: default
  name: default
current-context: default
kind: Config
preferences: {}
users:
- name: default
  user:
    password: 485084ed2cc05d84494d5893160836c9
    username: admin
Type `exit` to exit the virtual OS and you will find yourself back in your computer's session. Create the file (or replace it if it already exists) `~/.kube/config` and paste the contents of the `k3s.yaml` output there.
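Alternatively, if you want to skip the manual copy-paste, a one-liner like the following can pipe the file straight into place (this assumes the default Vagrant setup from this project and will overwrite any existing `~/.kube/config`):
$ mkdir -p ~/.kube
$ vagrant ssh -c "sudo cat /etc/rancher/k3s/k3s.yaml" > ~/.kube/config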
Afterwards, you can test that `kubectl` works by running a command like `kubectl describe services`. It should not return any errors.
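For example, both of the following should complete without errors, assuming the kubeconfig was copied correctly:
$ kubectl describe services
$ kubectl get nodes   # should list the single K3s node running in the Vagrant VM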
Each microservice is located in the project's root directory, more specifically in the `modules` folder.
Deploy each microservice in the following order:
1. Add the chart repository below.
$ helm repo add bitnami https://charts.bitnami.com/bitnami
$ helm repo list
NAME URL
bitnami https://charts.bitnami.com/bitnami
2. Install the Kafka Helm chart.
$ helm install kafka-release bitnami/kafka
You'll see output like this:
NAME: kafka-release
LAST DEPLOYED: Wed Dec 23 19:33:16 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
...
3. After a while, check that Kafka is running inside the Kubernetes cluster by entering the command below:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE
kafka-release-zookeeper-0 1/1 Running 0 7m30s
kafka-release-0 1/1 Running 1 7m30s
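Optionally, you can smoke-test the broker by creating and listing a topic from inside the Kafka pod. The topic name `test-topic` is just an example; the Bitnami image includes the standard Kafka CLI scripts on its PATH:
$ kubectl exec -it kafka-release-0 -- kafka-topics.sh --create --topic test-topic --bootstrap-server localhost:9092
$ kubectl exec -it kafka-release-0 -- kafka-topics.sh --list --bootstrap-server localhost:9092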
Now you will be able to deploy the services. For a better understanding, check the architectural diagram.
- Get into the `01-person-microservice` folder and run
$ kubectl apply -f deployment/
- When the pods are running, execute the script located in `01-person-microservice/scripts/run_db_command.sh` with the pod identifier: `sh scripts/run_db_command.sh <POSTGRES_DB_POD_NAME>`. This step will populate the Postgres database. (The pod name will be something like `postgres-person-xxxxxid-pod`.)
- Access http://localhost:30001/api/persons for testing (see the example below).
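For reference, a minimal walkthrough of these steps might look like the following; the pod name is a placeholder, so substitute the one reported by `kubectl get pods`:
$ kubectl get pods | grep postgres-person        # find the Postgres pod name
$ sh scripts/run_db_command.sh postgres-person-xxxxxid-pod
$ curl http://localhost:30001/api/persons        # should return a JSON list of persons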
- Get into the `02-connection-microservice` folder and run
$ kubectl apply -f deployment/
- After you have the pods running, execute the script located in `02-connection-microservice/scripts/run_db_command.sh` with the pod identifier: `sh scripts/run_db_command.sh <POSTGRES_DB_POD_NAME>`. (The pod name will be something like `postgres-geoconnections-xxxxxid-pod`.) This step will populate the Postgres database.
- Access http://localhost:30002/api/persons/600/connection?start_date=2020-01-01&end_date=2020-12-30&distance=5 for testing. You will not see any response yet because there are no records in the database; see the note on quoting below.
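Note that the query string contains `&`, so quote the URL if you test it from a shell:
$ curl "http://localhost:30002/api/persons/600/connection?start_date=2020-01-01&end_date=2020-12-30&distance=5"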
- Get into the `03-location-event-microservice` folder and run `kubectl apply -f deployment/`
- Get into the `04-location-processor-microservice` folder and run `kubectl apply -f deployment/`
- Get into the `05-frontend` folder and run `kubectl apply -f deployment/`
Wait until you have every pod running, then access http://localhost:30000/ (see the command below).
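You can watch the rollout until every pod reports `Running`:
$ kubectl get pods --watch   # press Ctrl+C once all pods report Running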
Once the project is up and running, you should be able to send requests to the gRPC location-event-microservice.
`kubectl get pods` and `kubectl get services` should both return `udaconnect-app`, `udaconnect-api`, and `postgres`.
These pages should also load on your web browser:
- http://localhost:30002/ - OpenAPI Documentation
- http://localhost:30002/api/persons/1/connection?start_date=2020-01-01&end_date=2020-12-30&distance=5
- http://localhost:30001/api/persons - Base path for person microservice API
- http://localhost:30000/ - Frontend ReactJS Application
To send records, execute the Python file for the location-event-microservice gRPC client.
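A minimal sketch of running it, assuming the client script sits under the microservice's folder; the path, filename, and `requirements.txt` below are assumptions, so check the repository for the actual script:
# Path and filename are assumptions -- check the repository for the actual gRPC client script.
$ cd modules/03-location-event-microservice/grpc-client
$ pip install -r requirements.txt   # install grpcio dependencies, if a requirements file is provided
$ python3 main.py                   # sends sample location records to the gRPC service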