Pili is a Python Flask application with a strong inclination for social network and blogging features.


  1. Users

    • Registration
    • Authentication
    • Resetting/changing password
    • Creating/changing profile
  2. Roles

    • Each user is assigned one of the 7 predefined roles
    • Each role has a set of permissions
  3. Posts

    • Tagging
    • Categorization
    • File upload
  4. Comments

    • Write/delete comments
    • Screen/unscreen comments (screened comments are not seen by non-moderator users; all new comments can be set screened by default)
    • Disable/enable comments (non-moderators see that a disabled comment exists, but its content is hidden by the moderator)
    • Replies to comments (users see replies to their comments)
  5. Following

    • Follow/unfollow other users to customize a feed
  6. Likes

    • Authenticated users with the FOLLOW permission can like/unlike posts or comments
  7. Notifications

    • Get notifications from platform administrators
    • See replies to your comments
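Role-based permissions like the ones above are typically implemented in Flasky-derived apps as an integer bitmask. The sketch below illustrates the idea; the permission names and values here are assumptions, not necessarily Pili's actual constants:

```python
# Hypothetical permission bitmask in the style of Flasky's Permission class;
# the exact names and values in Pili may differ.
class Permission:
    FOLLOW = 1
    COMMENT = 2
    WRITE = 4
    MODERATE = 8
    ADMIN = 16


class Role:
    """Simplified stand-in for the ORM Role model."""

    def __init__(self, name, permissions=0):
        self.name = name
        self.permissions = permissions

    def add_permission(self, perm):
        if not self.has_permission(perm):
            self.permissions += perm

    def has_permission(self, perm):
        # bitwise AND checks whether the permission bit is set
        return self.permissions & perm == perm


moderator = Role("Moderator",
                 Permission.FOLLOW | Permission.COMMENT |
                 Permission.WRITE | Permission.MODERATE)
print(moderator.has_permission(Permission.MODERATE))  # True
print(moderator.has_permission(Permission.ADMIN))     # False
```

Storing permissions as a single integer keeps the role check to one bitwise operation per request.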


Pili App would not be possible without Miguel Grinberg's Flasky App, developed as an example project for his excellent Flask Web Development book published by O'Reilly Media in 2014.

The application comes with 3rd-party libraries preinstalled:

  1. Bootstrap 3 Datetimepicker
  2. Typeahead.js
  3. Bootstrap Tagsinput

These libraries are found under:


The libraries belong to their owners and should not be considered part of the application.

(Optional) Kubernetes Cluster Setup



Helm is a package manager for Kubernetes. Install Helm and initialize it with:

helm init --history-max 200

Resource configuration in Minikube

Minikube starts with 2 CPUs, 2 GB RAM and a 20 GB disk by default. Although that is sufficient in most cases, sometimes more or fewer resources are needed. You may start your local cluster with arguments (see more options with minikube start -h):

minikube start --cpus 4 --memory 4096 --disk-size 20g

To make config options permanent you may edit the ~/.minikube/config/config.json file or set the options from the minikube CLI (see more with minikube config -h):

minikube config set cpus 4
minikube config set memory 4096
minikube config set disk-size 20g

Virtual Machine Driver

On a GNU/Linux machine install the kvm2 driver and use it as the VM driver:

minikube config set vm-driver kvm2

Beware! In order to improve VM performance, further optimizations for KVM may be needed, e.g. enabling huge pages. See the KVM article for more information.

Monitoring in Minikube

Running k8s with a bunch of bloodthirsty services may require a tool for resource monitoring. In the case of minikube, the heapster and metrics-server addons should be activated:

# alternatively use minikube addons enable <addon-name>
minikube config set heapster true
minikube config set metrics-server true

Open heapster with:

# credentials: admin/admin; open the needed dashboard, e.g. cluster
minikube addons open heapster

Deployment with Kubernetes

See Kubernetes configs in etc/k8s/ directory. Assume the following commands are run within that directory.


Install Helm, a package manager for Kubernetes. It's used to set up Redis, RabbitMQ and PostgreSQL.


  1. Create config file under etc/k8s/dev/
  2. Install stable/redis helm chart:
# omit --name option or use SemVer for versioning
# make sure to specify redis hosts correctly in application's config files and config maps:
# <your-release-name>-redis-master
# <your-release-name>-redis-slave
helm install --name redis stable/redis --values etc/k8s/dev/


  1. Create config file under etc/k8s/dev/
  2. Install stable/rabbitmq helm chart:
# Be patient! It may take time
helm install --name messages -f etc/k8s/dev/ stable/rabbitmq

  3. Make sure everything is okay by forwarding RabbitMQ's Management Plugin port to the host machine and checking the service status:

kubectl port-forward --namespace default svc/pili-rabbitmq 15672:15672
echo "URL :"


  1. (Optionally) Apply PersistentVolume and PersistentVolumeClaim for persistent queue storage:

# If PV/PVC not created explicitly, helm creates its own resources for persistent storage.
# Beware! stable/postgresql helm chart ignores existing PVC for replication nodes and creates its own
kubectl apply -f etc/k8s/
kubectl apply -f etc/k8s/

  2. Create config file under etc/k8s/dev/
  3. Install stable/postgresql helm chart:

# Be patient! It may take some time
# PV/PVC created automatically by helm
helm install --name db -f etc/k8s/dev/ stable/postgresql

# Existing PV/PVC
helm install --name db -f etc/k8s/dev/ stable/postgresql

  4. Make sure everything is okay by connecting to the database:

# Get password
export POSTGRES_PASSWORD=$(kubectl get secret --namespace default db-postgresql \
  -o jsonpath="{.data.postgresql-password}" | base64 --decode)

# Connect to a master node (read/write)
kubectl run db-postgresql-client --rm --tty -i --restart='Never' --namespace default \
  --image --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host db-postgresql -U pili -d pili

# Connect to a slave node (read only)
kubectl run db-postgresql-client --rm --tty -i --restart='Never' --namespace default \
  --image --env="PGPASSWORD=$POSTGRES_PASSWORD" --command -- psql --host db-postgresql-read -U pili -d pili


  1. Add environment variables as a ConfigMap:
kubectl create configmap pili-config --from-env-file=etc/k8s/dev/
  2. Make sure the config is added correctly:
kubectl get configmap pili-config -o yaml
kubectl describe configmap pili-config
  3. Add private docker registry credentials as a Secret using local ~/.docker/config.json:
kubectl create secret generic registry-credentials
  4. Make sure the secret is added correctly:
kubectl get secret registry-credentials --output="jsonpath={.data..dockerconfigjson}" | base64 --decode

Persistent storage

  1. Create a mount point in the cluster:
minikube ssh sudo mkdir -p /mnt/data/uploads
  2. Create PersistentVolume:
kubectl apply -f etc/k8s/dev/
  3. Create PersistentVolumeClaim:
kubectl apply -f etc/k8s/dev/

Pili backend app

  1. Apply Deployment:
kubectl apply -f etc/k8s/dev/
  2. Make sure the deployment is applied:
kubectl get pods
  3. Apply Service:
kubectl apply -f etc/k8s/dev/
  4. Make sure the service has started:
kubectl describe service pili
minikube service pili


  1. Apply Deployment:
kubectl apply -f etc/k8s/dev/


  1. Apply Deployment:
kubectl apply -f etc/k8s/dev/
  2. Apply Service:
kubectl apply -f etc/k8s/dev/
  3. Check the service is working:
minikube service flower

Nginx Ingress

  1. Enable the Ingress addon on minikube:
minikube addons enable ingress
  2. Apply the Ingress manifest:
kubectl apply -f etc/k8s/dev/
  3. After a while, get the ingress IP-address:
kubectl get ingress
  4. Add the IP-address to /etc/hosts:
  5. Go to the site and check everything works as expected

Deployment with Docker

Local development

  1. Install docker>=18.06 and docker-compose>=1.23.0

  2. Set the environment variable PILI_CONFIG=development (you can place it in a .env file in the root directory of the project)

  3. Create file /etc/env/development.env and save environment variables needed for the app, e.g.:

    SSL_DISABLE=1  # you don't need this in localhost
    DATABASE_URL=postgresql://pili:pili@db/pili  # use DB as docker-compose service
    CELERY_INSTEAD_THREADING=True  # use celery service
    CELERY_BROKER_URL=amqp://guest:guest@rabbitmq:5672/  # use RabbitMQ as celery's broker
    CELERY_RESULT_BACKEND=redis://redis:6379/10  # celery result backend
    FLOWER_PORT=5678  # monitoring tool for celery
  4. Run services with docker-compose up

  5. Open the service in a browser: http://localhost:8080

  6. Open celery monitoring in a browser: http://localhost:5678

Use make for routine operations:

  1. Start/stop docker services with make up and make down respectively
  2. Run linters with make lint
  3. Run mypy static analysis tool with make mypy
  4. Format code with black formatter


The project uses Circle CI for CI/CD. As its final step, CI/CD pushes the docker image to a private docker registry. The image can then be used in docker run, docker-compose or in a Kubernetes cluster.

(Deprecated) Local deployment

This section is considered deprecated; see k8s or DockerDeployment for the suggested deployment model.

Environment setup

Application's deployment follows the same steps as any other large Flask application.

Setting up environment basically means:

  1. Installing dependencies (Python packages)
  2. Editing application's configuration files
  3. Exporting shell environment variables

List of dependencies is made up of several parts:

  1. Common dependencies
  2. Dependencies specific for the environment (built upon common dependencies):
    • Development
    • Production (Unix server)
    • Heroku

Dependencies lists are found under:


virtualenv can be used for creating a virtual environment in the app's working directory in order to install the aforementioned dependencies:

$ virtualenv --python=python3 venv

Then virtual environment can be activated/deactivated:

$ source venv/bin/activate
(venv) $ deactivate

Dependencies can then be installed using pip:

(venv) $ pip install -r requirements/unix[prod|dev|...].txt

App's config file

The application makes use of environment variables. The whole list of such variables can be found in

These environment variables are set using shell-specific commands, such as export in bash or setenv in csh:

(venv) $ export VARIABLE=value

IMPORTANT! Application also relies on .hosting.env file that is to be created by the user in the app's working directory. File format is the following:

ENVVARIABLE=value of the environment variable

.hosting.env is mandatory. It can also be used in production when writing systemd service files (with the EnvironmentFile directive).

IMPORTANT! Although environment variables found in .hosting.env are set for the application, users cannot rely on this when working with Celery workers. In this case environment variables are to be set in Celery's own configuration (production) or with the shell's export command (development).
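For illustration, a minimal loader for a KEY=value file like .hosting.env might look like the sketch below. This is only a sketch of the idea; the app's actual reader may behave differently:

```python
import os


def load_env_file(path: str) -> dict:
    """Parse a KEY=value env file (like .hosting.env) and export its
    variables into os.environ. Illustrative sketch, not Pili's actual loader."""
    variables = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            # skip blank lines and comments
            if not line or line.startswith("#"):
                continue
            # split on the first '=' only, so values may contain '='
            key, _, value = line.partition("=")
            variables[key.strip()] = value.strip()
    os.environ.update(variables)
    return variables
```

Splitting on the first `=` only means values such as connection URLs with embedded `=` characters survive intact.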

Database deployment

Application uses Flask-Migrate for database migrations with Alembic. Database deployment is made up of the following steps:

  1. Create all databases used by the application, create migration repository:

    (venv) $ python initialize
  2. Generate an initial migration, apply it to the database, then insert roles and add application's administrator:

    (venv) $ python deploy

Run application

Now that the application is configured and the database and migration repository are created, the last steps are needed to get the application running:

  1. Start Celery workers with:

    (venv) $ celery worker -A celery_worker.celery --loglevel=info
  2. Start development server:

    (venv) $ python runserver
  3. Go to and enjoy!

When application models change

Every time the database models (app/ change, do the following:

(venv) $ python db migrate [--message MESSAGE]
(venv) $ emacs $( ls -1th migrations/versions/*.py | head -1 ) # check and edit migration
(venv) $ python db upgrade

Deployment in production

This section is considered deprecated; see DockerDeployment for the suggested deployment model.

Reverse-proxy and Application server

Flask's built-in server is not suitable for production. There are quite a few deployment options for production environment, both self-hosted and PaaS.

Being a WSGI application, Flask requires a WSGI application server (such as uWSGI or Gunicorn), which usually works in conjunction with a reverse-proxy server such as Nginx that serves static files and manages requests. That takes load off the application server and guarantees better performance:

Client request <-> Reverse-Proxy <-> Application Server ( OR socket)
    ^                   |
    └--- static files --┘

Configuration examples

There are configuration examples under:


These examples include:

  1. Celery systemd service file:
    • pili-celery.conf
    • pili-celery.service
  2. Nginx configuration:
    • pili-nginx.conf
  3. uWSGI systemd service file, uWSGI ini-config file:
    • pili-uwsgi.conf
    • pili-uwsgi.ini
    • pili-uwsgi.service
  4. Git hooks for deployment from a repository:


The aforementioned systemd service file examples make use of two directories:


The best way to create these directories is using the following systemd directives:

PermissionsStartOnly=true # run ExecStartPre with root permissions
ExecStartPre=-/usr/bin/mkdir -p /var/log/pili
ExecStartPre=-/usr/bin/mkdir -p /var/run/pili

Using systemd service files

When tailored to your needs, provided systemd service files can be used this way:

  1. Go to systemd's directory for custom unit files:

    $ cd /etc/systemd/system
  2. Create a symlink to a unit file:

    $ ln -s /var/www/pili/your.service your.service
  3. Reload systemd daemon:

    $ sudo systemctl daemon-reload
  4. Start your service with:

    $ sudo systemctl start your.service
  5. Make sure it's running:

    $ sudo systemctl status your.service
  6. If the service has failed, take a look at systemd's logs:

    $ sudo journalctl -xe


App CLI entrypoint

The containerized application is installed as an editable package with pip install -e .. This ensures that the click entrypoint pili is also registered in $PATH. Execute pili --help in the container to get some help:

$ pili
Usage: pili [OPTIONS] COMMAND [ARGS]...

  Pili App command line tool

Options:
  --config TEXT  Configuration name
  --help         Show this message and exit.

Commands:
  provision  Provision Application
  server     Run Flask Development Server
  shell      Run Python Shell
  test       Run Tests
  uwsgi      Run uWSGI Application server

Each command in turn also has a --help argument, e.g.:

pili --config=production provision --help
Usage: pili provision [OPTIONS]

  Provision Application

Options:
  --db_init / --no-db_init        Initialize migration repository for DB
  --db_migrate / --no-db_migrate  Generate initial DB migration
  --db_upgrade / --no-db_upgrade  Apply migration to the DB
  --db_prepopulate / --no-db_prepopulate
                                  Prepopulate DB with essential data
  --help                          Show this message and exit.

Running Shell in Application Context

Some routine operations are much more easily done using a Python shell with the application context loaded: a database session, ORM models, etc. You can run the Python shell as follows:

$ pili --config=<your-config> shell

Operation examples

Look up the body of the comment with id 10:

>>> Comment.query.filter( == 10).first().body

Get a list of users with the role 'Writer':

>>> [u for u in Role.query.filter( == 'Writer').first().users]

Get a list of comments to the post with id 111:

>>> [c for c in Post.query.filter( == 111).first().comments]

Get a list of replies to the comment containing the word 'flask':

>>> [r for r in Comment.query.filter("%flask%")).first().replies]

Get the parent comment of the reply with id 29 (the parent attribute exists due to backref='parent' in the models):

>>> Comment.query.filter( == 29).first().parent

Get all replies written by the user 'Pilosus' in descending order (sorted by time of publication):

>>> user = User.query.filter(User.username == 'Pilosus').first()
>>> Comment.query.join(User, Comment.author_id ==\
... filter(Comment.parent_id.isnot(None), User.username == 'Pilosus').\
... order_by(Comment.timestamp.desc()).all()
>>> # the same but more concise
>>> Comment.query.filter(Comment.parent_id.isnot(None), == user).\
... order_by(Comment.timestamp.desc()).\
... all()

Get all replies to the comment with id 23:

>>> Comment.query.get(23).replies

Get a thread of all replies to a certain comment:

|- Comment 1
|- Comment 2
|    |- Comment 4
|    |    |- Comment 6
|    |
|    |- Comment 5
|- Comment 3

>>> # Use Depth-First Search algorithm for graphs,
>>> # implemented as a static method
>>> Comment.dfs(Comment.query.get(2), print)
<Comment 4>
<Comment 6>
<Comment 5>
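The Comment.dfs call above can be illustrated with a simplified stand-in model. The real Comment is an ORM model; here only an id and a replies list are assumed, and the traversal is a plain recursive depth-first walk:

```python
class Comment:
    """Simplified stand-in for the ORM Comment model: only id and replies."""

    def __init__(self, id, replies=None):
        self.id = id
        self.replies = replies or []

    def __repr__(self):
        return f"<Comment {self.id}>"

    @staticmethod
    def dfs(comment, visit):
        # visit each reply, then recurse into its own replies (depth-first)
        for reply in comment.replies:
            visit(reply)
            Comment.dfs(reply, visit)


# The thread from the diagram above: Comment 2 -> [4 -> [6], 5]
c6 = Comment(6)
c4 = Comment(4, [c6])
c5 = Comment(5)
c2 = Comment(2, [c4, c5])
Comment.dfs(c2, print)  # <Comment 4>, <Comment 6>, <Comment 5>
```

Passing the visitor as a callable (print, list.append, etc.) keeps the traversal reusable for both display and collection.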

Get all post likes by the user with id 1, exclude comment likes:

>>> Like.query.filter(Like.user_id==1, Like.comment_id == None).all()
>>> Like.query.filter((Like.user_id==1) & (Like.comment_id == None)).all()

Get information about 'users' table:

>>> User.__table__.columns
>>> User.__table__.foreign_keys
>>> User.__table__.constraints
>>> User.__table__.indexes

