
Operations

Tim Lam edited this page Dec 23, 2022 · 23 revisions

Introduction

This document provides details of several features associated with tagbase-server operations.

Audience

It is written primarily for system administrators rather than developers or users.

Features

Logging

tagbase-server follows a traditional logging convention in which both application events and HTTP server events are captured. Upon initialization, the tagbase-server stack begins capturing events and logging them inside the application container. If this were all that was configured, these logs would be lost if the tagbase-server application container were destroyed! Don't worry, keep reading below.

With log preservation in mind, we therefore configured a volume mount within the docker-compose.yml configuration file which persists an exact copy of the logging artifacts to the physical host machine running the tagbase-server containers. See below for more details on the logging artifacts.
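As a rough sketch, such a volume mount looks like the following in docker-compose.yml (the service name and container path here are illustrative; check the actual docker-compose.yml in the repository for the real values):

```yaml
services:
  tagbase_server:
    # ... image, ports, environment, etc. ...
    volumes:
      # Persist the container's log directory to ./logs on the host,
      # so logs survive container destruction
      - ./logs:/usr/src/app/logs
```

Because the host directory outlives any individual container, the logs remain available even after `docker-compose down`.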

What are the log artifacts?

When you start tagbase-server, you will now see a logs directory on the physical host machine which contains the following

logs % tree
.
├── gunicorn_access_log.txt        <<<<< gunicorn WSGI HTTP server access requests
├── gunicorn_error_log.txt         <<<<< gunicorn WSGI HTTP server errors
└── tagbase_server_log.txt         <<<<< tagbase-server Flask application logs

This log separation aims to provide an intuitive, simple mechanism for understanding how tagbase-server is being used, what the application is being asked to do, how it is doing it, and, maybe most importantly, what is going wrong!
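For example, the gunicorn access log can be mined for a quick picture of traffic. The snippet below parses one access-log line with a regular expression; the sample line and the pattern assume gunicorn's default (Apache combined style) access-log format, so adjust the pattern if you have customized `--access-logformat`:

```python
import re

# An illustrative access-log line in gunicorn's default format
line = '10.0.0.5 - - [23/Dec/2022:10:15:30 +0000] "POST /tagbase/api/ingest HTTP/1.1" 200 512 "-" "curl/7.79.1"'

# Capture the client address, timestamp, request line, and status code
pattern = re.compile(r'^(\S+) \S+ \S+ \[([^\]]+)\] "([^"]*)" (\d{3})')
match = pattern.match(line)
if match:
    client, timestamp, request, status = match.groups()
    print(client, status, request)
```

Applied over the whole of gunicorn_access_log.txt, the same pattern makes it easy to count requests per endpoint or spot runs of error statuses.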

If you have suggestions or requests concerning logging, please open an issue.

TagbaseDB Backup and Recovery

You will be glad to know that, similar to how logs (see above) are streamed to a persistent volume mount on the physical host, so is all PostgreSQL data. A physical duplicate is created upon PostgreSQL container initialization and is stored on the physical host in a directory named postgres-data, which contains something similar to the following...

postgres-data % tree -L 2
.
└── pgdata
    ├── PG_VERSION
    ├── base
    ├── global
    ├── pg_commit_ts
    ├── pg_dynshmem
    ├── pg_hba.conf
    ├── pg_ident.conf
    ├── pg_logical
    ├── pg_multixact
    ├── pg_notify
    ├── pg_replslot
    ├── pg_serial
    ├── pg_snapshots
    ├── pg_stat
    ├── pg_stat_tmp
    ├── pg_subtrans
    ├── pg_tblspc
    ├── pg_twophase
    ├── pg_wal
    ├── pg_xact
    ├── postgresql.auto.conf
    ├── postgresql.conf
    ├── postmaster.opts
    └── postmaster.pid

18 directories, 7 files

Duplicating your backup

We cannot stress enough how important it is to create another duplicate copy of the physical postgres-data and logs directories. Ideally they would be stored on a separate physical machine or in the cloud. It is wise to back up these artifacts frequently to avoid data loss. For example, one could write a cron job which uses scp to duplicate the artifacts to another machine every hour. This will give you confidence, improve your operational readiness, and give you a fighting chance of recovering quickly in a disaster recovery situation.
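An hourly cron job along these lines might look like the following (the paths, remote user, and host are placeholders you would substitute for your own environment):

```shell
# Illustrative crontab entry (edit with `crontab -e`): at the top of every
# hour, copy the postgres-data and logs directories to a remote machine.
# All paths, the user, and the host below are assumptions.
0 * * * * scp -r /opt/tagbase-server/postgres-data /opt/tagbase-server/logs backup-user@backup-host:/backups/tagbase/
```

For large databases, rsync over SSH is a reasonable alternative to scp since it only transfers changed files; either way, consider stopping or quiescing PostgreSQL (or using pg_dump) to ensure a consistent copy.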

Real-time metadata anomaly notifications

tagbase-server can (optionally) be configured very easily to post real-time notifications to a Slack workspace channel. With Slack now being the de facto workplace messaging platform, integration with tagbase-server can really streamline operations tasks because

  1. all tagbase metadata anomalies are sent to a centralized Slack channel which Admins can also join
  2. events and associated notifications are sent in real-time and can be acted upon in a prompt fashion

Prerequisite

If you don't already have one, create a Slack workspace. Also create the following two channels

  • metadata_ops: used to report metadata anomalies
  • deploy_ops: used to report (docker) deployment events

Create and install an app

Create and install an app; specifically follow the prompts (click on "Create an app" > "From an app manifest")

Add features and functionality; always save your work so it doesn't get lost!

If you wish, you can grab the Tagbase logo

Note the App Credentials

You need to add at least one feature or permission scope before you can install your app. Until you do, the Install to Workspace button will be greyed out. Simply follow the permission scope hyperlink.

Add the following scopes to the Scopes section

The Install to Workspace button will now be green so click it and add the app to the relevant channel (e.g., deploy_ops) you created earlier.

Once you complete the installation you will be rewarded with an OAuth token as below. Make note of the one you generate, as you will use it later on.

Back in your Slack client, on the left-hand side, you will now see that your app is registered

Invite your app to the channel specified above. Simply type @ in the channel, look for the app name, and tag the app. This will prompt you to add the app to the channel as follows

Send a test message to make sure everything is working fine. Note this operation will require working knowledge of Python (refer to the Python Slack API for details).

import os
from slack_sdk import WebClient
from slack_sdk.errors import SlackApiError

# OAuth token you created above
slack_token = os.environ["SLACK_BOT_TOKEN"]
client = WebClient(token=slack_token)

try:
    response = client.chat_postMessage(
        channel="C0XXXXXX",  # target channel you wish to post to
        text="Hello from your app! :tada:"
    )
except SlackApiError as e:
    # You will get a SlackApiError if "ok" is False
    assert e.response["error"]  # str like 'invalid_auth', 'channel_not_found'

If everything went well you will see something similar to the following

Configure tagbase-server with the App credentials

Simply add the following parameters to the .env configuration file

  1. SLACK_BOT_TOKEN - the OAuth token (from above)
  2. SLACK_BOT_CHANNEL - the Slack channel you wish to post your messages to
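The resulting .env entries would look something like this (both values below are placeholders; use the token and channel from your own workspace):

```shell
# .env - Slack notification settings (illustrative placeholder values)
SLACK_BOT_TOKEN=xoxb-0000000000-000000000000-XXXXXXXXXXXXXXXXXXXXXXXX
SLACK_BOT_CHANNEL=metadata_ops
```

Keep this file out of version control, since the bot token grants posting access to your workspace.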

Real-time docker event notifications

Configure tagbase-server with the Webhook credentials

This can be very useful if administrators desire insight into docker events.

  1. Create a new channel (deploy_ops) in the existing Slack workspace
  2. Set up an Incoming WebHook on the desired Slack workspace and obtain the WebHook URL.
  3. Simply add a webhook parameter to the .env configuration file with the value being the Webhook URL.
  4. Once docker-compose up is executed all docker events will now be sent to the Slack channel.

Logging in to PostgreSQL container

Obtain the container identifier

tagbase-server % docker ps
CONTAINER ID   IMAGE                           COMMAND                  CREATED         STATUS                   PORTS                                     NAMES
...
3962ed298641   tagbase-server_postgres         "docker-entrypoint.s…"   2 minutes ago   Up 2 minutes (healthy)   0.0.0.0:5432->5432/tcp                    tagbase-server_postgres_1

Open a shell in the container

docker exec -it 3962ed298641 /bin/bash

You can now access PostgreSQL via the psql interactive terminal.

To access any other PostgreSQL database you can simply use the -h (hostname) parameter.
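For example (the database and user names below are illustrative; use the values configured in your .env / docker-compose.yml):

```shell
# Inside the container: open psql as the configured user
psql -U tagbase -d tagbase

# From the host, connecting through the published port 5432
psql -h localhost -p 5432 -U tagbase -d tagbase
```

The second form works from any machine that can reach port 5432 on the host, which is also handy for connecting GUI clients such as pgAdmin.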