Quickstart

Prerequisites:

- Docker and Docker Compose
- Python 3
- AWS CLI (for local cluster only)
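Before starting, a quick sanity check that the prerequisites are on your PATH can save a confusing failure later. A minimal sketch (the check_prereqs helper is hypothetical, not part of the repo):

```sh
# Hypothetical helper: verify that each named tool is installed.
check_prereqs() {
  missing=0
  for cmd in "$@"; do
    if ! command -v "$cmd" >/dev/null 2>&1; then
      echo "missing: $cmd" >&2
      missing=1
    fi
  done
  return "$missing"
}

# Report anything that still needs installing.
check_prereqs docker python3 aws || echo "install the missing tools before continuing"
```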
The local dev environment runs a PostgreSQL 16 instance and a development container with all build tooling pre-installed.
From the repository root:
```sh
cd docker
./docker_base.sh
```

This builds the springtail:base image from Dockerfile.base, which includes a patched PostgreSQL 16 (built from source with RLS support for foreign tables), Redis, the C++ toolchain, and all Ansible-provisioned dependencies.
```sh
export SPRINGTAIL_SRC=/path/to/springtail  # absolute path to your repo checkout
docker compose up -d
```

This starts two services:
| Service | Container | Description | Host Port |
|---|---|---|---|
| postgres | pg16 | PostgreSQL 16 with logical replication enabled | 5432 |
| dev | dev-springtail | Development container with build tools | 2222 (SSH) |
The dev container mounts your source tree at /home/dev/springtail and starts PostgreSQL, Redis, and SSH automatically via its entrypoint.
Shell into the dev container and run the debug build:
```sh
docker exec -it dev-springtail bash

# Inside the container:
cd ~/springtail
./vcpkg.sh   # one-time: install C++ dependencies
./debug.sh   # build debug binaries into ./debug/
```

Run the tests:

```sh
cd ~/springtail/debug
make build_tests
ctest
```

Or build and run in one step:

```sh
cmake --build debug --target check
```

The check target kills any running Springtail processes, installs SQL triggers, builds the tests, and runs them via CTest.
The integration test runner is a Python script that exercises Springtail end-to-end against a real PostgreSQL instance. It must be run from its own directory:
```sh
cd ~/springtail/python/testing
python3 test_runner.py
```

This runs the default test configuration, which includes the test sets basic, framework, preload, enum_bits, complex, numeric, query_benchmark, and recovery (with various overlay configurations).
```sh
# Run the default configuration (same as no arguments)
python3 test_runner.py

# Run a specific named configuration (e.g., nightly, github_ci_p1)
python3 test_runner.py -c nightly

# Run a single test set
python3 test_runner.py basic

# Run specific test cases within a test set
python3 test_runner.py basic test_create.sql test_insert.sql

# Run with a specific overlay
python3 test_runner.py -o small_log_rotate recovery

# Skip downloading test data from S3 (useful offline or in CI)
python3 test_runner.py --skip-downloads

# Output a JUnit XML report
python3 test_runner.py -j results.xml
```

Available test sets: basic, complex, enum_bits, framework, include_schema, large_data, live_startup, numeric, policy_roles, preload, query_benchmark, recovery, text_tables.
Available overlays: small_log_rotate, small_log_rotate_with_streaming, small_cache_size, streaming_postgres_config, integration_test_config, include_schema_config.
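For CI-style runs it can help to execute several test sets in sequence and collect one JUnit file per set, using the documented -j flag. A sketch (the run_sets wrapper is hypothetical, not part of the repo):

```sh
# Hypothetical wrapper: run each named test set and write <outdir>/<set>.xml.
# Assumes it is invoked from python/testing, where test_runner.py lives.
run_sets() {
  outdir=$1; shift
  mkdir -p "$outdir"
  rc=0
  for set in "$@"; do
    # Keep going after a failing set so every report is still produced.
    python3 test_runner.py -j "$outdir/$set.xml" "$set" || rc=1
  done
  return "$rc"
}

# Example: run_sets /tmp/results basic framework preload
```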
To tear down the dev environment and remove its data volumes:

```sh
cd /path/to/springtail/docker
docker compose down -v
```

The local cluster simulates a full multi-node Springtail deployment using Docker Compose. It runs a primary database, Redis, a mock AWS environment, and the full set of Springtail services (proxy, ingestion, FDW nodes, and controller).
All cluster commands are run from the local-cluster/ directory:

```sh
cd local-cluster
```

From the repository root, build the base service image if it doesn't already exist:

```sh
docker build -t local-cluster-img:latest -f docker/Dockerfile.local-cluster .
```

The cluster up command will also build the controller image (local-cluster-controller:latest) and the custom PostgreSQL image (postgres-custom:16) automatically if needed.
```sh
./cluster build-package /tmp/springtail-packages
```

This runs the full build and packaging process inside the base image and outputs a tarball named springtail-&lt;date&gt;-&lt;gitsha&gt;.tar.gz into the specified directory.
```sh
./cluster up /tmp/springtail-packages/springtail-<date>-<gitsha>.tar.gz
```

Optionally disable SSL for inter-service connections:

```sh
./cluster up /path/to/package.tar.gz --disable-ssl
```

Startup takes 1-2 minutes. The cluster will:
- Start the mock AWS service (Moto), Redis, and primary PostgreSQL
- Upload the package to mock S3
- Run the bootstrap container to configure secrets, Redis auth, and shared environment
- Launch the proxy, ingestion, and FDW services
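Since startup takes a minute or two, a polling loop can gate follow-on steps until the proxy accepts connections. A sketch assuming pg_isready is installed on the host and the proxy is on its documented port 55432 (wait_for_port is a hypothetical helper):

```sh
# Hypothetical helper: poll host:port with pg_isready until it answers or we give up.
wait_for_port() {
  host=$1; port=$2; tries=${3:-24}; interval=${4:-5}
  i=0
  while [ "$i" -lt "$tries" ]; do
    if pg_isready -h "$host" -p "$port" >/dev/null 2>&1; then
      echo "ready: $host:$port"
      return 0
    fi
    i=$((i + 1))
    sleep "$interval"
  done
  echo "timed out waiting for $host:$port" >&2
  return 1
}

# Example: wait_for_port localhost 55432
```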
```sh
./cluster status

# Shell into a service
./cluster sh proxy
./cluster sh ingestion
./cluster sh fdw1
./cluster sh controller

# View logs
./cluster logs proxy
./cluster logs ingestion

# Restart a service
./cluster restart proxy

# List all services
./cluster ls
```

Services are exposed on the host at these ports:
| Service | Host Port |
|---|---|
| Primary DB | 15432 |
| Redis | 16379 |
| Proxy | 55432 |
| FDW 1 | 45432 |
| FDW 2 | 45433 |
| AWS Mock (Moto) | 29999 |
| Controller API | 19824 |
Connect to the proxy from the host with any PostgreSQL client:
```sh
psql -h localhost -p 55432 -U postgres
```

```sh
# Stop Springtail services but keep the primary DB and dependencies
./cluster down

# Stop a specific service
./cluster down proxy

# Stop everything and remove all data volumes and networks
./cluster down all
```