A lightweight, privacy-focused web app that demonstrates how to provision application accounts: generate bcrypt hashes, build SQL statements, run quick duplicate checks, and optionally execute the statements against sandbox PostgreSQL environments. This repository is a safe rewrite of an internal tool and runs entirely on your machine with dummy data. It is built for internal teams and can be deployed on standard CPU-only infrastructure; no GPU is required. The app demonstrates AI Ops principles and the integration of LLMs into infrastructure processes.
- Single-page UI with copy-ready SQL transactions and password previews.
- Real bcrypt hashing via the `/hash` backend endpoint.
- AI-powered field extraction via Ollama (login, full name, and email in one click).
- Multiple demo environments (`PROD`, `STAGING`, `DEV`) powered by Docker.
- Log rotation and password-hash masking for safer diagnostics.
- Pluggable auth: demo mode out-of-the-box, OpenID Connect when needed.
- CPU-friendly deployment: the default Ollama setup runs comfortably without a GPU.
- Jenkins/Ansible-ready design for real-world automation pipelines.
A short animation showing the extraction, SQL generation, and execution flow:
This section explains the overall workflow of the app and what happens under the hood.
- Launch the stack with `docker-compose up --build`; this brings up a Flask backend and three isolated PostgreSQL environments.
- The AI extractor parses a request like “Need access for Jana Novaková (jana.novakova@example.com)” and automatically fills in the login, full name, and email fields.
- The app then generates secure credentials, produces a ready-to-run SQL transaction, and runs a quick duplicate check in the target database.
- Once validated, the SQL transaction can be executed directly against the demo databases to simulate account creation.
- Application logs are rotated automatically, and sensitive data such as password hashes is masked, reflecting production-grade security and reliability.
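The credential-generation and SQL-preview steps above can be sketched as follows. The table and column names are assumptions for illustration, not the app's actual schema:

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password with the cryptographically secure secrets module."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*"
    return "".join(secrets.choice(alphabet) for _ in range(length))

def build_insert_sql(login: str, full_name: str, email: str, password_hash: str) -> str:
    """Render a copy-ready SQL transaction like the UI's preview.
    Table/column names are illustrative; for real execution, prefer
    parameterized queries over string interpolation."""
    return (
        "BEGIN;\n"
        "INSERT INTO users (login, full_name, email, password_hash)\n"
        f"VALUES ('{login}', '{full_name}', '{email}', '{password_hash}');\n"
        "COMMIT;"
    )

preview = build_insert_sql(
    "jnovakova", "Jana Novakova", "jana.novakova@example.com", "$2b$12$..."
)
print(preview)
```

Wrapping the `INSERT` in an explicit `BEGIN`/`COMMIT` keeps the copy-paste statement atomic when run by hand in `psql`.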
- Python 3.11+
- Docker & Docker Compose (for the full stack)
- Ollama running locally with a pulled model: `ollama pull gemma3:4b`
- Ollama’s CPU backend is sufficient; GPU acceleration is optional.
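Once the model is pulled, the kind of extraction call the app makes can be sketched against Ollama's `/api/generate` HTTP endpoint. The prompt wording and the JSON field names are this sketch's assumptions:

```python
import json
from urllib import request

OLLAMA_BASE_URL = "http://localhost:11434"  # Ollama's default listen address

PROMPT_TEMPLATE = (
    "Extract login, full_name and email from this access request. "
    "Answer with JSON only.\n\nRequest: {text}"
)

def build_generate_payload(text: str, model: str = "gemma3:4b") -> dict:
    """Payload for Ollama's /api/generate endpoint (non-streaming)."""
    return {"model": model, "prompt": PROMPT_TEMPLATE.format(text=text), "stream": False}

def parse_extraction(response_text: str) -> dict:
    """Parse the model's JSON answer; the field names are assumptions of this sketch."""
    fields = json.loads(response_text)
    return {k: fields.get(k, "") for k in ("login", "full_name", "email")}

def extract(text: str) -> dict:
    """Round-trip: send the prompt to Ollama and parse the reply."""
    req = request.Request(
        f"{OLLAMA_BASE_URL}/api/generate",
        data=json.dumps(build_generate_payload(text)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        body = json.loads(resp.read())
    return parse_extraction(body["response"])
```

Setting `"stream": False` makes Ollama return one complete JSON object instead of a stream of partial tokens, which keeps the parsing step simple.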
```bash
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
export DB_USER=admin
export DB_PASSWORD=admin123
python server_with_extraction.py
```
Open http://localhost:5000 to load the UI.
```bash
export DB_USER=admin
export DB_PASSWORD=admin123
docker-compose up --build
```
The Flask server is available at http://localhost:5000, and three isolated PostgreSQL instances run on:
- PROD: `postgresql://admin:admin123@localhost:5542/access_portal_prod`
- STAGING: `postgresql://admin:admin123@localhost:5433/access_portal_staging`
- DEV: `postgresql://admin:admin123@localhost:5434/access_portal_dev`
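For scripting against the sandboxes, the connection URLs above can be split into driver-ready parameters with the standard library alone. A small sketch:

```python
from urllib.parse import urlsplit

# Demo environment URLs, copied from the docker-compose setup above.
ENVIRONMENTS = {
    "PROD": "postgresql://admin:admin123@localhost:5542/access_portal_prod",
    "STAGING": "postgresql://admin:admin123@localhost:5433/access_portal_staging",
    "DEV": "postgresql://admin:admin123@localhost:5434/access_portal_dev",
}

def connection_params(env: str) -> dict:
    """Split an environment's URL into the parts a DB driver needs."""
    url = urlsplit(ENVIRONMENTS[env])
    return {
        "host": url.hostname,
        "port": url.port,
        "dbname": url.path.lstrip("/"),
        "user": url.username,
        "password": url.password,
    }

print(connection_params("DEV"))
# e.g. psycopg2.connect(**connection_params("DEV")) would then open the sandbox DB
```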
Run quick smoke checks after the server starts:
```bash
curl -s localhost:5000/api/config
curl -s "localhost:5000/api/db/test?app=ACCESS_PORTAL&env=PROD"
curl -s -X POST localhost:5000/api/db/execute \
  -H 'Content-Type: application/json' \
  -d '{"app":"ACCESS_PORTAL","env":"DEV","query":"SELECT 1;"}'
```
- `config.py` holds app/env definitions. Update `APPS` to point at your own databases (keep `docker-compose.yml` in sync).
- Authentication: by default `AUTH_MODE=demo` auto-signs in a dummy user. Set `AUTH_MODE=oidc` together with the Keycloak variables (`KEYCLOAK_SERVER_URL`, `KEYCLOAK_REALM`, `KEYCLOAK_CLIENT_ID`, `KEYCLOAK_CLIENT_SECRET`) to enable SSO.
- AI extraction: defaults to the `gemma3:4b` model. Adjust `OLLAMA_BASE_URL` and `LLM_MODEL_NAME` if you run a different model/server. For CPU-only servers, set `OLLAMA_NUM_PARALLEL=1` (or similar) to keep inference responsive.
- Logging: `LOG_FILE` and `LOG_LEVEL` configure application logs.
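How such a `config.py` might apply these environment overrides can be sketched as below. Only the variable names come from this README; the default values are illustrative (11434 is Ollama's standard port):

```python
import os

def load_settings() -> dict:
    """Read the documented environment overrides, with demo-friendly defaults."""
    return {
        "AUTH_MODE": os.getenv("AUTH_MODE", "demo"),  # "demo" or "oidc"
        "OLLAMA_BASE_URL": os.getenv("OLLAMA_BASE_URL", "http://localhost:11434"),
        "LLM_MODEL_NAME": os.getenv("LLM_MODEL_NAME", "gemma3:4b"),
        "LOG_FILE": os.getenv("LOG_FILE", "app.log"),
        "LOG_LEVEL": os.getenv("LOG_LEVEL", "INFO"),
    }
```

Reading everything through `os.getenv` with a default keeps the app runnable out of the box while letting Docker or CI inject real values.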
- `GET /api/config` – list demo apps, environments, and auth mode.
- `POST /hash` – return a bcrypt hash for the provided password.
- `POST /api/extract` – use Ollama to parse login/name/email.
- `GET /api/db/test` – run a connectivity check and list sample users.
- `POST /api/db/execute` – execute arbitrary SQL (intended for sandbox use).
- `POST /api/db/user_exists` – check for duplicates by login and/or email.
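A minimal Python client for the executor endpoint might look like this. The request payload mirrors the curl smoke check above; the response is returned raw because its exact format is not specified here:

```python
import json
from urllib import request

BASE_URL = "http://localhost:5000"  # the demo server from docker-compose

def build_execute_payload(app: str, env: str, query: str) -> bytes:
    """Encode the JSON body that POST /api/db/execute expects."""
    return json.dumps({"app": app, "env": env, "query": query}).encode("utf-8")

def execute_sql(app: str, env: str, query: str) -> bytes:
    """POST a statement to the sandbox executor; requires the stack to be running."""
    req = request.Request(
        f"{BASE_URL}/api/db/execute",
        data=build_execute_payload(app, env, query),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return resp.read()
```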
- `server_with_extraction.py` – Flask backend (auth, hashing, Ollama orchestration, DB helpers).
- `index_with_extraction.html` / `script.js` / `extraction.js` – single-page UX for extraction, duplicate checks, SQL previews, and execution.
- `docker-compose.yml` – three isolated Postgres services plus the Flask app.
- `init.sql` – schema plus seed users copied into each database at container start.
- `config.py` – environment overrides, Keycloak toggle, Ollama settings, logging targets.
- The stack integrates readily into existing CI/CD pipelines (e.g., Jenkins or GitLab CI) and Keycloak/Active Directory environments.
- Ollama connection refused: start the Ollama service (`ollama serve`) and pull the configured model (`ollama pull gemma3:4b`).
- Database errors: ensure `docker-compose` is up and the ports above are free; verify `DB_USER`/`DB_PASSWORD`.
- Auth redirect loops: if you set `AUTH_MODE=oidc`, double-check all Keycloak settings and redirect URLs.
- Shipping demo credentials is intentional. Replace them before targeting non-demo infrastructure.
- `/api/db/execute` runs whatever SQL you send. Keep it behind a firewall and use trusted input only.
- Password hashes and similar secrets are masked in application logs, and log rotation is enabled by default.
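Hash masking of this kind is commonly implemented as a logging filter combined with a rotating handler. A sketch, with the regex and logger names being illustrative:

```python
import logging
import re
from logging.handlers import RotatingFileHandler

# Matches bcrypt hashes: "$2a$"/"$2b$"/"$2y$", a 2-digit cost, then 53 hash chars.
BCRYPT_RE = re.compile(r"\$2[aby]\$\d{2}\$[./A-Za-z0-9]{53}")

class HashMaskingFilter(logging.Filter):
    """Mask bcrypt hashes before a record reaches any handler."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.msg = BCRYPT_RE.sub("***MASKED***", str(record.msg))
        return True  # keep the record; only its message changed

def build_logger(log_file: str = "app.log", level: str = "INFO") -> logging.Logger:
    """Rotating, masked application logger (file name and sizes are illustrative)."""
    logger = logging.getLogger("access-portal-demo")
    logger.setLevel(level)
    handler = RotatingFileHandler(log_file, maxBytes=1_000_000, backupCount=3)
    handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(message)s"))
    logger.addHandler(handler)
    logger.addFilter(HashMaskingFilter())
    return logger
```

Attaching the filter to the logger (rather than one handler) ensures every output path sees the masked message.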
