System drift analysis service
This is a Flask app that provides an API for drift-frontend. It listens on port 8080 by default via gunicorn. Prometheus stats are stored in a temp directory.
- All Python code must be Python 3.8 compatible
- The code should pass linting with pylint
- The code should follow formatting from black
- The code should follow import ordering from isort
- Dependencies are managed with pipenv
- Git hooks are managed with pre-commit
```shell
# install pre-commit hooks into the repo
pre-commit install --install-hooks
# run pre-commit hooks for staged files
pre-commit run
# run pre-commit hooks for all files in the repo
pre-commit run --all-files
# bump versions of the pre-commit hooks automatically
pre-commit autoupdate
# bypass the pre-commit check
git commit --no-verify
```
```shell
yum install -y pipenv
pipenv sync        # pulls in deps, creates a virtualenv, and prints next steps to run
pipenv sync --dev  # also installs development dependencies
```
With your pipenv shell activated, run `./run_unit_tests.sh` to run all unit tests. Since we use pytest, we can pass any pytest args to this script, for example:

- Run with verbose output:

```shell
./run_unit_tests.sh -vv
```

- Run only one test:

```shell
./run_unit_tests.sh -k TEST_NAME
```

You can combine pytest args in the same command, e.g. `./run_unit_tests.sh -k TEST_NAME -vv`
To run the app:

```shell
prometheus_multiproc_dir=$(mktemp -d) INVENTORY_SVC_URL=<inventory service url> ./run_app.sh
LOG_LEVEL=debug prometheus_multiproc_dir=/tmp/tempdir INVENTORY_SVC_URL=<inventory service url> ./run_app.sh
```
The `prometheus_multiproc_dir` should be a path to a directory used for sharing info between app processes. If the directory does not already exist, the app will create it.
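Ensuring the directory exists presumably amounts to something like this sketch (the function name is hypothetical, not taken from the app's code):

```python
import os

def ensure_multiproc_dir(path):
    # Create the prometheus multiprocess dir if it doesn't exist yet;
    # exist_ok=True avoids a race when several gunicorn workers start at once.
    os.makedirs(path, exist_ok=True)
    return path
```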
The same info as above, but in handy table form:

| env var name | required? | expected values | description |
|---|---|---|---|
| INVENTORY_SVC_URL | yes | URL | URL for inventory service (do not include path) |
| LOG_LEVEL | no | string | lowercase log level (`info` by default) |
| prometheus_multiproc_dir | yes | string | path to dir for sharing stats between processes |
| PATH_PREFIX | no | string | API path prefix (default: `/api`) |
| APP_NAME | no | string | API app name (default: `drift`) |
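The table above could map to configuration code along these lines; this is a sketch, and the function and key names are assumptions rather than the app's actual config module:

```python
import os

def get_config(env=None):
    # Read the documented env vars, failing fast if a required one is missing.
    env = os.environ if env is None else env
    missing = [name for name in ("INVENTORY_SVC_URL", "prometheus_multiproc_dir")
               if name not in env]
    if missing:
        raise RuntimeError("missing required env vars: %s" % ", ".join(missing))
    return {
        "inventory_svc_url": env["INVENTORY_SVC_URL"],
        "prometheus_multiproc_dir": env["prometheus_multiproc_dir"],
        # Optional settings fall back to the defaults documented above.
        "log_level": env.get("LOG_LEVEL", "info"),
        "path_prefix": env.get("PATH_PREFIX", "/api"),
        "app_name": env.get("APP_NAME", "drift"),
    }
```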
If you would like to use this service with insights-proxy, you can use the included `local-drift-backend.js` like so:

```shell
SPANDX_CONFIG=drift-backend/local-drift-backend.js bash insights-proxy/scripts/run.sh
```
If you use `run_app.sh`, the drift app will be invoked via gunicorn. This should be OK in most cases; even `pdb` runs fine inside gunicorn. However, if you want to use flask's built-in server, run `python3 standalone_flask_server.py` with the aforementioned environment vars.
We use the structure from Clowder to run our app locally, so we created a file called `local_cdappconfig.json` and a script `run_app_locally.sh` to automate the spin-up process.

To run, follow the process below:

- Make sure you have an Ephemeral Environment running (https://github.com/RedHatInsights/drift-dev-setup#run-with-clowder)
- Add a file named `local_cdappconfig.json` to the app folder (this is needed just once), with the following content:
```json
{
  "endpoints": [
    {
      "app": "system-baseline",
      "hostname": "localhost",
      "name": "backend-service",
      "port": 8003
    },
    {
      "app": "host-inventory",
      "hostname": "localhost",
      "name": "service",
      "port": 8082
    },
    {
      "app": "rbac",
      "hostname": "localhost",
      "name": "service",
      "port": 8086
    },
    {
      "app": "historical-system-profiles",
      "hostname": "localhost",
      "name": "backend-service",
      "port": 8004
    }
  ],
  "kafka": {
    "brokers": [
      {
        "hostname": "localhost",
        "port": 9092
      }
    ],
    "topics": [
      {
        "name": "platform.notifications.ingress",
        "requestedName": "platform.notifications.ingress"
      },
      {
        "name": "platform.payload-status",
        "requestedName": "platform.payload-status"
      }
    ]
  },
  "featureFlags": {
    "hostname": "non-use-for-now",
    "port": 4242,
    "scheme": "http"
  },
  "logging": {
    "cloudwatch": {
      "accessKeyId": "",
      "logGroup": "",
      "region": "",
      "secretAccessKey": ""
    },
    "type": "null"
  },
  "metricsPath": "/metrics",
  "metricsPort": 9000,
  "privatePort": 10000,
  "publicPort": 8000,
  "webPort": 8000
}
```
- Activate the virtual environment:

```shell
source .venv/bin/activate
```

- Run the command below:

```shell
sh run_app_locally.sh
```

- Run the command below, passing your quay username (`jramos` in this example):

```shell
sh ephemeral_build_image.sh jramos
```
- Make sure that you have the SonarQube scanner installed.
- Duplicate the `sonar-scanner.properties.sample` config file:

```shell
cp sonar-scanner.properties.sample sonar-scanner.properties
```

- Update `sonar.host.url` and `sonar.login` in `sonar-scanner.properties`.
- Run the following command:

```shell
java -jar /path/to/sonar-scanner-cli-4.6.0.2311.jar -D project.settings=sonar-scanner.properties
```

- Review the results in your SonarQube web instance.