This repository implements a common inventory system with eventing.
Prerequisites:
- Go 1.23.6
- Make
See DEBUG for instructions on how to debug.
When running locally, the default settings file is used. By default, this configuration does the following:
- Exposes the inventory API on `localhost`, using port `8000` for HTTP and port `9000` for gRPC
- Sets the authentication mechanism to `allow-unauthenticated`, allowing users to be authenticated by their user-agent value
- Sets the authorization mechanism to `allow-all`
- Sets the database implementation to sqlite3 and the database file to `inventory.db`
- Sets the Inventory Consumer service to disabled
- Configures the log level to `INFO`
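For illustration only, here is a hypothetical sketch of what such a settings file might contain, pieced together from the defaults above (the key names outside of `authz` are assumptions, not the verified schema; consult the actual settings file in the repository):

```yaml
# hypothetical sketch -- key names are assumptions, not the verified schema
server:
  http:
    address: localhost:8000   # HTTP port from the defaults above
  grpc:
    address: localhost:9000   # gRPC port from the defaults above
authn:
  impl: allow-unauthenticated
authz:
  impl: allow-all             # authz.impl matches the config shown later in this README
storage:
  impl: sqlite3
  sqlite3:
    dsn: inventory.db
log:
  level: INFO
```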
NOTE: You can update the default settings file as required to test different scenarios. Refer to the command-line help (`make run-help`) or leverage one of the many pre-defined Docker Compose Test Setups for information on the different parameters.
1. Clone the repository and navigate to the directory.

2. Install the required dependencies:

```bash
make init
```

3. Build the project:

```bash
# when building locally, use the local-build option as FIPS_ENABLED is set by default for builds
make local-build
```

4. Run the database migration:

```bash
make migrate
```

5. Start the development server:

```bash
make run
```
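Once the server is up, a quick way to confirm it is handling requests is the readiness endpoint described later in this README (assuming the default HTTP port `8000`):

```bash
curl http://localhost:8000/api/inventory/v1/readyz
```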
Due to the various alternatives for running some images, we accept arguments to override certain tools.

Since there are official instructions on how to manage multiple Go installs, we accept the `GO` parameter when running make, e.g.:

```bash
GO=go1.23.1 make run
```

or

```bash
export GO=go1.23.1
make run
```
We will use `podman` if it is installed, and fall back to `docker` otherwise. You can also ensure a particular binary is used by providing the `DOCKER` parameter, e.g.:

```bash
DOCKER=docker make api
```

or

```bash
export DOCKER=docker
make api
```
Important Note: The `podman-compose` provider struggles with compose files that leverage `depends_on`, as it can't properly handle the dependency graphs. You can fix this issue on Linux by installing the `docker-compose-plugin` or by also having `docker-compose` installed. When installed, podman uses the `docker-compose` provider by default instead. The benefit of the `docker-compose-plugin` is that it doesn't require the full Docker setup or Docker daemon!
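To check which external compose provider podman will delegate to (assuming Podman 4.7+, where `podman compose` wraps an external provider), you can run:

```bash
# podman announces the external compose provider it executes before the command output
podman compose version
```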
Testing locally is fine for simple changes, but testing the full application requires all the dependent backing services.
The Full Setup option for Docker Compose:
- Exposes the inventory API on `localhost`, using port `8000` for HTTP and port `9000` for gRPC
- Sets both AuthN and AuthZ to Allow
- Deploys and configures Inventory to leverage postgres
- Deploys and configures Kafka, Zookeeper, and Kafka Connect with Debezium configured for the Outbox table
- Enables and configures the Inventory Consumer for the local Kafka cluster
- Configures Inventory API using the Full-Setup config file
This setup allows testing the full inventory stack, but does not require Relations API. Calls that would be made to Relations API are just logged by the consumer.
To start with the Full Setup configuration:

```bash
make inventory-up
```

To stop:

```bash
make inventory-down
```
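After `make inventory-up`, you can confirm the backing services came up (substitute `docker` for `podman` per the tool override described above):

```bash
# list running containers and their status
podman ps --format "{{.Names}}\t{{.Status}}"
```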
There are numerous ways to run Inventory API using Docker Compose for various testing and debugging needs. For more options, see our guide.
See Testing Inventory in Ephemeral for instructions on how to deploy Kessel Inventory in the ephemeral cluster.
Whenever there is any change in the proto files (under `/api/kessel`), an update is required.

This command will generate code and an openapi file from the proto files:

```bash
make api
```

If there are expected breaking changes, run the following command instead:

```bash
make api_breaking
```
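As a usage sketch, after editing a proto definition you can regenerate and review exactly what changed before committing (plain `git` here; nothing project-specific is assumed):

```bash
make api
git status --short   # inspect the regenerated code and the updated openapi file
```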
Whenever there is any change in the schema files (under `/data/schema/resources`), an update is required.

The schemas are loaded in as a tarball in a configmap. To generate the tarball, execute:

```bash
make build-schemas
```

The command will output the `binaryData` for `resources.tar.gz`:

```yaml
binaryData:
  resources.tar.gz: H4sIAEQ1L2gAA+2d3W7juBXHswWKoil62V4LaYG9mVEoUiTtAfbCmzg7xiRxNnZmd1ssDI2jJNqxpawkz05...
```

Copy this data to update the `resources-tarball` configmap in the ephemeral deployment file with the latest schema changes.
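To sanity-check the generated tarball before updating the configmap, you can decode the base64 value and list its contents (a quick local check assuming GNU `base64` and `tar`; paste the real string in place of the placeholder):

```bash
echo "<binaryData-value>" | base64 -d | tar -tzf -
```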
By default, the quay repository is `quay.io/cloudservices/kessel-inventory`. If you wish to use another for testing, set the `IMAGE` value first:

```bash
export IMAGE=your-quay-repo # if desired
make docker-build-push
```

This is an alternative to the above command for macOS users, but should work for any arch:

```bash
export QUAY_REPO_INVENTORY=your-quay-repo # required
podman login quay.io # required, this target assumes you are already logged in
make build-push-minimal
```
All of these examples use the REST API and assume the default local configuration is running; adjustments need to be made to the curl requests if running with a different configuration, such as port, authentication mechanism, etc.

Note: When testing in Stage, the current schema leveraged by Relations only supports notifications integrations and not any of the infra we have in our API (RHEL hosts, K8s Clusters, etc.). Testing with any other resource type will throw errors from Relations API but will still succeed in Inventory API.
The Kessel Inventory includes health check endpoints for readiness and liveness probes.

The readyz endpoint checks if the service is ready to handle requests:

```bash
curl http://localhost:8000/api/inventory/v1/readyz
```

The livez endpoint checks if the service is alive and functioning correctly:

```bash
curl http://localhost:8000/api/inventory/v1/livez
```
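In scripts (for example, waiting for the service before firing test requests), a small polling sketch built on the readyz endpoint above:

```bash
# poll readiness until the service responds with success (assumes curl and the default port)
until curl -fsS http://localhost:8000/api/inventory/v1/readyz > /dev/null; do
  sleep 1
done
echo "inventory-api is ready"
```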
Resources can be added, updated, and deleted in our inventory. Right now we support the following resources:
- `rhel-host`
- `notifications-integration`
- `k8s-cluster`
- `k8s-policy`
To add a rhel-host to the inventory:

To hit the REST endpoint, use the following `curl` command:

```bash
curl -H "Content-Type: application/json" --data "@data/testData/v1beta1/host.json" http://localhost:8000/api/inventory/v1beta1/resources/rhel-hosts
```

To hit the gRPC endpoint, use the following `grpcurl` command:

```bash
grpcurl -plaintext -d @ localhost:9000 kessel.inventory.v1beta1.resources.KesselRhelHostService.CreateRhelHost < data/testData/v1beta1/host.json
```

To update it:

To hit the REST endpoint:

```bash
curl -XPUT -H "Content-Type: application/json" --data "@data/testData/v1beta1/host.json" http://localhost:8000/api/inventory/v1beta1/resources/rhel-hosts
```

To hit the gRPC endpoint:

```bash
grpcurl -plaintext -d @ localhost:9000 kessel.inventory.v1beta1.resources.KesselRhelHostService.UpdateRhelHost < data/testData/v1beta1/host.json
```

And finally, to delete it; note that we use a different file, as the only required information is the reporter data.

To hit the REST endpoint:

```bash
curl -XDELETE -H "Content-Type: application/json" --data "@data/testData/v1beta1/host-reporter.json" http://localhost:8000/api/inventory/v1beta1/resources/rhel-hosts
```

To hit the gRPC endpoint:

```bash
grpcurl -plaintext -d @ localhost:9000 kessel.inventory.v1beta1.resources.KesselRhelHostService.DeleteRhelHost < data/testData/v1beta1/host-reporter.json
```
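If you are unsure of the exact gRPC service or method names, `grpcurl` can list what the server exposes. The examples above resolve methods without local proto files, which implies server reflection is enabled; assuming that holds:

```bash
# list all services exposed over gRPC
grpcurl -plaintext localhost:9000 list
# list the methods of a specific service
grpcurl -plaintext localhost:9000 list kessel.inventory.v1beta1.resources.KesselRhelHostService
```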
To add a notifications integration (useful for testing in stage):

```bash
# create the integration (auth is required for stage -- see internal docs)
curl -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -d @data/testData/v1beta1/notifications-integrations.json localhost:8000/api/inventory/v1beta1/resources/notifications-integrations

# delete the integration
curl -X DELETE -H "Content-Type: application/json" -H "Authorization: Bearer $TOKEN" -d @data/testData/v1beta1/notifications-integration-reporter.json localhost:8000/api/inventory/v1beta1/resources/notifications-integrations
```
To add a `k8s-policy_is-propagated-to_k8s-cluster` relationship, first let's add the related resources `k8s-policy` and `k8s-cluster`:

```bash
curl -H "Content-Type: application/json" --data "@data/testData/v1beta1/k8s-cluster.json" http://localhost:8000/api/inventory/v1beta1/resources/k8s-clusters
curl -H "Content-Type: application/json" --data "@data/testData/v1beta1/k8s-policy.json" http://localhost:8000/api/inventory/v1beta1/resources/k8s-policies
```

Then you can create the relationship:

```bash
curl -H "Content-Type: application/json" --data "@data/testData/v1beta1/k8spolicy_ispropagatedto_k8scluster.json" http://localhost:8000/api/inventory/v1beta1/resource-relationships/k8s-policy_is-propagated-to_k8s-cluster
```

To update it:

```bash
curl -X PUT -H "Content-Type: application/json" --data "@data/testData/v1beta1/k8spolicy_ispropagatedto_k8scluster.json" http://localhost:8000/api/inventory/v1beta1/resource-relationships/k8s-policy_is-propagated-to_k8s-cluster
```

And finally, to delete it; notice that the data file is different this time, as we only need the reporter data:

```bash
curl -X DELETE -H "Content-Type: application/json" --data "@data/testData/v1beta1/relationship_reporter_data.json" http://localhost:8000/api/inventory/v1beta1/resource-relationships/k8s-policy_is-propagated-to_k8s-cluster
```
The default development config has this option disabled. You can check the alternative ways of running this service for configurations that have Kessel relations enabled.

Supposing Kessel relations is running on `localhost:9000`, you can enable it by updating the config as follows:
```yaml
authz:
  impl: kessel
  kessel:
    insecure-client: true
    url: localhost:9000
    enable-oidc-auth: false
```
If you want to enable OIDC authentication with SSO, you can use this instead:
```yaml
authz:
  impl: kessel
  kessel:
    insecure-client: true
    url: localhost:9000
    enable-oidc-auth: true
    sa-client-id: "<service-id>"
    sa-client-secret: "<secret>"
    sso-token-endpoint: "http://localhost:8084/realms/redhat-external/protocol/openid-connect/token"
```
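To sanity-check the SSO endpoint and service-account credentials, you can request a token directly with a standard OIDC client_credentials call (a sketch; the endpoint and the `<service-id>`/`<secret>` placeholders come from the config above):

```bash
# exchange the service-account credentials for an access token
curl -s -X POST "http://localhost:8084/realms/redhat-external/protocol/openid-connect/token" \
  -d "grant_type=client_credentials" \
  -d "client_id=<service-id>" \
  -d "client_secret=<secret>"
```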
Tests can be run using:

```bash
make test
```
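The `GO` override described earlier applies here too, if you need to run the suite against a specific toolchain:

```bash
# assumes go1.23.1 is installed via the official multi-version instructions
GO=go1.23.1 make test
```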
For end-to-end test info, see here.
Inventory API is configured to build with FIPS-capable libraries and produce FIPS-capable binaries when running on FIPS-enabled clusters.

To validate that the currently running container is FIPS capable:
```bash
# exec or rsh into the running pod

# Reference the fips_enabled file that ubi9 creates for the host
cat /proc/sys/crypto/fips_enabled
# Expected output:
1

# Check the go tool output for the binary
go tool nm /usr/local/bin/inventory-api | grep FIPS
# Expected output should reference openssl FIPS settings

# Ensure openssl providers have a FIPS provider active
openssl list -providers | grep -A 3 fips
# Expected output:
fips
    name: Red Hat Enterprise Linux 9 - OpenSSL FIPS Provider
    version: 3.0.7-395c1a240fbfffd8
    status: active
```