NOTE: This is an archived version of this document.
git clone --recursive git@github.com:racker/salus-telemetry-bundle.git
Running git submodule update --recursive in the top-level directory will set each submodule to the commit tagged in this repo:
$ git submodule update --recursive
Submodule path 'apps/ambassador': checked out '8fd6d17993001a0d5555f88dc1593ba56ff1ca4c'
Submodule path 'apps/salus-app-base': checked out 'c3b64afad4e2d31c775a7ce40803df1a7dc95630'
$ git submodule status
8fd6d17993001a0d5555f88dc1593ba56ff1ca4c apps/ambassador (8fd6d17)
0d434d08dc31c73918dc5a5b09d11ae503ae6f13 apps/api (heads/master)
7fec0125daaa1d6554a405dab5e761ebfa98df0a apps/auth-service (heads/master)
c999f7e314c3e267a8a9a343c60b8d7a23523e2e apps/envoy (0.1.1-39-gc999f7e)
c3b64afad4e2d31c775a7ce40803df1a7dc95630 apps/salus-app-base (c3b64af)
419171514ce8dc0d823ffe32431d255cdc684de6 libs/etcd-adapter (heads/master)
8d3b86933450af562dc57c1fc2a9bf7005bf65b3 libs/model (heads/master)
For most development activity, the latest revision on master should be used for each submodule. The following commands ensure that all of the submodules and the bundle repo itself are checked out at master:
git pull
git submodule foreach git checkout master
git submodule foreach git pull
This is the easy part of dealing with submodules. Typically you want to ensure all submodules are on their master branch and up to date; then it is just a matter of creating a new PR as usual.
Checking out the master branch of all submodules can be done via git submodule foreach --recursive git checkout master, and then you can update to the latest commit via git submodule foreach --recursive git pull.
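If you prefer a single invocation, the two foreach steps above can be combined. This is just a convenience form of the same commands and assumes every submodule should track its master branch:

```shell
# Combined form of the update steps above; assumes all submodules track master.
git pull && git submodule foreach --recursive 'git checkout master && git pull'
```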
A git status will then show something like:
modified: apps/ambassador (new commits)
modified: apps/api (new commits)
modified: apps/auth-service (new commits)
modified: apps/envoy (new commits)
and you can simply add those to your branch like any other file, add a commit message, and push your branch, followed by creating a PR.
Once the PR is merged you can then create a new release via GitHub, enter the new tag version, and fill in the other details with relevant information - see the previous releases for examples.
Publish the release and you are done.
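If you prefer the command line over the GitHub web UI, the equivalent release can be created with the GitHub CLI. This is only a sketch: the tag version shown is a placeholder, and it assumes the gh tool is installed and authenticated for this repo:

```shell
# Hypothetical example; replace 0.2.0 with the actual next tag version.
gh release create 0.2.0 \
  --title "0.2.0" \
  --notes "Submodule updates; see previous releases for the expected format."
```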
The supporting infrastructure, such as etcd and kafka, can be started by running
cd dev/telemetry-infra
docker-compose up -d
You can stop the infrastructure by running the following in that same directory:
docker-compose down
Add -v to that invocation to also remove the volumes allocated for the services.
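For example, to stop the infrastructure and discard its data volumes in one step:

```shell
# Run in dev/telemetry-infra; -v also removes the named/anonymous volumes.
docker-compose down -v
```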
You can watch the logs of any (or all) of the services in the composition using:
docker-compose logs -f service ...
such as
docker-compose logs -f kafka
Some integration points in the system, especially telegraf's usage of rendered monitor content, assume certain database content is present. In the deployed clusters, the data-loader is integrated with GitHub webhooks; however, for local development, a fresh database volume needs to be pre-loaded manually using the data-loader. With all of the minimum-required apps (see below), the admin API, and the public API running, the data loader can be run locally as follows:
cd tools/data-loader
go run ./... --debug --admin-url http://localhost:8888 \
load-from-git https://github.com/Rackspace-Segment-Support/salus-data-loader-content.git
NOTE: if you ever need to wipe the MySQL database and/or Docker volume, be sure to re-run the command above since the content will need to be re-loaded.
At a minimum you need to start the following applications from apps:
- agent-catalog-management
- ambassador
- monitor-management
- policy-management
- resource-management
Depending on your development task, additional applications can be started as needed.
The following procedure is IntelliJ specific but the process will be similar for other IDEs.
To open the project in IntelliJ, use the "Open" option from the intro window (or File->Open from the menu). Do not use the "Create New Project" or "Import Project" options, as those will misconfigure the project. Open the root directory of this project (the same one this README is located in).
In the "Maven Projects" tab (usually on the right side of the IDE window), click the "Generate Sources and Update Folders" button to generate the protobuf/grpc code located in the libs/protocol module. That button is shown here:
IntelliJ, as of at least 2019.3, will auto-create run configurations for each of our Spring Boot applications under the apps directory; however, you will need to add dev to the active Spring profiles, as shown here:
The run configuration for apps/ambassador also needs the "Working directory" set to the dev directory of this bundle module. That ensures it can read the development-time certificates from the certs directory contained there.
Go-based modules, such as apps/envoy, may not be auto-detected as modules initially. If that's the case, open the Project Structure configuration and perform an "Import Module" operation as shown here:
With that, when opening any of the *.go files within those modules, you should be prompted to set up the GOROOT, such as:
If not, you can manually enable Go support for modules in the project preferences as shown here:
Finally, IntelliJ should auto-enable support for Go modules; however, if it doesn't you can set (or check) that setting also in the project preferences, as shown here:
Launch each of the run configurations by choosing it from the drop down in the top-right of the IDE window and clicking the "Run" or "Debug" button to launch in the respective mode. It is recommended you use debug mode in most cases since you can add breakpoints on the fly:
OPTIONAL: If you have IntelliJ Ultimate, it is recommended to run the apps via run profiles rather than via the Maven command line.
The app base README contains information about how to build and run the application modules with Maven.
OPTIONAL: Vault only needs to be set up if you are making changes to auth-service and need to test its interaction with Vault and ambassador.
The Vault server itself is already included in the Docker "infra" composition.
With that container already running, run the following to set up app-role authentication to be used by the Salus applications:
docker exec -it telemetry-infra_vault_1 setup-app-role
Using the vault.app-role.role-id and vault.app-role.secret-id provided by that script, update the run configuration for apps/auth-service in the "Override Parameters" section, such as:
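The override parameters would look something like the following, where the two values are placeholders for the actual IDs printed by the setup script:

```
--vault.app-role.role-id=<role-id from script output>
--vault.app-role.secret-id=<secret-id from script output>
```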
OPTIONAL: Repose only needs to be used locally when working on authentication related changes to the public and admin APIs or investigating Repose configurations.
There are a few options for running Repose, depending on how extensive the scenario is:
- To run all services via docker, optionally including Repose, see here
- To run Repose in front of the public api see here
- To run Repose in front of the admin api see here
- To run Repose in front of the auth api see here
While debugging issues it can be helpful to view what the applications running in Docker are doing.
To verify that topic content is flowing and is correct, you can exec into the Kafka container and run the standard console consumer, such as:
docker exec -it telemetry-infra_kafka_1 \
kafka-console-consumer --bootstrap-server localhost:9093 --topic telemetry.metrics.json
NOTE the use of port 9093 instead of 9092
The etcd container includes the etcdctl command-line tool and is pre-configured to use the v3 API. You can perform operations with etcdctl via docker exec, such as:
docker exec -it telemetry-infra_etcd_1 etcdctl get --prefix /
The MySQL container contains a database named default with username dev and password pass.
Once running, in addition to connecting services to it, you can connect to the instance and query the database manually:
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
fd12115575bd mysql:5.7 "docker-entrypoint.s…" 12 days ago Up 12 days 0.0.0.0:3306->3306/tcp, 33060/tcp telemetry-infra_mysql_1
$ docker exec -it telemetry-infra_mysql_1 sh
# mysql -u dev -ppass
mysql> use default;
mysql> show tables;
+--------------------+
| Tables_in_default |
+--------------------+
| hibernate_sequence |
| labels |
| resource_labels |
| resources |
+--------------------+
mysql> select * from resources;
The Event Engine applications (event-engine-ingest and event-engine-management) can be run locally along with the entire end-to-end suite of Salus applications, Ambassador, Envoy, etc.; however, for development and testing of just the Event Engine piece it is easier to simulate the metrics that would normally be routed to Kafka/UMB by the Ambassador.
To start with, ensure you have the infrastructure components running by bringing up the Docker composition dev/telemetry-infra/docker-compose.yml. That composition includes two instances of Kapacitor.
With the infrastructure running, start the event-engine-ingest and event-engine-management applications, both with the Spring dev profile activated. That profile ensures the applications are configured to interact with the two Kapacitor instances in the infrastructure composition.
Finally, to simulate some metrics, run the application located in dev/metrics-gen. For IntelliJ to recognize that module, you'll need to right-click its pom.xml and choose "Add as Maven project". With it added to the overall project build, you'll be able to run it as a typical Spring Boot application.
That application will randomly pick a set of resources, each with a set of labels. Within each resource it will randomly come up with a sine wave definition for each measurement. The logs at startup will display the resources and measurements that were randomly defined. The variability and number of those can be configured in the application.yml of that application. A small profile is provided as an example of a variation of the application configuration.
As a result, you will be able to pick any one of the resource+measurement combinations and configure event scenarios, such as rising/falling threshold, based on the periodicity of the chosen measurement.
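As a rough illustration of the kind of signal involved (this is not the application's actual code, and the amplitude, period, and offset are made-up values), a sine-wave measurement like the ones metrics-gen defines could be sampled like this, printing time,value pairs:

```shell
# Illustrative only: sample a sine wave with amplitude 10, period 60s,
# centered at 50, once every 15 seconds for one period.
awk 'BEGIN {
  pi = 3.14159265; amplitude = 10; period = 60; offset = 50;
  for (t = 0; t < 60; t += 15)
    printf "%d %.2f\n", t, offset + amplitude * sin(2 * pi * t / period)
}'
```

A rising/falling threshold event scenario would then trigger as the value crosses, say, 55 on its way up or down.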
- .env files support (0.7)
- Apache Avro™ support (0.3.1)
- BashSupport (1.6.13.182)
- Docker integration (182.4323.18)
- Go (182.4129.55.890)
- HashiCorp Terraform / HCL language support
- Kubernetes (182.3588)
- Lombok Plugin (0.19-LATEST-EAP-SNAPSHOT)
- Lua (1.0.114)
- Makefile support (1.3)
- Markdown support (182.4505.7)
- Maven Helper (3.7.172.1454.3)
- Protobuf Support (0.11.0)
- RegexpTester
- Spring Boot (1.0)
- Toml (0.2.0.19)
Use the preparation part of these docs to install the Cloud SDK tools and configure Docker for authentication. You can disregard the details about docker push since the Maven jib plugin will take care of the equivalent operations.
Tip: on MacOS you can install the Cloud SDK using brew:
brew cask install google-cloud-sdk
In each of the application modules, run the following, replacing $PROJECT_ID with the Google Cloud project's ID, which is of the form of an identifier and number separated by a dash:
mvn -P docker -Ddocker.image.prefix=gcr.io/$PROJECT_ID deploy
If publishing a snapshot version of the Maven projects, then add
-Dmaven.deploy.skip=true
to skip the Bintray publishing of the Maven artifacts.
If the local system doesn't have Docker installed, you can still perform the remote publish and skip the local Docker build by adding:
-DskipLocalDockerBuild=true