Doc Updates v0.11 Part 3 (#2443)
Editing and content and screenshot updates.
jfermi committed Apr 27, 2023
1 parent da83843 commit 0715fda
Showing 24 changed files with 241 additions and 239 deletions.
2 changes: 1 addition & 1 deletion docs/docs/cli/configuring-your-cli.md
@@ -1,6 +1,6 @@
# Configuring your CLI

Our web interface makes it easier to visualize your traces and add assertions, but sometimes a CLI is needed for automation. The CLI was developed for users creating tests and executing them each time a change is made in the system, so Tracetest can detect regressions and check service SLOs.
Our web interface makes it easier to visualize your traces and add assertions, but sometimes a CLI is needed for automation. The CLI was developed for users creating tests and executing them each time a change is made in the system, so Tracetest can detect regressions and check service Service Level Objectives (SLOs).


## **Available Commands**
2 changes: 1 addition & 1 deletion docs/docs/cli/creating-data-stores.md
@@ -71,7 +71,7 @@ spec:
## Apply Configuration

To apply the configuration, you need a [configured CLI](./configuring-your-cli.md) pointed to the instance you want to apply the data store. Then you just have to enter:
To apply the configuration, you need a [configured CLI](./configuring-your-cli.md) pointed to the instance you want to apply the data store. Then use the following command:

```
tracetest apply datastore -f my/data-store/file/location.yaml
18 changes: 9 additions & 9 deletions docs/docs/cli/creating-tests.md
@@ -1,6 +1,6 @@
# Defining Tests as Text Files

One important aspect of testing your code is the ability to quickly implement changes while not breaking your application. If you change your application, it is important that you are able to update your tests and run them against your new implementation as soon as possible for a timely development feedback loop.
One important aspect of testing your code is the ability to quickly implement changes while not breaking your application. If you change your application, it is important to be able to update your tests and run them against your new implementation as soon as possible for a timely development feedback loop.

Because Tracetest is mainly a visual tool, it can be difficult to update tests in an auditable way and execute those changes only when we are sure the application has been deployed with the new changes. With that in mind, we built a new way for you to define your tests: using a YAML test definition!

@@ -15,7 +15,7 @@ To solve that, the best approach would be to enable developers to define their t

## Definition

The definition can be broken into three parts: `test information`, `triggering transaction`, `assertions`, and `outputs`. Here is a real test we have on Tracetest to test our Pokemon demo api:
The definition can be broken into three parts: `test information` (including the `triggering transaction`), `assertions`, and `outputs`. Here is a real test we have on Tracetest to test our Pokemon demo API:

```yaml
type: Test
@@ -70,7 +70,7 @@ The attribute `type` defines which trigger method you are going to use to intera

### HTTP Trigger

When defining a HTTP trigger, you are required to define a `httpRequest` object containing the request Tracetest will send to your system, so here you can define: `url`, `method`, `headers`, `authentication`, and `body`.
When defining an HTTP trigger, you are required to define a `httpRequest` object containing the request Tracetest will send to your system. This is where you define: `url`, `method`, `headers`, `authentication`, and `body`.

> Note: Some APIs require the `Content-Type` header to respond. If you are not able to trigger your application, check if you are sending this header and if its value is correct.
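A minimal HTTP trigger covering these fields might look like the following sketch; the URL, header, and body values are illustrative:

```yaml
trigger:
  type: http
  httpRequest:
    method: POST
    url: http://localhost:8081/pokemon/import   # illustrative endpoint
    headers:
    - key: Content-Type                         # see the note above
      value: application/json
    body: '{"id": 52}'
```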
@@ -134,7 +134,7 @@ trigger:

#### Body

Currently, Testkube supports `raw` body types that enable you to send text formats over HTTP, for example: JSON.
Currently, Tracetest supports `raw` body types that enable you to send text formats over HTTP: JSON, for example.

```yaml
trigger:
@@ -243,7 +243,7 @@ For more information about selectors or assertions, take a look at the documentation

## Outputs

Outputs are really useful when running [Transactions](../concepts/transactions). They allow to export values from a test so they become available in the [Environment Variables](environment-variables.md) of the current transaction.
Outputs are really useful when running [Transactions](../concepts/transactions). They allow for exporting values from a test so they become available in the [Environment Variables](../concepts/environments.md) of the current transaction.

An output exports the result of an [Expression](../concepts/expressions) and assigns it to a name so it can be injected into the environment variables of a running transaction.
A `selector` is needed only if the provided expression refers to one or more span attributes or meta attributes.
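As a sketch of the selector case described above (the span name is illustrative; `tracetest.span.duration` is one of the meta attributes Tracetest exposes):

```yaml
outputs:
- name: IMPORT_DURATION
  # the selector is required here because the expression reads a span attribute
  selector: span[name = "import pokemon"]
  value: attr:tracetest.span.duration
```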
@@ -263,7 +263,7 @@ The `value` attribute is an `expression`, and is a very powerful tool.

### Basic expression

You can output basic expressions
You can output basic expressions:

```yaml
outputs:
@@ -278,9 +278,9 @@ outputs:
# results in INTERPOLATE_STRING = "the value someValue comes from the env var PRE_EXISTING_VALUE
```

### Extract a value from a JSON
### Extract a Value from a JSON

Imagine an hypotetical `/users/create` endpoint that returns the full `user` object, including the new ID, when the operation is successful.
Imagine a hypothetical `/users/create` endpoint that returns the full `user` object, including the new ID, when the operation is successful.

```yaml
outputs:
@@ -298,7 +298,7 @@ Using the same hypothetical user creation endpoint, a user creation might result
- `UPDATE accounts SET remaining_users ...`

In this case, the service is instrumented so that each query generates a span of type `database`.
You can get a list of sql operations:
You can get a list of SQL operations:

```yaml
outputs:
11 changes: 6 additions & 5 deletions docs/docs/cli/running-tests.md
@@ -1,13 +1,13 @@
# Running Tests From the Command Line Interface (CLI)
Once you have created a test, whether from the Tracetest UI of via a text editor, you will need the capabity to run it via the Command Line Interface (CLI) to integrate it into your CI/CD process or your local development workflow. The documentation for running a test via the CLI can be found here: [tracetest test run](./reference/tracetest_test_run.md). This page will provide some examples of using this command.
Once you have created a test, whether from the Tracetest UI or via a text editor, you will need the capability to run it via the Command Line Interface (CLI) to integrate it into your CI/CD process or your local development workflow. The documentation for running a test via the CLI can be found here: [tracetest test run](./reference/tracetest_test_run.md). This page will provide some examples of using this command.

## Running Your First Test
To run a test, give the path to the test definition file with the '-d' option. This will launch a test, providing us with a link to the created test run.

```
tracetest test run -d path/to/test.yaml -w
```
output:
Output:
```
✔ Pokeshop - Import (http://localhost:11633/test/4oI08rA4g/run/3/test)
```
@@ -16,7 +16,7 @@ Now, let's run the same test but tell the CLI to wait for the test to complete running
```
tracetest test run -d path/to/test.yaml -w
```
output:
Output:
```
✘ Pokeshop - Import (http://localhost:11633/test/4oI08rA4g/run/12/test)
✔ Response should be ok
@@ -38,7 +38,7 @@ Running the same command with the '-o json' option would change the output from
```
tracetest test run -d path/to/test.yaml -w -o json
```
output:
Output:
```
{
"testRunWebUrl": "http://localhost:11633/test/4oI08rA4g/run/13/test",
@@ -203,8 +203,9 @@ You can also reference an .env file which will be used to create a new environment
POKEID=45
POKENAME=vileplume
```

```
tracetest test run -d path/to/test.yaml -e path/to/local.env -w
```

If you use the .env approach and the environment does not already exist, a new environment will be created in Tracetest. The environment name and ID will be the file name without the `.env` suffix, so `local.env` becomes `local`.

2 changes: 0 additions & 2 deletions docs/docs/examples-tutorials/overview.md
@@ -1,7 +1,5 @@
# Overview

Below you can find tutorials to help you get started with Tracetest.

<!-- If you're already building something with Tracetest, please explore recipes — short, self-contained, and runnable solutions to popular use cases. -->

## Tutorials
@@ -14,7 +14,7 @@

## Sample Node.js Serverless API with Jaeger, OpenTelemetry, AWS Fargate and Tracetest

This is a simple quick start on how to deploy a Node.js Serverless API to use OpenTelemetry instrumentation with traces and Tracetest for enhancing your E2E and integration tests with trace-based testing. The infrastructure will use Jaeger as the trace data store, and OpenTelemetry Collector to receive traces from the Node.js app and Terraform to provision the required AWS services to run Tracetest in the cloud.
This is a simple quick start guide on how to deploy a Node.js Serverless API to use OpenTelemetry instrumentation with traces and Tracetest for enhancing your E2E and integration tests with trace-based testing. The infrastructure will use Jaeger as the trace data store, and OpenTelemetry Collector to receive traces from the Node.js app and Terraform to provision the required AWS services to run Tracetest in the cloud.

## Services Architecture

@@ -39,7 +39,7 @@ The `tracetest.tf` file contains the different services and dependencies to run

### 3. Jaeger

Inside the `jaeger.tf` file you'll find the required services to run the all in one instance using AWS Fargate.
Inside the `jaeger.tf` file you'll find the required services to run the Jaeger all-in-one instance using AWS Fargate.

### AWS Network

@@ -109,14 +109,14 @@ NODE_OPTIONS="--require tracing.js"

The `tracetest.tf` file contains the required services for the Tracetest server, which include:

- **Postgres RDS** - Postgres is a prerequisite for Tracetest to work. It stores the trace-based tests you create, information about prior test runs, and other data that Tracetest needs.
- **Postgres RDS** - Postgres is a prerequisite for Tracetest to work. It stores the trace-based tests you create, information about prior test runs and other data that Tracetest needs.
- **Tracetest Task Definition** - The information on how to configure and provision Tracetest using ECS.
- [**ECS Service**](https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ecs_services.html) - The server provisioning metadata to run the Tracetest Task Definition.
- **Networking** - Security groups, target groups and load balancer listeners required to have Tracetest connected to the rest of the AWS infrastructure.

### Configuring the Tracetest Container

The Tracetest Docker image supports environment variables as entry point for the bootstrap configuration, in this case the task definition includes the following:
The Tracetest Docker image supports environment variables as the entry point for the bootstrap configuration. In this case, the task definition includes the following:

```json
{
@@ -153,7 +153,7 @@ The Tracetest Docker image supports environment variables as entry point for the

## Jaeger

Similar to the Tracetest setup, there is a file called `jaeger.tf` which contains a basic setup to run the all-in-one Jaeger image using AWS Fargate. In this case, it includes networking rules for the internal and external load balancers, so we can provide a way for both the Node.js Lambdas and Tracetest have access to the API endpoints from within the VPC while providing public access to the UI.
Similar to the Tracetest setup, there is a file called `jaeger.tf` which contains a basic setup to run the all-in-one Jaeger image using AWS Fargate. In this case, it includes networking rules for the internal and external load balancers, so we can provide a way for both the Node.js Lambdas and Tracetest to have access to the API endpoints from within the VPC while providing public access to the UI.

### Jaeger OTLP Endpoints

@@ -284,4 +284,4 @@ Now that all of the required services and infra have been created, you can start

## Learn More

Check out our [examples on GitHub](https://github.com/kubeshop/tracetest/tree/main/examples), and join our [Discord Community](https://discord.gg/8MtcMrQNbX) for more info!
Check out our [examples on GitHub](https://github.com/kubeshop/tracetest/tree/main/examples) and join our [Discord Community](https://discord.gg/8MtcMrQNbX) for more info!
@@ -6,13 +6,13 @@

[Tracetest](https://tracetest.io/) is a testing tool based on [OpenTelemetry](https://opentelemetry.io/) that allows you to test your distributed application. It uses the telemetry data generated by your OpenTelemetry instrumentation to check and assert whether your application has the desired behavior defined by your test definitions.

[AWS X-Ray](https://aws.amazon.com/xray/) provides a complete view of requests as they travel through your application and filters visual data across payloads, functions, traces, services, APIs, and more with no-code and low-code motions.
[AWS X-Ray](https://aws.amazon.com/xray/) provides a complete view of requests as they travel through your application and filters visual data across payloads, functions, traces, services, APIs and more with no-code and low-code motions.

[AWS Distro for OpenTelemetry (ADOT)](https://aws-otel.github.io/docs/getting-started/collector) is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. Part of the Cloud Native Computing Foundation, OpenTelemetry provides open source APIs, libraries, and agents to collect distributed traces and metrics for application monitoring.
[AWS Distro for OpenTelemetry (ADOT)](https://aws-otel.github.io/docs/getting-started/collector) is a secure, production-ready, AWS-supported distribution of the OpenTelemetry project. Part of the Cloud Native Computing Foundation, OpenTelemetry provides open source APIs, libraries and agents to collect distributed traces and metrics for application monitoring.

## Simple Node.js API with AWS X-Ray and Tracetest

This is a simple quick start on how to configure a Node.js app to use instrumentation with traces and Tracetest for enhancing your E2E and integration tests with trace-based testing. The infrastructure will use AWS X-Ray as the trace data store, the ADOT as a middleware and a Node.js app to generate the telemetry data.
This is a simple quick start guide on how to configure a Node.js app to use instrumentation with traces and Tracetest for enhancing your E2E and integration tests with trace-based testing. The infrastructure will use AWS X-Ray as the trace data store, ADOT as middleware, and a Node.js app to generate the telemetry data.

## Prerequisites

@@ -32,7 +32,7 @@ The `docker-compose.yaml` file, `tracetest.provision.yaml`, and `tracetest-confi

### Docker Compose Network

All `services` in the `docker-compose.yaml` are on the same network and will be reachable by hostname from within other services. E.g. `adot-collector:2000` in the `src/index.js` will map to the `adot-collector` service, where the port `2000` is the port where the X-Ray Daemon accepts telemetry data.
All `services` in the `docker-compose.yaml` are on the same network and will be reachable by hostname from within other services. For example, `adot-collector:2000` in the `src/index.js` will map to the `adot-collector` service, where port `2000` is the port where the X-Ray Daemon accepts telemetry data.
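A minimal sketch of that wiring is below; the image tag and environment variable are assumptions (the X-Ray SDKs conventionally read the daemon address from `AWS_XRAY_DAEMON_ADDRESS`):

```yaml
# sketch: service names double as hostnames on the shared Compose network
services:
  app:
    build: .
    environment:
      # the SDK sends segments to the collector by service name
      - AWS_XRAY_DAEMON_ADDRESS=adot-collector:2000
    depends_on:
      - adot-collector
  adot-collector:
    image: amazon/aws-otel-collector:latest  # assumed image tag
```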

## Node.js App

@@ -91,7 +91,7 @@ CMD [ "npm", "start" ]
The `docker-compose.yaml` includes three other services.

- **Postgres** - Postgres is a prerequisite for Tracetest to work. It stores trace data when running the trace-based tests.
- [**AWS Distro for OpenTelemetry (ADOT)**](https://aws-otel.github.io/docs/getting-started/collector) - is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service.
- [**AWS Distro for OpenTelemetry (ADOT)**](https://aws-otel.github.io/docs/getting-started/collector) - Software application that listens for traffic on UDP port 2000, gathers raw segment data and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service.
- [**Tracetest**](https://tracetest.io/) - Trace-based testing that generates end-to-end tests automatically from traces.

```yaml
@@ -227,4 +227,4 @@ Make sure to use the `http://app:3000/` url in your test creation, because your

## Learn More

Please visit our [examples in GitHub](https://github.com/kubeshop/tracetest/tree/main/examples), and join our [Discord Community](https://discord.gg/8MtcMrQNbX) for more info!
Please visit our [examples in GitHub](https://github.com/kubeshop/tracetest/tree/main/examples) and join our [Discord Community](https://discord.gg/8MtcMrQNbX) for more info!
@@ -32,7 +32,7 @@ The `docker-compose.yaml` file, `tracetest.provision.yaml`, and `tracetest.confi

### Docker Compose Network

All `services` in the `docker-compose.yaml` are on the same network and will be reachable by hostname from within other services. E.g. `adot-collector:2000` in the `src/index.js` will map to the `adot-collector` service, where the port `2000` is the port where the X-Ray Daemon accepts telemetry data
All `services` in the `docker-compose.yaml` are on the same network and will be reachable by hostname from within other services. For example, `adot-collector:2000` in the `src/index.js` will map to the `adot-collector` service, where port `2000` is the port where the X-Ray Daemon accepts telemetry data.

## Pokeshop API

@@ -208,7 +208,7 @@ services:
The `docker-compose.yaml` includes three other services.

- **Postgres** - Postgres is a prerequisite for Tracetest to work. It stores trace data when running the trace-based tests.
- [**AWS Distro for OpenTelemetry (ADOT**)](https://aws-otel.github.io/docs/getting-started/collector) - is a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service.
- [**AWS Distro for OpenTelemetry (ADOT**)](https://aws-otel.github.io/docs/getting-started/collector) - Software application that listens for traffic on UDP port 2000, gathers raw segment data and relays it to the AWS X-Ray API. The daemon works in conjunction with the AWS X-Ray SDKs and must be running so that data sent by the SDKs can reach the X-Ray service.
- [**Tracetest**](https://tracetest.io/) - Trace-based testing that generates end-to-end tests automatically from traces.

```yaml
@@ -284,7 +284,7 @@ postgres:
params: sslmode=disable
```

The `tracetest.provision.yaml` file definines the trace data store, set to AWS X-Ray, meaning the traces will be stored in X-Ray and Tracetest will fetch them from X-Ray when running tests.
The `tracetest.provision.yaml` file defines the trace data store, set to AWS X-Ray, meaning the traces will be stored in X-Ray and Tracetest will fetch them from X-Ray when running tests.
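As a rough sketch, such a provisioning entry might look like the following; the field names are assumptions based on the X-Ray data store configuration, and the credentials and region are placeholders injected from environment variables:

```yaml
---
type: DataStore
spec:
  name: awsxray
  type: awsxray
  awsxray:
    accessKeyId: ${AWS_ACCESS_KEY_ID}        # assumed env var injection
    secretAccessKey: ${AWS_SECRET_ACCESS_KEY}
    region: us-west-2                        # illustrative region
```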

But how does Tracetest fetch traces?

