cfn-trace: Captures CloudFormation nested stack deploy events as traces

Screenshot: five bars of equal height, each one less wide than the one above it, stacked to look like an upside-down pyramid. Each bar represents a span of time, with its duration written on the bar. To the left of and inline with each bar is its name, corresponding to the name given to the resource in CloudFormation. Connecting lines between the names indicate that each bar is a child of the one above it.

Getting Started

Before you start, you'll need to have an AWS access key and secret on hand, as well as the name of a CloudFormation stack you have permission to call the DescribeStackEvents API on.
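For reference, a minimal IAM policy granting just that permission might look like the following. This is a sketch, not part of the project's docs; you may want to scope Resource down to your stack's ARN instead of "*".

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudformation:DescribeStackEvents",
      "Resource": "*"
    }
  ]
}
```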

Download the Zip And Extract the Binary

Use one of the following sets of commands, depending on your operating system. Alternatively, you can download the zip directly from the releases page.



Linux

curl -OL
unzip ./
rm ./

Leaves a binary named cfn-trace in the current working directory.


Windows

Invoke-WebRequest -OutFile ./
Expand-Archive -LiteralPath .\ -DestinationPath .\
rm .\

Leaves an executable named cfn-trace.exe in the current working directory.

MacOS (with Intel chip)

curl -OL
unzip ./
rm ./

Leaves a binary named cfn-trace in the current working directory.

MacOS (with Apple chip)

curl -OL
unzip ./
rm ./

Leaves a binary named cfn-trace in the current working directory.

Set Up a (Local) OpenTelemetry Collector

If you can use Docker and Docker Compose, the following should get you set up. Otherwise, check out the collector's official docs for alternative approaches.

Create a docker-compose.yaml file and a config.yaml file in the same directory.


docker-compose.yaml:

version: "3.9"
services:
  otel-collector: # The service name here is arbitrary
    image: otel/opentelemetry-collector:0.50.0 # The specific version here is unimportant, as long as some tag is specified (otherwise the volume mount won't work)
    volumes:
      - ./config.yaml:/etc/otelcol/config.yaml
    ports:
      - "4318:4318"




config.yaml:

receivers:
  otlp:
    protocols:
      http:

processors:
  batch:

exporters:
  logging:
    loglevel: debug

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]

Then start the collector by running docker compose up from this directory.

Set Up AWS Environment Variables

For the moment, the only way to pass AWS credentials and the region for the binary to use is via environment variables.

In a new shell, set up the following environment variables (defined in this AWS doc), after replacing the dummy values with your own.

Linux or MacOS

export AWS_DEFAULT_REGION=us-west-2
export AWS_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
export AWS_SECRET_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY

Windows

$Env:AWS_DEFAULT_REGION = "us-west-2"
$Env:AWS_ACCESS_KEY_ID = "AKIAIOSFODNN7EXAMPLE"
$Env:AWS_SECRET_ACCESS_KEY = "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
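On Linux or MacOS, a quick sanity check (not part of the tool itself) is to confirm the variables are actually visible in the shell, since the binary reads them from the environment. The region value below is a dummy, exported here only so the snippet is self-contained:

```shell
# Dummy value for illustration; replace with your own region.
export AWS_DEFAULT_REGION=us-west-2

# cfn-trace inherits its region and credentials from the environment,
# so the variables must show up here:
env | grep '^AWS_DEFAULT_REGION'
# prints: AWS_DEFAULT_REGION=us-west-2
```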



Try Generating Trace Data

In this new shell with the AWS environment variables present, run the following command, after replacing the dummy value with your own.

Linux or MacOS

./cfn-trace --stack-name <your root stack's name>

The collector's shell should be displaying raw trace/span information.


Windows

.\cfn-trace.exe --stack-name <your root stack's name>

The collector's shell should be displaying raw trace/span information.

How to Export to a Vendor

The details vary from vendor to vendor, but in general this should only require tweaking the config.yaml file to include a vendor-specific exporter (e.g., for Honeycomb, page 19 of this doc shows what tweaks are needed to send the data to them).
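As a sketch of what such a tweak can look like, here is a config.yaml fragment that swaps the logging exporter for the collector's built-in OTLP exporter pointed at Honeycomb. The endpoint and header name come from Honeycomb's public docs; the API key is a placeholder, and other vendors will use their own endpoint and auth headers:

```yaml
exporters:
  otlp:
    endpoint: "api.honeycomb.io:443"
    headers:
      "x-honeycomb-team": "YOUR_API_KEY" # placeholder; use your real key

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [batch]
      exporters: [otlp] # the vendor exporter replaces the logging one
```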

CLI Reference

There isn't a --help argument yet, but here is a list of the arguments that are available:

  • --debug - turns on debug-level logging (mostly intended for troubleshooting)
  • --stack-name - the name of a root CloudFormation stack; generates a trace from its most recent deploy
  • --version - echoes the version of the binary to the console

Validating the Binaries Haven't Been Compromised Since They Were Published

The Sigstore project's cosign tool was used to sign the zips of the binaries. To check for post-publish tampering, you'll need to install cosign first.

Using the zip for Linux as an example, here is how you can do the check.

cosign verify-blob --cert ./ --signature ./ ./

If it hasn't been tampered with, you should see the text "Verified OK" show up in the output.

Why is a Collector Needed At All?

As of this writing, Deno doesn't support gRPC, meaning JSON-encoded protobuf over OTLP/HTTP is its only option for exporting OpenTelemetry data (as is the case for browsers). However, this format is still classified as experimental, so vendors have not yet implemented ways to receive it directly (as far as I am aware).

However, the OpenTelemetry Collector can receive it and transform it into a format that can be sent to a vendor; hence it is needed as a workaround for the time being.


Open in Gitpod

The easiest way to get started playing with the repository is to click on the above link and create a (free) Gitpod account using your GitHub credentials. This will open a workspace in your browser that should have all of the necessary tools ready to go. (You can do the same from a fork if you're working on a PR; just use your fork's repository in the URL instead of this one's.)

FYI: there are currently no commit hooks or CI jobs that save and re-commit changes made by the code auto-formatter, so you may need to run make format before finishing a PR to work around this.