95 changes: 95 additions & 0 deletions .github/workflows/blazemeter-evidence-example.yml
@@ -0,0 +1,95 @@
name: "BlazeMeter evidence integration example"

on:
workflow_dispatch:

permissions:
id-token: write
contents: read
actions: read

jobs:
package-docker-image-with-blazemeter-evidence:
runs-on: ubuntu-latest
env:
REGISTRY_URL: ${{ vars.JF_URL}}
REPO_NAME: 'docker-blazemeter-repo'
IMAGE_NAME: 'docker-blazemeter-image'
TAG_NAME: ${{ github.run_number }}
BUILD_NAME: 'blazemeter-docker-build'
BUILD_NUMBER: ${{ github.run_number }}
BLAZEMETER_API_KEY: ${{ secrets.BLAZEMETER_API_KEY }}
BLAZEMETER_API_SECRET: ${{ secrets.BLAZEMETER_API_SECRET }}
BLAZEMETER_TEST_ID: "14909295"
ATTACH_OPTIONAL_MARKDOWN_TO_EVIDENCE: true
steps:
- uses: jfrog/setup-jfrog-cli@v4
name: jfrog-cli setup
env:
JF_URL: ${{ vars.ARTIFACTORY_URL }}
JF_ACCESS_TOKEN: ${{ secrets.JF_ACCESS_TOKEN }}

- name: Checkout repository
uses: actions/checkout@v4
with:
sparse-checkout: |
examples/blazemeter/**
sparse-checkout-cone-mode: false

- name: Build and publish Docker image
run: |
docker build --file ./examples/blazemeter/app/Dockerfile ./examples/blazemeter/app --tag ${{ env.REGISTRY_URL }}/${{ env.REPO_NAME }}/${{ env.IMAGE_NAME }}:${{ env.TAG_NAME }}
jf rt docker-push ${{ env.REGISTRY_URL }}/${{ env.REPO_NAME }}/${{ env.IMAGE_NAME }}:${{ env.TAG_NAME }} ${{ env.REPO_NAME }} --build-name=${{ env.BUILD_NAME }} --build-number=${{ env.BUILD_NUMBER }}

- name: Deploy app
# In a real scenario, this step would deploy your Docker image to a staging environment accessible by BlazeMeter.
run: |
echo "Simulating deployment of ${{ env.IMAGE_NAME }}:${{ env.TAG_NAME }} to a staging environment."

- name: Run BlazeMeter Performance Test
id: blazemeter_test_run
uses: Blazemeter/github-action@v8.5
with:
apiKey: ${{ env.BLAZEMETER_API_KEY }}
apiSecret: ${{ env.BLAZEMETER_API_SECRET }}
testID: ${{ env.BLAZEMETER_TEST_ID }}
continuePipeline: "false"

- name: Fetch BlazeMeter Results
working-directory: examples/blazemeter
run: |
LATEST_RUN_INFO=$(curl -s -X GET \
"https://a.blazemeter.com/api/v4/masters?testId=${{ env.BLAZEMETER_TEST_ID }}" \
-H "Content-Type: application/json" \
-u "${{ env.BLAZEMETER_API_KEY }}:${{ env.BLAZEMETER_API_SECRET }}" | \
jq -r '.result | sort_by(.id) | reverse | .[0]')

TEST_RUN_ID=$(echo "$LATEST_RUN_INFO" | jq -r '.id')

echo "Fetching BlazeMeter aggregate results"
BLAZEMETER_RESULTS=$(curl -s -X GET \
"https://a.blazemeter.com/api/v4/masters/$TEST_RUN_ID/reports/aggregatereport/data" \
-H "Content-Type: application/json" \
-u "${{ env.BLAZEMETER_API_KEY }}:${{ env.BLAZEMETER_API_SECRET }}")

echo "$BLAZEMETER_RESULTS" > blazemeter-predicate.json

- name: Generate optional custom markdown report
if: env.ATTACH_OPTIONAL_MARKDOWN_TO_EVIDENCE == 'true'
working-directory: examples/blazemeter
run: |
ARTIFACT_NAME="${{ env.REGISTRY_URL }}/${{ env.REPO_NAME }}/${{ env.IMAGE_NAME }}:${{ env.TAG_NAME }}"
python scripts/generate-markdown-report.py blazemeter-predicate.json "$ARTIFACT_NAME" "${{ env.BLAZEMETER_TEST_ID }}" > blazemeter-results.md

- name: Attach evidence to the package
working-directory: examples/blazemeter
run: |
jf evd create \
--package-name $IMAGE_NAME \
--package-version $TAG_NAME \
--package-repo-name $REPO_NAME \
--key "${{ secrets.PRIVATE_KEY }}" \
--key-alias "${{ secrets.PRIVATE_KEY_ALIAS }}" \
--predicate "blazemeter-predicate.json" \
--predicate-type "http://blazemeter.com/performance-results/v1" \
${{ env.ATTACH_OPTIONAL_MARKDOWN_TO_EVIDENCE == 'true' && '--markdown "blazemeter-results.md"' || '' }}
170 changes: 170 additions & 0 deletions examples/blazemeter/README.md
@@ -0,0 +1,170 @@
# BlazeMeter Evidence Integration Example

This repository provides a working example of a GitHub Actions workflow that automates performance testing with **BlazeMeter** for a Dockerized application. It then attaches the resulting test results as signed, verifiable evidence to the package in **JFrog Artifactory**.

This pattern is a useful building block for DevSecOps: every published image version carries signed, verifiable proof of the performance tests it was put through, which supports traceability and compliance across the software supply chain.

### **Key Features**

* **Automated Build & Push**: Builds a Docker image from a Dockerfile and pushes it to Artifactory.
* **Performance Testing**: Runs a BlazeMeter test using the official GitHub Action.
* **Evidence Generation**: Fetches BlazeMeter aggregate results as a JSON predicate file.
* **Optional Markdown Report**: Includes a helper script that generates a human-readable Markdown summary from the BlazeMeter test results.
* **Signed Evidence Attachment**: Attaches the test results to the corresponding package version in Artifactory using `jf evd create`, cryptographically signing them for integrity.
* **Learn More**: [What BlazeMeter can test](https://help.blazemeter.com/docs/guide/intro.html)

### **Workflow**

The following diagram illustrates the sequence of operations performed by the GitHub Actions workflow.

```mermaid
graph TD
A[Workflow Dispatch Trigger] --> B[Setup JFrog CLI]
B --> C[Checkout Repository]
C --> D[Build and Publish Docker Image to Artifactory]
D --> E[Deploy App]
E --> F[Run BlazeMeter Performance Test]
F --> G[Fetch BlazeMeter Results]
G --> H{Attach Optional Custom Markdown Report?}
H -->|Yes| I[Generate Custom Markdown Report]
H -->|No| J[Skip Markdown Report]
I --> K[Attach Evidence to Package]
J --> K[Attach Evidence to Package]
```

---

### **1. Prerequisites**

Before running this workflow, you must have:

* JFrog CLI 2.65.0 or above (installed automatically in the workflow)
* An Artifactory repository of type `docker` (e.g., `docker-blazemeter-repo`)
* A private key and a corresponding key alias configured in your JFrog Platform for signing evidence
* BlazeMeter API credentials and a valid test ID
* The following GitHub repository variables:
* `JF_URL` (Artifactory Docker registry domain, e.g. `mycompany.jfrog.io`)
* `ARTIFACTORY_URL` (Artifactory base URL)
* The following GitHub repository secrets:
* `JF_ACCESS_TOKEN` (Artifactory access token)
* `BLAZEMETER_API_KEY` (BlazeMeter API key)
* `BLAZEMETER_API_SECRET` (BlazeMeter API secret)
* `PRIVATE_KEY` (Private key for signing evidence)
* `PRIVATE_KEY_ALIAS` (Key alias for signing evidence)
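
If you do not yet have a signing key pair, here is a minimal sketch for creating one (assuming an Ed25519 key is acceptable in your JFrog Platform setup; other key types work analogously). Register the public key under an alias in the JFrog Platform UI, then store the private key and the alias as the `PRIVATE_KEY` and `PRIVATE_KEY_ALIAS` secrets.

```bash
# Sketch: generate an Ed25519 key pair for evidence signing (the key type is an assumption).
openssl genpkey -algorithm ed25519 -out private.pem
# Derive the public key; upload it to the JFrog Platform under your chosen alias.
openssl pkey -in private.pem -pubout -out public.pem
```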

### Environment Variables Used

* `REGISTRY_URL` - Docker registry domain
* `REPO_NAME` - Docker repository name
* `IMAGE_NAME` - Docker image name
* `TAG_NAME` - Docker image tag (uses GitHub run number)
* `BUILD_NAME` - Build name for Artifactory
* `BUILD_NUMBER` - Build number (uses GitHub run number)
* `ATTACH_OPTIONAL_MARKDOWN_TO_EVIDENCE` - Set to `true` to attach a Markdown report as evidence
* `BLAZEMETER_TEST_ID` - BlazeMeter test ID to execute
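
For reference, the workflow composes the full image reference from these variables:

```bash
# The image reference the workflow builds, pushes, and attaches evidence to.
IMAGE_REF="$REGISTRY_URL/$REPO_NAME/$IMAGE_NAME:$TAG_NAME"
echo "$IMAGE_REF"   # e.g. mycompany.jfrog.io/docker-blazemeter-repo/docker-blazemeter-image:42
```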

### **2. Configuration**

To use this workflow, you must configure the following GitHub Repository Secrets and Variables.

#### **GitHub Secrets**

Navigate to Settings > Secrets and variables > Actions and create the following secrets:

| Secret Name | Description |
| :---- | :---- |
| JF_ACCESS_TOKEN | A valid JFrog Access Token with permissions to read, write, and annotate in your target repository. |
| BLAZEMETER_API_KEY | BlazeMeter API key for authentication. |
| BLAZEMETER_API_SECRET | BlazeMeter API secret for authentication. |
| PRIVATE_KEY | The private key used to sign the evidence. This key corresponds to the alias configured in JFrog Platform. |
| PRIVATE_KEY_ALIAS | The key alias for signing evidence. |

#### **GitHub Variables**

Navigate to Settings > Secrets and variables > Actions and create the following variables:

| Variable Name | Description | Example Value |
| :---- | :---- | :---- |
| JF_URL | Artifactory Docker registry domain | mycompany.jfrog.io |
| ARTIFACTORY_URL | The Artifactory base URL | https://mycompany.jfrog.io |
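
If you prefer the terminal, the same secrets and variables can be created with the GitHub CLI. This is a sketch assuming `gh` (2.31 or later for `gh variable`) is authenticated against your fork; all values are placeholders:

```bash
gh secret set JF_ACCESS_TOKEN --body "<artifactory-access-token>"
gh secret set BLAZEMETER_API_KEY --body "<blazemeter-api-key>"
gh secret set BLAZEMETER_API_SECRET --body "<blazemeter-api-secret>"
gh secret set PRIVATE_KEY --body "$(cat private.pem)"
gh secret set PRIVATE_KEY_ALIAS --body "<key-alias>"

gh variable set JF_URL --body "mycompany.jfrog.io"
gh variable set ARTIFACTORY_URL --body "https://mycompany.jfrog.io"
```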

#### **Workflow Environment Variables**

You can also customize the workflow's behavior by modifying the `env` block in `.github/workflows/blazemeter-evidence-example.yml`:

| Variable Name | Description | Default Value |
| :---- | :---- | :---- |
| REPO_NAME | The name of the target Docker repository in Artifactory. | docker-blazemeter-repo |
| IMAGE_NAME | The name of the Docker image to be built and pushed. | docker-blazemeter-image |
| BUILD_NAME | The name assigned to the build information in Artifactory. | blazemeter-docker-build |
| ATTACH_OPTIONAL_MARKDOWN_TO_EVIDENCE | Set to true to generate and attach a Markdown report alongside the JSON evidence. Set to false to skip this step. | true |
| BLAZEMETER_TEST_ID | The BlazeMeter test ID to execute for performance testing. | 14909295 |

---

### **3. Usage**

This workflow is triggered manually.

1. Navigate to the **Actions** tab of your forked repository.
2. In the left sidebar, click on the **BlazeMeter evidence integration example** workflow.
3. Click the **Run workflow** dropdown button. You can leave the default branch selected.
4. Click the green **Run workflow** button.
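
Alternatively, the same run can be started from a terminal; a sketch assuming the GitHub CLI is authenticated against your fork:

```bash
# Trigger the workflow_dispatch event on the default branch.
gh workflow run blazemeter-evidence-example.yml
```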

Once the workflow completes successfully, you can navigate to your repository in Artifactory (`docker-blazemeter-repo`) and view the `docker-blazemeter-image` package. Under the **Evidence** tab for the latest version, you will find the signed BlazeMeter test results.
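
For a quick sanity check from a terminal, you can list the image tags that were pushed; this sketch assumes a configured JFrog CLI and the default repository and image names:

```bash
# List tags through Artifactory's Docker registry API.
jf rt curl "api/docker/docker-blazemeter-repo/v2/docker-blazemeter-image/tags/list"
```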

---

### **How It Works: A Step-by-Step Breakdown**

1. **Setup and Checkout**: The workflow begins by setting up the JFrog CLI and checking out the repository code.
2. **Build and Publish Docker Image**: It uses a standard `docker build` command to build the image. The `jf rt docker-push` command then pushes this image to your Artifactory instance and records build information via the `--build-name` and `--build-number` flags.
3. **Run BlazeMeter Performance Test**: The BlazeMeter GitHub Action runs the specified test ID and generates performance metrics.
4. **Fetch BlazeMeter Results**: The workflow queries the BlazeMeter API for the latest run of the test and downloads its aggregate results to `blazemeter-predicate.json`.
5. **Generate Optional Markdown Report**: If `ATTACH_OPTIONAL_MARKDOWN_TO_EVIDENCE` is `true`, a Python helper script parses the JSON output and creates a more human-readable `blazemeter-results.md` file.
6. **Attach Signed Evidence**: The final step uses the `jf evd create` command. It takes `blazemeter-predicate.json` as the official "predicate" and attaches it as evidence to the specific package version in Artifactory. The evidence is signed using the provided `PRIVATE_KEY`, ensuring its authenticity and integrity. A hypothetical sketch of the predicate's shape follows this list.
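
For orientation, here is a hypothetical example of the predicate file. Only `result`, `labelName`, and the metric fields read by `scripts/generate-markdown-report.py` are grounded in this repository; real BlazeMeter responses may carry additional fields, and all values shown are placeholders:

```bash
# Hypothetical predicate, written the same way the workflow writes it.
cat <<'EOF' > blazemeter-predicate.json
{
  "result": [
    {
      "labelName": "ALL",
      "samples": 1200,
      "avgResponseTime": 143.27,
      "90line": 210.0,
      "errorsCount": 0,
      "errorsRate": 0.0,
      "concurrency": 20
    }
  ]
}
EOF
```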

### **Key Commands Used**

* **Build and Push Docker Image:**
```bash
docker build --file ./examples/blazemeter/app/Dockerfile ./examples/blazemeter/app \
  --tag $REGISTRY_URL/$REPO_NAME/$IMAGE_NAME:$TAG_NAME
jf rt docker-push $REGISTRY_URL/$REPO_NAME/$IMAGE_NAME:$TAG_NAME $REPO_NAME \
  --build-name=$BUILD_NAME --build-number=$BUILD_NUMBER
```

* **Run BlazeMeter Test:**
```yaml
uses: Blazemeter/github-action@v8.5
with:
  apiKey: ${{ env.BLAZEMETER_API_KEY }}
  apiSecret: ${{ env.BLAZEMETER_API_SECRET }}
  testID: ${{ env.BLAZEMETER_TEST_ID }}
  continuePipeline: "false"
```

* **Fetch BlazeMeter Results:**
```bash
# Identify the most recent run (master) for the test, then pull its aggregate data.
TEST_RUN_ID=$(curl -s "https://a.blazemeter.com/api/v4/masters?testId=$BLAZEMETER_TEST_ID" \
  -H "Content-Type: application/json" \
  -u "$BLAZEMETER_API_KEY:$BLAZEMETER_API_SECRET" | \
  jq -r '.result | sort_by(.id) | reverse | .[0].id')

BLAZEMETER_RESULTS=$(curl -s -X GET \
  "https://a.blazemeter.com/api/v4/masters/$TEST_RUN_ID/reports/aggregatereport/data" \
  -H "Content-Type: application/json" \
  -u "$BLAZEMETER_API_KEY:$BLAZEMETER_API_SECRET")
```

* **Attach Evidence:**
```bash
jf evd create \
  --package-name $IMAGE_NAME \
  --package-version $TAG_NAME \
  --package-repo-name $REPO_NAME \
  --key "$PRIVATE_KEY" \
  --key-alias "$PRIVATE_KEY_ALIAS" \
  --predicate "blazemeter-predicate.json" \
  --predicate-type "http://blazemeter.com/performance-results/v1" \
  --markdown "blazemeter-results.md"   # only when ATTACH_OPTIONAL_MARKDOWN_TO_EVIDENCE is 'true'
```
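
* **Generate the Markdown Report Locally:** The report generator can also be exercised against a saved predicate file; a usage sketch in which the artifact name and test ID are placeholders:
  ```bash
  python scripts/generate-markdown-report.py \
    blazemeter-predicate.json \
    "mycompany.jfrog.io/docker-blazemeter-repo/docker-blazemeter-image:42" \
    "14909295" > blazemeter-results.md
  ```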

### **References**

* [BlazeMeter Documentation](https://help.blazemeter.com)
* [JFrog Evidence Management](https://jfrog.com/help/r/jfrog-artifactory-documentation/evidence-management)
* [JFrog CLI Documentation](https://jfrog.com/getcli/)
13 changes: 13 additions & 0 deletions examples/blazemeter/app/Dockerfile
@@ -0,0 +1,13 @@
FROM node:18-alpine

WORKDIR /usr/src/app

COPY package*.json ./

RUN npm install

COPY . .

EXPOSE 3000

CMD [ "npm", "start" ]
21 changes: 21 additions & 0 deletions examples/blazemeter/app/index.js
@@ -0,0 +1,21 @@
const express = require('express');
const app = express();
const port = 3000;

app.get('/', (req, res) => {
  res.send('Hello Users!');
});

app.get('/api/data', (req, res) => {
  const delay = Math.floor(Math.random() * 200) + 50;
  setTimeout(() => {
    res.json({
      title: "delectus aut autem",
      completed: false
    });
  }, delay);
});

app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});
13 changes: 13 additions & 0 deletions examples/blazemeter/app/package.json
@@ -0,0 +1,13 @@
{
  "name": "example-app",
  "version": "1.0.0",
  "description": "A simple example app",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.21.2"
  }
}

71 changes: 71 additions & 0 deletions examples/blazemeter/scripts/generate-markdown-report.py
@@ -0,0 +1,71 @@
import json
import os
import sys
from datetime import datetime, timezone

def fmt(value, spec=""):
    """Format a numeric metric, falling back to 'N/A' when the field is missing."""
    if value is None or value == 'N/A':
        return 'N/A'
    try:
        return format(value, spec)
    except (TypeError, ValueError):
        return str(value)

def generate_markdown_report(json_data, artifact_name, test_id):
    markdown_output = "# BlazeMeter Performance Test Report\n\n"
    markdown_output += f"**Artifact Name:** {artifact_name}  \n"
    markdown_output += f"**Test ID:** {test_id}  \n"
    markdown_output += f"**Execution Date:** {datetime.now(timezone.utc).strftime('%Y-%m-%d %H:%M:%S UTC')}  \n\n"

    # Prefer the aggregate 'ALL' label; fall back to the first result entry.
    summary_data = None
    if json_data and 'result' in json_data and isinstance(json_data['result'], list):
        for item in json_data['result']:
            if item.get('labelName') == 'ALL':
                summary_data = item
                break
        if not summary_data and json_data['result']:
            summary_data = json_data['result'][0]

    if not summary_data:
        markdown_output += "## No Performance Summary Data Found\n\n"
        markdown_output += "The aggregate report did not contain expected summary data.\n"
        return markdown_output

    markdown_output += "## Test Summary\n\n"
    markdown_output += "| Metric | Value |\n"
    markdown_output += "| :-------------------- | :--------- |\n"
    markdown_output += f"| **Total Samples** | {summary_data.get('samples', 'N/A')} |\n"
    markdown_output += f"| **Avg Response Time** | {fmt(summary_data.get('avgResponseTime'), '.2f')} ms |\n"
    markdown_output += f"| **Median Response** | {summary_data.get('medianResponseTime', 'N/A')} ms |\n"
    markdown_output += f"| **90th Percentile** | {summary_data.get('90line', 'N/A')} ms |\n"
    markdown_output += f"| **95th Percentile** | {summary_data.get('95line', 'N/A')} ms |\n"
    markdown_output += f"| **99th Percentile** | {summary_data.get('99line', 'N/A')} ms |\n"
    markdown_output += f"| **Min Response Time** | {summary_data.get('minResponseTime', 'N/A')} ms |\n"
    markdown_output += f"| **Max Response Time** | {summary_data.get('maxResponseTime', 'N/A')} ms |\n"
    markdown_output += f"| **Avg Latency** | {fmt(summary_data.get('avgLatency'), '.2f')} ms |\n"
    markdown_output += f"| **Std Deviation** | {fmt(summary_data.get('stDev'), '.2f')} |\n"
    markdown_output += f"| **Total Duration** | {summary_data.get('duration', 'N/A')} seconds |\n"
    markdown_output += f"| **Avg Throughput** | {fmt(summary_data.get('avgThroughput'), '.2f')} req/s |\n"
    markdown_output += f"| **Error Count** | {summary_data.get('errorsCount', 'N/A')} |\n"
    markdown_output += f"| **Error Rate** | {fmt(summary_data.get('errorsRate'), '.2f')}% |\n"
    markdown_output += f"| **Concurrency** | {summary_data.get('concurrency', 'N/A')} |\n"
    markdown_output += "\n"

    return markdown_output

if __name__ == "__main__":
    if len(sys.argv) < 4:
        print("Usage: python generate-markdown-report.py <path_to_blazemeter_report.json> <artifact_name> <test_id>")
        sys.exit(1)

    json_file_path = sys.argv[1]
    artifact_name = sys.argv[2]
    test_id = sys.argv[3]

    if not os.path.exists(json_file_path):
        print(f"Error: File not found at {json_file_path}")
        sys.exit(1)

    try:
        with open(json_file_path, 'r') as f:
            blazemeter_report_json = json.load(f)
        markdown_report = generate_markdown_report(blazemeter_report_json, artifact_name, test_id)
        print(markdown_report)
    except json.JSONDecodeError:
        print(f"Error: Invalid JSON in file {json_file_path}")
        sys.exit(1)
    except Exception as e:
        print(f"An unexpected error occurred: {e}")
        sys.exit(1)