A CLI-first API testing language written in C.
BAD helps you write readable API tests with requests, assertions, variables, imports, templates, hooks, and execution controls in a compact syntax.
- Main binary usage: `./bad file.bad [flags]`
- Full demo: `examples/01-basics/quick_start_demo.bad`
- Examples catalog: `examples/README.md`
- Reusable templates: `examples/02-imports/reusable_templates.bad`
- Group + hook workflow: `examples/03-hooks-and-flow/group_lifecycle_with_overrides.bad`
- Object/template/error-hook workflow: `examples/03-hooks-and-flow/object_template_url_hooks.bad`
- Deterministic regression suite: `examples/03-hooks-and-flow/regression_object_template_hooks.bad`
- Runtime stats demo: `examples/04-runtime/runtime_stats_report.bad`
- Benchmark scenario: `examples/04-runtime/benchmark_baseline.bad`
- Benchmark tool: `node bench/compare.js`
- Config sample: `examples/.badrc`
macOS (Homebrew):

```
brew install curl
make
./bad examples/01-basics/quick_start_demo.bad
```

Debian/Ubuntu:

```
sudo apt update
sudo apt install -y build-essential libcurl4-openssl-dev
make
./bad examples/01-basics/quick_start_demo.bad
```

Windows (MSYS2 UCRT64):

```
pacman -S --needed mingw-w64-ucrt-x86_64-gcc mingw-w64-ucrt-x86_64-curl make
make
./bad.exe examples/01-basics/quick_start_demo.bad
```

PowerShell runner:

```
.\run_bad.ps1 examples/01-basics/quick_start_demo.bad
```

Install system-wide:

```
sudo make install
bad examples/01-basics/quick_start_demo.bad
```

A minimal test file:

```
test "ping" {
  send GET "https://jsonplaceholder.typicode.com/users/1"
  expect status 200
}
```
Run it:

```
./bad quick.bad
```

Suite-level defaults can be set at the top of the file:

```
base_url = "https://jsonplaceholder.typicode.com"
timeout = 10000

test "get user" {
  send GET "/users/1"
  expect status 200
  expect json.id == 1
}

test "name" {
  send GET "/users/1"
  expect status 200
}
```
Supported methods: `GET`, `POST`, `PUT`, `PATCH`, `DELETE`
Examples:

```
send GET "/users/1"
send GET user_path

send POST "/posts" {
  body {
    title: "hello"
    userId: 1
  }
}

send POST "/posts" {
  body {
    title: "example"
    body: "created by bad"
    userId: 1
  }
  header {
    Content-Type: "application/json"
    Accept: "application/json"
  }
}
```
Aliases also supported: `payload` → `body`, `headers` → `header`
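A sketch of a POST written with those aliases, equivalent to the `body`/`header` form shown earlier (endpoint and fields are illustrative):

```
send POST "/posts" {
  payload {
    title: "hello"
    userId: 1
  }
  headers {
    Accept: "application/json"
  }
}
```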
```
expect status 200
assert status 201
expect json.id exists
expect json.user.name == "krish"
expect json.count > 0
expect json.items.0.price <= 100
expect json.status != "error"
```
Operators: `==`, `!=`, `>`, `>=`, `<`, `<=`
```
let auth_token = "Bearer demo"
let user_path = "/users/1"

send GET user_path {
  header {
    Authorization: auth_token
  }
}
```
Store reusable object fragments and spread them directly into body or header:
```
let common_headers = {
  Accept: "application/json"
  X-Client: "bad"
}

let base_payload = {
  userId: 1
  published: true
}

send POST "/posts" {
  header common_headers
  body {
    base_payload
    title: "from spread"
  }
}
```
`body`/`header` can accept either a block or a direct object source:

```
body base_payload
header common_headers
```
`let` can read from the most recent response:
```
send POST "/auth/login" {
  body {
    email: "temp@example.com"
    password: "password123"
  }
}

let jwt = json.token
let status_code = status
let auth_header = bearer jwt
let req_ms = time_ms
let now = now_ms
let api_base = env API_BASE
let first_arg = args 0

print jwt
print status_code
```
Notes:

- `json.path` reads from the last response body.
- `status` reads the last response status code.
- `time_ms` reads the last response duration in milliseconds.
- `now_ms` reads current epoch time in milliseconds.
- `time <name>` reads elapsed milliseconds for a named timer.
- `env NAME` reads process environment variable values.
- `args N` reads the positional CLI argument at index `N`.
- `bearer <value>` prepends `Bearer ` if missing.
- Object variables are spread-only in request body/header; using them as scalar values is rejected.
- `print <value>` prints resolved values during test execution.
- Variables are file-scoped at runtime, so values set in one test can be reused in later tests or top-level `print` statements.
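A sketch of that file-scoped reuse, assuming a hypothetical `/auth/login` endpoint that returns a `token` field: the first test captures a value, the second reuses it.

```
test "login" {
  send POST "/auth/login" {
    body {
      email: "temp@example.com"
      password: "password123"
    }
  }
  let jwt = json.token
}

test "authorized request" {
  let auth_header = bearer jwt
  send GET "/users/1" {
    header {
      Authorization: auth_header
    }
  }
  expect status 200
}
```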
BAD exposes built-in runtime metrics through `stats` and `stats.*`.
`stats` is treated as a built-in namespace in value expressions.
Examples:

```
print stats
print stats.requests.total
print stats.requests.last_time_ms
print stats.assertions.passed
print stats.runtime.soft_errors
fail_if stats.requests.avg_time_ms > 1200 because "too slow"
```
Supported selectors include:

- Requests: `stats.requests.total`, `stats.requests.successful`, `stats.requests.network_failures` (alias: `stats.requests.failed`), `stats.requests.last_status`, `stats.requests.last_time_ms`, `stats.requests.avg_time_ms`, `stats.requests.total_time_ms`
- Assertions: `stats.assertions.passed`, `stats.assertions.failed`, `stats.assertions.total`, `stats.assertions.current_test_passed`, `stats.assertions.current_test_failed`
- Runtime: `stats.runtime.soft_errors`, `stats.runtime.zero_assert_tests`, `stats.runtime.skipped_tests`, `stats.runtime.skipped_groups`, `stats.runtime.filtered_tests`, `stats.runtime.filtered_groups`, `stats.runtime.strict_runtime_errors`
- Timers: `stats.timers.count`
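A sketch of a suite-level quality gate built from these selectors (the test name and thresholds are illustrative):

```
test "runtime gate" {
  send GET "/users/1"
  expect status 200
  fail_if stats.requests.avg_time_ms > 1500 because "suite too slow"
  fail_if stats.runtime.soft_errors > 0 because "soft errors recorded"
  print stats.assertions.total
}
```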
```
base_url = "https://jsonplaceholder.typicode.com"
timeout = 10000
print_request = false
print_response = true
show_time = true
show_timestamp = true
strict_runtime_errors = false
json_pretty = true
save_history = true
save_steps = true
history_mode = "per-file"        # all | per-file | per-test | off
history_methods = "GET,POST"     # allow-list by method
history_exclude_methods = "DELETE"  # deny-list by method
history_only_failed = false
history_include_headers = true
history_include_request_body = true
history_include_response_body = true
history_max_body_bytes = 0       # 0 => unlimited
history_dir = ".bad-history"
history_file = ".bad-history/all-runs.jsonl"
```
```
if json.token exists {
  print "token present"
} else_if status == 429 {
  sleep 100
  stop because "rate limited"
} else {
  stop_all because "token missing"
}
```
Logical operators are supported inside conditions:

```
if status == 200 and not json.error exists {
  print "healthy response"
}

if status == 200 or status == 201 {
  print "accepted status"
}

if (status == 200 or status == 201) and not (json.error exists) {
  print "explicitly grouped condition"
}
```
Condition operator precedence:

| Priority | Operator(s) | Notes |
|---|---|---|
| 1 (highest) | `(...)` | explicit grouping |
| 2 | `not` | unary negation |
| 3 | `==`, `!=`, `<`, `<=`, `>`, `>=`, `contains`, `starts_with`, `ends_with`, `regex`, `in`, `exists` | comparison/membership |
| 4 | `and` | logical conjunction |
| 5 (lowest) | `or` | logical disjunction |
Condition grammar (simplified):

```
condition  := or_expr
or_expr    := and_expr ("or" and_expr)*
and_expr   := unary_expr ("and" unary_expr)*
unary_expr := "not" unary_expr | "(" condition ")" | primary
primary    := value ["exists" | op value | "in" list]
op         := "==" | "!=" | "<" | "<=" | ">" | ">=" |
              "contains" | "starts_with" | "ends_with" | "regex"
list       := "[" value ("," value)* "]"
```
Top-level if/else is also supported (outside tests). This is useful for suite preflight checks:

```
let preflight = send GET "/health"
if status != 200 {
  stop_all because "health check failed"
} else {
  print "preflight ok"
}
```
```
skip_if status != 200 because "service unavailable"
fail_if status >= 500 because "server error"

expect json.message contains "ready"
expect json.name starts_with "clixiya"
expect json.email ends_with "@example.com"
expect json.trace_id regex "^[a-z0-9-]{8,}$"
expect status in [200, 201, 204]

if status in [200, 201] and not json.error exists {
  print "ok response set"
}
```
```
sleep 250
stop because "skip remaining steps in this test"
stop_all because "abort full file execution"
```
Global defaults:

```
retry_count = 2
retry_delay_ms = 100
retry_backoff = linear
retry_jitter_ms = 25
```

Per-request override:

```
send GET "http://127.0.0.1:9/unreachable" {
  retry 3
  retry_delay_ms 200
  retry_backoff exponential
  retry_jitter_ms 50
}
```
Retry applies to network failures, 429, and 5xx statuses.
```
expect time_ms < 300
expect time auth_flow < 2000
```
Track duration across multiple steps:

```
time_start auth_flow
send POST "/auth/login" {
  body {
    email: "temp@example.com"
    password: "password123"
  }
}
time_stop auth_flow

let auth_flow_ms = time auth_flow
print auth_flow_ms
```
Built-in timing values:

- `time_ms`: most recent request duration in milliseconds.
- `time <name>`: elapsed milliseconds for timer `<name>`.
- `now_ms`: current epoch time in milliseconds.
- `<name>_ms`: auto variable written when `time_stop <name>` runs.
- `last_time_ms`: auto variable for the most recent request duration.
```
import "examples/02-imports/shared_exports.bad"
import "examples/02-imports/selective_source.bad" only auth_token, api_users
import "examples/02-imports/reusable_templates.bad" only profile_path as user_path
```
```
export let auth_token = "Bearer xyz"

export request get_user {
  method GET
  path "/users/1"
}
```
Aliases: `use` → `import`, `assert` → `expect`, `req` → `request`, `template` → `request`, `payload` → `body`, `headers` → `header`, `base` → `base_url`, `wait` → `timeout`
```
template get_user {
  method GET
  path "/users/1"
  header common_headers
  expect status 200
  expect json.id exists
}
```
Notes:

- `template` is an alias of `request`.
- `path` is optional in template declarations and can be supplied at send time.
- `expect ...` statements inside a template run automatically after the request executes.
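A sketch of a path-less template whose path is supplied at send time (the template name and path are illustrative):

```
template get_resource {
  method GET
  expect status 200
}

test "path at send time" {
  send req get_resource with {
    path "/users/2"
  }
}
```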
```
test "template call" {
  send req get_user
  expect status 200
}
```
Also valid:

```
send request get_user
send template get_user
```
```
send req get_user with {
  path "/users/2"
  payload {
    title: "override"
  }
  headers {
    X-Demo: "1"
  }
}
```
Override behavior:

- `path` override replaces the template path.
- Header overrides merge with template headers (`with` values win on duplicate keys).
- Body override replaces the template body by default.
- Set `body_merge true` (or `merge_body true`) inside `with { ... }` to merge template body defaults with override fields.
Example body merge:

```
send req create_todo with {
  body {
    title: "override title"
  }
  body_merge true
}
```
```
before_all {
  print "suite setup"
}

before_each {
  let trace = "start"
}

after_each {
  let trace = "end"
}

after_all {
  let suite = "done"
}

on_error {
  print "any assertion/network failure"
}

on_assertion_error {
  print "assertion failed"
}

on_network_error {
  print "transport failed"
}
```
Pattern matching supports `*` and is evaluated against both the full URL and the path.
```
before_url "/*" {
  print "before request"
}

after_url "/users/*" {
  print "after users request"
}

on_url_error "/users/*" {
  print "users request failed"
}
```
```
group "users" {
  test "get one" {
    send GET "/users/1"
    expect status 200
  }
}
```
```
skip test "temporary" because "flaky"

skip group "legacy" because "slow" {
  test "old flow" {
    send GET "/users/9"
    expect status 200
  }
}
```
```
only test "focus this"
only group "smoke"
only import "examples/02-imports/reusable_templates.bad"
only req load_profile,load_todo
```
Filter behavior summary:

- If any `only test` exists, only those tests run.
- If no `only test` but `only group` exists, only those groups run.
- `only import` limits import execution.
- `only req` runs tests that use selected request templates.
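A sketch of how the first two rules interact (names are illustrative): with both directives present, only the `only test` selection runs.

```
only group "smoke"
only test "focus this"

group "smoke" {
  test "health" {
    send GET "/users/1"
    expect status 200
  }
}

test "focus this" {
  send GET "/users/2"
  expect status 200
}
```

Here only `focus this` runs, because an `only test` entry takes precedence over `only group`.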
Basic runs:

```
./bad file.bad
./bad file.bad --verbose
./bad file.bad --full-trace
./bad file.bad -- arg0 arg1
```

Output shaping:

```
./bad file.bad --flat
./bad file.bad --table
./bad file.bad --json-view
./bad file.bad --json-pretty
```

Request/response diagnostics:

```
./bad file.bad --print-request
./bad file.bad --print-response
./bad file.bad --show-time
./bad file.bad --show-timestamp
```

History saving:

```
./bad file.bad --save
./bad file.bad --save --save-steps
./bad file.bad --save --save-dir .history
./bad file.bad --save --save-file .bad-history/all-runs.jsonl
./bad file.bad --save --save-mode per-file
./bad file.bad --save --save-mode per-test --save-dir .history
./bad file.bad --save --save-methods GET,POST
./bad file.bad --save --save-exclude-methods DELETE
./bad file.bad --save --save-only-failed
./bad file.bad --save --no-save-response-body --save-max-body-bytes 2048
```

Runtime overrides:

```
./bad file.bad --base https://staging.api.com
./bad file.bad --timeout 3000
./bad file.bad --fail-fast
./bad file.bad --strict-runtime-errors
./bad file.bad --remember-token
./bad file.bad --timing --timestamp
./bad file.bad --config examples/.badrc
./bad file.bad --log-level debug --color always
```

Sample output:

```
◆ test "login"
  OK status 200
  OK json.token exists
  OK (2/2 passed) [134ms]

◆ test "get user"
  response:
  ├─ id: 1
  ├─ name: "krish"
  └─ email: "k@example.com"
```

Flat view:

```
user.id = 1
user.name = "krish"
```

Table view:

```
id    name
----  --------
1     krish
2     alex
```
Example:

```
{
  "base_url": "https://api.example.com",
  "timeout": 10000,
  "pretty_output": true,
  "save_history": true,
  "history_mode": "all",
  "history_methods": "",
  "history_exclude_methods": "",
  "history_only_failed": false,
  "save_steps": true,
  "history_include_headers": true,
  "history_include_request_body": true,
  "history_include_response_body": true,
  "history_max_body_bytes": 0,
  "history_dir": ".bad-history",
  "history_file": ".bad-history/all-runs.jsonl",
  "history_format": "jsonl",
  "print_request": false,
  "print_response": false,
  "show_time": false,
  "show_timestamp": false,
  "json_view": false,
  "json_pretty": true,
  "remember_token": false,
  "use_color": true,
  "fail_fast": false,
  "strict_runtime_errors": false,
  "log_level": "info"
}
```

CLI flags override `.badrc` values.
With `--save`, BAD writes structured test history.

Records include:

- schema
- id
- timestamp
- source file
- test name
- request snapshot
- response snapshot
- optional step timeline (`--save-steps`)
Advanced save controls:

- `history_mode` / `--save-mode`: `all`, `per-file`, `per-test`, `off`
- `history_methods` / `--save-methods`: allow-list request methods
- `history_exclude_methods` / `--save-exclude-methods`: deny-list methods
- `history_only_failed` / `--save-only-failed`: save only failed tests
- `history_include_headers`: include or omit request headers in history
- `history_include_request_body`: include or omit request body
- `history_include_response_body`: include or omit response body
- `history_max_body_bytes` / `--save-max-body-bytes`: truncate stored bodies

`--save-file` still appends to one JSONL file in `all` mode. Use `per-file` or `per-test` mode when you want automatic split files under `history_dir`.
The `examples/` directory is bundled and ready to run.

Start here:

- `examples/01-basics/quick_start_demo.bad` for a first run
- `examples/04-runtime/advanced_runtime_controls.bad` for grouped conditions, string operators, `in`, retry backoff/jitter, env/args, timers
- `examples/02-imports/composed_import_suite.bad` for import-driven suites
- `examples/03-hooks-and-flow/object_template_url_hooks.bad` for object spread vars, template inline expects, and URL/error hooks
- `examples/03-hooks-and-flow/regression_object_template_hooks.bad` for deterministic regression of object/template/hook behavior
- `examples/04-runtime/runtime_stats_report.bad` for built-in runtime metrics via `stats.*`
- `examples/04-runtime/benchmark_baseline.bad` for benchmark runs

Full index and coverage map: `examples/README.md`
Keywords:

`test`, `send`, `expect`, `let`, `print`, `import`, `export`, `request`, `template`, `group`, `before_all`, `before_each`, `after_each`, `after_all`, `on_error`, `on_assertion_error`, `on_network_error`, `before_url`, `after_url`, `on_url_error`, `skip`, `skip_if`, `fail_if`, `only`, `because`, `with`, `if`, `and`, `or`, `not`, `else`, `else_if`, `contains`, `starts_with`, `ends_with`, `regex`, `in`, `retry`, `retry_delay_ms`, `retry_backoff`, `retry_jitter_ms`, `sleep`, `stop`, `stop_all`, `bearer`, `env`, `args`, `time_start`, `time_stop`, `time`, `time_ms`, `now_ms`

Built-ins and operators:

`status`, `json`, `exists`, `contains`, `starts_with`, `ends_with`, `regex`, `in`

Aliases:

`body`/`payload`, `header`/`headers`, `base`/`base_url`, `wait`/`timeout`
- `test`: `test "health" { ... }`
- `send`: `send GET "/users/1"`
- `expect`/`assert`: `expect status 200`
- `let`: `let token = json.token`
- `print`: `print token`
- `import`/`use`: `import "examples/02-imports/shared_exports.bad"`
- `export`: `export let profile_path = "/users/1"`
- `request`/`req`: `request get_user { method GET path "/users/1" }`
- `template`: `template get_user { method GET path "/users/1" }`
- `group`: `group "users" { ... }`
- `before_all`: `before_all { print "suite setup" }`
- `before_each`: `before_each { let trace = "start" }`
- `after_each`: `after_each { let trace = "end" }`
- `after_all`: `after_all { print "suite done" }`
- `on_error`: `on_error { print "failure" }`
- `on_assertion_error`: `on_assertion_error { print "assert failed" }`
- `on_network_error`: `on_network_error { print "network failed" }`
- `before_url`: `before_url "/*" { print "before" }`
- `after_url`: `after_url "/users/*" { print "after" }`
- `on_url_error`: `on_url_error "/users/*" { print "url failed" }`
- `skip`: `skip test "legacy" because "flaky"`
- `skip_if`: `skip_if status != 200 because "service down"`
- `fail_if`: `fail_if status >= 500 because "server error"`
- `only`: `only test "smoke"`
- `because`: `stop because "precondition failed"`
- `with`: `send req get_user with { path "/users/2" }`
- `body_merge`/`merge_body`: `send req create_todo with { body { title: "x" } body_merge true }`
- `if`: `if status == 200 { ... }`
- `else_if`: `else_if status == 429 { sleep 100 }`
- `else`: `else { stop_all because "abort" }`
- `and`: `if status == 200 and json.ok exists { ... }`
- `or`: `if status == 200 or status == 201 { ... }`
- `not`: `if not json.error exists { ... }`
- Parentheses grouping: `if (status == 200 or status == 201) and not json.error exists { ... }`
- `contains`: `expect json.message contains "ready"`
- `starts_with`: `expect json.name starts_with "clixiya"`
- `ends_with`: `expect json.email ends_with "@example.com"`
- `regex`: `expect json.trace_id regex "^[a-z0-9-]+$"`
- `in`: `expect status in [200, 201, 204]`
- `retry`: `send GET "/foo" { retry 3 }`
- `retry_delay_ms`: `send GET "/foo" { retry_delay_ms 200 }`
- `retry_backoff`: `send GET "/foo" { retry_backoff exponential }`
- `retry_jitter_ms`: `send GET "/foo" { retry_jitter_ms 50 }`
- `sleep`: `sleep 250`
- `stop`: `stop because "end this test"`
- `stop_all`: `stop_all because "stop suite"`
- `bearer`: `let auth = bearer token`
- `env`: `let api_base = env API_BASE`
- `args`: `let first = args 0`
- `time_start`: `time_start auth_flow`
- `time_stop`: `time_stop auth_flow`
- `time`: `expect time auth_flow < 2000`
- `time_ms`: `expect time_ms < 300`
- `now_ms`: `expect now_ms > 0`
- `status`: `expect status >= 200`
- `json`: `expect json.user.id == 1`
- `exists`: `expect json.user.name exists`
- `body`/`payload`: `payload { title: "hello" }`
- `header`/`headers`: `headers { Accept: "application/json" }`
- `base`/`base_url`: `base = "https://api.example.com"`
- `wait`/`timeout`: `wait = 10000`
```
bad/
├── bench/
│   ├── compare.js
│   └── results/
├── examples/
│   ├── README.md
│   ├── demo.bad
│   ├── advanced_features.bad
│   └── benchmark.bad
├── include/
│   ├── bad.h
│   └── bad_platform.h
├── src/
│   ├── main.c
│   ├── lexer.c
│   ├── parser.c
│   ├── runtime.c
│   ├── http.c
│   ├── json_helpers.c
│   └── vars.c
├── run_bad.sh
├── run_bad.ps1
├── run_bad.cmd
├── extenstion/
└── server/
```
```
make clean && make
```

Ensure curl development libraries are installed.
Increase the timeout in the file or via CLI:

```
timeout = 30000
```

or

```
./bad file.bad --timeout 30000
```

Use the correct relative path from the current working directory.
Use `--print-response` or `--json-pretty` to inspect the response shape.
- `0` => all assertions passed
- `1` => one or more assertions failed
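These exit codes make it easy to gate CI steps on a run. A minimal shell sketch, using a hypothetical `run_suite` function as a stand-in for `./bad file.bad` so the flow is self-contained:

```shell
# Stand-in for `./bad file.bad`; returns 1 to simulate failed assertions.
run_suite() {
  return 1
}

rc=0
run_suite || rc=$?

if [ "$rc" -eq 0 ]; then
  echo "all assertions passed"
else
  echo "assertions failed (exit code $rc)"
fi
```

In a real pipeline you would call the binary directly; most CI systems fail the job automatically on any non-zero exit status.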
Yes.
Yes, e.g. `send GET user_path`.
Yes, `skip test "name" because "reason"`.
Yes, set `base_url` to a local server address.
```
# simple run
./bad examples/01-basics/quick_start_demo.bad

# config run
./bad examples/01-basics/quick_start_demo.bad --config examples/.badrc

# debug request and response
./bad examples/01-basics/quick_start_demo.bad --print-request --print-response

# include timing diagnostics
./bad examples/01-basics/quick_start_demo.bad --show-time --show-timestamp

# pretty output + history
./bad examples/01-basics/quick_start_demo.bad --json-pretty --save --save-steps

# fail-fast CI style
./bad examples/01-basics/quick_start_demo.bad --fail-fast --color never
```

- Update parser/runtime features.
- Add or update examples.
- Run the full example suite.
- Update docs (`README.md`, extension docs).
- Rebuild the extension VSIX if language behavior changed.
Run the benchmark comparison:

```
node bench/compare.js
```

Latest checked-in results: `bench/results/latest.md`
Current snapshot (process startup included per run):
| Tool | Runs | Mean | Median | P95 | Min | Max |
|---|---|---|---|---|---|---|
| bad | 15 | 361.77 ms | 371.66 ms | 436.30 ms | 282.59 ms | 436.30 ms |
| curl | 15 | 355.22 ms | 350.45 ms | 450.34 ms | 280.03 ms | 450.34 ms |
Interpretation:

- BAD is within a close range of raw `curl` for this public endpoint.
- Numbers vary with network jitter; rerun locally for decision-grade comparisons.
End of main README.