A powerful, intuitive CLI tool for querying Better Stack logs with GraphQL-inspired syntax. Query your logs naturally without memorizing complex SQL or API endpoints.
- GraphQL-inspired query syntax - Write queries that feel natural and are easy to remember
- Simple commands - Common operations like `tail`, `errors`, and `search` work out of the box
- Smart filtering - Filter by level, subsystem, time ranges, or any JSON field
- Beautiful output - Color-coded, formatted logs that are easy to read
- Multiple formats - Export as JSON, CSV, or formatted tables
- Fast - Built with Bun for maximum performance
- Real-time following - Tail logs in real-time with the `-f` flag
- Query history - Saves your queries for quick re-use
- Configurable - Set defaults for source, output format, and more
- Responsive networking - Uses the native Fetch API with sane timeouts so hung connections never stall your session
Better Stack's MCP (Model Context Protocol) endpoints expose the same log storage that bslog queries, but they serve different workflows. Reference the official MCP integration guide for setup, and use the two side by side:
- Token footprint matters. The stock MCP ships ~50 tools and consumes roughly 7k tokens of system context before it does any useful work. If you need lean prompts or run on constrained models, that overhead is noticeable.
- Latency favors the CLI. bslog keeps a single HTTP session alive and streams JSON rows immediately; in practice a 5,000-row fetch finished ~40% faster than the MCP round-tripping small SQL snippets through the assistant shell.
- Lean on MCP for quick context in chat-based tooling. Assistants can run small ClickHouse snippets via MCP to summarize recent events without leaving the conversation.
- Reach for bslog when you need the full payload. CLI commands stream nested JSON fields, respect your saved defaults, and integrate with pipes and `jq` for deeper debugging.
- Consider credential flow. MCP sessions often require short-lived "Connect remotely" credentials, while bslog reads long-lived environment variables once per machine.
- Collaborate with both. Share MCP snapshots with teammates for at-a-glance status, then dive into bslog to follow requests, inspect stack traces, or automate reporting.
| Reach for MCP when you need... | Reach for bslog CLI when you need... |
|---|---|
| Lightweight, conversational context inline with an AI assistant | Full-fidelity log payloads, streaming output, and piping into local tooling |
| Quick summaries or counts across a narrow time window | Chronological drill-down across multiple sources with --sources dev,prod |
| Built-in helpers for quick aggregated snapshots | Explicit SQL, --format json, or --jq when you need rollups |
| Temporary credentials you can rotate per debugging session | Long-lived local credentials stored in your shell profile |
| Remote teammates to reproduce a query in their chat workspace | Automation or scripting from CI/cron with reproducible output formats |
Skip raw SQL when you need structured filters: every `tail`/`search`/`errors`/`warnings`/`trace` command now accepts `--where key=value`. Repeat the flag to chain predicates:
bslog search "timeline" --where module=timeline --where env=production --since 1h
bslog tail prod --where userId='0199abc' --where attempt=3 --until 2025-09-24T18:00

Filters perform equality matches; values auto-detect booleans (true/false), numbers, null, quoted strings, and JSON objects/arrays.
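For instance, type detection means booleans, numbers, and JSON values can be passed straight through (the field names below are illustrative, not part of any fixed schema):

```bash
# Boolean and numeric values are detected automatically (field names are illustrative)
bslog errors --where retryable=true --where statusCode=500 --since 30m

# JSON objects and arrays are parsed as structured values (quote them for the shell)
bslog tail prod --where 'context={"region":"eu-west-1"}' --limit 25
```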
Combine `--since` and the new `--until` flag on any `tail`/`errors`/`warnings`/`search`/`trace` command to capture a fixed time window.
bslog tail prod --since 2025-09-24T13:00 --until 2025-09-24T14:00

Use `--format json --jq '<filter>'` to trim or reshape large payloads without leaving the terminal.
bslog tail prod --format json --jq '.[].message' --limit 20
bslog search "timeout" --format json --jq '[.[] | {dt, message}]'The CLI automatically switches to JSON when --jq is provided; if jq is missing, the raw JSON payload is printed instead with an error message.
# Using bun (recommended)
bun add -g @steipete/bslog
# Or using npm
npm install -g @steipete/bslog

# From source (for development)
git clone https://github.com/steipete/bslog.git
cd bslog
bun install # Uses Bun for package management
bun run build # Uses Bun for building and running
bun link # Link globally for testing

- Bun >= 1.0.0 - JavaScript runtime, bundler, and package manager
# Install Bun
curl -fsSL https://bun.sh/install | bash

Better Stack uses two different authentication systems, and both are required for full functionality:
Better Stack separates authentication for security and access control:
- Telemetry API Token: Manages sources and metadata (read-only access to source configuration)
- Query API Credentials: Provides access to actual log data (sensitive information)
This separation allows teams to grant different access levels - for example, someone might be able to list sources but not read the actual logs.
What it's used for:
- Listing available log sources (`bslog sources list`)
- Getting source metadata (team ID, table names)
- Resolving source names to table identifiers
- Required for ALL commands that reference sources by name
How to get it:
- Log into Better Stack
- Navigate to Settings > API Tokens
- Create or copy your Telemetry API token
- Add to your shell configuration (`~/.zshrc`, `~/.bashrc`, etc.):
export BETTERSTACK_API_TOKEN="your_telemetry_token_here"

What it's used for:
- Actually reading log data (`bslog tail`, `bslog errors`, etc.)
- Executing SQL queries against log tables
- Any command that retrieves log content
How to get them:
- Go to Better Stack > Logs > Dashboards
- Click "Connect remotely"
- Click "Create credentials"
- Important: Copy the password immediately (it won't be shown again)
- Add to your shell configuration:
export BETTERSTACK_QUERY_USERNAME="your_username_here"
export BETTERSTACK_QUERY_PASSWORD="your_password_here"

Add all three environment variables to your shell configuration:
# ~/.zshrc or ~/.bashrc
# For source discovery and metadata
export BETTERSTACK_API_TOKEN="your_telemetry_token_here"
# For querying log data
export BETTERSTACK_QUERY_USERNAME="your_username_here"
export BETTERSTACK_QUERY_PASSWORD="your_password_here"

Then reload your shell:
source ~/.zshrc # or ~/.bashrc

Test that both authentication systems are working:
# Test Telemetry API (should list your sources)
bslog sources list
# Test Query API (should retrieve recent logs)
bslog tail -n 5

# Verify installation
bslog --version
# List all your log sources
bslog sources list
# Set your default source
bslog config source my-app-production
# Optional: capture every log level by default or tighten filtering later
bslog config set logLevel all
# Check your configuration
bslog config show

# Get last 100 logs
bslog tail
# Get last 50 error logs
bslog errors -n 50
# Search for specific text
bslog search "user authentication failed"
# Follow logs in real-time (like tail -f)
bslog tail -f
# Get logs from the last hour
bslog tail --since 1h
# Get warnings from the last 2 days
bslog warnings --since 2d

The killer feature of bslog is its intuitive query language that feels like GraphQL:
# Simple query with field selection
bslog query "{ logs(limit: 100) { dt, level, message } }"
# Filter by log level
bslog query "{ logs(level: 'error', limit: 50) { * } }"
# Time-based filtering
bslog query "{ logs(since: '1h') { dt, message, error } }"
bslog query "{ logs(since: '2024-01-01', until: '2024-01-02') { * } }"# Filter by subsystem
bslog query "{ logs(subsystem: 'api', limit: 100) { dt, message } }"
# Filter by custom fields
bslog query "{ logs(where: { userId: '12345' }) { * } }"
# Complex filters
bslog query "{
logs(
level: 'error',
subsystem: 'payment',
since: '1h',
limit: 200
) {
dt,
message,
userId,
error_details
}
}"
# Search within logs
bslog query "{ logs(search: 'database connection') { dt, message } }"- Use
*to get all fields - Specify individual fields:
dt, level, message, customField - Access nested JSON fields directly (dot + bracket paths) e.g.
metadata.proxy[0].status,metadata["complex key"].value - CLI shortcut: combine
--fieldswith compact formatters when you just want to confirm counts, e.g.bslog errors -n 5 --fields dt,message.
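As an illustration, a nested path can be selected directly inside a query (the `metadata` path below is hypothetical):

```bash
# Pull a nested JSON field alongside top-level columns (metadata path is hypothetical)
bslog query "{ logs(limit: 20) { dt, message, metadata.proxy[0].status } }"
```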
bslog tail [options]
Options:
-n, --limit <number> Number of logs to fetch (default: 100)
-s, --source <name> Source name
-l, --level <level> Filter by log level
--subsystem <name> Filter by subsystem
--since <time> Show logs since (e.g., 1h, 2d, 2024-01-01)
--fields <names> Comma-separated list of fields to select (e.g., dt,message,level)
-f, --follow Follow log output
--interval <ms> Polling interval in milliseconds (default: 2000)
--format <type> Output format (json|table|csv|pretty)
--sources <names> Comma-separated list of sources to merge chronologically
-v, --verbose Show SQL query and debug information
Examples:
bslog tail -n 50 # Last 50 logs
bslog tail -f # Follow logs
bslog tail -v # Show SQL queries
bslog tail --since 1h --level error # Errors from last hour
bslog tail --sources dev,prod # Merge logs from multiple sources chronologically

When you pass multiple names to --sources, bslog issues concurrent queries, merges the results in strict timestamp order, and augments every row with a source field. The combined output works with all formatters (pretty, json, table, csv) and keeps follow mode responsive by polling each source independently.
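Because every merged row carries that source field, a small sketch like the following (assuming jq is installed) can summarize where entries came from:

```bash
# Count merged rows per source (relies on the source field added during merging)
bslog errors --sources dev,prod --since 1h --format json | \
  jq 'group_by(.source) | map({source: .[0].source, count: length})'
```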
bslog errors [options]
Options:
-n, --limit <number> Number of logs to fetch (default: 100)
-s, --source <name> Source name
--fields <names> Comma-separated list of fields to select (e.g., dt,message,level)
--since <time> Show errors since
--format <type> Output format
--sources <names> Comma-separated list of sources to merge chronologically
Examples:
bslog errors --since 1h # Errors from last hour
bslog errors -n 200 --format json # Last 200 errors as JSON

bslog warnings [options]
# Same options as errors command

bslog search <pattern> [options]
Options:
-n, --limit <number> Number of logs to fetch (default: 100)
-s, --source <name> Source name
-l, --level <level> Filter by log level
--since <time> Search logs since
--fields <names> Comma-separated list of fields to select (e.g., dt,message,level)
--format <type> Output format
--sources <names> Comma-separated list of sources to merge chronologically
Examples:
bslog search "authentication failed"
bslog search "user:john@example.com" --level error
bslog search "timeout" --since 1h --subsystem apibslog trace <requestId> [options]
Options:
-n, --limit <number> Number of logs to fetch (default: 100)
-s, --source <name> Source name
--since <time> Show logs since
--until <time> Show logs until
--format <type> Output format (json|table|csv|pretty)
--sources <names> Comma-separated list of sources to merge chronologically
-v, --verbose Show SQL query and debug information
Examples:
bslog trace 01HXABCDEF --sources dev,prod
bslog trace req-123 --since 1h

bslog query <query> [options]
Options:
-s, --source <name> Source name
-f, --format <type> Output format (default: pretty)
-v, --verbose Show SQL query and debug information
Examples:
bslog query "{ logs(limit: 100) { dt, level, message } }"
bslog query "{ logs(level: 'error', since: '1h') { * } }" --verbosebslog sql <sql> [options]
Options:
-f, --format <type> Output format (default: json)
-v, --verbose Show SQL query and debug information
Example:
bslog sql "SELECT dt, raw FROM remote(t123_logs) WHERE raw LIKE '%error%' LIMIT 10"bslog sources list [options]
Options:
-f, --format <type> Output format (json|table|pretty)
Example:
bslog sources list --format table

bslog sources get <name> [options]
Options:
-f, --format <type> Output format (json|pretty)
Example:
bslog sources get my-app-production

For convenience, common source aliases are available:
- `dev`, `development` → `sweetistics-dev`
- `prod`, `production` → `sweetistics`
- `staging` → `sweetistics-staging`
- `test` → `sweetistics-test`
# These are equivalent:
bslog tail prod
bslog tail production
bslog tail sweetistics
# Use in any command:
bslog errors dev --since 1h
bslog query "{ logs(limit: 10) { * } }" --source stagingbslog config set <key> <value>
Keys:
source Default source name
limit Default query limit
format Default output format (json|table|csv|pretty)
logLevel Default log level fallback (all|debug|info|warning|error|fatal|trace)
Examples:
bslog config set source my-app-production
bslog config set limit 200
bslog config set format pretty
bslog config set logLevel warning

bslog config show
bslog config show --format json

bslog config source <name>
Example:
bslog config source my-app-staging

The `--since` and `--until` options support various time formats:
- Relative time: `1h` (1 hour), `30m` (30 minutes), `2d` (2 days), `1w` (1 week)
- ISO 8601: `2024-01-15T10:30:00Z`
- Date only: `2024-01-15`
- DateTime: `2024-01-15 10:30:00`
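For example, the same window can be expressed with relative or absolute bounds (the source alias is illustrative):

```bash
# Relative window: warnings from the last two days
bslog warnings --since 2d

# Absolute window: a fixed one-hour slice (quote values that contain spaces)
bslog tail prod --since 2024-01-15T10:00:00Z --until "2024-01-15 11:00:00"
```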
Choose the output format that works best for your use case:
Color-coded, human-readable format with proper formatting:
[2024-01-15 10:30:45.123] ERROR [api] User authentication failed
userId: 12345
ip: 192.168.1.1
Standard JSON output, perfect for piping to other tools:
[
{
"dt": "2024-01-15 10:30:45.123",
"level": "error",
"message": "User authentication failed"
}
]

Formatted table output for structured viewing:
┌──────────────────────┬───────┬──────────────────────────┐
│ dt │ level │ message │
├──────────────────────┼───────┼──────────────────────────┤
│ 2024-01-15 10:30:45 │ error │ User authentication fail │
└──────────────────────┴───────┴──────────────────────────┘
CSV format for spreadsheet import:
dt,level,message
"2024-01-15 10:30:45.123","error","User authentication failed"Configuration is stored in ~/.bslog/config.json:
{
"defaultSource": "my-app-production",
"defaultLimit": 100,
"outputFormat": "pretty",
"queryHistory": [
"{ logs(level: 'error', limit: 50) { * } }",
"{ logs(search: 'timeout') { dt, message } }"
],
"savedQueries": {
"recent-errors": "{ logs(level: 'error', limit: 100) { * } }",
"api-logs": "{ logs(subsystem: 'api', limit: 50) { dt, message } }"
}
}

You can combine multiple filters for precise queries:
# Errors from API subsystem in the last hour
bslog query "{
logs(
level: 'error',
subsystem: 'api',
since: '1h'
) { dt, message, stack_trace }
}"
# Search for timeouts in production, excluding certain users
bslog query "{
logs(
search: 'timeout',
where: {
environment: 'production',
userId: { not: 'test-user' }
}
) { * }
}"# Export errors to CSV for analysis
bslog errors --since 1d --format csv > errors.csv
# Pipe to jq for JSON processing
bslog query "{ logs(limit: 100) { * } }" --format json | jq '.[] | select(.level == "error")'
# Count errors by type
bslog errors --format json | jq -r '.[] | .error_type' | sort | uniq -c
# Watch for specific errors
watch -n 5 'bslog errors --since 5m | grep "DatabaseError"'

# Send critical errors to Slack
bslog errors --since 5m --format json | \
jq -r '.[] | select(.level == "critical") | .message' | \
xargs -I {} curl -X POST -H 'Content-type: application/json' \
--data "{\"text\":\"{}\"}" \
YOUR_SLACK_WEBHOOK_URL
# Generate daily error report
bslog errors --since 1d --format json | \
jq -r '[.[] | {time: .dt, error: .message}]' > daily-errors.json

Make sure you've added the token to your shell configuration and reloaded it:
echo 'export BETTERSTACK_API_TOKEN="your_token_here"' >> ~/.zshrc
source ~/.zshrc

This error occurs when trying to query logs. You need to create Query API credentials in the Better Stack dashboard (different from the Telemetry API token).
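Once the Query API credentials are exported, a quick sanity check might look like this:

```bash
# Confirm the Query API variables are set in the current shell, then retry a small fetch
env | grep BETTERSTACK_QUERY
bslog tail -n 5
```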
List your available sources and ensure you're using the correct name:
bslog sources list
bslog config source correct-source-name

If you're experiencing timeouts, try:
- Reducing the `--limit` parameter
- Using more specific time ranges with `--since` and `--until` (see the example below)
- Checking your network connection
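For instance, a tighter window combined with a smaller limit keeps each query light:

```bash
# Narrow both the row count and the time window to reduce query load
bslog errors --since 15m --limit 50
```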
git clone https://github.com/steipete/bslog.git
cd bslog
bun install
bun run build

# Run all tests
bun test
# Run unit tests only
bun test:unit
# Run integration tests only
bun test:integration
# Watch mode for development
bun test:watch
# With coverage report
bun test:coverage

The test suite includes:
- Unit tests for utilities, parsers, and formatters
- Integration tests for query building and API interactions
- 70+ test cases ensuring reliability
bun run type-check

bun run dev

Contributions are welcome! Please feel free to submit a Pull Request. For major changes, please open an issue first to discuss what you would like to change.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.
- Built with Bun for blazing fast performance
- Powered by Better Stack logging infrastructure
- Inspired by GraphQL's intuitive query syntax
- Issues: GitHub Issues
- Discussions: GitHub Discussions
- Email: steipete@gmail.com
Made by steipete