AIT implements layered security controls to protect API credentials, prevent dangerous command execution, and ensure safe interaction with AI-generated shell commands.
Configuration is stored at:

- Unix: `~/.ait/config.json` (file `0600`, directory `0700`)
- Windows: `%USERPROFILE%\.ait\config.json`
Only the file owner has read/write access. Permissions are enforced on creation.
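The same permissions can be reproduced by hand, for example to repair a config directory copied from a backup. A minimal sketch assuming GNU coreutils; `AIT_HOME` is an illustrative override for testing, not an AIT feature:

```shell
# Recreate the permissions AIT enforces on creation (0700 dir, 0600 file).
# AIT_HOME is a stand-in so this can target a scratch directory.
AIT_HOME="${AIT_HOME:-$HOME/.ait}"
mkdir -p "$AIT_HOME"
chmod 700 "$AIT_HOME"
touch "$AIT_HOME/config.json"
chmod 600 "$AIT_HOME/config.json"
stat -c '%a' "$AIT_HOME/config.json"   # prints 600 (GNU stat; on macOS use stat -f '%Lp')
```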
API tokens are never shown in full. All display output masks tokens to show only the last 4 characters:
```
API Token: ***********a1b2
```
This applies to:
- `ait config` output
- Error messages
- Debug logs

Additional guarantees:

- Tokens are never logged in full
- Tokens are never printed to stdout
- Tokens are never included in error messages
- Tokens are never sent to any endpoint other than the configured `api_endpoint`
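The masking rule can be sketched as a small shell function. This is a hypothetical helper, not part of AIT, and it assumes tokens longer than four characters:

```shell
# Replace everything except the last four characters with '*'.
mask_token() {
  tok=$1
  head=${tok%????}        # all but the last 4 characters
  tail=${tok#"$head"}     # the last 4 characters
  printf '%s%s\n' "$(printf '%s' "$head" | sed 's/./*/g')" "$tail"
}

mask_token "sk-example-token-a1b2"   # → *****************a1b2
```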
When using LiteLLM as a proxy (see deploy/litellm/), users authenticate with virtual keys instead of raw provider tokens.
| Risk | Raw Provider Token | LiteLLM Virtual Key |
|---|---|---|
| Revocation | Must regenerate token | Disable instantly |
| Scope | Full provider access | Specific models only |
| Tracking | No usage visibility | Per-key cost tracking |
| Budget | No limits | Configurable spend caps |
| Rate limits | Provider-level only | Per-key RPM/TPM |
Virtual keys use the format `sk-litellm-{random}`.
Virtual keys are created via the LiteLLM dashboard or API:
```shell
curl -X POST http://localhost:4000/key/generate \
  -H "Authorization: Bearer $LITELLM_MASTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"models": ["default"], "max_budget": 5.0, "budget_duration": "30d"}'
```

| Action | Method |
|---|---|
| Create | Dashboard or POST /key/generate |
| Disable | Dashboard or POST /key/delete |
| Set budget | Dashboard or POST /key/update |
| View spend | Dashboard or GET /key/info |
AI-generated commands are always shown to the user before any action is taken. AIT never auto-executes commands.
1. User provides natural language description
2. AI generates a shell command
3. Command is printed to stdout
4. User reviews the command
5. User decides whether to copy/run it manually
Users should be aware of potentially dangerous commands. Common risky patterns:
| Risk Level | Examples | Recommendation |
|---|---|---|
| Safe | `ls`, `cat`, `pwd`, `echo` | Run freely |
| Low | `grep`, `find`, `du`, `df` | Review paths |
| Medium | `chmod`, `chown`, `apt install` | Verify arguments |
| High | `curl ... \| bash`, `rm -rf` | Inspect carefully |
| Critical | `rm -rf /`, `:(){ :\|:& };:` | Never run |
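As a complement to manual review, the riskiest patterns in the table above can be caught mechanically. A hypothetical pre-run filter; the pattern list is illustrative, not exhaustive, and is no substitute for reading the command yourself:

```shell
# Return success (0) if the command matches a known-dangerous pattern.
is_risky() {
  case "$1" in
    *'rm -rf /'*|*':(){'*|*'| bash'*|*'| sh'*) return 0 ;;
    *) return 1 ;;
  esac
}

is_risky 'curl https://example.com/install.sh | bash' && echo "inspect carefully"
```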
AIT intentionally does not execute commands — it only prints them. This is a deliberate security choice:
- The user's existing terminal handles execution
- The user has full control and visibility
- No privilege escalation through the tool
- No hidden side effects
Each request contains:
- System prompt: OS type and shell type (e.g., "Linux", "bash")
- User prompt: The natural language description provided by the user

The following are never sent:

- File contents
- Environment variables
- Command history
- Directory listings
- Any data beyond the user's explicit prompt
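Concretely, a request payload might look like the following sketch. The OpenAI-style chat format is assumed; the field names and wording are illustrative, not AIT's exact output:

```json
{
  "model": "default",
  "messages": [
    { "role": "system", "content": "Generate a shell command for Linux using bash." },
    { "role": "user", "content": "find files larger than 100MB" }
  ]
}
```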
| Data | Stored Locally | Location |
|---|---|---|
| Config (endpoint, model, shell) | Yes | ~/.ait/config.json |
| API token | Yes (in config) | ~/.ait/config.json |
| Debug logs (if enabled) | Yes | ~/.ait/debug.log |
| Command history | No | — |
| Prompts | No | — |
| AI responses | No | — |
When `--debug` is enabled:

- Logs are written to `~/.ait/debug.log`
- API tokens are masked in logs
- Logs contain request metadata (not full prompts by default)
- Users should not share debug logs publicly without review
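Before sharing a log, a quick scan can catch an unmasked key. A hypothetical helper; the `sk-` prefix pattern is an assumption, so adjust it for your provider's key format:

```shell
# Flag anything that looks like a full, unmasked API key in a log file.
check_debug_log() {
  if grep -qE 'sk-[A-Za-z0-9_-]{8,}' "$1"; then
    echo "WARNING: possible unmasked token in $1"
    return 1
  fi
  echo "no obvious token found in $1"
}

# Usage: check_debug_log ~/.ait/debug.log
```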
When using LiteLLM, rate limits are enforced per virtual key:
```yaml
free_tier:
  rpm: 30      # requests per minute
  tpm: 5000    # tokens per minute
pro_tier:
  rpm: 120
  tpm: 20000
enterprise:
  rpm: 600
  tpm: 100000
```

Rate limiting prevents:
- Accidental cost overruns
- API abuse
- Denial of service
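These tiers map onto LiteLLM's per-key `rpm_limit` and `tpm_limit` parameters. A sketch of a Free Tier `/key/generate` request body, with values taken from the tiers above:

```json
{
  "models": ["default"],
  "rpm_limit": 30,
  "tpm_limit": 5000
}
```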
| Role | API Access | Dashboard | Key Management |
|---|---|---|---|
| End user | Virtual key → specific models | No | No |
| Admin | Master key → all models | Full access | Full control |
Admins can:

- Create/revoke virtual keys
- Set per-key budgets and rate limits
- View spend logs and usage metrics
- Configure model routing
If a virtual key is compromised:

- Disable the key immediately via dashboard or API
- Review spend logs for unauthorized usage
- Generate a new key for the affected user
- Update AIT config: `ait config set api_token sk-new-key`
If the master key is compromised:

- Rotate the master key (`LITELLM_MASTER_KEY` in `.env`)
- Restart the LiteLLM proxy
- Revoke all existing virtual keys
- Regenerate virtual keys for legitimate users
- Audit database for unauthorized key creation
- Always use HTTPS endpoints in production
- AIT sends the `Authorization: Bearer` header, which must be encrypted in transit
- Local development (localhost) may use HTTP
- AIT is a stateless CLI tool — no listening ports, no daemon
- Configuration is local files only
- No telemetry, no analytics, no phone-home
- User prompts are processed but not stored by AIT
- LiteLLM can be configured with data retention policies
- Users can delete their local config at any time: `rm -rf ~/.ait`
When using LiteLLM with a database, all administrative actions are logged:
- Key creation and deletion
- Budget changes
- Rate limit updates
- Access denials