English | 中文
```shell
pi install npm:pi-models-metadata
```
```shell
# Custom provider base URL and API key
export PIMM_BASE_URL=https://openrouter.ai/api/v1
export PIMM_API_KEY=your_api_key_here
# Then launch pi
pi
```

```shell
# Defaults to the OpenAI Responses API
export PIMM_API_TYPE=openai-responses
# Use this for OpenAI Chat Completions-compatible providers
export PIMM_API_TYPE=openai-completions
export PIMM_PROVIDER_NAME=custom-provider-name
export PIMM_METADATA_DATA_URL=https://yourdomain.com/path/to/models-data.json
export PIMM_CACHE_TTL_SECONDS=3600
export PIMM_CACHE_DIR=/path/to/cache
export PIMM_SKIP_CACHE=true
```

The extension also reads these variables from a local .env file. Real
environment variables take precedence over `.env` values. Model and metadata
responses are cached locally for one hour by default. The default cache
directory is `$XDG_CACHE_HOME/pi-models-metadata`, or
`~/.cache/pi-models-metadata` when `XDG_CACHE_HOME` is not set. Set
`PIMM_SKIP_CACHE=true` to force fresh provider model and metadata requests
while still updating the local cache after a successful request.
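The default cache location described above can be reproduced with ordinary shell parameter expansion (a sketch of the path logic only, not the extension's actual code):

```shell
# Default cache directory: $XDG_CACHE_HOME/pi-models-metadata,
# falling back to ~/.cache/pi-models-metadata when XDG_CACHE_HOME is unset.
CACHE_DIR="${XDG_CACHE_HOME:-$HOME/.cache}/pi-models-metadata"
echo "$CACHE_DIR"
```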
The fetched model metadata includes:

- Model IDs and display names
- Context window
- Maximum output tokens
- Text/image input capability
- Reasoning support
- Input/output/cache pricing
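As an illustration of the fields above, an entry in the file served from `PIMM_METADATA_DATA_URL` might look like the following. Field names and values here are assumptions for illustration only; the actual schema is defined by the extension.

```json
{
  "models": [
    {
      "id": "example-model",
      "name": "Example Model",
      "contextWindow": 128000,
      "maxOutputTokens": 16384,
      "input": ["text", "image"],
      "reasoning": false,
      "cost": {
        "input": 0.15,
        "output": 0.6,
        "cacheRead": 0.075
      }
    }
  ]
}
```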
Related extensions:

- `tab-follow-up`: Use Tab instead of Alt+Enter to trigger follow-up input.
- `ui-finetune`: UI tweaks for a cleaner look.