Transform plain text into powerful AI prompts with async efficiency
APA is an async, provider-agnostic command-line tool that converts .txt files into structured prompts for leading LLM providers. Built on LiteLLM with enterprise-grade retry logic and clean architecture.
Features • Installation • Usage • Configuration • API • Contributing
```
apa/
├── configuration.toml      # Runtime settings
├── system_prompt.toml      # Customizable system prompt
├── __init__.py
├── config.py               # Unified configuration system
├── domain/                 # Domain layer
│   ├── models.py           # Value objects (Prompt, LLMConfig, etc.)
│   ├── interfaces.py       # Abstract interfaces
│   └── exceptions.py       # Domain exceptions
├── application/            # Application layer
│   ├── prompt_processor.py # Prompt processing orchestration
│   └── response_handler.py # Response handling and file output
└── infrastructure/         # Infrastructure layer
    ├── llm/
    │   ├── llm_client.py           # LiteLLM adapter
    │   └── model_capabilities.py   # Model capability definitions
    ├── io/
    │   └── file_writer.py          # File I/O operations
    └── ui/
        └── console_loading_indicator.py  # Loading animations
main.py           # CLI entry point
run-apa.sh        # uv-powered execution script
requirements.txt  # Dependencies
pyproject.toml    # Project metadata
```
- Python 3.13+
- API Key for at least one provider
- uv (recommended) or pip
```bash
# 1. Clone the repository
git clone https://github.com/yourusername/apa.git
cd apa

# 2. Create a virtual environment
uv venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# 3. Install APA
uv pip install -e .

# 4. Set up your API key
echo "OPENAI_API_KEY=sk-..." > .env
```

Create a prompt file:

```bash
echo "Explain quantum computing in simple terms" > prompt.txt
```

Run APA:

```bash
# Using the helper script (auto-loads .env, manages the virtual environment with uv)
./run-apa.sh --msg-file prompt.txt

# Direct execution
python main.py --msg-file prompt.txt

# After installation
apa --msg-file prompt.txt
```

```python
import asyncio

from apa.config import load_settings
from apa.domain.models import Prompt, SystemPrompt, LLMConfig
from apa.application.prompt_processor import PromptProcessor
from apa.infrastructure.llm.llm_client import LLMClient


async def main():
    # Load unified configuration
    settings = load_settings()

    # Create domain objects
    user_prompt = Prompt(
        content="Explain the SOLID principles",
        language=settings.programming_language,
    )
    system_prompt = SystemPrompt(
        template=settings.system_prompt,
        language=settings.programming_language,
    )
    llm_config = LLMConfig(
        provider=settings.provider,
        model=settings.model,
        api_key=settings.api_key,
        temperature=settings.temperature,
        stream=settings.stream,
    )

    # Create infrastructure and application services
    llm_client = LLMClient(llm_config)
    prompt_processor = PromptProcessor(llm_client)

    # Process the prompt
    if settings.stream:
        async for chunk in prompt_processor.process_prompt_stream(
            system_prompt, user_prompt, llm_config
        ):
            print(chunk, end="", flush=True)
    else:
        response = await prompt_processor.process_prompt(
            system_prompt, user_prompt, llm_config
        )
        print(response)


asyncio.run(main())
```

```toml
# Model parameters
temperature = 0.2  # Creativity level (0.0-1.0)
stream = true      # Enable real-time streaming

# Provider-specific settings
programming_language = "Python"  # Default language injected into the system prompt
reasoning_effort = "high"        # OpenAI o3/o4 models only
thinking_tokens = 16384          # Anthropic Claude models only

# Model selection
provider = "openai"  # openai | anthropic | deepseek | openrouter
model = "o3"         # Model identifier

# Fallback configuration (optional)
fallback_provider = "anthropic"              # Provider to use if the primary fails
fallback_model = "claude-sonnet-4-20250514"  # Model to use if the primary fails
```

Customize the AI assistant's behavior:
system_prompt = """
## Role
You are an advanced AI programming assistant specializing in $programming_language programming language...
## Task
Your tasks include...
"""
The only required template variable is `programming_language`; its value comes from `configuration.toml` and defaults to "Python" if omitted.
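The `$programming_language` placeholder uses the substitution syntax of Python's standard-library `string.Template`. A minimal sketch of how such a template could be rendered (the `render_system_prompt` helper is illustrative, not APA's actual renderer):

```python
from string import Template


def render_system_prompt(template_text: str, programming_language: str = "Python") -> str:
    # safe_substitute fills $programming_language and leaves any unknown
    # placeholders untouched instead of raising a KeyError.
    return Template(template_text).safe_substitute(
        programming_language=programming_language
    )


raw = "You are an assistant specializing in $programming_language programming."
print(render_system_prompt(raw, "Rust"))
# You are an assistant specializing in Rust programming.
```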
### Environment Variables
Create a `.env` file:
```bash
# Provider API Keys
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=anthropic-...
DEEPSEEK_API_KEY=...
OPENROUTER_API_KEY=...
```
| Feature | Supported Models |
|---|---|
| 🎯 Reasoning Effort | o3, o3-mini, o4, o4-mini |
| 🧠 Extended Thinking | Claude 3.7 Sonnet, Sonnet 4, Opus 4 |
| 👨‍💻 Developer Role | o1, o3, o4, gpt-4.1 |
| 🛡️ No Temperature | DeepSeek Reasoner, o1-o4 series |
APA automatically retries failed requests:
- 3 attempts maximum
- Exponential backoff: 2-8 seconds
- Smart error handling
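The retry behavior above can be sketched as follows. This is a simplified illustration of exponential backoff, not APA's exact implementation (real calls go through LiteLLM):

```python
import asyncio


async def call_with_retry(call, max_attempts=3, base_delay=2.0, max_delay=8.0):
    """Retry an async callable with exponential backoff.

    Delays grow as base_delay * 2**attempt, capped at max_delay
    (2s, 4s with the defaults above, then the error is surfaced).
    """
    for attempt in range(max_attempts):
        try:
            return await call()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: re-raise the last error
            delay = min(base_delay * 2 ** attempt, max_delay)
            await asyncio.sleep(delay)
```

Wrapping a flaky request in `call_with_retry` makes transient failures invisible to the caller while permanent failures still propagate after the final attempt.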
APA includes an intelligent fallback system that automatically switches providers when the primary fails:
- Primary attempts: 3 tries with exponential backoff
- Automatic switchover: Seamlessly transitions to fallback provider
- Provider hot-swap: Loads provider-specific settings without restart
- Configurable: set `fallback_provider` and `fallback_model` in `configuration.toml`
To disable fallback, simply omit these keys from your configuration.
Example configuration:
```toml
# Primary provider
provider = "openai"
model = "gpt-4"

# Fallback provider (activated after 3 primary failures)
fallback_provider = "anthropic"
fallback_model = "claude-sonnet-4-20250514"
```

To add support for a new provider:

- Update `PROVIDER_ENV_MAP` in `apa/config.py`
- Add model capabilities to `apa/infrastructure/llm/model_capabilities.py`
- Update `LLMClient` logic if needed
- Test with your API key
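For the first step, the mapping is plausibly a plain dictionary from provider name to the environment variable holding its API key. A sketch of what an entry might look like (the exact shape of `PROVIDER_ENV_MAP` is assumed from the configuration description, and the "mistral" entry is hypothetical):

```python
# Sketch of apa/config.py: provider name -> API-key environment variable
PROVIDER_ENV_MAP = {
    "openai": "OPENAI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "deepseek": "DEEPSEEK_API_KEY",
    "openrouter": "OPENROUTER_API_KEY",
    # Hypothetical new provider:
    "mistral": "MISTRAL_API_KEY",
}
```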
APA follows Clean Architecture principles with clear separation of concerns:
- Domain Layer: Core business logic and value objects
- Application Layer: Use cases and orchestration
- Infrastructure Layer: External adapters (LLM providers, file I/O, UI)
The unified configuration system in `apa/config.py` provides a single entry point for all settings, with automatic provider detection and template rendering.
MIT License - see LICENSE for details.
Built with ❤️ by Kenny Dizi