The current dry-run mode (`--dry-run`) only shows basic statistics such as line count and character count. It doesn't provide information crucial for cost planning, such as:
- Estimated token count for the LLM API call
- Estimated cost based on the selected provider
- Processing time estimates
This makes it difficult for users to understand the cost implications before making actual API calls.
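A minimal sketch of the kind of estimate the dry-run output could include. All names here are hypothetical, the ~4-characters-per-token rule is only a rough heuristic, and the per-1K-token prices are placeholder figures, not real provider pricing:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token for English text.
    # A real implementation would use the provider's tokenizer.
    return max(1, len(text) // 4)

# Placeholder rates in USD per 1K input tokens (NOT actual pricing).
PRICE_PER_1K_TOKENS = {
    "provider-a": 0.0005,
    "provider-b": 0.0030,
}

def estimate_cost(text: str, provider: str) -> float:
    # Estimated cost = estimated tokens x price per token.
    tokens = estimate_tokens(text)
    return tokens / 1000 * PRICE_PER_1K_TOKENS[provider]

if __name__ == "__main__":
    sample = "hello world " * 500
    print(f"estimated tokens: {estimate_tokens(sample)}")
    print(f"estimated cost:   ${estimate_cost(sample, 'provider-a'):.6f}")
```

A processing-time estimate could follow the same pattern, dividing the token count by an assumed tokens-per-second throughput for the selected provider.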