A powerful command-line tool for managing conversational context windows with Claude AI. Forkify enables intelligent document processing and "forking" of conversation branches, allowing you to explore multiple conversational paths from a shared context trunk.
- Overview
- Features
- Conversation Branching
- Requirements
- Installation
- Usage
- Command Reference
- Development
- License
Forkify Command Line redefines AI document analysis by enabling sophisticated context window management with Anthropic's Claude AI. The name "Forkify" comes from its core ability to "fork" conversation contexts, creating branches that explore different paths while maintaining a shared base trunk.
The application manages the entire document lifecycle:
- Loading documents into Claude's cache (`input-docs`)
- Processing them into AI-optimized summaries (`processed-docs`)
- Storing conversation outputs for future reference (`output-docs`)
With the web and API components removed, this command-line version is a lightweight yet powerful tool focused exclusively on context-aware document processing and conversation management.
| Feature | Description |
|---|---|
| Context Window Management | Precisely control how much conversational history is included in Claude's attention window |
| Document Processing Pipeline | Three-stage pipeline with raw documents, processed summaries, and conversation outputs |
| Conversation Branching | Fork conversations to explore different paths while maintaining a shared context trunk |
| System Prompt Customization | Choose different system prompts for analysis, QA, or content generation tasks |
| Response Length Control | Adjust response length from extremely short (128 tokens) to very detailed (4096+ tokens) |
| Token Usage Monitoring | Track token usage and associated costs across all conversations |
| Document Source Tracking | View which documents are being used in the current conversation |
| Session Persistence | Automatically save and restore conversation state between sessions |
The core innovation of Forkify is its conversation branching capability:
```
                    ┌── Branch A: Explore technical details
                    │
Main Conversation ──┼── Branch B: Focus on business implications
                    │
                    └── Branch C: Investigate edge cases
```
How it works:
- You establish a base context "trunk" with your initial documents and conversation
- Create branches to explore different tangents or conversation paths
- Each branch maintains its own context window but shares the base trunk
- Switch between branches without losing your place in either conversation
This approach allows you to:
- Explore multiple analytical angles from the same document set
- Try different prompting strategies without starting over
- Maintain separate conversations that share foundational context
- Preserve trunk attention mechanisms while exploring new directions
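Forkify's internals aren't documented here, but the trunk-and-branch idea can be pictured as a shared message list plus per-branch continuations. The following is a minimal sketch of that model; the class and method names (`Conversation`, `fork`, `context_for`) are illustrative assumptions, not Forkify's actual API:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Optional

Message = Dict[str, str]

@dataclass
class Conversation:
    """Hypothetical trunk-and-branch model: every branch shares the trunk
    messages but keeps its own continuation (not Forkify's real internals)."""
    name: str
    trunk: List[Message] = field(default_factory=list)        # shared base context
    branches: Dict[str, List[Message]] = field(default_factory=dict)

    def fork(self, branch_name: str) -> None:
        # A new branch starts with no extra messages; it reuses the trunk.
        self.branches[branch_name] = []

    def context_for(self, branch_name: Optional[str] = None) -> List[Message]:
        # Context sent to Claude = shared trunk + branch-specific messages.
        extra = self.branches.get(branch_name, []) if branch_name else []
        return [*self.trunk, *extra]

# Fork two branches from the same trunk and extend one of them.
conv = Conversation("research_project")
conv.trunk.append({"role": "user", "content": "Summarize @@research_paper.pdf"})
conv.fork("technical_details")
conv.fork("business_implications")
conv.branches["technical_details"].append(
    {"role": "user", "content": "Focus on the implementation in section 3"}
)
print(len(conv.context_for("technical_details")))  # 2: trunk (1) + branch (1)
```

Because branches only append to their own continuation, switching between them never mutates the shared trunk, which is what lets you return to a branch exactly where you left it.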
- Python 3.8 or higher
- Anthropic API key (for Claude)
```bash
# Clone the repository
git clone https://github.com/HumainLabs/forkify-cmdline.git
cd forkify-cmdline

# Run the setup script
./scripts/setup.sh
```
Update the `.env` file with your API keys:

```
ANTHROPIC_API_KEY=your_api_key_here
```
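For context only, here is a minimal, hypothetical sketch of how such a key is typically consumed, assuming the `python-dotenv` package and the official `anthropic` SDK are installed; the model name and token limit are placeholders, not Forkify's defaults:

```python
import os

from anthropic import Anthropic
from dotenv import load_dotenv

# Load ANTHROPIC_API_KEY from the .env file into the process environment.
load_dotenv()

client = Anthropic(api_key=os.environ["ANTHROPIC_API_KEY"])

# Minimal request; model name and max_tokens are illustrative placeholders.
response = client.messages.create(
    model="claude-3-5-sonnet-latest",
    max_tokens=512,
    messages=[{"role": "user", "content": "Hello, Claude."}],
)
print(response.content[0].text)
```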
Start the command-line interface:

```bash
python claude.py
```

Or use the development script:

```bash
./scripts/dev.sh
```
Forkify implements a three-stage document processing pipeline:
- Input Documents (`input-docs/`): Raw documents loaded into Claude's context
- Processed Documents (`processed-docs/`): AI-optimized summaries and analysis
- Output Documents (`output-docs/`): Conversation history and generated content
This approach ensures efficient token usage while maintaining comprehensive context.
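As a conceptual sketch only, the three stages map onto a simple read, summarize, and write flow over those directories. The `summarize` function below is a placeholder for the Claude-backed processing step, and the sketch assumes plain-text inputs:

```python
from pathlib import Path

INPUT_DIR = Path("input-docs/main")
PROCESSED_DIR = Path("processed-docs")
OUTPUT_DIR = Path("output-docs")   # written later, as conversations produce content

def summarize(text: str) -> str:
    """Placeholder for the Claude-backed summarization step."""
    return text[:500]  # illustrative stand-in, not the real behavior

def run_pipeline() -> None:
    # Stage 1 -> Stage 2: turn raw input documents into reusable summaries.
    PROCESSED_DIR.mkdir(parents=True, exist_ok=True)
    for doc in INPUT_DIR.glob("*"):
        if doc.is_file():
            summary = summarize(doc.read_text(encoding="utf-8"))
            (PROCESSED_DIR / f"{doc.stem}.summary.txt").write_text(
                summary, encoding="utf-8"
            )

if __name__ == "__main__":
    run_pipeline()
```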
- Place your documents in the `input-docs/main` directory, or create subdirectories for organization
- Start a new conversation with `/sw <name>`
- Refer to documents in your queries with the `@@filename` syntax
- Ask questions about your documents
- Create branches with the branching commands to explore different conversation paths
```
> /sw research_project
Created new conversation 'research_project'

> I need to analyze @@research_paper.pdf
Processing document 'research_paper.pdf'...
Document analysis complete.

> What are the key findings in this paper?
The key findings from the research paper include:
[Claude generates response analyzing the document]

> /branch technical_details
Created new branch 'technical_details' from 'research_project'

> Let's focus on the implementation details in section 3
[Claude responds with technical analysis]

> /sw research_project
Switched to 'research_project' conversation

> Let's discuss the business implications instead
[Claude responds with business analysis while maintaining original context]
```
| Command | Description |
|---|---|
| `/q` | Quit the program |
| `/sw` | List all conversations |
| `/sw <name>` | Create new conversation or switch to existing |
| `/reload` | Reprocess all documents for current conversation |
| `/ld <n>` | Reload documents and update context (default: last 20 messages) |
| `/docs` | Show document sources for current conversation |
| `/p` | List available system prompts |
| `/p <type>` | Switch system prompt type (analysis\|qa\|generation) |
| `/xxs` | Switch to extremely short responses (128 tokens) |
| `/xs` | Switch to very short responses (256 tokens) |
| `/s` | Switch to short responses (512 tokens) |
| `/m` | Switch to medium responses (1024 tokens) |
| `/l` | Switch to long responses (2048 tokens) |
| `/xl` | Switch to very long responses (4096 tokens) |
| `/xxl` | Switch to extremely long responses (8192 tokens) |
| `/clearall` | Clear ALL sessions and conversations (requires confirmation) |
| `/clearconv` | Clear ALL conversations and related files/folders (requires confirmation) |
| `/usage` | Display total token usage and costs |
| `/` | Show help message |
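The response-length commands are essentially token budgets. Here is a hedged sketch (names are illustrative, not Forkify's internals) of how such commands could map onto the `max_tokens` parameter sent with each API request:

```python
from typing import Dict

# Hypothetical mapping from the length commands above to max_tokens budgets.
RESPONSE_LENGTHS: Dict[str, int] = {
    "/xxs": 128,
    "/xs": 256,
    "/s": 512,
    "/m": 1024,
    "/l": 2048,
    "/xl": 4096,
    "/xxl": 8192,
}

def max_tokens_for(command: str, default: int = 1024) -> int:
    """Resolve a length command to a token budget, falling back to medium."""
    return RESPONSE_LENGTHS.get(command, default)

print(max_tokens_for("/xl"))  # 4096
```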
- `input-docs/` - Place your input documents here
- `processed-docs/` - Contains processed document data
- `output-docs/` - Contains output generated from conversations
- `sessions/` - Contains conversation session data
- `data/` - Contains application data
- `scripts/` - Contains utility scripts
  - `setup.sh` - Sets up the application environment
  - `dev.sh` - Starts the command-line interface
This project is licensed under the MIT License - see the LICENSE file for details.
Maintained with ❤️ by HumainLabs.ai