- Project Overview
- Screenshots
- Entering Prompts for Group Conversations
- Sample Conversation
- Project Structure
- File Dependencies
- Module Dependencies
- Configuration and Security Files
- Detailed File Descriptions
- Creating and Configuring Dependent Files
- Setup and Running the Application
- Error Handling and Logging
- Extending the Application
- Testing
- Performance Considerations
- Security Considerations
- Additional Notes
The AI Conversation CLI is an interactive command-line application that facilitates conversations with multiple AI personalities using advanced language models. It manages complex conversation flows and provides a rich text user interface for interacting with AI agents.
To initiate a group conversation with the AI personalities, follow these guidelines:
- Start the application and choose the desired participants.
- Enter your prompt, addressing it to the group or a specific participant.
- The AI personalities will engage in a discussion based on your prompt.
Examples:
- General prompt: "What are your thoughts on renewable energy?"
- Specific participant prompt: "Vanessa, what's your perspective on social media's impact on society?"
- Follow-up prompt: "Lukas, can you provide some data to support or challenge the previous points?"
Remember, you can always interject or steer the conversation by addressing specific participants or asking for clarification on certain points.
For a detailed example of how the AI personalities interact in a conversation, please refer to the sample conversation.
The project consists of the following Python files:
- `convo.py`: Main entry point of the application
- `conversation_manager.py`: Manages conversation history and AI interactions
- `ai_conversation_cli.py`: Handles the command-line interface and user interactions
- `personalities.py`: Defines AI personalities and their characteristics
- `ai_config.py`: Configuration for AI models and related settings
- `convo.py` imports:
  - `asyncio`: For asynchronous programming
  - `logging`: For application-wide logging
  - `os`: For system operations (clearing the screen)
  - `typing`: For type hinting
  - `ConversationManager` from `conversation_manager.py`
  - `AIConversationCLI` from `ai_conversation_cli.py`
- `conversation_manager.py` imports:
  - `asyncio`: For asynchronous programming
  - `json`: For reading/writing conversation history
  - `logging`: For logging operations
  - `random`: For randomizing participant order
  - `re`: For regular expressions in response extraction
  - `typing`: For type hinting
  - `AI_PERSONALITIES`, `HELPER_PERSONALITIES`, `MASTER_SYSTEM_MESSAGE` from `personalities.py`
  - `AI_CONFIG`, `log_ai_error` from `ai_config.py`
  - Various classes from `rich` for formatted console output
- `ai_conversation_cli.py` imports:
  - `cmd2`: For building the interactive CLI
  - `asyncio`: For asynchronous programming
  - `logging`: For logging operations
  - `tracemalloc`: For memory debugging
  - `ConversationManager` from `conversation_manager.py`
  - Various classes from `rich` for formatted console output
  - `AI_PERSONALITIES` from `personalities.py`
  - `ainput` from `aioconsole` (with a fallback to synchronous input if not available)
- Python Standard Library:
  - `asyncio`
  - `json`
  - `logging`
  - `os`
  - `random`
  - `re`
  - `typing`
  - `tracemalloc`
- Third-party libraries:
  - `cmd2`: For creating interactive command-line applications
  - `rich`: For rich text and beautiful formatting in the terminal
  - `aioconsole`: For asynchronous console input (optional)
- AI model-specific libraries:
  - `openai`: For OpenAI GPT models
  - `anthropic`: For Anthropic's Claude models
The project uses several configuration files to manage API keys, tokens, and credentials. These files are crucial for securely connecting to various AI services and should be handled with care.
`keys.py` is used to store API keys and other sensitive configuration data.
- Store API keys for various AI services (e.g., OpenAI, Anthropic)
- Keep sensitive configuration data separate from the main code
Create this file in the project root directory with the following structure:
```python
# keys.py
OPENAI_API_KEY = "your_openai_api_key_here"
ANTHROPIC_API_KEY = "your_anthropic_api_key_here"
# Add any other API keys or sensitive configuration data here
```
In your main code (e.g., ai_config.py), import and use these keys:
```python
from keys import OPENAI_API_KEY, ANTHROPIC_API_KEY

# Use the keys when initializing AI clients
openai.api_key = OPENAI_API_KEY
anthropic.api_key = ANTHROPIC_API_KEY
```
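If you prefer not to keep a `keys.py` on every machine, a common alternative is to fall back to environment variables. The `load_api_key` helper below is a hypothetical sketch, not part of the project:

```python
import os

def load_api_key(name: str, default: str = "") -> str:
    """Return an API key from a local keys.py if present, else from the environment."""
    try:
        import keys  # local keys.py, kept out of version control
        value = getattr(keys, name, None)
        if value:
            return value
    except ImportError:
        pass
    return os.environ.get(name, default)
```

This keeps the call sites identical whether the key lives in a file or in the shell environment.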
`token.json` is typically used for storing authentication tokens, often for OAuth 2.0 flows or similar authentication mechanisms.
- Store refresh tokens or access tokens for APIs that use token-based authentication
- Allow the application to maintain authentication between sessions
Create this file in the project root directory with the following structure (example for OAuth 2.0):
```json
{
  "access_token": "your_access_token_here",
  "refresh_token": "your_refresh_token_here",
  "token_type": "Bearer",
  "expires_at": 1234567890
}
```
In your code, you would typically read this file, check if the token is still valid, and refresh it if necessary:
```python
import json
from datetime import datetime

def get_valid_token():
    with open('token.json', 'r') as token_file:
        token_data = json.load(token_file)
    if datetime.fromtimestamp(token_data['expires_at']) < datetime.now():
        # Implement token refresh logic here
        pass
    return token_data['access_token']
```
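The refresh step itself is provider-specific. As an illustrative sketch (the `refresh_func` callable and its contract are assumptions, not part of this project), the expiry check can be factored out like this:

```python
import time

def refresh_if_expired(token_data: dict, refresh_func) -> dict:
    """Return valid token data, invoking a provider-specific refresh_func when expired."""
    if token_data.get("expires_at", 0) < time.time():
        # refresh_func exchanges the refresh token for fresh token data
        token_data = refresh_func(token_data["refresh_token"])
    return token_data
```

Keeping the refresh callable injectable makes the expiry logic easy to unit-test without network access.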
`credentials.json` is often used to store client credentials for OAuth 2.0 or similar authentication flows.
- Store client ID, client secret, and other credentials required for authenticating with certain APIs
- Keep these credentials separate from the main code for security reasons
Create this file in the project root directory with the following structure:
```json
{
  "client_id": "your_client_id_here",
  "client_secret": "your_client_secret_here",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "redirect_uris": ["urn:ietf:wg:oauth:2.0:oob", "http://localhost"]
}
```
In your code, you would typically read this file when setting up OAuth flows:
```python
import json

def get_credentials():
    with open('credentials.json', 'r') as cred_file:
        return json.load(cred_file)

# Use these credentials when setting up the OAuth client
# (SomeOAuthClient is a placeholder for your OAuth library's client class)
credentials = get_credentials()
oauth_client = SomeOAuthClient(
    client_id=credentials['client_id'],
    client_secret=credentials['client_secret']
)
```
Main script that initializes and runs the application.
- `setup_logging()`: Configures logging for the entire application
- `run_application()`: Initializes AIConversationCLI and ConversationManager, and runs the main application loop
- `main()`: Entry point of the application
Contains the ConversationManager class, which handles conversation logic and history.
- `load_conversation_history()`: Loads conversation history from a JSON file
- `save_conversation_history()`: Saves conversation history to a JSON file
- `update_conversation()`: Adds a new message to the conversation history
- `format_and_print_message()`: Formats and displays messages in the console
- `get_conversation_context()`: Retrieves the current conversation context
- `generate_ai_response()`: Generates an AI response for a given prompt
- `extract_ai_response()`: Extracts the AI's response from the full response string
- `generate_single_response()`: Generates and processes a single AI response
- `display_thinking_message()`: Shows a "thinking" message while generating a response
- `detect_addressed_participant()`: Detects which participant a message is addressed to
- `randomize_participants()`: Randomizes the order of AI participants
- `generate_ai_conversation()`: Orchestrates a full AI conversation based on a prompt
- `generate_moderator_summary()`: Generates a summary of the conversation by a moderator AI
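To illustrate one of these, `detect_addressed_participant()` could be approximated with a regular expression that checks whether a prompt opens with a participant's name. This is a sketch, not the project's actual implementation (which uses a helper AI for detection):

```python
import re
from typing import List, Optional

def detect_addressed_participant(prompt: str, participants: List[str]) -> Optional[str]:
    """Return the participant named at the start of the prompt, if any."""
    for name in participants:
        # Match e.g. "Vanessa, what do you think?" or "Lukas: any data?"
        if re.match(rf"\s*{re.escape(name)}\b\s*[,:]?", prompt, re.IGNORECASE):
            return name
    return None
```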
Contains the AIConversationCLI class, which manages the command-line interface and user interactions.
- `run()`: Main method to run the application
- `cleanup()`: Performs cleanup operations before exiting
- `get_user_input()`: Gets user input asynchronously
- `cmdloop_async()`: Asynchronous version of cmd2's command loop
- `onecmd_async()`: Processes a single command asynchronously
- `ai_conversation_loop()`: Main loop for processing user input and generating AI responses
- `process_input()`: Handles user input and generates AI responses
- `handle_command()`: Processes system commands (starting with '!')
- `show_conversation_history()`: Displays the conversation history
- Various `do_*` methods: Handle specific CLI commands (e.g., `do__quit`, `do__help`, `do__switch_thread`, `do__list_threads`, `do__clear`)
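The '!'-prefixed dispatch in `handle_command()` can be sketched as a simple split. The function name and return shape below are illustrative assumptions; the actual parsing lives in `ai_conversation_cli.py`:

```python
from typing import List, Optional, Tuple

def parse_system_command(line: str) -> Optional[Tuple[str, List[str]]]:
    """Split a '!'-prefixed line into (command, args); return None for normal input."""
    if not line.startswith("!"):
        return None
    parts = line[1:].split()
    return (parts[0] if parts else "", parts[1:])
```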
Defines AI personalities and their characteristics.
- `AI_PERSONALITIES`: Dictionary defining the main AI personalities
- `HELPER_PERSONALITIES`: Dictionary defining helper AI personalities (e.g., moderator, response detector)
- `MASTER_SYSTEM_MESSAGE`: Dictionary containing the master system message for AI interactions
Configuration for AI models and related settings.
- `AI_CONFIG`: Dictionary containing configurations for different AI models
- `log_ai_error()`: Function to log AI-specific errors
- AI model-specific configuration and functions
Create this file in the project root directory with the following structure:
```python
AI_PERSONALITIES = {
    "PersonalityName": {
        "ai_name": "ModelName",
        "color": "ColorName",
        "system_message": "Personality description and instructions"
    },
    # Add more personalities as needed
}

HELPER_PERSONALITIES = {
    "Moderator": {
        "ai_name": "ModelName",
        "color": "yellow",
        "system_message": "Moderator instructions"
    },
    "ResponseDetector": {
        "ai_name": "ModelName",
        "system_message": "Instructions for detecting addressed participants"
    }
}

MASTER_SYSTEM_MESSAGE = {
    "system_message": "Master instructions for all AI interactions"
}
```
Create this file in the project root directory with the following structure:
```python
import logging
# Import the necessary AI model libraries (e.g., openai, anthropic)

def log_ai_error(ai_name: str, error_message: str):
    logging.error(f"AI Error ({ai_name}): {error_message}")

# Define async generation functions for each AI model
async def openai_generate(model: str, prompt: str):
    # Implementation for OpenAI model generation
    pass

async def anthropic_generate(model: str, prompt: str):
    # Implementation for Anthropic model generation
    pass

# Register each model with its generation function
AI_CONFIG = {
    "gpt4": {
        "model": "model_identifier",
        "generate_func": openai_generate
    },
    "claude": {
        "model": "model_identifier",
        "generate_func": anthropic_generate
    },
    # Add configurations for other models
}
```
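Since every `generate_func` shares the same async signature, a common retry-and-log wrapper can sit on top of `AI_CONFIG`. The sketch below is an assumption about how you might harden the calls, not existing project code:

```python
import asyncio
import logging

async def safe_generate(generate_func, model: str, prompt: str, retries: int = 2):
    """Call an async generation function, logging failures and retrying with backoff."""
    for attempt in range(retries + 1):
        try:
            return await generate_func(model, prompt)
        except Exception as exc:
            logging.error(f"AI Error ({model}): {exc}")
            if attempt == retries:
                raise
            await asyncio.sleep(0.1 * 2 ** attempt)  # exponential backoff
```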
1. Ensure all required Python libraries are installed:

   ```shell
   pip install cmd2 rich openai anthropic aioconsole
   ```

2. Create and configure `personalities.py` and `ai_config.py` as described in the "Creating and Configuring Dependent Files" section.
3. Set up any necessary environment variables or API keys for the AI models.
4. Run the application:

   ```shell
   python convo.py
   ```
- Comprehensive error handling is implemented throughout the application.
- Detailed logging is set up in `convo.py` and used across all files.
- Logs are written to `log/convo.log` for debugging and troubleshooting.
To add new features or AI personalities:
- Update the `AI_PERSONALITIES` dictionary in `personalities.py`.
- Add new AI model configurations in `ai_config.py` if necessary.
- Implement new command handlers in the AIConversationCLI class in `ai_conversation_cli.py`.
- Extend the ConversationManager class in `conversation_manager.py` for new conversation-management features.
- Implement unit tests for individual components (e.g., ConversationManager methods, AIConversationCLI commands).
- Create integration tests to ensure proper interaction between different modules.
- Perform end-to-end testing of the entire conversation flow.
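As a starting point, the history round-trip is easy to unit-test. The `save_history`/`load_history` helpers below are simplified stand-ins for the ConversationManager methods, shown only to illustrate the test structure:

```python
import json
import os
import tempfile
import unittest

def save_history(path: str, history: list) -> None:
    """Write conversation history to a JSON file."""
    with open(path, "w") as f:
        json.dump(history, f)

def load_history(path: str) -> list:
    """Read conversation history, returning [] if the file does not exist."""
    if not os.path.exists(path):
        return []
    with open(path) as f:
        return json.load(f)

class HistoryRoundTripTest(unittest.TestCase):
    def test_round_trip(self):
        history = [{"sender": "User", "message": "Hello"}]
        with tempfile.TemporaryDirectory() as tmp:
            path = os.path.join(tmp, "history.json")
            save_history(path, history)
            self.assertEqual(load_history(path), history)

    def test_missing_file_returns_empty(self):
        self.assertEqual(load_history("no_such_file.json"), [])
```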
- The application uses asynchronous programming to handle concurrent operations efficiently.
- Consider implementing caching mechanisms for frequently accessed data or AI responses.
- Monitor and optimize AI model API usage to manage costs and improve response times.
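A minimal in-memory cache keyed by (model, prompt) could look like the sketch below; this is an illustration of the caching idea, not part of the project:

```python
import hashlib

class ResponseCache:
    """Cache AI responses by model and prompt to avoid repeated API calls."""

    def __init__(self):
        self._store = {}

    def _key(self, model: str, prompt: str) -> str:
        # Hash the pair so keys stay short even for long prompts
        return hashlib.sha256(f"{model}\x00{prompt}".encode()).hexdigest()

    def get(self, model: str, prompt: str):
        return self._store.get(self._key(model, prompt))

    def put(self, model: str, prompt: str, response: str) -> None:
        self._store[self._key(model, prompt)] = response
```

A production version would also want an eviction policy (e.g. LRU via `functools.lru_cache` or a bounded dict) and possibly persistence across sessions.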
- Ensure proper handling and storage of API keys and sensitive configuration data.
- Implement input validation and sanitization to prevent potential security vulnerabilities.
- Consider implementing user authentication if extending the application for multi-user scenarios.
- The application uses the `rich` library for enhanced console output, including colored text, panels, and progress spinners.
- The `aioconsole` library is used for asynchronous console input, with a fallback to synchronous input if it is not available.
- Memory tracking is implemented using `tracemalloc` for debugging purposes.
- The application supports multiple conversation threads and allows switching between them.
- A moderator AI can generate summaries of the conversation.