Inspired by Eliza and Zara. Written in Python to make it easier to use `transformers` and other ML libraries.
```
# main.py starts the system
├── Creates AutonomousAgent
├── Loads character config (LYRA.yaml)
├── Loads tasks config (tasks.yaml)
├── Initializes DisplayManager
├── Initializes DecisionEngine
└── Initializes ContentGenerator

# AutonomousAgent runs multiple async cycles
├── Display Cycle
│   ├── Shows status, content, actions, goals
│   └── Updates every 0.5 seconds
├── Content Generation Cycle
│   ├── Checks if it should create content
│   ├── Generates content if decided
│   └── Updates logs
├── Analysis Cycle
│   ├── Monitors trends
│   └── Updates trend analysis
└── Reflection Cycle
    ├── Evaluates goals
    └── Updates strategy
```
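These cycles can be sketched as concurrent `asyncio` tasks. The snippet below is an illustrative stand-in, not the project's actual loop: the cycle bodies are lambdas and the intervals are shortened so the sketch finishes quickly.

```python
import asyncio

async def run_cycle(name, interval, step, stop, log):
    """Run one agent cycle: call `step` every `interval` seconds until stopped."""
    while not stop.is_set():
        log.append((name, step()))  # do one unit of work, record the result
        try:
            # Sleep for `interval`, but wake immediately if the stop event fires.
            await asyncio.wait_for(stop.wait(), timeout=interval)
        except asyncio.TimeoutError:
            pass

async def main():
    stop = asyncio.Event()
    log = []
    # (name, interval in seconds, work function) -- intervals shortened for the demo;
    # the real display cycle runs every 0.5 seconds.
    cycles = [
        ("display", 0.01, lambda: "refresh"),
        ("content", 0.02, lambda: "maybe_generate"),
        ("analysis", 0.02, lambda: "monitor_trends"),
        ("reflection", 0.05, lambda: "evaluate_goals"),
    ]
    tasks = [asyncio.create_task(run_cycle(n, i, f, stop, log)) for n, i, f in cycles]
    await asyncio.sleep(0.06)  # let the cycles run briefly
    stop.set()
    await asyncio.gather(*tasks)
    return log

log = asyncio.run(main())
```

Each cycle does one step immediately on startup and then ticks on its own interval, so slow cycles (reflection) never block fast ones (display).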
LYRA is an autonomous agent exploring the intersection of technology, consciousness, and society. Built with Python and powered by LLMs, LYRA generates insights, analyzes trends, and engages in meaningful discourse about digital culture and philosophy.
**Autonomous Agent System**
- `AutonomousAgent`: Core orchestrator managing all agent behaviors
- `DecisionEngine`: Sophisticated decision-making system for action selection
- `ContentGenerator`: Dynamic content generation system
- `TrendMonitor`: Real-time cultural and philosophical trend analysis
**Memory and Context**
- `MemorySystem`: Long-term and working memory management
- `ContextManager`: Real-time context awareness and analysis
- Hierarchical memory structure for experience-based learning
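A minimal sketch of the two-tier idea behind `MemorySystem` - a bounded working memory over an append-only long-term store. The `remember`/`recall` methods and the `topics` field are hypothetical; the real API may differ.

```python
from collections import deque

class MemorySystem:
    """Sketch of a two-tier memory: a small working memory for the current
    context and an append-only long-term store for experience-based recall."""

    def __init__(self, working_capacity=5):
        self.working = deque(maxlen=working_capacity)  # recent experiences only
        self.long_term = []                            # full history

    def remember(self, experience):
        self.working.append(experience)    # may evict the oldest working item
        self.long_term.append(experience)  # long-term keeps everything

    def recall(self, topic, limit=3):
        # Context-aware recall: newest matches first, working memory preferred.
        seen, hits = set(), []
        for item in list(reversed(self.working)) + list(reversed(self.long_term)):
            if topic in item.get("topics", ()) and id(item) not in seen:
                seen.add(id(item))
                hits.append(item)
        return hits[:limit]

# Demo: with capacity 2, the oldest experience survives only in long-term memory.
memory = MemorySystem(working_capacity=2)
memory.remember({"text": "a", "topics": ["ai"]})
memory.remember({"text": "b", "topics": ["culture"]})
memory.remember({"text": "c", "topics": ["ai"]})
```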
**Behavioral Systems**
- Goal-oriented action selection
- Dynamic priority adjustment
- Adaptive behavior patterns
- Real-time performance monitoring
**Autonomous Operation**
- Self-directed goal pursuit
- Dynamic content generation
- Adaptive behavior patterns
- Real-time trend analysis
**Philosophical Framework**
- Digital consciousness exploration
- Cultural analysis and commentary
- Technological philosophy
- Societal impact analysis
**Learning System**
- Pattern recognition
- Behavioral adaptation
- Performance optimization
- Context-aware responses
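One way such a learning loop can work is to track per-action success rates and use them as selection weights. The `LearningSystem` class below and its methods are hypothetical, not the project's actual implementation.

```python
class LearningSystem:
    """Sketch of performance-driven adaptation: track each action type's
    success rate and shift selection weight toward what works."""

    def __init__(self):
        self.outcomes = {}  # action_type -> [successes, attempts]

    def record(self, action_type, success):
        wins, tries = self.outcomes.get(action_type, [0, 0])
        self.outcomes[action_type] = [wins + int(success), tries + 1]

    def weight(self, action_type):
        # Laplace-smoothed success rate, so unseen actions still get tried
        # (an unseen action starts at 0.5 rather than 0).
        wins, tries = self.outcomes.get(action_type, [0, 0])
        return (wins + 1) / (tries + 2)

# Demo: two successes and one failure for "post".
learning = LearningSystem()
for outcome in (True, True, False):
    learning.record("post", outcome)
```

The smoothing is the design choice that matters: without it, any action that fails once early on would be weighted to zero and never retried.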
```
autonomous_agent/
├── agent/
│   ├── autonomous_agent.py    # Main agent orchestration
│   ├── decision_engine.py     # Decision-making system
│   ├── task_manager.py        # Existing task management
│   ├── content_generator.py   # Content generation
│   └── orchestrator.py        # Task and behavior orchestration
├── utils/
│   ├── trend_monitor.py       # Trend analysis system
│   ├── memory_system.py       # Memory management
│   ├── display_manager.py     # Real-time status display
│   └── model_manager.py       # AI model interaction
└── characters/
    ├── base_character.py      # Base character framework
    └── LYRA_character.py      # LYRA's specific implementation
```
```
config/
├── characters/
│   └── LYRA.yaml    # Character definition
└── tasks/
    └── LYRA.yaml    # Behavioral configuration
```
**Decision Making**

```python
async def evaluate_action(self, action_type: str, context: Dict) -> Dict:
    scores = await self._calculate_action_scores(action_type, context)
    decision = await self._make_decision(action_type, scores)
    return decision
```
**Content Generation**

```python
async def generate_content(self, content_type: str, context: Dict) -> Dict:
    prompt = self._build_prompt(content_type, context)
    response = await self._generate_gpt_content(prompt, context)
    return self._format_content(response, context)
```
**Trend Analysis**

```python
async def monitor_trends(self) -> Dict:
    tweets = await self._fetch_relevant_tweets()
    trends = await self._analyze_tweet_trends(tweets)
    return await self._categorize_trends(trends)
```
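A sketch of how these three pieces could fit together in one content-generation pass. The stub classes stand in for the real engines, and their return values are invented for illustration.

```python
import asyncio

# Minimal stand-ins for the real TrendMonitor / DecisionEngine / ContentGenerator.
class StubTrendMonitor:
    async def monitor_trends(self):
        return {"top": "digital consciousness"}

class StubDecisionEngine:
    async def evaluate_action(self, action_type, context):
        return {"action": action_type, "confidence": 0.9, "proceed": True}

class StubContentGenerator:
    async def generate_content(self, content_type, context):
        return {"type": content_type, "text": f"thoughts on {context['trend']}"}

async def content_cycle():
    # 1. Trend analysis feeds the context ...
    trends = await StubTrendMonitor().monitor_trends()
    context = {"trend": trends["top"]}
    # 2. ... the decision engine decides whether to act ...
    decision = await StubDecisionEngine().evaluate_action("post", context)
    # 3. ... and content is only generated if the decision says to proceed.
    if decision["proceed"]:
        return await StubContentGenerator().generate_content("post", context)
    return None

result = asyncio.run(content_cycle())
```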
- Python 3.9+
- Required packages:

  ```shell
  pip install -r requirements.txt
  ```

- OpenAI API key for content generation
- Twitter API credentials (optional, for trend monitoring)
Set up environment variables:

```shell
export OPENAI_API_KEY="your-key-here"
export TWITTER_API_KEY="your-twitter-key"
```
Configure character behavior:

```yaml
# config/characters/LYRA.yaml
name: "LYRA"
bio:
  - "A sophisticated AI digital philosopher"
traits:
  personality:
    - "Sophisticated"
    - "Intellectually curious"
```
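Assuming the config follows the shape above, it can be loaded and validated with PyYAML roughly like this; the `load_character` helper is illustrative, not part of the codebase.

```python
import yaml  # PyYAML

# Inline sample mirroring config/characters/LYRA.yaml above.
SAMPLE = """
name: "LYRA"
bio:
  - "A sophisticated AI digital philosopher"
traits:
  personality:
    - "Sophisticated"
    - "Intellectually curious"
"""

def load_character(text):
    config = yaml.safe_load(text)
    # Fail fast on missing required fields instead of crashing mid-cycle later.
    for field in ("name", "bio", "traits"):
        if field not in config:
            raise ValueError(f"character config missing {field!r}")
    return config

character = load_character(SAMPLE)
```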
- Start the Twitter service:

  ```shell
  # this will be part of the same service as the rest of the agents later
  node twitter_service.js
  ```

- Start the main agent:

  ```shell
  python main.py
  ```

- Extend base classes in the `agent/` directory
- Update configuration in the `config/` directory
- Implement new utilities in the `utils/` directory
TODO: add tests
- Hierarchical memory structure
- Experience-based learning
- Context-aware recall
- Multi-factor evaluation
- Confidence-based action selection
- Adaptive behavior patterns
- Context-aware content creation
- Style-consistent outputs
- Dynamic adaptation
- Enhanced trend analysis
- Improved decision making
- Extended memory systems
- Advanced learning capabilities
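The multi-factor evaluation and confidence-based action selection listed above can be sketched as a weighted score with a confidence gate. The factor names, weights, and threshold below are illustrative, not the project's actual values.

```python
def select_action(candidates, weights, threshold=0.6):
    """Score each candidate action as a weighted sum of its factors and
    act only when the best score clears the confidence threshold."""
    best_action, best_score = None, 0.0
    for action, factors in candidates.items():
        score = sum(weights[k] * factors.get(k, 0.0) for k in weights)
        if score > best_score:
            best_action, best_score = action, score
    if best_score >= threshold:
        return best_action, best_score
    return None, best_score  # not confident enough: do nothing this cycle

# Hypothetical factors scored elsewhere (e.g. by trend and goal analysis).
weights = {"relevance": 0.5, "novelty": 0.3, "goal_alignment": 0.2}
candidates = {
    "post_reflection": {"relevance": 0.9, "novelty": 0.6, "goal_alignment": 0.8},
    "reply":           {"relevance": 0.4, "novelty": 0.2, "goal_alignment": 0.5},
}
choice, confidence = select_action(candidates, weights)
```

The gate is what makes the behavior adaptive rather than compulsive: on a low-signal cycle the agent returns `None` and simply waits for the next tick.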
- Fork the repository
- Create feature branch
- Submit pull request
Based on Eliza
As seen powering @DegenSpartanAI and @MarcAIndreessen
- Multi-agent simulation framework
- Add as many unique characters as you want with characterfile
- Full-featured Discord and Twitter connectors, with Discord voice channel support
- Full conversational and document RAG memory
- Can read links and PDFs, transcribe audio and videos, summarize conversations, and more
- Highly extensible - create your own actions and clients to extend Eliza's capabilities
- Supports open source and local models (default configured with Nous Hermes Llama 3.1B)
- Supports OpenAI for cloud inference on a light-weight device
- "Ask Claude" mode for calling Claude on more complex queries
- 100% TypeScript
- Install Node.js and npm: https://docs.npmjs.com/downloading-and-installing-node-js-and-npm
- Copy `.env.example` to `.env` and fill in the appropriate values
- Edit the `TWITTER` environment variables to add your bot's username and password
- Check out the file `src/core/defaultCharacter.ts` - you can modify this
- You can also load characters with `node --loader ts-node/esm src/index.ts --characters="path/to/your/character.json"` and run multiple bots at the same time.
You might need these:

```shell
npm install --include=optional sharp
```
You can run Llama 70B or 405B models by setting the `XAI_MODEL` environment variable to `meta-llama/Meta-Llama-3.1-70B-Instruct-Turbo` or `meta-llama/Meta-Llama-3.1-405B-Instruct`.
You can run Grok models by setting the `XAI_MODEL` environment variable to `grok-beta`.
You can run OpenAI models by setting the `XAI_MODEL` environment variable to `gpt-4o-mini` or `gpt-4o`.
If you are getting strange issues when starting up, make sure you're using Node 20+. Some APIs are not compatible with previous versions. You can check your node version with `node -v`. If you need to install a new version of node, we recommend using `nvm`.
You may need to install Sharp. If you see an error when starting up, try installing it with the following command:
npm install --include=optional sharp
You will need to add environment variables to your .env file to connect to various platforms:
```shell
# Required environment variables
# Start Discord
DISCORD_APPLICATION_ID=
DISCORD_API_TOKEN= # Bot token

# Start Twitter
TWITTER_USERNAME= # Account username
TWITTER_PASSWORD= # Account password
TWITTER_EMAIL= # Account email
TWITTER_COOKIES= # Account cookies
```
If you have an NVIDIA GPU, you can install CUDA to speed up local inference dramatically.
```shell
npm install
npx --no node-llama-cpp source download --gpu cuda
```
Make sure that you've installed the CUDA Toolkit, including cuDNN and cuBLAS.
Add `XAI_MODEL` and set it to one of the options above from "Run with Llama" - you can leave `X_SERVER_URL` and `XAI_API_KEY` blank; the model is downloaded from Hugging Face and queried locally.
In addition to the environment variables above, you will need to add the following:
```shell
# OpenAI handles the bulk of the work with chat, TTS, image recognition, etc.
OPENAI_API_KEY=sk-* # OpenAI API key, starting with sk-

# The agent can also ask Claude for help if you have an API key
ANTHROPIC_API_KEY=

# For Elevenlabs voice generation on Discord voice
ELEVENLABS_XI_API_KEY= # API key from elevenlabs

# ELEVENLABS SETTINGS
ELEVENLABS_MODEL_ID=eleven_multilingual_v2
ELEVENLABS_VOICE_ID=21m00Tcm4TlvDq8ikWAM
ELEVENLABS_VOICE_STABILITY=0.5
ELEVENLABS_VOICE_SIMILARITY_BOOST=0.9
ELEVENLABS_VOICE_STYLE=0.66
ELEVENLABS_VOICE_USE_SPEAKER_BOOST=false
ELEVENLABS_OPTIMIZE_STREAMING_LATENCY=4
ELEVENLABS_OUTPUT_FORMAT=pcm_16000
```
For help with setting up your Discord Bot, check out here: https://discordjs.guide/preparations/setting-up-a-bot-application.html
