Lya

Production-Grade Autonomous AGI Agent Framework
Clean Architecture · Emotional Intelligence · Self-Evolving Tools · Multi-LLM Orchestration



Overview

Lya is a self-contained, enterprise-ready AGI agent framework designed around Clean Architecture and Domain-Driven Design principles. It orchestrates Ollama models, persistent vector memory, and a dynamic tool registry into a unified autonomous system capable of reasoning, planning, coding, and self-improvement — with genuine personality and emotional intelligence.

Unlike conventional chatbot wrappers, Lya operates as an event-driven cognitive loop: it observes, thinks, and acts through composable workflows, CQRS command pipelines, and a multi-agent orchestration layer. It was built from day one for extensibility via the Model Context Protocol (MCP) and ships with first-class support for Telegram integration.
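The observe → think → act cycle can be sketched as a minimal async loop. This is an illustrative toy only; the class and method names below are hypothetical and are not Lya's actual API:

```python
import asyncio
from dataclasses import dataclass, field

@dataclass
class Event:
    kind: str
    payload: str

@dataclass
class MiniAgent:
    """Toy observe -> think -> act loop (hypothetical, not Lya's real classes)."""
    inbox: asyncio.Queue = field(default_factory=asyncio.Queue)
    actions: list = field(default_factory=list)

    async def observe(self) -> Event:
        # Block until an event arrives on the queue.
        return await self.inbox.get()

    def think(self, event: Event) -> str:
        # Decompose the observation into a single next action.
        return f"handle:{event.payload}"

    async def act(self, action: str) -> None:
        self.actions.append(action)

    async def run(self, steps: int) -> None:
        for _ in range(steps):
            event = await self.observe()
            await self.act(self.think(event))

async def main() -> list:
    agent = MiniAgent()
    await agent.inbox.put(Event("message", "hello"))
    await agent.run(steps=1)
    return agent.actions

print(asyncio.run(main()))  # ['handle:hello']
```

A real cognitive loop would add goal decomposition, task scheduling, and tool calls inside `think` and `act`; the event-queue skeleton stays the same.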


Quick Start

One-Line Install

Linux / macOS:

```shell
curl -fsSL https://raw.githubusercontent.com/Shojaeei/Lya/main/install.sh | bash
```

Windows (PowerShell):

```powershell
irm https://raw.githubusercontent.com/Shojaeei/Lya/main/install.ps1 | iex
```

The installer handles everything in a single command: environment setup (venv or system-wide), dependency installation, interactive configuration, and launch.

Manual Installation
```shell
git clone https://github.com/Shojaeei/Lya.git && cd Lya && python install.py
```

Architecture

```
┌─────────────────────────────────────────────────────────────┐
│                      Adapters Layer                         │
│   Telegram · Discord · Slack · REST API · CLI · WebSocket   │
├─────────────────────────────────────────────────────────────┤
│                    Application Layer                        │
│   Commands · Queries · Event Handlers · CQRS Pipeline       │
├─────────────────────────────────────────────────────────────┤
│                      Domain Layer                           │
│   Agent · Goal · Task · Memory · Personality · Events       │
├─────────────────────────────────────────────────────────────┤
│                   Infrastructure Layer                      │
│   LLM Providers · Vector DB · Tool Registry · Security      │
│   Self-Improvement · Workflows · Health Monitoring          │
│   Coding Agent · Multi-Agent Orchestrator · Spec Engine     │
└─────────────────────────────────────────────────────────────┘
```

Lya follows a strict four-layer Clean Architecture with dependency inversion. Domain entities have zero external dependencies. Infrastructure implementations are injected at runtime. All cross-layer communication flows through well-defined ports and adapters.
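Dependency inversion here means the domain defines a port and the infrastructure supplies an adapter injected at runtime. A minimal sketch of the pattern, with illustrative names that are not Lya's actual interfaces:

```python
from typing import Protocol

# Domain layer: a port with zero external dependencies.
class LLMPort(Protocol):
    def complete(self, prompt: str) -> str: ...

# Domain service depends only on the port, never on a concrete provider.
class Agent:
    def __init__(self, llm: LLMPort) -> None:
        self.llm = llm

    def answer(self, question: str) -> str:
        return self.llm.complete(question)

# Infrastructure layer: a concrete adapter, injected at runtime.
class EchoLLM:
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

agent = Agent(EchoLLM())
print(agent.answer("ping"))  # echo: ping
```

Swapping `EchoLLM` for an Ollama-backed adapter requires no change to the domain code, which is the point of the ports-and-adapters boundary.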


Core Capabilities

Autonomous Agent Loop

| Capability | Technology | Description |
| --- | --- | --- |
| Cognitive Loop | Observe → Think → Act | Async event-driven agent loop with goal decomposition and task scheduling |
| Multi-Agent Orchestration | Planner · Coder · Reviewer · Tester | MetaGPT/CrewAI-style multi-role pipeline for complex software tasks |
| Self-Improvement | EvoAgentX-inspired evolution | Reviews failures, proposes improvements, and auto-generates new tools at runtime |
| Spec-Driven Development | Markdown → Code + Tests | Parses specifications into structured requirements and implements them autonomously |

Intelligence & Memory

| Capability | Technology | Description |
| --- | --- | --- |
| Persistent Memory | ChromaDB / Qdrant | Episodic, semantic, and procedural memory with decay, consolidation, and relevance scoring |
| Working Memory Buffer | Context Manager | Token-aware context windowing with priority-based memory retrieval |
| Personality Engine | Russell's Circumplex Model | Big Five traits, emotional state (valence/arousal/dominance), and 9 discrete moods |
| User Adaptation | Rapport Tracking | Learns communication style, interests, and preferences per user over time |
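Relevance scoring with decay is commonly implemented as vector similarity damped by an exponential recency term. A minimal sketch; the half-life value and exact weighting here are assumptions, not Lya's documented formula:

```python
import math

def relevance(similarity: float, age_seconds: float, half_life: float = 3600.0) -> float:
    """Score a memory: similarity (e.g. cosine) damped by exponential time decay."""
    decay = 0.5 ** (age_seconds / half_life)  # halves every `half_life` seconds
    return similarity * decay

# A fresh memory keeps its full similarity score...
assert relevance(0.9, age_seconds=0.0) == 0.9
# ...while one half-life later it counts for half as much.
print(relevance(0.9, age_seconds=3600.0))  # 0.45
```

Consolidation would then periodically drop or merge memories whose score falls below a threshold.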

Development & Tooling

| Capability | Technology | Description |
| --- | --- | --- |
| Autonomous Coding | Plan → Code → Test → Commit | Full development cycle with iterative test-fix loops and auto-commit |
| Dynamic Tool Registry | PyPI / GitHub hot-install | Discovers, installs, and registers new skills at runtime without restart |
| Unrestricted System Access | Native OS integrations | Full filesystem, process, and network access with configurable security policies |
| Code Sandboxing | Ephemeral Docker containers | Secure isolated execution for untrusted or generated code |
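At its core, hot-registering tools at runtime is a registry keyed by name with dispatch by lookup. A minimal sketch; Lya's real registry additionally handles PyPI/GitHub installation and sandboxing, which are omitted here:

```python
from typing import Callable, Dict

class ToolRegistry:
    """Minimal runtime tool registry: register callables by name, dispatch later."""

    def __init__(self) -> None:
        self._tools: Dict[str, Callable[..., object]] = {}

    def register(self, name: str, fn: Callable[..., object]) -> None:
        # No restart needed: the tool is callable immediately after this line.
        self._tools[name] = fn

    def call(self, name: str, *args, **kwargs):
        if name not in self._tools:
            raise KeyError(f"unknown tool: {name}")
        return self._tools[name](*args, **kwargs)

registry = ToolRegistry()
registry.register("upper", str.upper)   # register a new "skill" at runtime
print(registry.call("upper", "lya"))    # LYA
```

Hot-installing from PyPI would add a `pip install` plus `importlib` step before `register`, but the dispatch path is unchanged.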

Connectivity & Deployment

| Capability | Technology | Description |
| --- | --- | --- |
| Multi-Channel Messaging | Discord · Telegram · Slack | Unified ChannelManager routes messages across platforms seamlessly |
| Live Dashboard | FastAPI + WebSocket | Real-time tool log streaming, session history, and interactive chat UI |
| MCP Integration | Model Context Protocol | Native plugin system for GitHub, databases, and third-party tools |
| Visuomotor Automation | PyAutoGUI + MSS | Screen capture and desktop UI interaction for visual automation tasks |

Configuration

Lya is configured via environment variables (.env file). The installer walks you through this interactively, or you can edit .env directly:

| Variable | Default | Description |
| --- | --- | --- |
| LYA_LLM_PROVIDER | ollama | LLM backend (currently only ollama is supported) |
| LYA_LLM_MODEL | llama3 | Model identifier |
| LYA_LLM_BASE_URL | http://localhost:11434 | Ollama API endpoint |
| LYA_PERSONALITY_ENABLED | true | Enable the personality and emotional-state engine |
| LYA_PERSONALITY_DEFAULT_TONE | friendly | Default tone: friendly, formal, playful, calm |
| LYA_TELEGRAM_BOT_TOKEN | (unset) | Telegram bot token for Telegram deployment |
| LYA_WORKSPACE_DIR | ~/.lya-workspace | Directory for downloads, uploads, and projects |
| LYA_VOICE_ENABLED | false | Enable voice features (set up during installation) |
| LYA_SANDBOX_ENABLED | true | Enable isolated code execution |

See .env.example for the full configuration reference.
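Reading this configuration is plain environment-variable lookup with defaults. A standard-library sketch using the variable names from the table above; the boolean-parsing rule and the `LyaConfig` class are assumptions for illustration, not Lya's actual loader:

```python
import os
from dataclasses import dataclass, field

def env_bool(name: str, default: bool) -> bool:
    # Treat "true"/"1"/"yes" (any case) as True; anything else as False.
    raw = os.getenv(name)
    return default if raw is None else raw.strip().lower() in {"true", "1", "yes"}

@dataclass
class LyaConfig:
    llm_provider: str = field(default_factory=lambda: os.getenv("LYA_LLM_PROVIDER", "ollama"))
    llm_model: str = field(default_factory=lambda: os.getenv("LYA_LLM_MODEL", "llama3"))
    llm_base_url: str = field(default_factory=lambda: os.getenv("LYA_LLM_BASE_URL", "http://localhost:11434"))
    personality_enabled: bool = field(default_factory=lambda: env_bool("LYA_PERSONALITY_ENABLED", True))
    sandbox_enabled: bool = field(default_factory=lambda: env_bool("LYA_SANDBOX_ENABLED", True))

config = LyaConfig()
print(config.llm_base_url)
```

Using `default_factory` means the environment is re-read each time a `LyaConfig` is constructed, which keeps tests that patch `os.environ` straightforward.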


Documentation

| Section | Description |
| --- | --- |
| Installation Guide | Detailed setup instructions and requirements |
| Configuration Reference | All environment variables and their effects |
| Architecture Overview | Clean Architecture layers and design decisions |
| Data Flow & CQRS | Command/query separation and event flow |
| Tool & Plugin System | MCP integration and custom tool development |
| Development Guide | Contributing, testing, and code style |

Requirements

  • Python 3.12.0 – 3.12.13
  • Git (for one-line installer)
  • Ollama (local or remote access)
  • Optional: Docker (for code sandboxing), ChromaDB/Qdrant (for persistent memory)

Contributing

Lya is designed to be extended. We welcome contributions of all kinds — new tools, channel integrations, memory backends, and agent capabilities.

  1. Fork the repository
  2. Create a feature branch (git checkout -b feat/my-feature)
  3. Commit your changes (git commit -m 'feat: add my feature')
  4. Push and open a Pull Request

Please review the Development Guide for code style and testing conventions.


License

Licensed under the Apache License 2.0.

Copyright 2024–2026 Shojaeei. All rights reserved.
