CELI (pronounced 'Kelly') leverages the capabilities of large language models (LLMs) to automate a wide range of knowledge work tasks. Here's an overview of what CELI offers:
- Autonomous Operation: Functions independently, dynamically adapting strategies without human intervention.
- Flexible Task Automation: Applicable across diverse tasks, from document drafting to data analysis.
- Scalability: Efficiently manages projects of varying sizes and complexities.
- Streamlined Document Management: Enhances every phase of the document lifecycle.
- Development Flexibility: Supports the development of custom applications that meet specific industry standards.

Join our Discord | Read our Docs
Important: CELI is currently in alpha. For support, join our Discord server or submit an issue on this GitHub repo.
CELI (Controller-Embedded Language Interactions) automates projects by decomposing them into sets of tasks and utilizing LLM-directed controller logic for execution. Key features include:
- Transforms traditional hierarchical models by embedding the LLM controller within the operational fabric of Object-Oriented Programming (OOP) via Inversion of Control (IoC).
- This integration moves away from a single OOP controller directing multiple LLM agents; instead, the LLM controller actively manages and executes OOP functions.
- Each OOP function interacts directly with the LLM controller, enhancing its autonomy and enabling dynamic function calls.
- This setup ensures cohesive system operation and facilitates real-time interaction with external systems such as APIs, databases, and LLM agents, significantly boosting flexibility and enabling complex operations.
- Supports complex workflows with the capability for nested operations and recursion within tasks. This dynamic structure allows workflows to adapt based on contextual changes or external data inputs, providing a high degree of flexibility and responsiveness.
- Acts as the central orchestrating unit, managing all operations from data handling to task execution. The engine efficiently handles both predefined tasks and dynamic adjustments, ensuring seamless automation across diverse platforms and use cases.
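The inversion-of-control idea described above can be sketched in a few lines: the controller, rather than the application code, decides which plain OOP function runs next and feeds results forward. All names below are illustrative, not CELI's actual API; in CELI the sequence of calls is chosen by the LLM, whereas here it is hard-coded so the sketch runs standalone.

```python
# Hypothetical sketch of LLM-directed control over OOP functions.
# In CELI, the next call is selected by the LLM controller at runtime;
# here a fixed plan stands in for those decisions.

class Tools:
    """Plain OOP functions the controller can invoke by name."""

    def fetch_reference(self, topic: str) -> str:
        return f"[reference text about {topic}]"

    def draft_section(self, heading: str, source: str) -> str:
        return f"## {heading}\n{source}"

def llm_controller(tools: Tools) -> str:
    # A stand-in "plan"; a real controller would derive this dynamically.
    plan = [("fetch_reference", {"topic": "solar power"}),
            ("draft_section", {"heading": "Overview"})]
    result = ""
    for name, kwargs in plan:
        fn = getattr(tools, name)        # dynamic dispatch by function name
        if name == "draft_section":
            kwargs["source"] = result    # feed the prior output forward
        result = fn(**kwargs)
    return result

print(llm_controller(Tools()))
```

Dispatching by function name is what lets the controller remain data-driven: adding a new capability means adding a method, not rewriting the control loop.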
Join our Discord server to ask questions or get involved in our project!
To get an idea of what CELI can do, we have prepackaged an example use case: having CELI write a wiki page on a topic, given an example page and a set of references.
First, install CELI using pip:
pip install celi-framework
You can also clone the GitHub repo and install CELI from source. See Running CELI from Source for info on how to do that.
Once you have completed the steps above, you can test your setup by running a demo of CELI's capabilities:
python -m celi_framework.main \
--job-description=celi_framework.examples.human_eval.job_description.job_description \
--tool-config='{"single_example":"HumanEval/3"}' \
--simulate-live
This example simulates using CELI to solve problem #3 of the HumanEval benchmark programming problem set. It uses cached versions of the LLM outputs, so it doesn't require an API key or make any paid LLM calls on your behalf. The result will be written to the target/drafts directory.
Running this demo should take a couple of minutes. You will be able to see how CELI tackles the problem and the LLM calls it makes, along with the responses.
The code above uses a cached version of the LLM results. To meaningfully run CELI on anything new, you will need to make new LLM calls, which requires an OpenAI API key (or your own local LLM; see LLM Support).
We can now run the full HumanEval data set. It has 164 examples, so we won't use --simulate-live to impose a delay.
python -m celi_framework.main \
--job-description=celi_framework.examples.human_eval.job_description.job_description \
--openai-api-key=<Insert your OpenAI API key here>
You can also set an OPENAI_API_KEY environment variable instead of passing one on the command line.
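For example, the environment-variable route looks like this (the key value is a placeholder):

```shell
# Export once per shell session instead of passing --openai-api-key
# on every invocation.
export OPENAI_API_KEY="sk-your-key-here"
```

Subsequent `python -m celi_framework.main` runs in the same session will then pick up the key without the command-line flag.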
CELI is structured into distinct packages, each housing modules responsible for different aspects of the document processing workflow.
Located in the celi_framework.core package, the following essential core modules facilitate CELI's primary operations:
- Processor: Manages and orchestrates the drafting of documents using language models, acting as the core of the CELI system.
- Monitor: Observes and evaluates the performance of the ProcessRunner, ensuring quality and efficiency in automated tasks.
- Job Description: Manages a comprehensive list of user-defined job descriptions that guide how tasks are executed.
- Tools: Provides mechanisms for CELI to interact with external systems and can be customized to suit specific use cases.
Users extend the CELI framework by defining their own job descriptions and tools, which leverage and extend the functionalities of the core modules. This allows for a high degree of customization and tailoring to specific needs:
- User-Defined Job Descriptions: Users can create unique job descriptions that specify detailed instructions and operational steps, ensuring that automated processes align closely with project requirements.
- Custom Tool Implementations: Developers can implement custom tools by importing core modules and utilizing their functionalities. These tools can be adapted to integrate seamlessly with existing systems or to introduce new capabilities.
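A custom tool class can be sketched as follows. The base-class name and import path are assumptions (a stand-in is defined here so the sketch runs without the framework installed); in CELI you would subclass the tools base provided by celi_framework.core, and each public method becomes a function the LLM controller may call.

```python
# Hypothetical user-defined tools for the wiki-page example use case.

class ToolImplementations:
    """Stand-in for the tools base class in celi_framework.core
    (the real name and import path may differ)."""
    pass

class WikiTools(ToolImplementations):
    """Each public method is exposed to the LLM controller as a callable tool."""

    def __init__(self, references: dict[str, str]):
        self.references = references

    def get_example_page(self) -> str:
        # A real implementation would load the example wiki page from disk.
        return "# Example Wiki Page\n..."

    def lookup_reference(self, name: str) -> str:
        # Return the reference text, or a message the LLM can react to.
        return self.references.get(name, f"No reference named {name!r}")

tools = WikiTools({"r1": "Cited source text."})
print(tools.lookup_reference("r1"))
```

Returning an error message (rather than raising) on a missing reference lets the controller observe the failure and choose a different next step.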
Located in the celi_framework.experimental package, these modules are designed to support the development of new use cases and enhance existing functionalities:
- Pre-Processor: Converts DOCX documents into a clean Markdown format, priming them for further processing.
- Embeddor: Embeds pre-cleaned text data from source documents, preparing it for integration with machine learning models and data analysis.
- Mapper: Focuses on pre-computing mappings between document contents to enhance the efficiency of the embedding process.
For practical applications and demonstrations, explore the celi_framework.examples package:
- This package contains a variety of examples demonstrating how CELI can be applied across different scenarios and use cases.
CELI's architecture uniquely integrates recursion within its operational logic, significantly enhanced by embedding controller logic directly within LLM prompts. This sophisticated structure enables CELI to efficiently handle complex, multi-layered tasks with greater autonomy. The key capabilities facilitated by this approach include:
- Controllers within LLM prompts direct the flow of operations, establishing loops that enable recursion. This is crucial for managing complex sequences in which tasks depend on the outcomes of preceding actions or require repeated iterations until a condition is met.
- CELI manages tasks requiring multiple layers of sub-tasks, recursively processing each layer across various sections of documents or elements within data structures like dictionaries or lists, enhancing the system's ability to handle diverse and complex workflows.
- By leveraging recursion, CELI dynamically identifies and manages errors or inconsistencies during task execution, ensuring reliability and operational accuracy.
- The recursive nature of CELI's task management supports an environment of adaptive learning, continuously refining strategies and approaches based on ongoing interactions and feedback.
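The multi-layer task handling described above can be illustrated with a small standalone sketch: a tree of document sections is drafted depth-first, each subsection completed before its parent is assembled. The function and field names are illustrative, not CELI's API.

```python
# Depth-first recursive drafting over a nested section outline.

def draft(section: dict, depth: int = 0) -> str:
    # Recurse into subsections first, then emit this section's heading.
    children = [draft(s, depth + 1) for s in section.get("subsections", [])]
    heading = f"{'#' * (depth + 1)} {section['title']}"
    return "\n".join([heading] + children)

outline = {"title": "Report",
           "subsections": [{"title": "Methods"},
                           {"title": "Results",
                            "subsections": [{"title": "Table 1"}]}]}
print(draft(outline))
```

The same recursive shape applies whether the layers are document sections, dictionary entries, or list elements: each level handles its own unit of work and delegates the rest.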
CELI offers a distinct approach to automated knowledge work, setting it apart from traditional agent-based frameworks with its effective integration of task automation and interaction with large language models (LLMs). Key differentiators include:
- CELI's controller logic embedded within LLM prompts enables a more autonomous and streamlined operation for handling complex tasks, reducing the dependency on manual interventions.
- Unlike traditional models confined to conversational dynamics, CELI employs a structured pseudo-code approach, allowing for complex and precise task execution beyond simple dialogue systems.
- Utilizes recursion to manage multiple layers of tasks efficiently, allowing for dynamic adaptation in response to operational challenges, enhancing system reliability and responsiveness.
- Enhances LLM utility by embedding function calls within operational prompts, enabling the real-time data interactions crucial for applying model outputs effectively in real-world scenarios.
- Designed to handle a wide range of demands, from small tasks to large-scale projects, CELI's architecture supports diverse requirements without sacrificing performance.
Join our Discord server to discuss the project with users, contributors, and project authors.
Explore the rest of the documentation to learn more about CELI.
- Getting Started - Learn the basics of installing and setting up CELI.
- Running CELI - Learn more about the various ways to run CELI.
- New Use Cases - Learn how to apply CELI to your own use case.
- API Reference - If you are into reading API docs directly.

The CELI dev team is committed to continuous improvement and user-driven development. Whether you're a seasoned developer or just starting, your feedback and contributions are invaluable to us. Let's build a smarter future together!
If you would like to contribute to the development of CELI, we welcome contributions of all forms. For more information on contributing, see the contributor guidelines.
CELI is licensed under the MIT License. Feel free to use, modify, and distribute the framework as per the license terms.