Quickly iterate, debug, and evaluate your LLM apps
The open-source LLMOps platform for prompt-engineering, evaluation, human feedback, and deployment of complex LLM apps.

MIT license.








About β€’ Quick Start β€’ Installation β€’ Features β€’ Documentation β€’ Enterprise β€’ Community β€’ Contributing


ℹ️ About

Agenta is an end-to-end LLMOps platform. It provides the tools for prompt engineering and management, βš–οΈ evaluation, and πŸš€ deployment, all without restricting your choice of framework, library, or model.

Agenta allows developers and product teams to collaborate and build robust AI applications in less time.

πŸ”¨ How does it work?

Using an LLM App Template (for non-technical users):

1. Create an application using a pre-built template from our UI.
2. Access a playground where you can test and compare different prompts and configurations side-by-side.
3. Systematically evaluate your application using pre-built or custom evaluators.
4. Deploy the application to production with one click.

Starting from code:

1. Add a few lines to any LLM application code to automatically create a playground for it.
2. Experiment with prompts and configurations, and compare them side-by-side in the playground.
3. Systematically evaluate your application using pre-built or custom evaluators.
4. Deploy the application to production with one click.



Quick Start

Features

Playground πŸͺ„

With just a few lines of code, define the parameters and prompts you wish to experiment with. You and your team can quickly experiment and test new variants on the web UI.

Version Evaluation πŸ“Š

Define test sets, then evaluate your different variants manually or programmatically.
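Programmatic evaluation boils down to running each variant over a test set and aggregating a score. The sketch below is framework-agnostic and illustrative only: the `exact_match` evaluator, the test-set layout, and the toy variant are assumptions for the example, not Agenta's API (Agenta ships its own pre-built and custom evaluators).

```python
# Illustrative sketch of programmatic evaluation: score a variant's
# outputs against a small test set and average the results.

def exact_match(expected: str, actual: str) -> float:
    """Return 1.0 on an exact match (ignoring surrounding whitespace), else 0.0."""
    return 1.0 if expected.strip() == actual.strip() else 0.0

def evaluate(variant, test_set, evaluator=exact_match):
    """Run `variant` on every test case and return the mean score."""
    scores = [
        evaluator(case["expected"], variant(**case["inputs"]))
        for case in test_set
    ]
    return sum(scores) / len(scores)

# Toy variant standing in for a real LLM call.
upper_variant = lambda text: text.upper()

test_set = [
    {"inputs": {"text": "hello"}, "expected": "HELLO"},
    {"inputs": {"text": "world"}, "expected": "world"},
]

print(evaluate(upper_variant, test_set))  # 0.5
```

Swapping in a semantic-similarity or LLM-as-judge evaluator only changes the `evaluator` function; the harness stays the same.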

API Deployment πŸš€

When you are ready, deploy your LLM applications as APIs in one click.
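Once deployed, the app is reachable over plain HTTP. Here is a sketch of calling it from Python; the `/generate` route and the payload shape are assumptions based on the baby-name example below, not a documented contract, so adjust them to the endpoint Agenta shows after deployment.

```python
import json
import urllib.request

# Hypothetical endpoint for the deployed example app.
API_URL = "http://localhost/generate"

def build_payload(country: str, gender: str) -> dict:
    """Assemble a request body whose keys match the app's input names."""
    return {"country": country, "gender": gender}

def call_app(country: str, gender: str) -> str:
    """POST the inputs to the deployed app and return the raw response body."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_payload(country, gender)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode()

# call_app("Germany", "female")  # requires a running deployment
```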

Why choose Agenta for building LLM-apps?

  • πŸ”¨ Build quickly: You need to iterate many times on different architectures and prompts to bring apps to production. We streamline this process and allow you to do this in days instead of weeks.
  • πŸ—οΈ Build robust apps and reduce hallucination: We provide you with the tools to systematically and easily evaluate your application to make sure you only serve robust apps to production.
  • πŸ‘¨β€πŸ’» Developer-centric: We cater to complex LLM-apps and pipelines that require more than one simple prompt. We allow you to experiment and iterate on apps that have complex integration, business logic, and many prompts.
  • 🌐 Solution-Agnostic: You have the freedom to use any libraries and models, be it LangChain, LlamaIndex, or a custom-written alternative.
  • πŸ”’ Privacy-First: We respect your privacy and do not proxy your data through third-party services. The platform and the data are hosted on your infrastructure.

How Agenta works:

1. Write your LLM-app code

Write the code using any framework, library, or model you want. Add the @ag.entrypoint decorator and declare the inputs and parameters in the function signature, just like in this example:

Example simple application that generates baby names:

import agenta as ag
from langchain.chains import LLMChain
from langchain.llms import OpenAI
from langchain.prompts import PromptTemplate

default_prompt = "Give me five cool names for a baby from {country} with this gender {gender}!!!!"

# Initialize agenta and register the parameters to expose in the playground.
ag.init()
ag.config(prompt_template=ag.TextParam(default_prompt),
          temperature=ag.FloatParam(0.9))

@ag.entrypoint
def generate(
    country: str,
    gender: str,
) -> str:
    # Read the current playground configuration on every call.
    llm = OpenAI(temperature=ag.config.temperature)
    prompt = PromptTemplate(
        input_variables=["country", "gender"],
        template=ag.config.prompt_template,
    )
    chain = LLMChain(llm=llm, prompt=prompt)
    output = chain.run(country=country, gender=gender)

    return output

2. Deploy your app using the Agenta CLI


3. Go to Agenta at http://localhost

Now your team can πŸ”„ iterate, πŸ§ͺ experiment, and βš–οΈ evaluate different versions of your app (with your code!) in the web platform.


Enterprise Support

Contact us here for enterprise support and early access to Agenta self-managed enterprise with Kubernetes support.

Book us

Disabling Anonymized Tracking

To disable anonymized telemetry, set the following environment variable:

  • For web: Set TELEMETRY_TRACKING_ENABLED to false in your agenta-web/.env file.
  • For CLI: Set telemetry_tracking_enabled to false in your ~/.agenta/config.toml file.

After making this change, restart the Agenta compose services.
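Concretely, the two settings named above look like this (comment syntax shown for each file format):

```
# agenta-web/.env
TELEMETRY_TRACKING_ENABLED=false
```

```toml
# ~/.agenta/config.toml
telemetry_tracking_enabled = false
```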

Contributing

We warmly welcome contributions to Agenta. Feel free to submit issues, fork the repository, and send pull requests.

We are usually hanging out in our Slack. Feel free to join and ask us anything.

Check out our Contributing Guide for more information.

Contributors ✨


Thanks go to these wonderful people (emoji key):

  • Sameh Methnani πŸ’» πŸ“–
  • Suad Suljovic πŸ’» 🎨 πŸ§‘β€πŸ« πŸ‘€
  • burtenshaw πŸ’»
  • Abram πŸ’» πŸ“–
  • Israel Abebe πŸ› 🎨 πŸ’»
  • Master X πŸ’»
  • corinthian πŸ’» 🎨
  • Pavle Janjusevic πŸš‡
  • Kaosi Ezealigo πŸ› πŸ’»
  • Alberto Nunes πŸ›
  • Maaz Bin Khawar πŸ’» πŸ‘€ πŸ§‘β€πŸ«
  • Nehemiah Onyekachukwu Emmanuel πŸ’» πŸ’‘ πŸ“–
  • Philip Okiokio πŸ“–
  • Abhinav Pandey πŸ’»
  • Ramchandra Warang πŸ’» πŸ›
  • Biswarghya Biswas πŸ’»
  • Uddeepta Raaj Kashyap πŸ’»
  • Nayeem Abdullah πŸ’»
  • Kang Suhyun πŸ’»
  • Yoon πŸ’»
  • Kirthi Bagrecha Jain πŸ’»
  • Navdeep πŸ’»
  • Rhythm Sharma πŸ’»
  • Osinachi Chukwujama πŸ’»
  • θŽ«ε°”η΄’ πŸ“–
  • Agunbiade Adedeji πŸ’»
  • Emmanuel Oloyede πŸ’» πŸ“–
  • Dhaneshwarguiyan πŸ’»
  • Priyanshu Prajapati πŸ“–
  • Raviteja πŸ’»
  • Arijit πŸ’»
  • Yachika9925 πŸ“–
  • Aldrin ⚠️
  • seungduk.kim.2304 πŸ’»
  • Andrei Dragomir πŸ’»
  • diego πŸ’»
  • brockWith πŸ’»
  • Dennis Zelada πŸ’»
  • Romain Brucker πŸ’»

This project follows the all-contributors specification. Contributions of any kind are welcome!

Attribution: Testing icons created by Freepik - Flaticon
