Welcome to the AI Enablement Stack mapping. The list is structured into layers according to their function in the agentic AI development ecosystem:
Agent Consumer Layer: The interface layer where AI agents interact with users and systems. This includes standalone autonomous agents, assistive tools that enhance human capabilities, and specialized agents built for specific tasks. It's where AI capabilities are packaged into practical, user-facing applications.
Observability and Governance Layer: The control layer for monitoring, evaluating, securing, and governing AI systems. This layer handles everything from development pipelines and performance monitoring to risk management and compliance. It ensures AI systems operate reliably and meet organizational standards.
Engineering Layer: The developer's toolkit for building AI applications. This layer provides essential resources for training models, developing applications, and ensuring quality through testing, along with the tools and methods for turning raw AI capabilities into production-ready solutions.
Intelligence Layer: The cognitive core of AI systems. This layer contains the frameworks, knowledge engines, and specialized models that power AI applications. It manages the actual processing, decision-making, and information retrieval that makes AI systems intelligent.
Infrastructure Layer: The foundation that powers AI development and deployment. This includes development workspaces, model serving infrastructure, and cloud computing resources. It provides the essential computing backbone that supports all AI operations.
To contribute to this list:
- Read the CONTRIBUTING.md
- Fork the repository
- Add your tool's logo under the assets folder
- Add your tool to the appropriate category in ai-enablement-stack.json
- Submit a pull request
Self-operating AI systems that can complete complex tasks independently
Agent Consumer Layer - AUTONOMOUS AGENTS
Cognition develops Devin, the world's first AI software engineer, designed to work as a collaborative teammate that helps engineering teams scale their capabilities through parallel task execution and comprehensive development support.
Agent Consumer Layer - AUTONOMOUS AGENTS
Morph AI delivers an enterprise-grade developer assistant that automates engineering tasks across multiple languages and frameworks, enabling developers to focus on high-impact work while ensuring code quality through automated testing and compliance.
Agent Consumer Layer - AUTONOMOUS AGENTS
An autonomous agent for multimodal content creation.
Agent Consumer Layer - AUTONOMOUS AGENTS
Kubiya provides AI-powered teammates for operations teams, enabling automated task delegation and execution across DevOps workflows, with hallucination-free operations, enterprise-grade security, and native integration with tools like Slack, Jira, and Terraform.
AI tools that enhance human capabilities and workflow efficiency
Agent Consumer Layer - ASSISTIVE AGENTS
Sourcegraph's Cody is an AI coding assistant that combines the latest LLMs (including Claude 3 and GPT-4) with comprehensive codebase context to help developers write, understand, and fix code across multiple IDEs, while offering enterprise-grade security and flexible deployment options.
Agent Consumer Layer - ASSISTIVE AGENTS
Pieces provides an on-device AI companion that captures and maintains long-term memory across the developer workflow, offering snippet management, multi-LLM support, and context-aware assistance while processing all data locally for enhanced security and privacy.
Purpose-built AI agents designed for specific functions, such as PR reviews.
Agent Consumer Layer - SPECIALIZED AGENTS
Superflex is a VS Code extension that builds features from Figma designs, images, and text prompts while maintaining your design standards and code style and reusing your UI components.
Agent Consumer Layer - SPECIALIZED AGENTS
Ellipsis provides AI-powered code reviews and automated bug fixes for GitHub repositories, offering features like style guide enforcement, code generation, and automated testing while maintaining SOC 2 Type 1 compliance and secure processing without data retention.
Tools for managing and monitoring AI application lifecycles
Observability and Governance Layer - DEVELOPMENT PIPELINE
Portkey provides a comprehensive AI gateway and control panel that enables teams to route to 200+ LLMs, implement guardrails, manage prompts, and monitor AI applications with detailed observability features while maintaining SOC2 compliance and HIPAA/GDPR standards.
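To make the gateway idea concrete, here is a minimal sketch of sending a chat request through Portkey's Python SDK; the package, class, and parameter names (portkey_ai, Portkey, virtual_key) are assumptions drawn from memory of the vendor docs rather than verified here.

```python
# Minimal sketch: route an OpenAI-style chat request through an AI gateway.
# Names below (portkey_ai, Portkey, virtual_key) are assumptions; check the
# Portkey documentation before relying on them.
from portkey_ai import Portkey

client = Portkey(
    api_key="PORTKEY_API_KEY",        # gateway credential (placeholder)
    virtual_key="openai-virtual-key", # maps to a configured upstream provider (assumed)
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize our deployment checklist."}],
)
print(response.choices[0].message.content)
```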
Observability and Governance Layer - DEVELOPMENT PIPELINE
Stack AI provides an enterprise generative AI platform for building and deploying AI applications with a no-code interface, offering pre-built templates, workflow automation, enterprise security features (SOC2, HIPAA, GDPR), and on-premise deployment options with support for multiple AI models and data sources.
Systems for tracking AI performance and behavior
Observability and Governance Layer - EVALUATION & MONITORING
Cleanlab provides an AI-powered data curation platform that helps organizations improve their GenAI and ML solutions by automatically detecting and fixing data quality issues, reducing hallucinations, and enabling trustworthy AI deployment while offering VPC integration for enhanced security.
Observability and Governance Layer - EVALUATION & MONITORING
Patronus provides a comprehensive AI evaluation platform built on industry-leading research, offering features for testing hallucinations, security risks, alignment, and performance monitoring, with both pre-built evaluators and custom evaluation capabilities for RAG systems and AI agents.
Observability and Governance Layer - EVALUATION & MONITORING
Log10 provides an end-to-end AI accuracy platform for evaluating and monitoring LLM applications in high-stakes industries, featuring expert-driven evaluation, automated feedback systems, real-time monitoring, and continuous improvement workflows with built-in security and compliance features.
Observability and Governance Layer - EVALUATION & MONITORING
Traceloop provides open-source LLM monitoring through OpenLLMetry, offering real-time hallucination detection, output quality monitoring, and prompt debugging capabilities across 22+ LLM providers with zero-intrusion integration.
Observability and Governance Layer - EVALUATION & MONITORING
WhyLabs provides a comprehensive AI Control Center for monitoring, securing, and optimizing AI applications, offering real-time LLM monitoring, security guardrails, and privacy-preserving observability with SOC 2 Type 2 compliance and support for multiple modalities.
Observability and Governance Layer - EVALUATION & MONITORING
OpenLLMetry provides an open-source observability solution for LLMs built on OpenTelemetry standards, offering easy integration with major observability platforms like Datadog, New Relic, and Grafana, requiring just two lines of code to implement.
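The "two lines" amount to importing the SDK and initializing it once at startup; the sketch below assumes the traceloop-sdk package name from the project's docs, with the exporter destination configured through environment variables.

```python
# Minimal sketch: initialize OpenLLMetry once and subsequent LLM calls are
# traced via OpenTelemetry. Package/module names reflect the project's docs
# as recalled here; verify against the OpenLLMetry README.
from traceloop.sdk import Traceloop

Traceloop.init(app_name="my-llm-service")  # exporter target set via env vars
```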
Frameworks for ensuring responsible AI use and regulatory compliance
Tools for protecting AI systems and managing access and user permissions
Observability and Governance Layer - SECURITY & ACCESS CONTROL
LiteLLM provides a unified API gateway for managing 100+ LLM providers with OpenAI-compatible formatting, offering features like authentication, load balancing, spend tracking, and monitoring integrations, available both as an open-source solution and enterprise service.
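A minimal sketch of the unified interface: the same OpenAI-style completion() call targets different providers by changing the model string. The model identifiers below are illustrative, and each provider's API key is expected in the environment.

```python
# Minimal sketch: one OpenAI-compatible call shape across providers via LiteLLM.
from litellm import completion

messages = [{"role": "user", "content": "Write a haiku about load balancing."}]

# Switch providers by switching the model string (identifiers are illustrative).
openai_reply = completion(model="gpt-4o-mini", messages=messages)
claude_reply = completion(model="anthropic/claude-3-5-sonnet-20240620", messages=messages)

print(openai_reply.choices[0].message.content)
print(claude_reply.choices[0].message.content)
```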
Resources for customizing and optimizing AI models
Engineering Layer - TRAINING & FINE-TUNING
Provides tools for efficient fine-tuning of large language models, including techniques like quantization and memory optimization.
Engineering Layer - TRAINING & FINE-TUNING
Platform for building and deploying machine learning models, with a focus on simplifying the development process and enabling faster iteration.
Development utilities, libraries and services for building AI applications
Engineering Layer - TOOLS
Relevance AI provides a no-code AI workforce platform that enables businesses to build, customize, and manage AI agents and tools for various functions like sales and support, featuring Bosh, their AI Sales Agent, while ensuring enterprise-grade security and compliance.
Engineering Layer - TOOLS
Greptile provides an AI-powered code analysis platform that helps software teams ship faster by offering intelligent code reviews, codebase chat, and custom dev tools with full contextual understanding, while maintaining SOC2 Type II compliance and optional self-hosting capabilities.
Engineering Layer - TOOLS
Sourcegraph provides a code intelligence platform featuring Cody, an AI coding assistant, and advanced code search capabilities that help developers navigate, understand, and modify complex codebases while automating routine tasks across enterprise environments.
Engineering Layer - TOOLS
PromptLayer provides a comprehensive prompt engineering platform that enables technical and non-technical teams to collaboratively edit, evaluate, and deploy LLM prompts through a visual CMS, while offering version control, A/B testing, and monitoring capabilities with SOC 2 Type 2 compliance.
Engineering Layer - TOOLS
JigsawStack provides a comprehensive suite of AI APIs including web scraping, translation, speech-to-text, OCR, prediction, and prompt optimization, offering globally distributed infrastructure with type-safe SDKs and built-in monitoring capabilities across 99+ locations.
Systems for validating AI performance and reliability
Engineering Layer - TESTING & QUALITY ASSURANCE
Confident AI provides an LLM evaluation platform that enables organizations to benchmark, unit test, and monitor their LLM applications through automated regression testing, A/B testing, and synthetic dataset generation, while offering research-backed evaluation metrics and comprehensive observability features.
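As a rough illustration of the unit-testing workflow, the sketch below assumes Confident AI's open-source DeepEval library; the class and function names (LLMTestCase, AnswerRelevancyMetric, assert_test) are assumptions from its docs, and the relevancy metric needs a judge-model API key.

```python
# Minimal sketch: unit-test an LLM output with a relevancy metric (assumed
# DeepEval API; a judge-model API key is required for the metric to run).
from deepeval import assert_test
from deepeval.metrics import AnswerRelevancyMetric
from deepeval.test_case import LLMTestCase

def test_support_answer():
    case = LLMTestCase(
        input="How do I rotate my API key?",
        actual_output="Go to Settings > API Keys and click 'Rotate'.",
    )
    # Fails the test if the answer's relevancy score falls below the threshold.
    assert_test(case, [AnswerRelevancyMetric(threshold=0.7)])
```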
Engineering Layer - TESTING & QUALITY ASSURANCE
AI agent specifically designed for software testing and quality assurance, automating the testing process and providing comprehensive test coverage.
Engineering Layer - TESTING & QUALITY ASSURANCE
Braintrust provides an end-to-end platform for evaluating and testing LLM applications, offering features like prompt testing, custom scoring, dataset management, real-time tracing, and production monitoring, with support for both UI-based and SDK-driven workflows.
Core libraries and building blocks for AI application development
Intelligence Layer - FRAMEWORKS
Provides an agent development platform with advanced memory management for LLMs, enabling developers to build, deploy, and scale production-ready AI agents with transparent reasoning and model-agnostic flexibility.
Intelligence Layer - FRAMEWORKS
Framework for developing LLM applications with multiple conversational agents that collaborate and interact with humans.
Intelligence Layer - FRAMEWORKS
A framework for creating and managing workflows and tasks for AI agents.
Intelligence Layer - FRAMEWORKS
Toolhouse provides a cloud infrastructure platform and universal SDK that enables developers to equip LLMs with actions and knowledge through a Tool Store, offering pre-built optimized functions, low-latency execution, and cross-LLM compatibility with just three lines of code.
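A rough sketch of the "three lines" pattern described above: fetch tool definitions, pass them to the model, then let the SDK execute any tool calls. The toolhouse package, Toolhouse class, and get_tools/run_tools methods are assumptions from memory of the vendor docs, not verified here.

```python
# Rough sketch (assumed Toolhouse API): equip an LLM with hosted tools and let
# the SDK execute the model's tool calls.
from openai import OpenAI
from toolhouse import Toolhouse

client = OpenAI()
th = Toolhouse()  # assumed to read TOOLHOUSE_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Scrape the title of example.com"}],
    tools=th.get_tools(),          # tool definitions from the Tool Store (assumed)
)
tool_results = th.run_tools(response)  # executes any tool calls in Toolhouse's cloud (assumed)
```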
Intelligence Layer - FRAMEWORKS
Composio provides an integration platform for AI agents and LLMs with 250+ pre-built tools, managed authentication, and RPA capabilities, enabling developers to easily connect their AI applications with various services while maintaining SOC-2 compliance and supporting multiple agent frameworks.
Intelligence Layer - FRAMEWORKS
CrewAI provides a comprehensive platform for building, deploying, and managing multi-agent AI systems, offering both open-source framework and enterprise solutions with support for any LLM and cloud platform, enabling organizations to create automated workflows across various industries.
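A minimal sketch of a two-agent CrewAI workflow using its commonly documented constructors; model access defaults to an LLM API key in the environment.

```python
# Minimal sketch: define agents with roles, give them tasks, run the crew.
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Researcher",
    goal="Collect key facts about a topic",
    backstory="A meticulous analyst.",
)
writer = Agent(
    role="Writer",
    goal="Turn research notes into a short summary",
    backstory="A concise technical writer.",
)

research = Task(
    description="Research serverless GPU platforms.",
    expected_output="Bullet-point notes.",
    agent=researcher,
)
summarize = Task(
    description="Summarize the notes in three sentences.",
    expected_output="A short paragraph.",
    agent=writer,
)

crew = Crew(agents=[researcher, writer], tasks=[research, summarize])
print(crew.kickoff())
```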
Systems for managing and retrieving information
Intelligence Layer - KNOWLEDGE ENGINES
Contextual AI provides enterprise-grade RAG (Retrieval-Augmented Generation) solutions that enable organizations in regulated industries to build and deploy production-ready AI applications for searching and analyzing large volumes of business-critical documents.
Intelligence Layer - KNOWLEDGE ENGINES
Platform for working with unstructured data, offering tools for data pre-processing, ETL, and integration with LLMs.
Intelligence Layer - KNOWLEDGE ENGINES
SciPhi offers R2R, an all-in-one RAG (Retrieval Augmented Generation) solution that enables developers to build and scale AI applications with advanced features including document management, hybrid vector search, and knowledge graphs, while providing superior ingestion performance compared to competitors.
AI models optimized for software development
Development environments for sandboxing and building AI applications
Infrastructure Layer - WORKSPACES
Daytona.io is an open-source Development Environment Manager designed to simplify the setup and management of development environments across various platforms, including local, remote, and cloud infrastructures.
Infrastructure Layer - WORKSPACES
Runloop provides a secure, high-performance infrastructure platform that enables developers to build, scale, and deploy AI-powered coding solutions with seamless integration and real-time monitoring capabilities.
Infrastructure Layer - WORKSPACES
E2B provides an open-source runtime platform that enables developers to securely execute AI-generated code in cloud sandboxes, supporting multiple languages and frameworks for AI-powered development use cases.
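A minimal sketch of executing model-generated code inside an isolated sandbox, assuming E2B's code-interpreter SDK; the package and method names (e2b_code_interpreter, Sandbox, run_code) are assumptions to check against the E2B docs, and an E2B_API_KEY is expected in the environment.

```python
# Minimal sketch (assumed E2B API): run untrusted, AI-generated code in a
# cloud sandbox instead of the host process.
from e2b_code_interpreter import Sandbox

generated_code = "print(sum(range(10)))"  # pretend this came from an LLM

with Sandbox() as sandbox:
    execution = sandbox.run_code(generated_code)
    print(execution.logs.stdout)  # stdout captured inside the sandbox
```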
Infrastructure Layer - WORKSPACES
Modal offers a serverless cloud platform for AI and ML applications that enables developers to deploy and scale workloads instantly with simple Python code, featuring high-performance GPU infrastructure and pay-per-use pricing.
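A minimal sketch of Modal's Python-first model: decorate a function, then execute it on Modal's infrastructure with `modal run`; the GPU type and image contents are illustrative.

```python
# Minimal sketch: a Modal function that runs on remote GPU infrastructure.
# Invoke locally with `modal run this_file.py`.
import modal

app = modal.App("sketch-inference")
image = modal.Image.debian_slim().pip_install("transformers", "torch")

@app.function(image=image, gpu="A10G")  # GPU type is illustrative
def embed(text: str) -> int:
    # Placeholder workload; a real function would load and run a model here.
    return len(text.split())

@app.local_entrypoint()
def main():
    print(embed.remote("serverless GPUs with plain Python"))
```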
Services for deploying and running AI models
Infrastructure Layer - INFERENCE PROVIDERS
OpenAI develops advanced artificial intelligence systems like ChatGPT, GPT-4, and Sora, focusing on creating safe AGI that benefits humanity through products spanning language models, image generation, and video creation while maintaining leadership in AI research and safety.
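A minimal sketch of calling an OpenAI language model through the official Python SDK; the model name is illustrative and OPENAI_API_KEY is read from the environment.

```python
# Minimal sketch: one chat completion via the official OpenAI Python SDK.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{"role": "user", "content": "Explain AI safety in one sentence."}],
)
print(response.choices[0].message.content)
```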
Infrastructure Layer - INFERENCE PROVIDERS
AI21 Labs delivers enterprise-grade generative AI solutions through its Jamba foundation model and RAG engine, enabling organizations to build secure, production-ready AI applications with flexible deployment options and dedicated integration support.
Infrastructure Layer - INFERENCE PROVIDERS
Cohere provides an enterprise AI platform featuring advanced language models, embedding, and retrieval capabilities that enables businesses to build production-ready AI applications with flexible deployment options across cloud or on-premises environments.
Infrastructure Layer - INFERENCE PROVIDERS
Hugging Face provides fully managed inference infrastructure for ML models with support for multiple hardware options (CPU, GPU, TPU) across various cloud providers, offering autoscaling and dedicated deployments with enterprise-grade security.
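A minimal sketch of querying a dedicated Inference Endpoint with the huggingface_hub client; the endpoint URL and token are hypothetical placeholders, and the exact method surface may vary by client version.

```python
# Minimal sketch: send a prompt to a dedicated Hugging Face Inference Endpoint.
from huggingface_hub import InferenceClient

client = InferenceClient(
    model="https://my-endpoint.endpoints.huggingface.cloud",  # hypothetical endpoint URL
    token="hf_...",                                           # access token placeholder
)
print(client.text_generation("Summarize: autoscaling keeps latency stable.", max_new_tokens=64))
```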
Infrastructure Layer - INFERENCE PROVIDERS
Cartesia AI delivers real-time multimodal intelligence through state space models that enable fast, private, and offline inference capabilities across devices, offering streaming-first solutions with constant memory usage and low latency.
Infrastructure Layer - INFERENCE PROVIDERS
Provides easy access to open-source language models through a simple API, similar to offerings from closed-source providers.
Infrastructure Layer - INFERENCE PROVIDERS
Offers an API for accessing and running open-source LLMs, facilitating seamless integration into AI applications.
Infrastructure Layer - INFERENCE PROVIDERS
End-to-end platform for deploying and managing AI models, including LLMs, with integrated tools for monitoring, versioning, and scaling.
Infrastructure Layer - INFERENCE PROVIDERS
Amazon Nova provides state-of-the-art foundation models through Amazon Bedrock, offering multiple model variants (Micro, Lite, Pro, Canvas, Reel) for text, image, and video processing with industry-leading price-performance, fine-tuning capabilities, and enterprise-grade features.
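A minimal sketch of invoking a Nova variant through Amazon Bedrock's Converse API with boto3; the model identifier and region are assumptions to verify against the Bedrock model catalog.

```python
# Minimal sketch: call a Nova model via the Bedrock Converse API.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")  # region is illustrative
response = bedrock.converse(
    modelId="amazon.nova-lite-v1:0",  # assumed Nova Lite identifier
    messages=[{"role": "user", "content": [{"text": "Give three uses for video captioning."}]}],
)
print(response["output"]["message"]["content"][0]["text"])
```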
Infrastructure Layer - INFERENCE PROVIDERS
Serverless platform for running machine learning models, allowing developers to deploy and scale models without managing infrastructure.
Infrastructure Layer - INFERENCE PROVIDERS
BentoML provides an open-source unified inference platform that enables organizations to build, deploy, and scale AI systems across any cloud with high performance and flexibility, while offering enterprise features like auto-scaling, rapid iteration, and SOC 2 compliance.
Computing infrastructure that powers AI systems and their workspaces
Infrastructure Layer - CLOUD PROVIDERS
Koyeb provides a high-performance serverless platform specifically optimized for AI workloads, offering GPU/NPU infrastructure, global deployment across 50+ locations, and seamless scaling capabilities for ML model inference and training with built-in observability.
Please read the contribution guidelines before submitting a pull request.
This project is licensed under the Apache 2.0 License; see the LICENSE file for details.