# ai-evaluation-tools

Here are 2 public repositories matching this topic...


Python SDK for agent AI observability, monitoring, and evaluation. Features include agent, LLM, and tool tracing; debugging for multi-agent systems; a self-hosted dashboard; and advanced analytics with timeline and execution-graph views.

  • Updated Mar 22, 2025
  • Python

MindTrial: Evaluate and compare AI language models (LLMs) on text-based tasks. Supports multiple providers (OpenAI, Google, Anthropic, DeepSeek), custom task definitions in YAML, and HTML/CSV reports.

  • Updated Mar 21, 2025
  • Go
