Python client library for improving your LLM app accuracy
Updated Jul 23, 2024 - Python
A framework for building scenario-simulation projects in which both human and LLM-based agents can participate, with a user-friendly web UI for visualizing simulations and support for automatic evaluation at the agent-action level.
This library implements a wide range of metrics (including Kaggle competition and medical metrics) for evaluating ML, DL, and AI models and algorithms. 📐📊📈📉📏
NLP tool for wide-range model reliability evaluations
Practice and evaluation for the IELTS listening, speaking, reading, and writing modules, with IELTS band calculation based on speech and text analysis.
A functional chess game implemented in Python, using pygame as the graphics module.
Build a robust opinion-mining and website-evaluation system on AWS. By combining data collection, preprocessing, sentiment analysis, and rating calculation, you can efficiently analyze user feedback and generate meaningful insights for evaluating websites.
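The rating-calculation step described above can be sketched in plain Python. This is a minimal, illustrative example, assuming a lexicon-based sentiment scorer and a linear mapping onto a 1–5 star scale; the word lists, the `site_rating` helper, and the clamping range are assumptions, not the project's actual method.

```python
# Illustrative sketch: lexicon-based sentiment scoring over collected
# reviews, then mapping the mean score onto a 1-5 star rating.
# The lexicon and the rating mapping are assumptions for this example.

POSITIVE = {"great", "fast", "helpful", "reliable"}
NEGATIVE = {"slow", "broken", "confusing", "unreliable"}

def sentiment_score(review: str) -> int:
    """Sum +1 for each positive lexicon word, -1 for each negative one."""
    words = review.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def site_rating(reviews: list[str]) -> float:
    """Map the mean sentiment score linearly onto a 1-5 star rating."""
    if not reviews:
        return 3.0  # neutral default when no feedback exists
    mean = sum(sentiment_score(r) for r in reviews) / len(reviews)
    # Clamp the mean into [-2, 2], then shift so 0 maps to 3 stars.
    clamped = max(-2.0, min(2.0, mean))
    return 3.0 + clamped  # -2 -> 1.0, 0 -> 3.0, +2 -> 5.0

reviews = ["great and fast site", "checkout was slow and confusing"]
print(site_rating(reviews))  # -> 3.0 (the two reviews cancel out)
```

In a real AWS deployment, the scoring function would typically be replaced by a managed service such as Amazon Comprehend, with the aggregation logic running in a Lambda function over reviews stored in S3 or DynamoDB.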