Popular repositories
evals (Python, forked from openai/evals)
Evals is a framework for evaluating OpenAI models and an open-source registry of benchmarks.
chain-of-thought-hub (Jupyter Notebook, forked from FranxYao/chain-of-thought-hub)
Benchmarking LLM reasoning performance with chain-of-thought prompting.