HumanEval with LangChain

📖 Related article: Evaluating code generation agents—LangChain and CodeChain

This repository demonstrates how to run the HumanEval benchmark against GPT-3.5 and GPT-4 with LangChain, while taking advantage of LangSmith's visibility and tracing features:

Open In Colab
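
The workflow in the notebook fits together roughly as follows: read the HumanEval problems, ask the model to complete each prompt through LangChain (so every call is traced in LangSmith), and write the completions to a JSONL file that human-eval can score. The sketch below illustrates that flow; it is not the notebook's exact code, and the model name, prompt wording, environment-variable values, and file paths are placeholders.

```python
import os

from langchain_openai import ChatOpenAI                 # pip install langchain-openai
from human_eval.data import read_problems, write_jsonl  # pip install human-eval

# LangSmith tracing is enabled through environment variables; every model
# call made below then appears as a trace in the LangSmith UI.
os.environ["LANGCHAIN_TRACING_V2"] = "true"
os.environ["LANGCHAIN_API_KEY"] = "<your-langsmith-api-key>"  # placeholder
os.environ["LANGCHAIN_PROJECT"] = "humaneval-langchain"       # any project name

llm = ChatOpenAI(model="gpt-4", temperature=0)  # or "gpt-3.5-turbo"

problems = read_problems()  # task_id -> {"prompt": ..., "test": ..., ...}

samples = []
for task_id, problem in problems.items():
    # Ask the model to complete the function body given the HumanEval prompt.
    response = llm.invoke(
        "Complete the following Python function and return only the code:\n\n"
        + problem["prompt"]
    )
    samples.append({"task_id": task_id, "completion": response.content})

# Write completions in the JSONL format expected by human-eval's
# evaluate_functional_correctness script.
write_jsonl("samples.jsonl", samples)
```

The resulting `samples.jsonl` can then be scored with the human-eval fork's `evaluate_functional_correctness` script to produce pass@k results.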

Related repositories

  • human-eval: Fork of OpenAI's HumanEval framework used in this workflow.
  • humaneval-results: Repository of HumanEval solutions generated with this workflow.
  • codechain: A simple library for generating code with LLMs.
  • agenteval: Early version of a framework for evaluating code generation agents.

About

Benchmark results from code generation with LLMs
