
OpenAI Cookbook

The OpenAI Cookbook shares example code for accomplishing common tasks with the OpenAI API.

To run these examples, you'll need an OpenAI account and API key (create a free account).

Most code examples are written in Python, though the concepts can be applied in any language.
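For orientation, here is a minimal sketch of the kind of call most examples build on, assuming the official `openai` Python package is installed and an `OPENAI_API_KEY` environment variable is set (the model name below is just an illustrative choice):

```python
from openai import OpenAI

# The client reads the API key from the OPENAI_API_KEY environment variable by default.
client = OpenAI()

# A single chat completion request; swap in whichever model you have access to.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name, not prescribed by this repo
    messages=[{"role": "user", "content": "Say hello in one short sentence."}],
)

print(response.choices[0].message.content)
```

The notebooks in this repo build on the same pattern, varying the model, the messages, and the surrounding tooling.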

Open in GitHub Codespaces

Recently added/updated 🆕 ✨

Guides & examples

Related OpenAI resources

Beyond the code examples here, you can learn about the OpenAI API from the following resources:

Related resources from around the web

People are writing great tools and papers for improving outputs from GPT models. Here are some cool ones we've seen:

Prompting libraries & tools

  • Guidance: A handy-looking Python library from Microsoft that uses Handlebars templating to interleave generation, prompting, and logical control.
  • LangChain: A popular Python/JavaScript library for chaining sequences of language model prompts.
  • FLAML (A Fast Library for Automated Machine Learning & Tuning): A Python library for automating selection of models, hyperparameters, and other tunable choices.
  • Chainlit: A Python library for making chatbot interfaces.
  • Guardrails.ai: A Python library for validating outputs and retrying failures. Still in alpha, so expect sharp edges and bugs.
  • Semantic Kernel: A Python/C#/Java library from Microsoft that supports prompt templating, function chaining, vectorized memory, and intelligent planning.
  • Prompttools: Open-source Python tools for testing and evaluating models, vector DBs, and prompts.
  • Outlines: A Python library that provides a domain-specific language to simplify prompting and constrain generation.
  • Promptify: A small Python library for using language models to perform NLP tasks.
  • Scale Spellbook: A paid product for building, comparing, and shipping language model apps.
  • PromptPerfect: A paid product for testing and improving prompts.
  • Weights & Biases: A paid product for tracking model training and prompt engineering experiments.
  • OpenAI Evals: An open-source library for evaluating task performance of language models and prompts.
  • LlamaIndex: A Python library for augmenting LLM apps with data.
  • Arthur Shield: A paid product for detecting toxicity, hallucination, prompt injection, etc.
  • LMQL: A programming language for LLM interaction with support for typed prompting, control flow, constraints, and tools.

Prompting guides

Video courses

Papers on advanced prompting to improve reasoning

Contributing

If there are examples or guides you'd like to see, feel free to suggest them on the issues page. We are also happy to accept high-quality pull requests, as long as they fit the scope of the repo.
