
Python OpenAI demos

This repository contains a collection of Python scripts that demonstrate how to use the OpenAI API to generate chat completions.

OpenAI package

These scripts use the OpenAI Python package to call the OpenAI API; a minimal sketch of the core pattern follows the list. In increasing order of complexity, the scripts are:

  1. chat.py: A simple script that demonstrates how to use the OpenAI API to generate chat completions.
  2. chat_stream.py: Adds stream=True to the API call to return a generator that streams the completion as it is being generated.
  3. chat_history.py: Adds a back-and-forth chat interface using input(), which keeps track of past messages and sends them with each chat completion call.
  4. chat_history_stream.py: The same idea, but with stream=True enabled.
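
Here is a minimal sketch of the pattern that chat.py and chat_stream.py demonstrate, assuming an OpenAI.com account configured with the OPENAI_KEY and OPENAI_MODEL variables described under "Configuring the OpenAI environment variables" below; the repository's scripts also support the other hosts.

import os

import openai

client = openai.OpenAI(api_key=os.environ["OPENAI_KEY"])

# Single chat completion (the chat.py pattern)
response = client.chat.completions.create(
    model=os.environ["OPENAI_MODEL"],
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Write a haiku about Python."},
    ],
)
print(response.choices[0].message.content)

# Streaming (the chat_stream.py pattern): stream=True returns an
# iterator of chunks, each carrying a small delta of the completion
stream = client.chat.completions.create(
    model=os.environ["OPENAI_MODEL"],
    messages=[{"role": "user", "content": "Write a haiku about Python."}],
    stream=True,
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")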

Plus, these scripts demonstrate additional features:

  • chat_safety.py: The simple script with exception handling for Azure AI Content Safety filter errors.
  • chat_async.py: Uses the async clients to make asynchronous calls, including an example of sending off multiple requests at once using asyncio.gather (a sketch of that pattern follows this list).
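
A hedged sketch of the asyncio.gather pattern that chat_async.py describes, again assuming the OpenAI.com variables from the configuration section below; the prompts are illustrative.

import asyncio
import os

import openai


async def main():
    client = openai.AsyncOpenAI(api_key=os.environ["OPENAI_KEY"])
    prompts = [
        "Write a haiku about Python.",
        "Write a limerick about JavaScript.",
    ]
    # asyncio.gather sends every request concurrently and waits for all responses
    responses = await asyncio.gather(
        *(
            client.chat.completions.create(
                model=os.environ["OPENAI_MODEL"],
                messages=[{"role": "user", "content": prompt}],
            )
            for prompt in prompts
        )
    )
    for response in responses:
        print(response.choices[0].message.content)


asyncio.run(main())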

Popular LLM libraries

These scripts demonstrate how to call the OpenAI API through popular LLM libraries.

Retrieval-Augmented Generation (RAG)

These scripts demonstrate how to use the OpenAI API for Retrieval-Augmented Generation (RAG) tasks, where relevant information is first retrieved from a data source and then passed to the model so it can generate a grounded response.

First install the RAG dependencies:

python -m pip install -r requirements-rag.txt

Then run the scripts (in order of increasing complexity):

  • rag_csv.py: Retrieves matching results from a CSV file and uses them to answer the user's question.
  • rag_multiturn.py: The same idea, but with a back-and-forth chat interface using input(), which keeps track of past messages and sends them with each chat completion call.
  • rag_queryrewrite.py: Adds a query rewriting step to the RAG process, where the user's question is rewritten to improve the retrieval results.
  • rag_documents_ingestion.py: Ingests PDFs by converting them to Markdown with pymupdf, splitting the Markdown into chunks with Langchain, embedding the chunks with OpenAI, and storing the results in a local JSON file.
  • rag_documents_flow.py: A RAG flow that retrieves matching results from the local JSON file created by rag_documents_ingestion.py.
  • rag_documents_hybrid.py: A RAG flow that implements hybrid retrieval with both vector and keyword search, merging the results with Reciprocal Rank Fusion (RRF) and semantically re-ranking them with a cross-encoder model (a short RRF sketch follows this list).
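
As a point of reference for the hybrid script, here is a minimal sketch of Reciprocal Rank Fusion; the function name and the example rankings are hypothetical illustrations, not code from rag_documents_hybrid.py.

def reciprocal_rank_fusion(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked lists of document IDs into a single ranking.

    Each document scores sum(1 / (k + rank)) over the lists it appears in;
    k=60 is the constant suggested in the original RRF paper.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Merging hypothetical vector-search and keyword-search results:
vector_hits = ["doc3", "doc1", "doc2"]
keyword_hits = ["doc1", "doc4", "doc3"]
print(reciprocal_rank_fusion([vector_hits, keyword_hits]))
# doc1 and doc3 rank highest because both searches retrieved them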

Setting up the environment

If you open this up in a Dev Container or GitHub Codespaces, everything will be set up for you. If not, follow these steps:

  1. Set up a Python virtual environment and activate it.

  2. Install the required packages:

python -m pip install -r requirements.txt

Configuring the OpenAI environment variables

These scripts can be run against an Azure OpenAI account, OpenAI.com, a local Ollama server, or GitHub models, depending on the environment variables you set. A sketch of the resulting client setup follows these steps.

  1. Copy the .env.sample file to a new file called .env:

    cp .env.sample .env
  2. For Azure OpenAI, create an Azure OpenAI gpt-3.5 or gpt-4 deployment (perhaps using this template), and customize the .env file with your Azure OpenAI endpoint and deployment id.

    API_HOST=azure
    AZURE_OPENAI_ENDPOINT=https://YOUR-AZURE-OPENAI-SERVICE-NAME.openai.azure.com
    AZURE_OPENAI_DEPLOYMENT=YOUR-AZURE-DEPLOYMENT-NAME
    AZURE_OPENAI_VERSION=2024-03-01-preview

    If you are not yet logged into the Azure account associated with that deployment, run this command to log in:

    az login
  3. For OpenAI.com, customize the .env file with your OpenAI API key and desired model name.

    API_HOST=openai
    OPENAI_KEY=YOUR-OPENAI-API-KEY
    OPENAI_MODEL=gpt-3.5-turbo
  4. For Ollama, customize the .env file with your Ollama endpoint and model name (any model you've pulled).

    API_HOST=ollama
    OLLAMA_ENDPOINT=http://localhost:11434/v1
    OLLAMA_MODEL=llama2

    If you're running inside the Dev Container, replace localhost with host.docker.internal.

  5. For GitHub models, customize the .env file with your GitHub model name.

    API_HOST=github
    GITHUB_MODEL=gpt-4o

    You'll need a GITHUB_TOKEN environment variable that stores a GitHub personal access token. If you're running this inside a GitHub Codespace, the token will be automatically available. If not, generate a new personal access token and run this command to set the GITHUB_TOKEN environment variable:

    export GITHUB_TOKEN="<your-github-token-goes-here>"
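
Putting the four configurations together, here is a hedged sketch of how a script might construct the right client from API_HOST. The variable names match the samples above, but details such as the GitHub models endpoint URL are assumptions, and the repository's actual setup code may differ.

import os

import azure.identity
import openai
from dotenv import load_dotenv

load_dotenv()

API_HOST = os.environ["API_HOST"]

if API_HOST == "azure":
    # Keyless auth: relies on the `az login` session described above
    token_provider = azure.identity.get_bearer_token_provider(
        azure.identity.DefaultAzureCredential(),
        "https://cognitiveservices.azure.com/.default",
    )
    client = openai.AzureOpenAI(
        api_version=os.environ["AZURE_OPENAI_VERSION"],
        azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
        azure_ad_token_provider=token_provider,
    )
    model = os.environ["AZURE_OPENAI_DEPLOYMENT"]
elif API_HOST == "ollama":
    # Ollama serves an OpenAI-compatible API; a key is required but unused
    client = openai.OpenAI(base_url=os.environ["OLLAMA_ENDPOINT"], api_key="nokey")
    model = os.environ["OLLAMA_MODEL"]
elif API_HOST == "github":
    # Assumed OpenAI-compatible endpoint for GitHub models
    client = openai.OpenAI(
        base_url="https://models.inference.ai.azure.com",
        api_key=os.environ["GITHUB_TOKEN"],
    )
    model = os.environ["GITHUB_MODEL"]
else:
    client = openai.OpenAI(api_key=os.environ["OPENAI_KEY"])
    model = os.environ["OPENAI_MODEL"]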
