This project is a minimal Python command-line client that demonstrates how to build a conversational loop with the OpenAI Chat Completions API. It loads configuration from environment variables, maintains the evolving conversation history, and showcases both standard text responses and structured JSON outputs.
- Entry point: `basic/main.py` contains the interactive loop that repeatedly collects user prompts, calls the OpenAI API, and prints responses.
- Environment management: The script relies on `python-dotenv` to read your `OPENAI_API_KEY` from a local `.env` file or the environment so that credentials are not hard-coded.
- Dependencies: Package requirements are listed in `requirements.txt` and include the official `openai` SDK.
- Conversation history tracking – Messages are stored in a shared list so each API call includes the full dialogue context, enabling coherent multi-turn conversations.
- Recursive prompt loop – The `prompt_user()` function keeps the interaction going until you terminate the script, making it easy to iterate quickly on prompts.
- Multiple response formats – An alternate helper, `prompt_user2()`, demonstrates how to request JSON-formatted answers by supplying `response_format={"type": "json_object"}` to the API.
- Environment-first configuration – Credentials are pulled from environment variables, letting you manage secrets securely outside of your source code.
- Install dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Set your API key:

  ```bash
  echo "OPENAI_API_KEY=sk-your-key" > .env
  ```

- Run the chat client:

  ```bash
  python basic/main.py
  ```
When executed, the script prints a `You:` prompt in the terminal. Enter a message to receive a model-generated reply tagged with `AI:`. Conversation state is printed after each exchange so you can inspect the request payload being sent on subsequent turns.
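An exchange might look like this (the reply text is illustrative; the model's wording will vary):

```text
You: Hello!
AI: Hi there! How can I help you today?
[{'role': 'user', 'content': 'Hello!'}, {'role': 'assistant', 'content': 'Hi there! How can I help you today?'}]
You:
```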
```
gpt_tutorial/
├── basic/
│   └── main.py          # CLI chat client using OpenAI Chat Completions
├── requirements.txt     # Python dependencies required to run the client
└── README.md            # Project documentation (this file)
```