This repo demonstrates how to use Azure OpenAI's Responses API with the Code Interpreter tool. The Code Interpreter enables your selected GPT model to build and test Python code within a secure sandbox container, generating outputs such as data visualisations and analysis results.
Using REST API calls (via the requests library), we can retrieve files generated by the Code Interpreter's Python run, including images, charts and other outputs.
Note
For the latest implementation details of Responses API in Azure, refer to the Azure AI Foundry documentation page.
- Part 1: Configuring Solution Environment
- Part 2: Calling Code Interpreter Tool
- Part 3: Retrieving Output Files from Container
To use this notebook, you'll need to set up your Azure OpenAI environment and install the required Python packages.
Ensure you have an Azure OpenAI Service resource with a model deployment that supports Code Interpreter tool calling (e.g., GPT-4.1).
This demo uses Microsoft Entra ID authentication via `DefaultAzureCredential` from the `azure.identity` package. To enable authentication, ensure your environment is properly configured by:
- Logging in via `az login` (Azure CLI), or
- Setting the relevant environment variables for a service principal, or
- Using a managed identity in an Azure environment.
Define a token provider using the `get_bearer_token_provider()` function:

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),
    "https://cognitiveservices.azure.com/.default"
)
```
Configure the following environment variables for your Azure OpenAI deployment:
| Environment Variable | Description |
|---|---|
| `AOAI_API_BASE` | Your Azure OpenAI endpoint URL (e.g., `https://<YOUR_AOAI_RESOURCE>.openai.azure.com`). |
| `AOAI_DEPLOYMENT` | The name of your model deployment (e.g., `gpt-4.1`). |
| `AOAI_API_VERSION` | The API version (e.g., `2025-04-01-preview`). |
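As a minimal sketch, these variables could be read into Python with the standard `os` module (the variable names match the table above; `os.getenv` returns `None` for anything missing, which makes misconfiguration easy to spot early):

```python
import os

# Read the Azure OpenAI settings defined in the table above.
# os.getenv returns None when a variable is not set in the current session.
AOAI_API_BASE = os.getenv("AOAI_API_BASE")        # e.g. https://<YOUR_AOAI_RESOURCE>.openai.azure.com
AOAI_DEPLOYMENT = os.getenv("AOAI_DEPLOYMENT")    # e.g. gpt-4.1
AOAI_API_VERSION = os.getenv("AOAI_API_VERSION")  # e.g. 2025-04-01-preview
```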
Install the necessary packages:
```shell
pip install openai azure-identity requests Pillow ipython
```

(`pathlib` is part of the Python standard library and does not need to be installed separately.)
Initialise the Azure OpenAI client with your environment variables and the Entra ID token provider:
```python
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=AOAI_API_BASE,
    azure_ad_token_provider=token_provider,
    api_version=AOAI_API_VERSION,
)
```
Include the Code Interpreter as a tool in your Responses API call, along with instructions (system prompt) directing the model to use the appropriate tool:
```python
response = client.responses.create(
    model=AOAI_DEPLOYMENT,
    tools=[
        {
            "type": "code_interpreter",
            "container": {"type": "auto"}
        }
    ],
    instructions="You are a helpful data analyst. You should use the Python tool to perform the required calculations.",
    input=INPUT_TEXT
)
```
When the Code Interpreter generates output files, their details (container ID, file ID and filename) can be extracted from the "annotations" section of the Responses API output:
```json
{
    "container_id": "cntr_689df3cb69648190b73217f54eeb713806df641648828556",
    "file_id": "cfile_689df41900fc81909d22ec83e7dafbe1",
    "filename": "cfile_689df41900fc81909d22ec83e7dafbe1.png"
}
```
Warning
The details shown above are examples; your actual container ID, file IDs and filenames will differ between executions.
Retrieve the content of the generated files using the REST API endpoint that references the temporary container ID and each file ID:

```
{AOAI_API_BASE}/openai/v1/containers/{container_id}/files/{file_id}/content
```
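As a sketch, the download could be performed with the `requests` library and a bearer token from the token provider defined earlier (the function names here are illustrative, not part of any SDK; check the Azure AI Foundry documentation for any query parameters your API version requires):

```python
import requests  # third-party: pip install requests


def container_file_url(base_url: str, container_id: str, file_id: str) -> str:
    # Build the file-content endpoint shown above.
    return f"{base_url}/openai/v1/containers/{container_id}/files/{file_id}/content"


def download_container_file(base_url: str, container_id: str, file_id: str, token: str) -> bytes:
    # `token` is an Entra ID bearer token, e.g. the value returned by
    # token_provider() from the environment setup section.
    resp = requests.get(
        container_file_url(base_url, container_id, file_id),
        headers={"Authorization": f"Bearer {token}"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.content
```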
After retrieving and saving the file content bytes locally, you can display the outputs in your Jupyter notebook. For example, a data visualisation might look like this:
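A small helper for the save-and-display step might look like this (the helper name and the `output` directory are assumptions, not part of the API):

```python
from pathlib import Path


def save_output_file(content: bytes, filename: str, out_dir: str = "output") -> Path:
    # Persist the downloaded bytes under a local output directory,
    # creating it if necessary, and return the saved path.
    target_dir = Path(out_dir)
    target_dir.mkdir(parents=True, exist_ok=True)
    path = target_dir / filename
    path.write_bytes(content)
    return path


# In a Jupyter notebook, the saved image can then be rendered with:
# from IPython.display import Image, display
# display(Image(filename=str(saved_path)))
```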