---
title: Azure OpenAI Service models
titleSuffix: Azure OpenAI
description: Learn about the different model capabilities that are available with Azure OpenAI.
ms.service: azure-ai-openai
ms.topic: conceptual
ms.date: 09/09/2024
ms.custom: references_regions, build-2023, build-2023-dataai
manager: nitinme
author: mrbullwinkle
ms.author: mbullwin
recommendations: false
---

Azure OpenAI Service models

Azure OpenAI Service is powered by a diverse set of models with different capabilities and price points. Model availability varies by region and cloud. For Azure Government model availability, please refer to Azure Government OpenAI Service.

| Models | Description |
|--|--|
| GPT-4o & GPT-4o mini & GPT-4 Turbo | The latest and most capable Azure OpenAI models, with multimodal versions that can accept both text and images as input. |
| GPT-4 | A set of models that improve on GPT-3.5 and can understand and generate natural language and code. |
| GPT-3.5 | A set of models that improve on GPT-3 and can understand and generate natural language and code. |
| Embeddings | A set of models that can convert text into numerical vector form to facilitate text similarity. |
| DALL-E | A series of models that can generate original images from natural language. |
| Whisper | A series of models in preview that can transcribe and translate speech to text. |
| Text to speech (Preview) | A series of models in preview that can synthesize text to speech. |

GPT-4o and GPT-4 Turbo

GPT-4o integrates text and images in a single model, enabling it to handle multiple data types simultaneously. This multimodal approach enhances accuracy and responsiveness in human-computer interactions. GPT-4o matches GPT-4 Turbo in English text and coding tasks while offering superior performance in non-English languages and vision tasks, setting new benchmarks for AI capabilities.

How do I access the GPT-4o and GPT-4o mini models?

GPT-4o and GPT-4o mini are available for standard and global standard model deployment.

You need to create or use an existing resource in a supported standard or global standard region where the model is available.

When your resource is created, you can deploy the GPT-4o models. If you are performing a programmatic deployment, the model names are:

  • gpt-4o Version 2024-08-06
  • gpt-4o Version 2024-05-13
  • gpt-4o-mini Version 2024-07-18
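Once a model is deployed, requests reference the deployment name you chose, not the underlying model ID. A minimal sketch with the openai Python package (v1.x); the endpoint, key, and deployment name are placeholders you supply:

```python
# Model name/version pairs listed above, handy for programmatic deployment scripts.
GPT4O_MODELS = {
    "gpt-4o": ["2024-08-06", "2024-05-13"],
    "gpt-4o-mini": ["2024-07-18"],
}

def chat(endpoint: str, api_key: str, deployment: str, prompt: str) -> str:
    """Send one chat turn to an existing GPT-4o deployment (network call).

    `deployment` is the name you chose when deploying the model, not the
    model ID. The import is inside the function so this module loads even
    without the openai package installed.
    """
    from openai import AzureOpenAI  # pip install openai

    client = AzureOpenAI(
        azure_endpoint=endpoint,
        api_key=api_key,
        api_version="2024-06-01",  # assumption: any GA API version with chat completions
    )
    response = client.chat.completions.create(
        model=deployment,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content
```

This is a sketch, not the only pattern: Microsoft Entra ID tokens can replace the API key, and the same call shape works for GPT-4 Turbo deployments.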

GPT-4 Turbo

GPT-4 Turbo is a large multimodal model (accepting text or image inputs and generating text) that can solve difficult problems with greater accuracy than any of OpenAI's previous models. Like GPT-3.5 Turbo and older GPT-4 models, GPT-4 Turbo is optimized for chat and works well for traditional completions tasks.

[!INCLUDE GPT-4 Turbo]

GPT-4

GPT-4 is the predecessor to GPT-4 Turbo. Both the GPT-4 and GPT-4 Turbo models have a base model name of gpt-4. You can distinguish between the GPT-4 and Turbo models by examining the model version.

  • gpt-4 Version 0314
  • gpt-4 Version 0613
  • gpt-4-32k Version 0613

You can see the token context length supported by each model in the model summary table.

GPT-4 and GPT-4 Turbo models

  • These models can only be used with the Chat Completion API.

See model versions to learn about how Azure OpenAI Service handles model version upgrades, and working with models to learn how to view and configure the model version settings of your GPT-4 deployments.

| Model ID | Description | Max Request (tokens) | Training Data (up to) |
|--|--|--|--|
| gpt-4o (2024-08-06) | GPT-4o (Omni), latest large GA model <br> - Structured outputs <br> - Text, image processing <br> - JSON Mode <br> - Parallel function calling <br> - Enhanced accuracy and responsiveness <br> - Parity with English text and coding tasks compared to GPT-4 Turbo with Vision <br> - Superior performance in non-English languages and in vision tasks | Input: 128,000 <br> Output: 16,384 | Oct 2023 |
| gpt-4o-mini (2024-07-18) | GPT-4o mini, latest small GA model <br> - Fast, inexpensive, capable model ideal for replacing GPT-3.5 Turbo series models <br> - Text, image processing <br> - JSON Mode <br> - Parallel function calling | Input: 128,000 <br> Output: 16,384 | Oct 2023 |
| gpt-4o (2024-05-13) | GPT-4o (Omni) <br> - Text, image processing <br> - JSON Mode <br> - Parallel function calling <br> - Enhanced accuracy and responsiveness <br> - Parity with English text and coding tasks compared to GPT-4 Turbo with Vision <br> - Superior performance in non-English languages and in vision tasks | Input: 128,000 <br> Output: 4,096 | Oct 2023 |
| gpt-4 (turbo-2024-04-09) | GPT-4 Turbo with Vision, new GA model <br> - Replacement for all previous GPT-4 preview models (vision-preview, 1106-Preview, 0125-Preview) <br> - Feature availability currently differs depending on method of input and deployment type | Input: 128,000 <br> Output: 4,096 | Dec 2023 |
| gpt-4 (0125-Preview)* | GPT-4 Turbo Preview, preview model <br> - Replaces 1106-Preview <br> - Better code generation performance <br> - Reduces cases where the model doesn't complete a task <br> - JSON Mode <br> - Parallel function calling <br> - Reproducible output (preview) | Input: 128,000 <br> Output: 4,096 | Dec 2023 |
| gpt-4 (vision-preview) | GPT-4 Turbo with Vision Preview, preview model <br> - Accepts text and image input <br> - Supports enhancements <br> - JSON Mode <br> - Parallel function calling <br> - Reproducible output (preview) | Input: 128,000 <br> Output: 4,096 | Apr 2023 |
| gpt-4 (1106-Preview) | GPT-4 Turbo Preview, preview model <br> - JSON Mode <br> - Parallel function calling <br> - Reproducible output (preview) | Input: 128,000 <br> Output: 4,096 | Apr 2023 |
| gpt-4-32k (0613) | Older GA model <br> - Basic function calling with tools | 32,768 | Sep 2021 |
| gpt-4 (0613) | Older GA model <br> - Basic function calling with tools | 8,192 | Sep 2021 |
| gpt-4-32k (0314) | Older GA model <br> - Retirement information | 32,768 | Sep 2021 |
| gpt-4 (0314) | Older GA model <br> - Retirement information | 8,192 | Sep 2021 |

Caution

We don't recommend using preview models in production. We will upgrade all deployments of preview models to either future preview versions or to the latest stable/GA version. Models designated preview do not follow the standard Azure OpenAI model lifecycle.

  • GPT-4 version 0125-preview is an updated version of the GPT-4 Turbo preview previously released as version 1106-preview.
  • GPT-4 version 0125-preview completes tasks such as code generation more completely than gpt-4-1106-preview. Because of this, depending on the task, customers may find that gpt-4-0125-preview generates more output than gpt-4-1106-preview. We recommend customers compare the outputs of the new model. gpt-4-0125-preview also addresses bugs in gpt-4-1106-preview with UTF-8 handling for non-English languages.
  • GPT-4 version turbo-2024-04-09 is the latest GA release and replaces 0125-Preview, 1106-preview, and vision-preview.

Important

  • gpt-4 versions 1106-Preview, 0125-Preview, and vision-preview will be upgraded with a stable version of gpt-4 in the future. Deployments of gpt-4 versions 1106-Preview, 0125-Preview, and vision-preview set to "Auto-update to default" and "Upgrade when expired" will start to be upgraded after the stable version is released. For each deployment, a model version upgrade takes place with no interruption in service for API calls. Upgrades are staged by region and the full upgrade process is expected to take 2 weeks. Deployments of gpt-4 versions 1106-Preview, 0125-Preview, and vision-preview set to "No autoupgrade" will not be upgraded and will stop operating when the preview version is upgraded in the region. See Azure OpenAI model retirements and deprecations for more information on the timing of the upgrade.

GPT-3.5

GPT-3.5 models can understand and generate natural language or code. The most capable and cost effective model in the GPT-3.5 family is GPT-3.5 Turbo, which has been optimized for chat and works well for traditional completions tasks as well. GPT-3.5 Turbo is available for use with the Chat Completions API. GPT-3.5 Turbo Instruct has similar capabilities to text-davinci-003 using the Completions API instead of the Chat Completions API. We recommend using GPT-3.5 Turbo and GPT-3.5 Turbo Instruct over legacy GPT-3.5 and GPT-3 models.

| Model ID | Description | Max Request (tokens) | Training Data (up to) |
|--|--|--|--|
| gpt-35-turbo (0125) NEW | Latest GA model <br> - JSON Mode <br> - Parallel function calling <br> - Reproducible output (preview) <br> - Higher accuracy at responding in requested formats <br> - Fix for a bug which caused a text encoding issue for non-English language function calls | Input: 16,385 <br> Output: 4,096 | Sep 2021 |
| gpt-35-turbo (1106) | Older GA model <br> - JSON Mode <br> - Parallel function calling <br> - Reproducible output (preview) | Input: 16,385 <br> Output: 4,096 | Sep 2021 |
| gpt-35-turbo-instruct (0914) | Completions endpoint only <br> - Replacement for legacy completions models | 4,097 | Sep 2021 |
| gpt-35-turbo-16k (0613) | Older GA model <br> - Basic function calling with tools | 16,384 | Sep 2021 |
| gpt-35-turbo (0613) | Older GA model <br> - Basic function calling with tools | 4,096 | Sep 2021 |
| gpt-35-turbo<sup>1</sup> (0301) | Older GA model <br> - Retirement information | 4,096 | Sep 2021 |

To learn more about how to interact with GPT-3.5 Turbo and the Chat Completions API, check out our in-depth how-to.

<sup>1</sup> This model will accept requests of more than 4,096 tokens, but exceeding the 4,096 input token limit isn't recommended because newer versions of the model are capped at 4,096 tokens. If you encounter issues when exceeding 4,096 input tokens with this model, note that this configuration isn't officially supported.
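Because the caps differ per version, it can help to validate request sizes before calling the API. A small hypothetical helper; the figures mirror the Max Request column above (an input cap for 0125/1106, an overall request cap for the older versions), and the token count itself would come from a tokenizer such as tiktoken:

```python
# Token caps per gpt-35-turbo version, mirroring the Max Request column above.
# For 0125/1106 the figure is the input cap; for 0613/0301 it's the overall
# request cap (and for 0301, exceeding it isn't officially supported).
GPT35_TOKEN_CAPS = {
    "0125": 16_385,
    "1106": 16_385,
    "0613": 4_096,
    "0301": 4_096,
}

def fits_token_cap(version: str, n_tokens: int) -> bool:
    """Return True when a request of n_tokens stays within the version's cap."""
    cap = GPT35_TOKEN_CAPS.get(version)
    if cap is None:
        raise ValueError(f"unknown gpt-35-turbo version: {version}")
    return n_tokens <= cap
```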

Embeddings

text-embedding-3-large is the latest and most capable embedding model. Upgrading between embedding models is not possible. To move from text-embedding-ada-002 to text-embedding-3-large, you need to generate new embeddings.

  • text-embedding-3-large
  • text-embedding-3-small
  • text-embedding-ada-002

In testing, OpenAI reports that both the large and small third-generation embedding models offer better average multi-language retrieval performance on the MIRACL benchmark while still maintaining performance for English tasks on the MTEB benchmark.

| Evaluation benchmark | text-embedding-ada-002 | text-embedding-3-small | text-embedding-3-large |
|--|--|--|--|
| MIRACL average | 31.4 | 44.0 | 54.9 |
| MTEB average | 61.0 | 62.3 | 64.6 |

The third-generation embedding models support reducing the size of the embedding via a new dimensions parameter. Larger embeddings are typically more expensive from a compute, memory, and storage perspective, so being able to adjust the number of dimensions gives you more control over overall cost and performance. The dimensions parameter is not supported in all versions of the OpenAI 1.x Python library; to take advantage of it, upgrade to the latest version: pip install openai --upgrade.
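A sketch of using the dimensions parameter, alongside the local alternative of shortening a full-size embedding yourself (truncate, then rescale to unit length so cosine comparisons stay meaningful). The deployment name is an assumption; the truncation helper is pure Python:

```python
import math

def truncate_and_normalize(embedding: list[float], dim: int) -> list[float]:
    """Shorten an embedding to `dim` entries and rescale to unit length,
    preserving cosine-similarity comparisons between shortened vectors."""
    cut = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in cut))
    return [x / norm for x in cut]

def embed_reduced(client, deployment: str, text: str, dim: int) -> list[float]:
    """Request a reduced-dimension embedding directly from the service.

    `client` is an openai.AzureOpenAI instance and `deployment` names a
    text-embedding-3-large or text-embedding-3-small deployment (both are
    assumptions supplied by the caller). Network call.
    """
    response = client.embeddings.create(model=deployment, input=text, dimensions=dim)
    return response.data[0].embedding
```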

In OpenAI's MTEB benchmark testing, even when the third-generation models' dimensions are reduced to fewer than the 1,536 dimensions of text-embedding-ada-002, performance remains slightly better.

DALL-E

The DALL-E models generate images from text prompts that the user provides. DALL-E 3 is generally available for use with the REST APIs. DALL-E 2 and DALL-E 3 with client SDKs are in preview.

Whisper

The Whisper models can be used for speech to text.

You can also use the Whisper model via Azure AI Speech batch transcription API. Check out What is the Whisper model? to learn more about when to use Azure AI Speech vs. Azure OpenAI Service.

Text to speech (Preview)

The OpenAI text to speech models, currently in preview, can be used to synthesize text to speech.

You can also use the OpenAI text to speech voices via Azure AI Speech. To learn more, see OpenAI text to speech voices via Azure OpenAI Service or via Azure AI Speech guide.

Model summary table and region availability

Note

This article primarily covers model/region availability that applies to all Azure OpenAI customers with deployment types of Standard. Some select customers have access to model/region combinations that are not listed in the unified table below. For more information on Provisioned deployments, see our Provisioned guidance.

Standard deployment model availability

[!INCLUDE Standard Models]

This table doesn't include global standard model deployment regional availability for GPT-4o, or fine-tuning regional availability information. Consult the dedicated global standard deployment section and the fine-tuning section for this information.

Standard and global standard deployment model quota

[!INCLUDE Quota]

Provisioned deployment model availability

[!INCLUDE Provisioned]

Note

The provisioned version of gpt-4 Version: turbo-2024-04-09 is currently limited to text only.

How do I get access to Provisioned?

You need to speak with your Microsoft sales/account team to acquire provisioned throughput. If you don't have a sales/account team, you can't purchase provisioned throughput at this time.

For more information on Provisioned deployments, see our Provisioned guidance.

Global standard model availability

gpt-4o Version: 2024-08-06

Supported regions:

  • eastus
  • eastus2
  • northcentralus
  • southcentralus
  • swedencentral
  • westus
  • westus3

gpt-4o Version: 2024-05-13

Supported regions:

  • australiaeast
  • brazilsouth
  • canadaeast
  • eastus
  • eastus2
  • francecentral
  • germanywestcentral
  • japaneast
  • koreacentral
  • northcentralus
  • norwayeast
  • polandcentral
  • spaincentral
  • southafricanorth
  • southcentralus
  • southindia
  • swedencentral
  • switzerlandnorth
  • uksouth
  • westeurope
  • westus
  • westus3

gpt-4o-mini Version: 2024-07-18

Supported regions:

  • eastus
  • swedencentral

Global batch model availability

Region and model support

The following models support global batch:

| Model | Version | Input format |
|--|--|--|
| gpt-4o-mini | 2024-07-18 | text + image |
| gpt-4o | 2024-05-13 | text + image |
| gpt-4 | turbo-2024-04-09 | text |
| gpt-4 | 0613 | text |
| gpt-35-turbo | 0125 | text |
| gpt-35-turbo | 1106 | text |
| gpt-35-turbo | 0613 | text |

Global batch is currently supported in the following regions:

  • East US
  • West US
  • Sweden Central

GPT-4 and GPT-4 Turbo model availability

Public cloud regions

[!INCLUDE GPT-4]

Select customer access

In addition to the regions above, which are available to all Azure OpenAI customers, some select pre-existing customers have been granted access to versions of GPT-4 in additional regions:

| Model | Region |
|--|--|
| gpt-4 (0314) <br> gpt-4-32k (0314) | East US <br> France Central <br> South Central US <br> UK South |
| gpt-4 (0613) <br> gpt-4-32k (0613) | East US <br> East US 2 <br> Japan East <br> UK South |

GPT-3.5 models

Important

The NEW gpt-35-turbo (0125) model has various improvements, including higher accuracy at responding in requested formats and a fix for a bug which caused a text encoding issue for non-English language function calls.

GPT-3.5 Turbo is used with the Chat Completion API. GPT-3.5 Turbo version 0301 can also be used with the Completions API, though this is not recommended. GPT-3.5 Turbo versions 0613 and 1106 only support the Chat Completions API.

GPT-3.5 Turbo version 0301 is the first version of the model released. Version 0613 is the second version of the model and adds function calling support.

See model versions to learn about how Azure OpenAI Service handles model version upgrades, and working with models to learn how to view and configure the model version settings of your GPT-3.5 Turbo deployments.

GPT-3.5-Turbo model availability

Public cloud regions

[!INCLUDE GPT-35-Turbo]

Embeddings models

These models can only be used with Embedding API requests.

Note

text-embedding-3-large is the latest and most capable embedding model. Upgrading between embedding models is not possible. In order to migrate from using text-embedding-ada-002 to text-embedding-3-large you would need to generate new embeddings.

| Model ID | Max Request (tokens) | Output Dimensions | Training Data (up to) |
|--|--|--|--|
| text-embedding-ada-002 (version 2) | 8,192 | 1,536 | Sep 2021 |
| text-embedding-ada-002 (version 1) | 2,046 | 1,536 | Sep 2021 |
| text-embedding-3-large | 8,192 | 3,072 | Sep 2021 |
| text-embedding-3-small | 8,192 | 1,536 | Sep 2021 |

Note

When sending an array of inputs for embedding, the max number of input items in the array per call to the embedding endpoint is 2048.
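To embed more than 2,048 items, split the inputs across multiple calls. A minimal batching sketch in pure Python; the 2,048 figure comes from the note above, and the caller would pass each chunk to the embeddings endpoint:

```python
from typing import Iterator

MAX_EMBEDDING_INPUTS = 2048  # per-call cap on input items for the embeddings endpoint

def batched(items: list[str], size: int = MAX_EMBEDDING_INPUTS) -> Iterator[list[str]]:
    """Yield consecutive chunks of at most `size` items, preserving order."""
    for start in range(0, len(items), size):
        yield items[start:start + size]
```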

Public cloud regions

[!INCLUDE Embeddings]

DALL-E models

| Model ID | Feature Availability | Max Request (characters) |
|--|--|--|
| dalle2 (preview) | East US | 1,000 |
| dall-e-3 | East US, Australia East, Sweden Central | 4,000 |

Fine-tuning models

babbage-002 and davinci-002 are not trained to follow instructions. Query these base models only as a point of reference for evaluating the progress of a fine-tuned version.

gpt-35-turbo - fine-tuning of this model is limited to a subset of regions, and is not available in every region where the base model is available.

| Model ID | Fine-Tuning Regions | Max Request (tokens) | Training Data (up to) |
|--|--|--|--|
| babbage-002 | North Central US <br> Sweden Central <br> Switzerland West | 16,384 | Sep 2021 |
| davinci-002 | North Central US <br> Sweden Central <br> Switzerland West | 16,384 | Sep 2021 |
| gpt-35-turbo (0613) | East US 2 <br> North Central US <br> Sweden Central <br> Switzerland West | 4,096 | Sep 2021 |
| gpt-35-turbo (1106) | East US 2 <br> North Central US <br> Sweden Central <br> Switzerland West | Input: 16,385 <br> Output: 4,096 | Sep 2021 |
| gpt-35-turbo (0125) | East US 2 <br> North Central US <br> Sweden Central <br> Switzerland West | 16,385 | Sep 2021 |
| gpt-4 (0613) <sup>1</sup> | North Central US <br> Sweden Central | 8,192 | Sep 2021 |
| gpt-4o-mini <sup>1</sup> (2024-07-18) | North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |
| gpt-4o <sup>1</sup> (2024-08-06) | East US 2 <br> North Central US <br> Sweden Central | Input: 128,000 <br> Output: 16,384 <br> Training example context length: 64,536 | Oct 2023 |

1 GPT-4, GPT-4o, and GPT-4o mini fine-tuning is currently in public preview. See our GPT-4, GPT-4o, & GPT-4o mini fine-tuning safety evaluation guidance for more information.

Whisper models

| Model ID | Model Availability | Max Request (audio file size) |
|--|--|--|
| whisper | East US 2 <br> North Central US <br> Norway East <br> South India <br> Sweden Central <br> West Europe | 25 MB |

Text to speech models (Preview)

| Model ID | Model Availability |
|--|--|
| tts-1 | North Central US <br> Sweden Central |
| tts-1-hd | North Central US <br> Sweden Central |

Assistants (Preview)

For Assistants, you need a combination of a supported model and a supported region. Certain tools and capabilities require the latest models. The following models are available in the Assistants API, SDK, Azure AI Studio, and Azure OpenAI Studio. The lists below are for pay-as-you-go. For information on Provisioned Throughput Unit (PTU) availability, see provisioned throughput. The listed models and regions can be used with both Assistants v1 and v2. You can use global standard models if they are supported in the regions listed below.

Models available with Assistants:

  • gpt-35-turbo (0613)
  • gpt-35-turbo (1106)
  • fine-tuned gpt-3.5-turbo-0125
  • gpt-4 (0613)
  • gpt-4 (1106)
  • gpt-4 (0125)
  • gpt-4o (2024-05-13)
  • gpt-4o-mini (2024-07-18)

Supported regions:

  • Australia East
  • East US
  • East US 2
  • France Central
  • Japan East
  • Norway East
  • Sweden Central
  • UK South
  • West US
  • West US 3
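As a sketch, creating an assistant with the openai Python package's beta Assistants API comes down to passing a payload like the one built here to client.beta.assistants.create. The deployment name, instructions, and tool choice are hypothetical example values:

```python
def assistant_request(deployment: str, instructions: str) -> dict:
    """Build keyword arguments for client.beta.assistants.create(**payload).

    `deployment` must be the deployment name of one of the supported models
    above, deployed in one of the supported regions; both are assumptions
    the caller is responsible for.
    """
    return {
        "model": deployment,
        "instructions": instructions,
        "tools": [{"type": "code_interpreter"}],  # optional; shown as an example tool
    }
```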

Model retirement

For the latest information on model retirements, refer to the model retirement guide.

Next steps