
Model alignment

Microsoft and external researchers have found DeepSeek-R1 to be less aligned than other models, meaning it appears to have undergone less of the refinement designed to make a model's behavior and outputs safe and appropriate for users. This results in (i) a higher risk that the model will produce potentially harmful content and (ii) lower scores on safety and jailbreak benchmarks. We recommend that customers use Azure AI Content Safety in conjunction with this model and conduct their own evaluations on production systems.

Reasoning outputs

The model's reasoning output (contained within `<think>` tags) may contain more harmful content than the model's final response. Consider how your application will use or display the reasoning output; in a production setting you may want to suppress it.
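A minimal sketch of suppressing the reasoning block before display, assuming the reasoning is wrapped in `<think>…</think>` tags as in DeepSeek-R1's chat output (the helper name is illustrative, not part of any SDK):

```python
import re

# DeepSeek-R1 emits its chain-of-thought inside <think>...</think>,
# followed by the final answer. Strip that block before showing the
# completion to end users in a production setting.
THINK_BLOCK = re.compile(r"<think>.*?</think>\s*", re.DOTALL)

def final_answer(raw_completion: str) -> str:
    """Return the completion with any reasoning block removed."""
    return THINK_BLOCK.sub("", raw_completion).strip()

raw = "<think>The user asked for 2+2; that is 4.</think>The answer is 4."
print(final_answer(raw))  # -> The answer is 4.
```

Note that the reasoning block can still be logged server-side for evaluation even when it is hidden from users.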

Content filtering

When deployed via Azure AI Foundry, prompts and completions are passed through a default configuration of Azure AI Content Safety classification models to detect and prevent the output of harmful content. Configuration options for content filtering vary when you deploy a model for production in Azure AI; see the Azure AI Content Safety documentation for details.

About

Trained with a step-by-step reasoning process, DeepSeek-R1 excels at language, scientific reasoning, and coding tasks.
Context
128k input · 4k output
Training date
Undisclosed
Rate limit tier
Provider support

Languages

English and Chinese