Fine-Tuning LLMs on Azure is a modular guide to customizing both OpenAI and open-source language models on Azure. Designed for Data Scientists, Machine Learning Engineers, and readers without a deep technical background alike, this repository offers a clear, scalable path from beginner to expert in LLM fine-tuning, with practical, real-world examples on the Azure cloud platform.
🔥 New (2025-07-07): Phi-4-mini Fine-Tuning using Azure Python SDK (Low-Code) [Jump to the demo]
🔥 New (2025-06-26): Phi-4-mini Fine-Tuning using Azure AI Foundry UI Dashboard (No-Code) [Jump to the demo]
🔥 Updated (2025-06-22): Llama3.2-11B Vision Fine-Tuning using Unsloth AI Open Source (Pro-Code) Python SDK [Jump to the notebook]
🔥 New (2025-06-15): GPT-4o DPO Fine-Tuning using Azure Machine Learning (Low-Code) Python SDK [Jump to the notebook]
🔥 New (2025-06-15): GPT-4o Fine-Tuning using Azure Python SDK (Low-Code) [Jump to the demo]
🔥 New (2025-06-09): GPT-4.1-mini Fine-Tuning using Azure AI Foundry UI Dashboard (No-Code) [Jump to the demo]
🔥 New (2025-06-09): GPT-4o-mini Fine-Tuning using Azure AI Foundry UI Dashboard (No-Code) [Jump to the demo]
Fine-Tuning, or Supervised Fine-Tuning, retrains an existing pre-trained LLM using example data, resulting in a new "custom" fine-tuned LLM that has been optimized for the provided task-specific examples.
Typically, we use Fine-Tuning to:
- improve LLM performance on specific tasks.
- introduce information that wasn't well represented by the base model.
Good use cases include:
- steering the LLM outputs in a specific style or tone.
- handling prompts that are too long or complex to fit into the LLM context window.
You may consider Fine-Tuning when:
- you have tried Prompt Engineering and RAG approaches.
- latency is critically important to the use case.
- high accuracy is required to meet the customer requirement.
- you have thousands of high-quality samples with ground-truth data.
- you have clear evaluation metrics to benchmark fine-tuned models.
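The supervised fine-tuning labs below train on example data in the chat-style JSONL format used by Azure OpenAI fine-tuning jobs: one JSON object per line, each holding a `messages` list. A minimal, hypothetical sketch of preparing and validating such a file (the sample content and file name are illustrative, not taken from the labs):

```python
# Sketch: build a chat-format JSONL training file for supervised fine-tuning.
# Sample content and file name are illustrative placeholders.
import json

samples = [
    {
        "messages": [
            {"role": "system", "content": "You answer Azure support questions concisely."},
            {"role": "user", "content": "How do I rotate an Azure OpenAI API key?"},
            {"role": "assistant", "content": "In the Azure portal, open your resource, select 'Keys and Endpoint', then choose 'Regenerate Key'."},
        ]
    },
    # ...ideally thousands of high-quality samples, one JSON object per line
]

def validate_sample(sample: dict) -> bool:
    """Check a record has a non-empty messages list with valid roles and string content."""
    messages = sample.get("messages")
    if not isinstance(messages, list) or not messages:
        return False
    return all(
        m.get("role") in {"system", "user", "assistant"}
        and isinstance(m.get("content"), str)
        for m in messages
    )

with open("train.jsonl", "w", encoding="utf-8") as f:
    for sample in samples:
        assert validate_sample(sample), f"Malformed sample: {sample}"
        f.write(json.dumps(sample, ensure_ascii=False) + "\n")
```

Once uploaded to your Azure OpenAI resource, the resulting file ID is what the fine-tuning job consumes; the exact upload step varies between the No-Code dashboard and the SDK labs.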
Lab 1: LLM Fine-Tuning via Azure AI Foundry Dashboard
- Lab 1.1: Supervised Fine-Tuning GPT-3.5 Models (1h duration)
- Lab 1.2: Supervised Fine-Tuning Llama2 Models (1h duration)
- Lab 1.3: Supervised Fine-Tuning GPT-4o-mini Model (1h duration)
- Lab 1.4: Supervised Fine-Tuning GPT-4.1-mini Model (1h duration)
- Lab 1.5: Supervised Fine-Tuning Phi-4-mini Model (1h duration)
Lab 2: LLM Fine-Tuning via Azure Python SDK
- Lab 2.1: Supervised Fine-Tuning GPT-3.5 Models (2h duration)
- Lab 2.2: Supervised Fine-Tuning Llama2 Models (2h duration)
- Lab 2.3: Supervised Fine-Tuning GPT-4o Model (2h duration)
- Lab 2.4: DPO Fine-Tuning GPT-4o Model (2h duration)
- Lab 2.5: Supervised Fine-Tuning Phi-4 Model (2h duration)
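Lab 2.4 uses Direct Preference Optimization (DPO), which learns from preference pairs rather than single target completions: each record carries a shared prompt plus a preferred and a non-preferred response. A hedged sketch of one such record, assuming the preference-pair JSONL field names used by OpenAI-style DPO fine-tuning (verify the exact schema against the current Azure docs for your API version):

```python
# Illustrative DPO preference-pair record: one shared prompt, one preferred
# and one non-preferred assistant response. Field names ("input",
# "preferred_output", "non_preferred_output") are an assumption based on the
# OpenAI preference fine-tuning format; confirm against current documentation.
import json

record = {
    "input": {
        "messages": [
            {"role": "user", "content": "Summarize what Azure Blob Storage is in one sentence."}
        ]
    },
    "preferred_output": [
        {"role": "assistant", "content": "Azure Blob Storage is Microsoft's object storage service for large amounts of unstructured data."}
    ],
    "non_preferred_output": [
        {"role": "assistant", "content": "It's a storage thing in the cloud."}
    ],
}

def validate_dpo_record(rec: dict) -> bool:
    """Ensure the record has an input message list plus both response variants."""
    has_input = isinstance(rec.get("input", {}).get("messages"), list)
    has_pair = all(
        isinstance(rec.get(key), list) and rec.get(key)
        for key in ("preferred_output", "non_preferred_output")
    )
    return has_input and has_pair

with open("dpo_train.jsonl", "w", encoding="utf-8") as f:
    assert validate_dpo_record(record)
    f.write(json.dumps(record, ensure_ascii=False) + "\n")
```

As with supervised fine-tuning, quality matters more than volume: preference pairs where the two responses differ only in the dimension you want to steer (tone, accuracy, verbosity) give the clearest training signal.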
Lab 3: LLM Fine-Tuning via Open Source Tools
- Lab 3.1: Supervised Fine-Tuning Llama3.2-11B Vision Model using Unsloth AI Framework (3h duration)
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.
When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT license.