GPT-RAG (forked from Azure/GPT-RAG)

Sharing the learning we have been gathering along the way to enable Azure OpenAI at enterprise scale in a secure manner. GPT-RAG core is a Retrieval-Augmented Generation pattern running in Azure, using Azure Cognitive Search for retrieval and Azure OpenAI large language models to power ChatGPT-style and Q&A experiences.


The RAG pattern enables businesses to use the reasoning capabilities of LLMs, using their existing models to process and generate responses based on new data. RAG facilitates periodic data updates without the need for fine-tuning, thereby streamlining the integration of LLMs into businesses.

The Enterprise RAG Solution Accelerator (GPT-RAG) offers a robust architecture tailored for enterprise-grade deployment of the RAG pattern. It delivers grounded responses and is built on Zero Trust security and Responsible AI principles, with availability, scalability, and auditability in mind. It is ideal for organizations moving from exploration and PoC stages to full-scale production and MVPs.

Enterprise RAG Community

Components

GPT-RAG follows a modular approach, consisting of three components, each with a specific function.

Concepts

Learn more about the RAG pattern and the GPT-RAG architecture.

Getting Started

This guide will walk you through the deployment process of Enterprise RAG. There are two deployment options available, Basic Architecture and Zero Trust Architecture. Before beginning the deployment, please ensure you have prepared all the necessary tools and services as outlined in the Pre-requisites section.

Pre-requisites


Note: If you implement the Zero-trust architecture described below, you will only need Node.js and Python for the second part of the procedure, which you will carry out on the VM created during the deployment process of this architecture.
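
If you still need the Azure Developer CLI itself, the snippet below is one possible way to install it; it assumes winget is available on Windows and that the official aka.ms install script is acceptable on Linux/macOS, so adapt it to your environment.

# Windows (winget)
winget install microsoft.azd

# Linux/macOS (official install script)
curl -fsSL https://aka.ms/install-azd.sh | bash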

Basic Architecture Deployment

For quick demonstrations or proof-of-concept projects without network isolation requirements, you can deploy the accelerator using its basic architecture.

Basic Architecture

The deployment procedure is straightforward: install the prerequisites and follow these four steps using the Azure Developer CLI (azd) in a terminal:

1 Download the Repository:

azd init -t azure/gpt-rag

2 Login to Azure:

azd auth login

3 Build the infrastructure and deploy the components:

azd up

4 Add source documents to object storage

Upload your documents to the 'documents' folder in the default storage account, whose name starts with 'strag', as shown in the sample image below.

storage_sample
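
If you prefer the command line to the Azure portal for this step, the sketch below uses the Azure CLI to upload a local folder of files; it assumes 'documents' is a blob container and uses a placeholder for the storage account name, which you must replace with the actual 'strag...' account created by the deployment.

# upload every file in ./my-docs to the 'documents' container
# <storage-account-name> is a placeholder for your strag... account
az storage blob upload-batch --account-name <storage-account-name> --destination documents --source ./my-docs --auth-mode login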

Zero Trust Architecture Deployment

For more secure and isolated deployments, you can opt for the Zero Trust architecture. This architecture is ideal for production environments where network isolation and stringent security measures are highly valued.

Zero Trust Architecture

Deploying the Zero Trust architecture follows a similar procedure to the Basic Architecture deployment, but includes some additional steps. Refer to the instructions below for a detailed guide on deploying this option:

1 Download the Repository

azd init -t azure/gpt-rag

2 Enable network isolation

azd env set AZURE_NETWORK_ISOLATION true  
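
Before provisioning, you can confirm the value was recorded by listing the environment values; the output should include AZURE_NETWORK_ISOLATION set to true.

# list the values stored in the current azd environment
azd env get-values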

3 Login to Azure:

azd auth login

4 Provision the infrastructure:

azd provision

5 Next, you will use the Virtual Machine with the Bastion connection (created during step 4) to continue the deployment.

Log into the created VM with the user gptrag, authenticating with the password stored in the Key Vault, similar to the figure below:


Keyvault Login
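
As an alternative to copying the password from the portal, you can read the secret with the Azure CLI as sketched below; the vault and secret names are placeholders, since the actual names are generated during provisioning.

# read the VM password from Key Vault (both names below are placeholders)
az keyvault secret show --vault-name <your-keyvault-name> --name <vm-password-secret> --query value -o tsv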

6 Upon accessing Windows, install PowerShell; the other prerequisites are already installed on the VM.

7 Open the command prompt and run the following command to update azd to the latest version:

choco upgrade azd  

After updating azd, simply close and reopen the terminal.
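
To confirm the upgrade took effect after reopening the terminal, you can print the installed CLI version:

# verify the installed Azure Developer CLI version
azd version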

8 Create a new directory, for example deploy, and then enter it.

mkdir deploy  
cd deploy  

To complete the deployment, run the following commands in the command prompt:

azd init -t azure/gpt-rag  
azd auth login   
azd env refresh  
azd package  
azd deploy  

Note: when running azd init -t azure/gpt-rag and azd env refresh, use the same environment name, subscription, and region that were used when the infrastructure was initially provisioned.
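
For example, assuming the infrastructure was provisioned with an environment named my-gptrag-env (a placeholder), the sequence on the VM could be run as follows; the -e flag passes the environment name explicitly instead of waiting for the interactive prompt.

# reuse the environment name from the initial provisioning (placeholder shown)
azd init -t azure/gpt-rag -e my-gptrag-env
azd auth login
azd env refresh
azd package
azd deploy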

Done! The Zero Trust deployment is complete.

Customizing your Deployment

The deployment process outlined in the Getting Started section sets up Azure resources and deploys the accelerator components with a standard configuration. For those looking to tailor the deployment more closely to their specific requirements, the Custom Deployment section offers further customization possibilities.

Integrating with Additional Data Sources

If you're looking to expand your data retrieval capabilities by adding new data sources, consider integrating Bing Custom Search, SQL Server, and Teradata. For more information, refer to the AI Integration Hub page.

Additional Resources

Troubleshooting

Refer to the Troubleshooting page if you encounter errors during the deployment process.

Evaluating

Querying Conversation History

Pricing Estimation

Governance

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.opensource.microsoft.com.

When you submit a pull request, a CLA bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., status check, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Trademarks

This project may contain trademarks or logos for projects, products, or services. Authorized use of Microsoft trademarks or logos is subject to and must follow Microsoft's Trademark & Brand Guidelines. Use of Microsoft trademarks or logos in modified versions of this project must not cause confusion or imply Microsoft sponsorship. Any use of third-party trademarks or logos is subject to those third parties' policies.
