
RAG using Semantic Kernel with Azure OpenAI and Azure Cosmos DB for MongoDB vCore

A sample implementation of retrieval-augmented generation (RAG) that uses Azure OpenAI to generate embeddings, Azure Cosmos DB for MongoDB vCore to perform vector search, and Semantic Kernel to orchestrate the two.

How to use?

  1. Create the following resources on Microsoft Azure:

    • Azure Cosmos DB for MongoDB vCore cluster. See the Quick Start guide here.
    • Azure OpenAI resource with:
      • An embedding model deployment (e.g., text-embedding-ada-002). See the guide here.
      • A chat model deployment (e.g., gpt-35-turbo). A minimal usage sketch for both deployments follows this list.
  2. πŸ“ Start here πŸ‘‰ rag-azure-openai-cosmosdb-notebook.ipynb

notebook-full.mp4

Test it inside Codespaces 👇

Open in GitHub Codespaces

Running the web app

To run the Quart application, follow these steps:

  1. Download the project starter code locally

    git clone https://github.com/john0isaac/rag-semantic-kernel-mongodb-vcore.git
    cd rag-semantic-kernel-mongodb-vcore
  2. Install, initialize and activate a virtualenv using:

    pip install virtualenv
    python -m virtualenv venv
    source venv/bin/activate

    Note - On Windows, the venv does not have a bin directory. Therefore, you'd use the analogous command shown below (Command Prompt):

    venv\Scripts\activate
  3. Install the dependencies:

    pip install -r requirements-dev.txt
  4. Run the notebook to generate the .env file and verify that everything works first.

  5. Install the app as an editable package:

    pip install -e src
  6. Execute the following commands in your terminal to start the Quart app:

    export QUART_APP=src.quartapp
    export QUART_ENV=development
    export QUART_DEBUG=true
    quart run --reload

    For Windows (Command Prompt), use the set command shown below:

     set QUART_APP=src.quartapp
     set QUART_ENV=development
     set QUART_DEBUG=true
     quart run --reload
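    If you are using PowerShell rather than Command Prompt, the equivalent commands are:

     $env:QUART_APP = "src.quartapp"
     $env:QUART_ENV = "development"
     $env:QUART_DEBUG = "true"
     quart run --reload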
  7. Verify in the browser

Navigate to the project homepage at http://127.0.0.1:5000/ or http://localhost:5000.
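You can also confirm that the server is responding from a second terminal (assuming the default port of 5000):

    curl -i http://127.0.0.1:5000/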

website-full.mp4

Deployment

architecture-thumbnail

This repository is set up for deployment on Azure App Service (with Azure Cosmos DB for MongoDB vCore) using the configuration files in the infra folder.

To deploy your own instance, follow these steps:

  1. Sign up for a free Azure account

  2. Install the Azure Developer CLI (azd).

  3. Login to your Azure account:

    azd auth login
  4. Initialize a new azd environment:

    azd init

    It will prompt you to provide a name (like "quart-app") that will later be used in the name of the deployed resources.

  5. Provision and deploy all the resources:

    azd up

    It will prompt you to login, pick a subscription, and provide a location (like "eastus"). Then it will provision the resources in your account and deploy the latest code. If you get an error with deployment, changing the location (like to "centralus") can help, as there may be availability constraints for some of the resources.

When azd has finished deploying, you'll see an endpoint URI in the command output. Visit that URI to browse the app! 🎉
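If you need to look up the endpoint or the other values azd generated later, they are stored in the azd environment and can be printed with:

    azd env get-values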

Note

If you make any changes to the app code, you can just run this command to redeploy it:

azd deploy

Add the Data

  1. Open the Azure portal and sign in.

  2. Navigate to your App Service page.

    Azure App service screenshot with the word SSH highlighted in a red box.

  3. Select SSH from the left menu, then select Go.

  4. In the SSH terminal, run python ./scripts/add_data.py.

Add your Own Data

The Python script that adds the data is configured to accept any JSON file with your data, but you need to specify the following parameters when you run it:

  • Data file path: Path to the JSON file that contains your data. --file="./data/text-sample.json" or -f "./data/text-sample.json"

  • ID field: This is the name of the field that Cosmos DB uses to identify your database records. --id-field=id or -id id

  • Text field: This is the name of the field whose text is used to generate the vector embeddings stored in the database. --text-field=content or -txt content

  • Description field: This is the name of the description field that Cosmos DB stores along with the embeddings. --description-field=title or -desc title

    python ./scripts/add_data.py --file="./data/text-sample.json" --id-field=id --text-field=content --description-field=title
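For reference, a record shaped for the example command above could look like the following (the field names line up with the --id-field, --text-field, and --description-field values; the actual contents of ./data/text-sample.json may differ):

    [
      {
        "id": "1",
        "title": "Azure App Service",
        "content": "Azure App Service is a fully managed platform for building, deploying, and scaling web apps."
      }
    ]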

Example for Step-by-Step Manual Deployment

  1. Add your JSON data to the data folder.

  2. The workflow will trigger automatically and push your data to the Azure App Service.

  3. Open the Azure portal and sign in.

  4. Navigate to your App Service page.

    Azure App service screenshot with the word SSH highlighted in a red box.

  5. Select SSH from the left menu, then select Go.

  6. In the SSH terminal, run the following command, with the values changed to suit your data:

    python ./scripts/add_data.py --file="./data/text-sample.json" --id-field=id --text-field=content --description-field=title

Example for azd Deployment

  1. Add your JSON data to the data folder.

  2. Run azd deploy to upload the data to Azure App Service.

  3. Open the Azure portal and sign in.

  4. Navigate to your App Service page.

    Azure App service screenshot with the word SSH highlighted in a red box.

  5. Select SSH from the left menu, then select Go.

  6. In the SSH terminal, run the following command, with the values changed to suit your data:

    python ./scripts/add_data.py --file="./data/text-sample.json" --id-field=id --text-field=content --description-field=title
