diff --git a/scenarios/openai_batch_pipeline/README.md b/scenarios/openai_batch_pipeline/README.md
index 72a9958dc..f7063f948 100644
--- a/scenarios/openai_batch_pipeline/README.md
+++ b/scenarios/openai_batch_pipeline/README.md
@@ -1,7 +1,7 @@
 # Build an Open AI Pipeline to Ingest Batch Data, Perform Intelligent Operations, and Analyze in Synapse
 
 # Summary
-This scenario allows uses OpenAI to summarize and analyze customer service call logs for the ficticious company, Contoso. The data is ingested into a blob storage account, and then processed by an Azure Function. The Azure Function will return the customer sentiment, product offering the conversation was about, the topic of the call, as well as a summary of the call. These results are written into a separate desginated location in the Blob Storage. From there, Synapse Analytics is utilized to pull in the newly cleansed data to create a table that can be queried in order to derive further insights.
+This scenario uses OpenAI to summarize and analyze customer service call logs for the fictitious company, Contoso. The data is ingested into a blob storage account, and then processed by an Azure Function. The Azure Function will return the customer sentiment, the product offering the conversation was about, the topic of the call, as well as a summary of the call. These results are written into a separate designated location in the Blob Storage. From there, Synapse Analytics is utilized to pull in the newly cleansed data to create a table that can be queried in order to derive further insights.
 
 ---
 ---
@@ -30,7 +30,7 @@ This scenario allows uses OpenAI to summarize and analyze customer service call
 
 ![](../../documents/media/batcharch.png)
 
-Call logs are uploaded to a designated location in Blob Storage. 
This upload will trigger the Azure Function which utilzies the [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) for summarization, sentiment analysis, product offering the conversation was about, the topic of the call, as well as a summary of the call. These results are written into a separate desginated location in the Blob Storage. From there, Synapse Analytics is utilized to pull in the newly cleansed data to create a table that can be queried in order to derive further insights.
+Call logs are uploaded to a designated location in Blob Storage. This upload will trigger the Azure Function, which utilizes the [Azure OpenAI Service](https://azure.microsoft.com/en-us/products/cognitive-services/openai-service/) to produce a summary of the call, the customer sentiment, the product offering the conversation was about, and the topic of the call. These results are written into a separate designated location in the Blob Storage. From there, Synapse Analytics is utilized to pull in the newly cleansed data to create a table that can be queried in order to derive further insights.
 
 # Deployment
 
@@ -233,7 +233,7 @@ Once the **Sink** tile opens, choose **Inline** for the *Sink type*. Then select
 
 ![](../../documents/media/batch_dataflow9.png)
 
-We will then need to head over to the **Settings** tab and adjust the **Scehma name** and **Table name**. If you utilized the script provided earlier to make the target table, the Schema name is **dbo** and the Table name is **cs_detail**.
+We will then need to head over to the **Settings** tab and adjust the **Schema name** and **Table name**. If you utilized the script provided earlier to make the target table, the Schema name is **dbo** and the Table name is **cs_detail**.
 
 ![](../../documents/media/batch_dataflow10.png)
 
diff --git a/scenarios/prompt_engineering/02_Sample_Scenarios/03_Question_Answering.md b/scenarios/prompt_engineering/02_Sample_Scenarios/03_Question_Answering.md
index c303fafb5..fd406762e 100644
--- a/scenarios/prompt_engineering/02_Sample_Scenarios/03_Question_Answering.md
+++ b/scenarios/prompt_engineering/02_Sample_Scenarios/03_Question_Answering.md
@@ -8,7 +8,7 @@
 
 ## Overview of Question Answering
 
-One of the best ways to get the model to respond specific answers is to improve the format of the prompt. As covered before, a prompt could combine instructions, context, input, and output indicator to get improved results. While these components are not required, it becomes a good practice as the more specific you are with instruction, the better results you will get. Below is an example of how this would look following a more structured prompt. Given the often factual nature that Question-Answering requires, we should make a quick review of some of our [hyperparameters pointers](./98_Hyperparameters_Overview.md) that can be used to control the output.
+One of the best ways to get the model to respond with specific answers is to improve the format of the prompt. As covered before, a prompt can combine instructions, context, input, and an output indicator to get improved results. While these components are not required, it is good practice: the more specific you are with your instruction, the better the results you will get. Below is an example of how this would look following a more structured prompt. Given the often factual nature that Question-Answering requires, we should make a quick review of some of our [hyperparameter pointers](../98_Hyperparameters_Overview.md) that can be used to control the output.
 
 > **Note:** In short, the lower the `temperature` the more deterministic the results in the sense that the highest probable next token is always picked. 
 Increasing temperature could lead to more randomness encouraging more diverse or creative outputs. We are essentially increasing the weights of the other possible tokens. In terms of application, we might want to use lower temperature for something like fact-based QA to encourage more factual and concise responses. For poem generation or other creative tasks it might be beneficial to increase temperature.
@@ -44,4 +44,4 @@ The response listed above is a concise summarization of the supplied text and it
 
 [Previous Section (Information Extraction)](./02_Information_Extraction.md)
 
-[Next Section (Text Classification)](./04_Text_Classification.md)
\ No newline at end of file
+[Next Section (Text Classification)](./04_Text_Classification.md)
diff --git a/scenarios/prompt_engineering/02_Sample_Scenarios/04_Text_Classification.md b/scenarios/prompt_engineering/02_Sample_Scenarios/04_Text_Classification.md
index 9996a9a82..1a5d57dad 100644
--- a/scenarios/prompt_engineering/02_Sample_Scenarios/04_Text_Classification.md
+++ b/scenarios/prompt_engineering/02_Sample_Scenarios/04_Text_Classification.md
@@ -26,7 +26,7 @@ Entertainment
 
 ---
 ## Classification using One Shot or Few Shot Learning
-This topic will be covered in the next section [Advanced Concepts](./03_Advanced_Concepts.md), but it's worth mentioning here as well. One shot or few shot learning is a technique that allows you to train a model on a small amount of data and then use that model to classify new data. This is useful when you have a small amount of data, but you want to be able to classify new data that you haven't seen before.
+This topic will be covered in the next section, [Advanced Concepts](../03_Advanced_Concepts.md), but it's worth mentioning here as well. One-shot or few-shot learning is a technique that allows you to show the model a small number of examples and then use that model to classify new data. This is useful when you have only a small amount of data but still want to classify new data that you haven't seen before.
 
 *Prompt:*
 ```
@@ -54,4 +54,4 @@ You've tought the model to rank between 1 and 5 stars based on the review. You c
 
 [Previous Section (Question Answering)](./03_Question_Answering.md)
 
-[Next Section (Conversation)](./05_Conversation.md)
\ No newline at end of file
+[Next Section (Conversation)](./05_Conversation.md)
diff --git a/scenarios/prompt_engineering/02_Sample_Scenarios/07_Data_Generation.md b/scenarios/prompt_engineering/02_Sample_Scenarios/07_Data_Generation.md
index db0b89e75..80308f85b 100644
--- a/scenarios/prompt_engineering/02_Sample_Scenarios/07_Data_Generation.md
+++ b/scenarios/prompt_engineering/02_Sample_Scenarios/07_Data_Generation.md
@@ -63,7 +63,7 @@ Another important consideration is that we can progrmatically feed the model pro
 
 ---
 ## Few-Shot Data Generation
-The concepts from the [Introduction to Prompt Engineering](./01_Prompt_Introduction.md) and the [Advanced Concepts](./03_Advanced_Concepts.md) sections can be very informative for generating net-new data. First off, we should be as direct in our requirements as possible and provide examples of our desired output if feasible.
+The concepts from the [Introduction to Prompt Engineering](../01_Prompt_Introduction.md) and the [Advanced Concepts](../03_Advanced_Concepts.md) sections can be very informative for generating net-new data. First off, we should be as direct in our requirements as possible and provide examples of our desired output if feasible.
 
 *Prompt:*
 ```
@@ -157,4 +157,4 @@ Given sufficient examples and instructions, the model can fill in the missing va
 
 [Previous Section (Code Generation)](./06_Code_Generation.md)
 
-[Next Section (Recommendations)](./08_Recommendations.md)
\ No newline at end of file
+[Next Section (Recommendations)](./08_Recommendations.md)