Create Prompt_Engineering_with_Llama_3_On_Amazon_Bedrock.ipynb #515
base: main
Conversation
"name": "stdout", | ||
"output_type": "stream", | ||
"text": [ | ||
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\n", |
Clear the output of the pip install cell before pushing the code to the repo.
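If it helps, here is a minimal sketch of one way to strip cell outputs before committing, using nbformat (which ships with Jupyter). The script is illustrative and not part of the notebook; the path just mirrors the filename in this PR:

```python
import nbformat

# Illustrative path; adjust to wherever the notebook lives in the repo.
path = "Prompt_Engineering_with_Llama_3_On_Amazon_Bedrock.ipynb"

nb = nbformat.read(path, as_version=4)
for cell in nb.cells:
    if cell.cell_type == "code":
        cell["outputs"] = []            # drop captured stdout/stderr, including pip noise
        cell["execution_count"] = None
nbformat.write(nb, path)
```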
"from langchain.schema.messages import get_buffer_string\n", | ||
"import os\n", | ||
"\n", | ||
"# bedrock = boto3.client(service_name='bedrock', region_name='us-east-1')\n", |
It's a bit unclear why we have these three lines commented out. Can we add a short description of why they exist here?
") -> str:\n", | ||
" llm = Bedrock(model_id=model)\n", | ||
" prompt_str = ''.join(prompt)\n", | ||
" if not prompt_str.endswith(\"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\"):\n", |
Let's extract this string into a variable and name it appropriately so developers know why we are doing this check explicitly. (We are doing this to determine whether the agent must respond back or not, is that correct?)
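Something along these lines, as a rough sketch (the constant and helper names are illustrative, not taken from the notebook):

```python
# The special-token sequence that tells Llama 3 the assistant's turn comes next.
ASSISTANT_TURN_HEADER = "<|eot_id|><|start_header_id|>assistant<|end_header_id|>"

def ensure_assistant_turn(prompt_str: str) -> str:
    """Append the assistant header if the caller has not already added it,
    so the model knows it should generate the assistant's reply."""
    if not prompt_str.endswith(ASSISTANT_TURN_HEADER):
        prompt_str += ASSISTANT_TURN_HEADER
    return prompt_str
```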
"\n", | ||
"def complete_and_print(prompt: str, model: str = DEFAULT_MODEL, verbose: bool = True):\n", | ||
" print(f'==============\\n{prompt}\\n==============')\n", | ||
" if not prompt.endswith(\"<|eot_id|><|start_header_id|>assistant<|end_header_id|>\"):\n", |
The variable introduced above can be reused here as well.
"name": "stdout", | ||
"output_type": "stream", | ||
"text": [ | ||
"==============\n", |
I see you have saved the output of these prompts across the notebook. Ignore my first comment if that's how you are creating all the notebooks. If it's not a standard, let's create this output as well.
Looks great! Added a couple of comments on the Llama Guard section. Also, do we want to keep the Llama 2 version of this notebook?
"# Now before we decide to return the response from the inference, we can run the\n", | ||
"# generated output through Llama Guard again\n", | ||
"\n", | ||
"messages_input = [{\"role\": \"user\", \"content\": bedrock_output}]\n", |
Here the role should be of type "Agent".
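That is, something like the following, building on the variables from the quoted cell:

```python
# Label the model's reply as the agent's turn when re-checking it with Llama Guard.
messages_input = [{"role": "Agent", "content": bedrock_output}]
```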
"# generated output through Llama Guard again\n", | ||
"\n", | ||
"messages_input = [{\"role\": \"user\", \"content\": bedrock_output}]\n", | ||
"payload_input_guard = {\"inputs\": format_guard_messages(messages_input)}\n", |
Ideally, the user prompt should be in this list as well, to showcase that when evaluating the model output, the user input is also taken into consideration.
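For example, roughly like this. The user_prompt variable is a stand-in for whatever holds the original user message in the notebook, and the list/payload names are illustrative:

```python
# Re-check the model output with Llama Guard, passing the original user turn
# for context and labelling the generated reply as the agent's turn.
messages_output_check = [
    {"role": "user", "content": user_prompt},      # original user message (assumed variable)
    {"role": "Agent", "content": bedrock_output},  # model reply from the earlier inference call
]
payload_output_guard = {"inputs": format_guard_messages(messages_output_check)}
```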
"source": [ | ||
"TASK = \"Task: Check if there is unsafe content in '{{ role }}' messages in conversations according our safety policy with the below categories.\"\n", | ||
"\n", | ||
"INSTRUCTION = \"\"\"\\\n", |
What do you think about adding the build_default_prompt method from the utils file? We can add a small section on how to use those methods to customize categories as well, but for the default ones we can leverage the existing methods.
Yeah, that would be great. I believe I tested some of our utils file when I hacked this together, but opted for this approach for speed.
Gotcha! I think you should be able to import the build_default_prompt function from here to build the prompt quickly. The import should work as with any package, provided the recipes are installed:
from llama_recipes.inference.prompt_format_utils import build_default_prompt, create_conversation, LlamaGuardVersion
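As a quick sketch of how that could look, assuming AgentType is exported from the same module and that the helper signatures match the current llama-recipes utils (the example conversation is illustrative):

```python
from llama_recipes.inference.prompt_format_utils import (
    build_default_prompt,
    create_conversation,
    LlamaGuardVersion,
    AgentType,  # assumed to live in the same module
)

# Turn a user/agent exchange into the conversation structure the helper expects;
# create_conversation treats even-indexed messages as the user, odd-indexed as the agent.
conversation = create_conversation([
    "How do I make a great pasta sauce?",                 # user turn
    "Start with ripe tomatoes, garlic, and olive oil.",   # agent (model) turn
])

# Build a Llama Guard 2 prompt with the default safety categories,
# checking the agent side of the conversation.
guard_prompt = build_default_prompt(
    AgentType.AGENT,
    conversation,
    LlamaGuardVersion.LLAMA_GUARD_2,
)
print(guard_prompt)
```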
This is an updated notebook for prompt engineering with Llama 3 on AWS using Bedrock. It also includes a brief intro example using Llama Guard 2.
Thanks for contributing 🎉!