Use genai.protos instead of google.ai.generativelanguage (#192)
* Use genai.protos instead of google.ai.generativelanguage

* fix links

* lint

* genai.GenerateContent* -> genai.protos.GenerateContent*

* Fix import

---------

Co-authored-by: Mark McDonald <macd@google.com>
MarkDaoust and markmcd committed Jun 7, 2024
1 parent b580e69 commit b4d0067
Showing 8 changed files with 31 additions and 48 deletions.
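The substitutions in this commit are almost entirely mechanical: delete the standalone `google.ai.generativelanguage` import and point every `glm.*` reference at `genai.protos.*`. A hypothetical helper sketching that rewrite (the function name and sample strings are illustrative, not part of the commit):

```python
import re

def migrate(source: str) -> str:
    """Rewrite old-style glm references to genai.protos, mirroring this commit."""
    # Drop the standalone low-level import line that is no longer needed.
    source = re.sub(
        r"^import google\.ai\.generativelanguage as glm.*\n?",
        "",
        source,
        flags=re.MULTILINE,
    )
    # Point every remaining glm.* reference at genai.protos.* instead.
    return source.replace("glm.", "genai.protos.")

old = (
    "import google.ai.generativelanguage as glm\n"
    "part = glm.Part(text='hi')\n"
)
print(migrate(old))  # part = genai.protos.Part(text='hi')
```

A regex like this is only a first pass; aliased imports or string literals containing `glm.` would still need a manual check.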
1 change: 0 additions & 1 deletion examples/Anomaly_detection_with_embeddings.ipynb
@@ -90,7 +90,6 @@
 "import seaborn as sns\n",
 "\n",
 "import google.generativeai as genai\n",
-"import google.ai.generativelanguage as glm\n",
 "\n",
 "# Used to securely store your API key\n",
 "from google.colab import userdata\n",
1 change: 0 additions & 1 deletion examples/Classify_text_with_embeddings.ipynb
@@ -90,7 +90,6 @@
 "import pandas as pd\n",
 "\n",
 "import google.generativeai as genai\n",
-"import google.ai.generativelanguage as glm\n",
 "\n",
 "from google.colab import userdata\n",
 "\n",
19 changes: 4 additions & 15 deletions examples/Search_reranking_using_embeddings.ipynb
@@ -720,7 +720,7 @@
 "In the chat history you can see all 4 steps:\n",
 "\n",
 "1. The user sent the query.\n",
-"2. The model replied with a `glm.FunctionCall` calling the `wikipedia_search` with a number of relevant searches.\n",
+"2. The model replied with a `genai.protos.FunctionCall` calling the `wikipedia_search` with a number of relevant searches.\n",
 "3. Because you set `enable_automatic_function_calling=True` when creating the `genai.ChatSession`, it executed the search function and returned the list of article summaries to the model.\n",
 "4. Following the instructions in the prompt, the model generated a final answer based on those summaries.\n"
 ]
@@ -743,17 +743,6 @@
 "If you want to understand what happened behind the scenes, this section executes the `FunctionCall` manually to demonstrate."
 ]
 },
-{
-"cell_type": "code",
-"execution_count": 11,
-"metadata": {
-"id": "6Q4p-zSP7ZBs"
-},
-"outputs": [],
-"source": [
-"import google.ai.generativelanguage as glm # for lower level code"
-]
-},
 {
 "cell_type": "code",
 "execution_count": 12,
@@ -970,9 +959,9 @@
 ],
 "source": [
 "response = chat.send_message(\n",
-" glm.Content(\n",
-" parts=[glm.Part(\n",
-" function_response = glm.FunctionResponse(\n",
+" genai.protos.Content(\n",
+" parts=[genai.protos.Part(\n",
+" function_response = genai.protos.FunctionResponse(\n",
 " name='wikipedia_search',\n",
 " response={'result': summaries}\n",
 " )\n",
1 change: 0 additions & 1 deletion examples/Talk_to_documents_with_embeddings.ipynb
@@ -99,7 +99,6 @@
 "import pandas as pd\n",
 "\n",
 "import google.generativeai as genai\n",
-"import google.ai.generativelanguage as glm\n",
 "\n",
 "# Used to securely store your API key\n",
 "from google.colab import userdata\n",
2 changes: 1 addition & 1 deletion quickstarts/Counting_Tokens.ipynb
@@ -308,7 +308,7 @@
 " tools=None,\n",
 " system_instruction=None,\n",
 " ),\n",
-" history=[glm.Content({'parts': [{'text': 'Hi my name is Bob'}], 'role': 'user'}), glm.Content({'parts': [{'text': 'Hi Bob!'}], 'role': 'model'})]\n",
+" history=[genai.protos.Content({'parts': [{'text': 'Hi my name is Bob'}], 'role': 'user'}), genai.protos.Content({'parts': [{'text': 'Hi Bob!'}], 'role': 'model'})]\n",
 ")"
 ]
 },
35 changes: 16 additions & 19 deletions quickstarts/Function_calling.ipynb
@@ -128,7 +128,7 @@
 "source": [
 "To use function calling, pass a list of functions to the `tools` parameter when creating a [`GenerativeModel`](https://ai.google.dev/api/python/google/generativeai/GenerativeModel). The model uses the function name, docstring, parameters, and parameter type annotations to decide if it needs the function to best answer a prompt.\n",
 "\n",
-"> Important: The SDK converts function parameter type annotations to a format the API understands (`glm.FunctionDeclaration`). The API only supports a limited selection of parameter types, and the Python SDK's automatic conversion only supports a subset of that: `AllowedTypes = int | float | bool | str | list['AllowedTypes'] | dict`"
+"> Important: The SDK converts function parameter type annotations to a format the API understands (`genai.protos.FunctionDeclaration`). The API only supports a limited selection of parameter types, and the Python SDK's automatic conversion only supports a subset of that: `AllowedTypes = int | float | bool | str | list['AllowedTypes'] | dict`"
 ]
 },
 {
@@ -278,13 +278,13 @@
 "source": [
 "However, by examining the chat history, you can see the flow of the conversation and how function calls are integrated within it.\n",
 "\n",
-"The `ChatSession.history` property stores a chronological record of the conversation between the user and the Gemini model. Each turn in the conversation is represented by a [`glm.Content`](https://ai.google.dev/api/python/google/ai/generativelanguage/Content) object, which contains the following information:\n",
+"The `ChatSession.history` property stores a chronological record of the conversation between the user and the Gemini model. Each turn in the conversation is represented by a [`genai.protos.Content`](https://ai.google.dev/api/python/google/generativeai/protos/Content) object, which contains the following information:\n",
 "\n",
 "* **Role**: Identifies whether the content originated from the \"user\" or the \"model\".\n",
-"* **Parts**: A list of [`glm.Part`](https://ai.google.dev/api/python/google/ai/generativelanguage/Part) objects that represent individual components of the message. With a text-only model, these parts can be:\n",
+"* **Parts**: A list of [`genai.protos.Part`](https://ai.google.dev/api/python/google/generativeai/protos/Part) objects that represent individual components of the message. With a text-only model, these parts can be:\n",
 " * **Text**: Plain text messages.\n",
-" * **Function Call** ([`glm.FunctionCall`](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionCall)): A request from the model to execute a specific function with provided arguments.\n",
-" * **Function Response** ([`glm.FunctionResponse`](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionResponse)): The result returned by the user after executing the requested function.\n",
+" * **Function Call** ([`genai.protos.FunctionCall`](https://ai.google.dev/api/python/google/generativeai/protos/FunctionCall)): A request from the model to execute a specific function with provided arguments.\n",
+" * **Function Response** ([`genai.protos.FunctionResponse`](https://ai.google.dev/api/python/google/generativeai/protos/FunctionResponse)): The result returned by the user after executing the requested function.\n",
 "\n",
 " In the previous example with the mittens calculation, the history shows the following sequence:\n",
 "\n",
@@ -350,7 +350,7 @@
 "id": "9610f3465a69"
 },
 "source": [
-"For more control, you can process [`glm.FunctionCall`](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionCall) requests from the model yourself. This would be the case if:\n",
+"For more control, you can process [`genai.protos.FunctionCall`](https://ai.google.dev/api/python/google/generativeai/protos/FunctionCall) requests from the model yourself. This would be the case if:\n",
 "\n",
 "- You use a `ChatSession` with the default `enable_automatic_function_calling=False`.\n",
 "- You use `GenerativeModel.generate_content` (and manage the chat history yourself)."
@@ -560,16 +560,15 @@
 }
 ],
 "source": [
-"import google.ai.generativelanguage as glm\n",
 "from google.protobuf.struct_pb2 import Struct\n",
 "\n",
 "# Put the result in a protobuf Struct\n",
 "s = Struct()\n",
 "s.update({\"result\": result})\n",
 "\n",
 "# Update this after https://github.com/google/generative-ai-python/issues/243\n",
-"function_response = glm.Part(\n",
-" function_response=glm.FunctionResponse(name=\"find_theaters\", response=s)\n",
+"function_response = genai.protos.Part(\n",
+" function_response=genai.protos.FunctionResponse(name=\"find_theaters\", response=s)\n",
 ")\n",
 "\n",
 "# Build the message history\n",
@@ -709,8 +708,6 @@
 }
 ],
 "source": [
-"import google.ai.generativelanguage as glm\n",
-"\n",
 "# Simulate the responses from the specified tools.\n",
 "responses = {\n",
 " \"power_disco_ball\": True,\n",
@@ -720,7 +717,7 @@
 "\n",
 "# Build the response parts.\n",
 "response_parts = [\n",
-" glm.Part(function_response=glm.FunctionResponse(name=fn, response={\"result\": val}))\n",
+" genai.protos.Part(function_response=genai.protos.FunctionResponse(name=fn, response={\"result\": val}))\n",
 " for fn, val in responses.items()\n",
 "]\n",
 "\n",
@@ -746,13 +743,13 @@
 "Useful API references:\n",
 "\n",
 "- The [genai.GenerativeModel](https://ai.google.dev/api/python/google/generativeai/GenerativeModel) class\n",
-" - Its [GenerativeModel.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method builds a [glm.GenerateContentRequest](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentRequest) behind the scenes.\n",
-" - The request's `.tools` field contains a list of 1 [glm.Tool](https://ai.google.dev/api/python/google/ai/generativelanguage/Tool) object.\n",
-" - The tool's `function_declarations` attribute contains a list of [FunctionDeclarations](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionDeclaration) objects.\n",
-"- The [response](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse) may contain a [glm.FunctionCall](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionCall), in `response.candidates[0].contents.parts[0]`.\n",
-"- if `enable_automatic_function_calling` is set the [genai.ChatSession](https://ai.google.dev/api/python/google/generativeai/ChatSession) executes the call, and sends back the [glm.FunctionResponse](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionResponse).\n",
-"- In response to a [FunctionCall](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionCall) the model always expects a [FunctionResponse](https://ai.google.dev/api/python/google/ai/generativelanguage/FunctionResponse).\n",
-"- If you reply manually using [chat.send_message](https://ai.google.dev/api/python/google/generativeai/ChatSession#send_message) or [model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) remember thart the API is stateless you have to send the whole conversation history (a list of [content](https://ai.google.dev/api/python/google/ai/generativelanguage/Content) objects), not just the last one containing the `FunctionResponse`."
+" - Its [GenerativeModel.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method builds a [genai.protos.GenerateContentRequest](https://ai.google.dev/api/python/google/generativeai/protos/GenerateContentRequest) behind the scenes.\n",
+" - The request's `.tools` field contains a list of 1 [genai.protos.Tool](https://ai.google.dev/api/python/google/generativeai/protos/Tool) object.\n",
+" - The tool's `function_declarations` attribute contains a list of [FunctionDeclarations](https://ai.google.dev/api/python/google/generativeai/protos/FunctionDeclaration) objects.\n",
+"- The [response](https://ai.google.dev/api/python/google/generativeai/protos/GenerateContentResponse) may contain a [genai.protos.FunctionCall](https://ai.google.dev/api/python/google/generativeai/protos/FunctionCall), in `response.candidates[0].contents.parts[0]`.\n",
+"- If `enable_automatic_function_calling` is set, the [genai.ChatSession](https://ai.google.dev/api/python/google/generativeai/ChatSession) executes the call and sends back the [genai.protos.FunctionResponse](https://ai.google.dev/api/python/google/generativeai/protos/FunctionResponse).\n",
+"- In response to a [FunctionCall](https://ai.google.dev/api/python/google/generativeai/protos/FunctionCall) the model always expects a [FunctionResponse](https://ai.google.dev/api/python/google/generativeai/protos/FunctionResponse).\n",
+"- If you reply manually using [chat.send_message](https://ai.google.dev/api/python/google/generativeai/ChatSession#send_message) or [model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content), remember that the API is stateless: you have to send the whole conversation history (a list of [content](https://ai.google.dev/api/python/google/generativeai/protos/Content) objects), not just the last one containing the `FunctionResponse`."
 ]
 }
 ],
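The list comprehension in the diff above builds one function-response part per simulated tool result. A minimal sketch of the payload shape behind those `genai.protos.Part` / `genai.protos.FunctionResponse` constructors, using plain dicts so it runs without the SDK installed (the tool values here are illustrative):

```python
# Simulated tool results, as in the notebook; the values are hypothetical.
responses = {
    "power_disco_ball": True,
    "start_music": "Jazz",
}

# One function_response part per tool result, mirroring the
# Part(function_response=FunctionResponse(...)) structure as nested dicts.
response_parts = [
    {"function_response": {"name": fn, "response": {"result": val}}}
    for fn, val in responses.items()
]

for part in response_parts:
    print(part["function_response"]["name"])
```

The real SDK constructors produce proto messages with this same field layout, which is why the `name`/`response` keys line up with the diff.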
10 changes: 5 additions & 5 deletions quickstarts/Safety.ipynb
@@ -131,7 +131,7 @@
 "\n",
 "Pick the prompt you want to use to test the safety filter settings. An example could be `Write a list of 5 very rude things that I might say to the universe after stubbing my toe in the dark`, which was previously tested and triggers the `HARM_CATEGORY_HARASSMENT` and `HARM_CATEGORY_DANGEROUS_CONTENT` categories.\n",
 "\n",
-"The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)."
+"The result returned by the [Model.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) method is a [genai.protos.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/types/GenerateContentResponse)."
 ]
 },
 {
@@ -402,13 +402,13 @@
 "\n",
 "* They can also be passed on each request to [GenerativeModel.generate_content](https://ai.google.dev/api/python/google/generativeai/GenerativeModel#generate_content) or [ChatSession.send_message](https://ai.google.dev/api/python/google/generativeai/ChatSession?hl=en#send_message).\n",
 "\n",
-"- The [genai.GenerateContentResponse](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse) returns [SafetyRatings](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) for the prompt in the [GenerateContentResponse.prompt_feedback](https://ai.google.dev/api/python/google/ai/generativelanguage/GenerateContentResponse/PromptFeedback), and for each [Candidate](https://ai.google.dev/api/python/google/ai/generativelanguage/Candidate) in the `safety_ratings` attribute.\n",
+"- The [genai.protos.GenerateContentResponse](https://ai.google.dev/api/python/google/generativeai/protos/GenerateContentResponse) returns [SafetyRatings](https://ai.google.dev/api/python/google/generativeai/protos/SafetyRating) for the prompt in the [GenerateContentResponse.prompt_feedback](https://ai.google.dev/api/python/google/generativeai/protos/GenerateContentResponse/PromptFeedback), and for each [Candidate](https://ai.google.dev/api/python/google/generativeai/protos/Candidate) in the `safety_ratings` attribute.\n",
 "\n",
-"- A [glm.SafetySetting](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetySetting) contains: [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [glm.HarmBlockThreshold](https://ai.google.dev/api/python/google/generativeai/types/HarmBlockThreshold)\n",
+"- A [genai.protos.SafetySetting](https://ai.google.dev/api/python/google/generativeai/protos/SafetySetting) contains: [genai.protos.HarmCategory](https://ai.google.dev/api/python/google/generativeai/protos/HarmCategory) and a [genai.protos.HarmBlockThreshold](https://ai.google.dev/api/python/google/generativeai/types/HarmBlockThreshold)\n",
 "\n",
-"- A [glm.SafetyRating](https://ai.google.dev/api/python/google/ai/generativelanguage/SafetyRating) contains a [HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) and a [HarmProbability](https://ai.google.dev/api/python/google/generativeai/types/HarmProbability)\n",
+"- A [genai.protos.SafetyRating](https://ai.google.dev/api/python/google/generativeai/protos/SafetyRating) contains a [HarmCategory](https://ai.google.dev/api/python/google/generativeai/protos/HarmCategory) and a [HarmProbability](https://ai.google.dev/api/python/google/generativeai/types/HarmProbability)\n",
 "\n",
-"The [glm.HarmCategory](https://ai.google.dev/api/python/google/ai/generativelanguage/HarmCategory) enum includes both the categories for PaLM and Gemini models.\n",
+"The [genai.protos.HarmCategory](https://ai.google.dev/api/python/google/generativeai/protos/HarmCategory) enum includes both the categories for PaLM and Gemini models.\n",
 "\n",
 "- When specifying enum values the SDK will accept the enum values themselves, or their integer or string representations.\n",
 "\n",
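The safety bullets above pair a `HarmCategory` with a `HarmBlockThreshold` per setting. A sketch of what a per-request settings list can look like in dict form (the threshold names are an assumption based on the REST API enum names, not taken from this commit):

```python
# Hypothetical per-request safety configuration. Each entry pairs one
# HarmCategory with one HarmBlockThreshold, here as enum-name strings.
safety_settings = [
    {"category": "HARM_CATEGORY_HARASSMENT", "threshold": "BLOCK_ONLY_HIGH"},
    {"category": "HARM_CATEGORY_DANGEROUS_CONTENT", "threshold": "BLOCK_NONE"},
]

for setting in safety_settings:
    print(setting["category"], "->", setting["threshold"])
```

As the notebook notes, the SDK accepts the enum values themselves or their integer/string representations, so a dict list like this can be passed directly as `safety_settings`.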