Conversation
Hello @dhruvj07, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Summary of Changes
This pull request adds new test cases to the bedrock_client_universal.py example file. Specifically, it introduces tests for the anthropic.claude-v2 and anthropic.claude-3-haiku-20240307-v1:0 models, covering both invoke and invoke-with-response-stream functionalities. The tests send simple requests to these models and print the responses, including streamed outputs, to demonstrate their usage.
Highlights
- New Tests: Added tests for the anthropic.claude-v2 model using both invoke_model and invoke_model_with_response_stream.
- New Tests: Added tests for the anthropic.claude-3-haiku-20240307-v1:0 model using both invoke_model and invoke_model_with_response_stream.
- Functionality Coverage: The tests cover both synchronous (invoke) and streaming (invoke-with-response-stream) invocation patterns.
Changelog
- examples/bedrock/bedrock_client_universal.py
  - Added test_claude_v2_invoke function to test the anthropic.claude-v2 model's invoke functionality (lines 144-159).
  - Added test_claude_v2_stream function to test the anthropic.claude-v2 model's streaming functionality (lines 161-181).
  - Added test_haiku_v3_invoke function to test the anthropic.claude-3-haiku-20240307-v1:0 model's invoke functionality (lines 183-198).
  - Added test_haiku_v3_stream function to test the anthropic.claude-3-haiku-20240307-v1:0 model's streaming functionality (lines 200-220).
  - Integrated the new test functions into the main function to be executed (lines 274-347).
Using Gemini Code Assist
The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.
Invoking Gemini
You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e. a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.
| Feature | Command | Description |
|---|---|---|
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |
Customization
To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.
Limitations & Feedback
Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.
You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.
The models converse,
Streams of data immerse,
Haiku's wisdom flows,
Claude's knowledge bestows,
AI's gentle verse.
Footnotes
¹ Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution. ↩
Code Review
The code adds new test functions for the Bedrock client, covering the Claude v2 and Haiku v3 models in both invoke and streaming scenarios. This is a good addition that helps verify the client works correctly with different models. However, there are some areas where the code can be improved for clarity and maintainability.
Summary of Findings
- Duplicated Code Blocks: The test functions share a lot of duplicated code, especially in setting up the request body and handling the response. This can be refactored into a reusable function to improve maintainability.
- Error Handling: The error handling in the test functions is basic. Consider adding more specific error messages or logging to help diagnose issues.
- Inconsistent Printing: There are inconsistencies in how the output is printed, especially in the streaming tests. Standardize the output format for better readability.
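On the error-handling point: rather than printing the bare exception, the handler could surface the error code and message that botocore attaches to ClientError. A minimal sketch under that assumption — describe_bedrock_error and FakeClientError are hypothetical names, but a real botocore ClientError exposes the same .response shape:

```python
def describe_bedrock_error(exc, model_id, operation):
    """Build a diagnostic message for a failed Bedrock call.

    botocore's ClientError carries a .response dict with an "Error" entry;
    for any other exception we fall back to the exception's own text.
    """
    error_info = getattr(exc, "response", {}).get("Error", {})
    code = error_info.get("Code", type(exc).__name__)
    message = error_info.get("Message", str(exc))
    return f"❌ {operation} failed for {model_id}: {code}: {message}"

class FakeClientError(Exception):
    """Stands in for botocore.exceptions.ClientError in this sketch."""
    def __init__(self, response):
        super().__init__(str(response))
        self.response = response

err = FakeClientError({"Error": {"Code": "AccessDeniedException",
                                 "Message": "not authorized"}})
print(describe_bedrock_error(err, "anthropic.claude-v2", "invoke_model"))
```

A message like this pinpoints which model and operation failed, and the error code (AccessDeniedException, ThrottlingException, ...) is usually the fastest diagnostic signal.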
Merge Readiness
The code adds valuable test cases, but the duplicated code and basic error handling should be addressed before merging. I recommend refactoring the code to reduce duplication and improve error handling. I am unable to approve this pull request, and other reviewers should review and approve this code before merging.
def test_claude_v2_invoke(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke ---")
    try:
        response = bedrock_runtime_client.invoke_model(
            modelId="anthropic.claude-v2",
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": [{"role": "user", "content": "Explain quantum computing"}]
            }),
            contentType="application/json"
        )
        result = json.loads(response["body"].read())
        print(json.dumps(result, indent=2))
    except Exception as e:
        print("❌ Error:", e)
This test function and the following ones share a lot of common structure. Consider refactoring the common parts into a reusable helper function to reduce duplication and improve maintainability. For example, the model invocation logic, JSON parsing, and error handling could be generalized.
def invoke_model_and_print(bedrock_runtime_client, model_id, messages):
    try:
        response = bedrock_runtime_client.invoke_model(
            modelId=model_id,
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": messages
            }),
            contentType="application/json"
        )
        result = json.loads(response["body"].read())
        print(json.dumps(result, indent=2))
    except Exception as e:
        print("❌ Error:", e)

def test_claude_v2_invoke(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke ---")
    invoke_model_and_print(bedrock_runtime_client, "anthropic.claude-v2",
                           [{"role": "user", "content": "Explain quantum computing"}])
def test_claude_v2_stream(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke-with-response-stream ---")
    try:
        response = bedrock_runtime_client.invoke_model_with_response_stream(
            modelId="anthropic.claude-v2",
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": [{"role": "user", "content": "Tell me about LLMs"}]
            }),
            contentType="application/json"
        )
        output = ""
        for part in response["body"]:
            chunk = json.loads(part["chunk"]["bytes"].decode())
            delta = chunk.get("delta", {}).get("text", "")
            output += delta
            print(delta, end="", flush=True)
        print("\nStreamed Output Complete.")
    except Exception as e:
        print("❌ Error:", e)
Similar to the test_claude_v2_invoke function, this test function shares a lot of common code with other streaming test functions. Consider creating a reusable helper function to reduce duplication.
def invoke_model_stream_and_print(bedrock_runtime_client, model_id, messages):
    try:
        response = bedrock_runtime_client.invoke_model_with_response_stream(
            modelId=model_id,
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": messages
            }),
            contentType="application/json"
        )
        output = ""
        for part in response["body"]:
            chunk = json.loads(part["chunk"]["bytes"].decode())
            delta = chunk.get("delta", {}).get("text", "")
            output += delta
            print(delta, end="", flush=True)
        print("\nStreamed Output Complete.")
    except Exception as e:
        print("❌ Error:", e)

def test_claude_v2_stream(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke-with-response-stream ---")
    invoke_model_stream_and_print(bedrock_runtime_client, "anthropic.claude-v2",
                                  [{"role": "user", "content": "Tell me about LLMs"}])

# 5) Test anthropic.claude-v2 / invoke
print("\n--- Test: anthropic.claude-v2 / invoke ---")
try:
    response = bedrock_runtime_client.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "Explain quantum computing"}]
        }),
        contentType="application/json"
    )
    result = json.loads(response["body"].read())
    print(json.dumps(result, indent=2))
except Exception as e:
    print("Error in claude-v2 invoke:", e)

# 6) Test anthropic.claude-v2 / invoke-with-response-stream
print("\n--- Test: anthropic.claude-v2 / invoke-with-response-stream ---")
try:
    response = bedrock_runtime_client.invoke_model_with_response_stream(
        modelId="anthropic.claude-v2",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "Tell me about LLMs"}]
        }),
        contentType="application/json"
    )
    for part in response["body"]:
        chunk = json.loads(part["chunk"]["bytes"].decode())
        delta = chunk.get("delta", {}).get("text", "")
        print(delta, end="", flush=True)
    print("\nStreamed Output Complete.")
except Exception as e:
    print("Error in claude-v2 stream:", e)
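The streaming loops above all decode each event's chunk bytes and pull out the text delta. That parsing logic can be isolated and exercised against a hand-built stream, which also makes the new tests verifiable without AWS credentials. A sketch under stated assumptions — the fake events mimic Anthropic's content_block_delta payloads, and extract_stream_text is a hypothetical helper, not part of the PR:

```python
import json

def extract_stream_text(event_stream):
    """Collect text deltas from a Bedrock response stream.

    Each event wraps a JSON payload in event["chunk"]["bytes"]; only
    content_block_delta payloads carry a delta.text field, so other event
    types (message_start, message_stop, ...) contribute an empty string.
    """
    output = ""
    for part in event_stream:
        chunk = json.loads(part["chunk"]["bytes"].decode())
        delta = chunk.get("delta", {}).get("text", "")
        output += delta
    return output

# A fake event stream standing in for response["body"]:
fake_stream = [
    {"chunk": {"bytes": json.dumps({"type": "message_start"}).encode()}},
    {"chunk": {"bytes": json.dumps({"type": "content_block_delta",
                                    "delta": {"type": "text_delta",
                                              "text": "Hello"}}).encode()}},
    {"chunk": {"bytes": json.dumps({"type": "content_block_delta",
                                    "delta": {"type": "text_delta",
                                              "text": ", world"}}).encode()}},
    {"chunk": {"bytes": json.dumps({"type": "message_stop"}).encode()}},
]
print(extract_stream_text(fake_stream))  # → Hello, world
```

Keeping the parsing in one function means a format change in the stream payloads only needs fixing in one place.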
# 7) Test anthropic.claude-3-haiku-20240307-v1:0 / invoke
print("\n--- Test: anthropic.claude-3-haiku-20240307-v1:0 / invoke ---")
try:
    response = bedrock_runtime_client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "What is generative AI?"}]
        }),
        contentType="application/json"
    )
    result = json.loads(response["body"].read())
    print(json.dumps(result, indent=2))
except Exception as e:
    print("Error in haiku invoke:", e)

# 8) Test anthropic.claude-3-haiku-20240307-v1:0 / invoke-with-response-stream
print("\n--- Test: anthropic.claude-3-haiku-20240307-v1:0 / invoke-with-response-stream ---")
try:
    response = bedrock_runtime_client.invoke_model_with_response_stream(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "What are AI guardrails?"}]
        }),
        contentType="application/json"
    )
    for part in response["body"]:
        chunk = json.loads(part["chunk"]["bytes"].decode())
        delta = chunk.get("delta", {}).get("text", "")
        print(delta, end="", flush=True)
    print("\nStreamed Output Complete.")
except Exception as e:
    print("Error in haiku stream:", e)
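The four blocks in main differ only in model ID and prompt, which is the duplication the review flags. One way to collapse them is a table-driven loop; this is a sketch of that idea, not code from the PR, and TEST_CASES, build_body, and run_invoke_tests are hypothetical names:

```python
import json

# Hypothetical table of (model_id, prompt) pairs; one loop then replaces
# the near-identical invoke blocks above.
TEST_CASES = [
    ("anthropic.claude-v2", "Explain quantum computing"),
    ("anthropic.claude-3-haiku-20240307-v1:0", "What is generative AI?"),
]

def build_body(prompt, max_tokens=100):
    """Request body shared by the invoke and streaming calls."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

def run_invoke_tests(client):
    """Run the synchronous invoke test for every model in TEST_CASES."""
    for model_id, prompt in TEST_CASES:
        print(f"\n--- Test: {model_id} / invoke ---")
        try:
            response = client.invoke_model(
                modelId=model_id,
                body=build_body(prompt),
                contentType="application/json",
            )
            result = json.loads(response["body"].read())
            print(json.dumps(result, indent=2))
        except Exception as e:
            print(f"Error in {model_id} invoke:", e)
```

Adding a fifth model then becomes a one-line change to the table, and the streaming variant can share build_body.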