Sdk run updated providers branch #181

Merged
dhruvj07 merged 3 commits into main from sdk-run-updated-providers-branch on Apr 10, 2025

Conversation

@dhruvj07
Contributor

No description provided.

Contributor

@gemini-code-assist (bot) left a comment


Hello @dhruvj07, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

Summary of Changes

This pull request adds new test cases to the bedrock_client_universal.py example file. Specifically, it introduces tests for the anthropic.claude-v2 and anthropic.claude-3-haiku-20240307-v1:0 models, covering both invoke and invoke-with-response-stream functionalities. The tests send simple requests to these models and print the responses, including streamed outputs, to demonstrate their usage.

Highlights

  • New Tests: Added tests for anthropic.claude-v2 model using both invoke_model and invoke_model_with_response_stream.
  • New Tests: Added tests for anthropic.claude-3-haiku-20240307-v1:0 model using both invoke_model and invoke_model_with_response_stream.
  • Functionality Coverage: The tests cover both synchronous (invoke) and streaming (invoke-with-response-stream) invocation patterns.

Changelog

  • examples/bedrock/bedrock_client_universal.py
    • Added test_claude_v2_invoke function to test the anthropic.claude-v2 model's invoke functionality (lines 144-159).
    • Added test_claude_v2_stream function to test the anthropic.claude-v2 model's streaming functionality (lines 161-181).
    • Added test_haiku_v3_invoke function to test the anthropic.claude-3-haiku-20240307-v1:0 model's invoke functionality (lines 183-198).
    • Added test_haiku_v3_stream function to test the anthropic.claude-3-haiku-20240307-v1:0 model's streaming functionality (lines 200-220).
    • Integrated the new test functions into the main function to be executed (lines 274-347).
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request by creating an issue comment (i.e., a comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

  • Code Review: /gemini review performs a code review for the current pull request in its current state.
  • Pull Request Summary: /gemini summary provides a summary of the current pull request in its current state.
  • Comment: @gemini-code-assist responds in comments when explicitly tagged, both in issue comments and review comments.
  • Help: /gemini help displays a list of available commands.

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist is currently in preview and may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.


The models converse,
Streams of data immerse,
Haiku's wisdom flows,
Claude's knowledge bestows,
AI's gentle verse.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its responses and use code with caution.

Contributor

@gemini-code-assist (bot) left a comment


Code Review

The code adds new test functions for the Bedrock client, covering the Claude v2 and Haiku v3 models in both invoke and streaming scenarios. This is a good addition for verifying the client's functionality across different models. However, there are some areas where the code can be improved for clarity and maintainability.

Summary of Findings

  • Duplicated Code Blocks: The test functions share a lot of duplicated code, especially in setting up the request body and handling the response. This can be refactored into a reusable function to improve maintainability.
  • Error Handling: The error handling in the test functions is basic. Consider adding more specific error messages or logging to help diagnose issues.
  • Inconsistent Printing: There are inconsistencies in how the output is printed, especially in the streaming tests. Standardize the output format for better readability.
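The duplication called out above could be factored one level further: every new test serializes the same Anthropic-on-Bedrock payload inline. A minimal sketch using only the standard library (the helper name build_anthropic_body is hypothetical, not part of this PR):

```python
import json

# Hypothetical helper (not in the PR): centralizes the request body that each
# of the four new tests currently constructs inline.
def build_anthropic_body(prompt: str, max_tokens: int = 100) -> str:
    """Serialize an Anthropic-on-Bedrock messages payload."""
    return json.dumps({
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": max_tokens,
        "messages": [{"role": "user", "content": prompt}],
    })

# Each test body then shrinks to a single call:
body = build_anthropic_body("Explain quantum computing")
```

With this in place, changing max_tokens or the anthropic_version happens in one spot instead of eight.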

Merge Readiness

The code adds valuable test cases, but the duplicated code and basic error handling should be addressed before merging. I recommend refactoring the code to reduce duplication and improve error handling. I am unable to approve this pull request, and other reviewers should review and approve this code before merging.

Comment on lines +144 to +159
def test_claude_v2_invoke(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke ---")
    try:
        response = bedrock_runtime_client.invoke_model(
            modelId="anthropic.claude-v2",
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": [{"role": "user", "content": "Explain quantum computing"}]
            }),
            contentType="application/json"
        )
        result = json.loads(response["body"].read())
        print(json.dumps(result, indent=2))
    except Exception as e:
        print("❌ Error:", e)

medium

This test function and the following ones share a lot of common structure. Consider refactoring the common parts into a reusable helper function to reduce duplication and improve maintainability. For example, the model invocation logic, JSON parsing, and error handling could be generalized.

def invoke_model_and_print(bedrock_runtime_client, model_id, messages):
    try:
        response = bedrock_runtime_client.invoke_model(
            modelId=model_id,
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": messages
            }),
            contentType="application/json"
        )
        result = json.loads(response["body"].read())
        print(json.dumps(result, indent=2))
    except Exception as e:
        print("❌ Error:", e)

def test_claude_v2_invoke(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke ---")
    invoke_model_and_print(bedrock_runtime_client, "anthropic.claude-v2",
                           [{"role": "user", "content": "Explain quantum computing"}])

Comment on lines +158 to +159
    except Exception as e:
        print("❌ Error:", e)

medium

The error message here is very generic. Consider adding more context to the error message, such as the model ID or the specific operation that failed. This will make it easier to diagnose issues when they occur.

    except Exception as e:
        print(f"❌ Error invoking anthropic.claude-v2: {e}")

Comment on lines +161 to +181
def test_claude_v2_stream(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke-with-response-stream ---")
    try:
        response = bedrock_runtime_client.invoke_model_with_response_stream(
            modelId="anthropic.claude-v2",
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": [{"role": "user", "content": "Tell me about LLMs"}]
            }),
            contentType="application/json"
        )
        output = ""
        for part in response["body"]:
            chunk = json.loads(part["chunk"]["bytes"].decode())
            delta = chunk.get("delta", {}).get("text", "")
            output += delta
            print(delta, end="", flush=True)
        print("\nStreamed Output Complete.")
    except Exception as e:
        print("❌ Error:", e)

medium

Similar to the test_claude_v2_invoke function, this test function shares a lot of common code with other streaming test functions. Consider creating a reusable helper function to reduce duplication.

def invoke_model_stream_and_print(bedrock_runtime_client, model_id, messages):
    try:
        response = bedrock_runtime_client.invoke_model_with_response_stream(
            modelId=model_id,
            body=json.dumps({
                "anthropic_version": "bedrock-2023-05-31",
                "max_tokens": 100,
                "messages": messages
            }),
            contentType="application/json"
        )
        output = ""
        for part in response["body"]:
            chunk = json.loads(part["chunk"]["bytes"].decode())
            delta = chunk.get("delta", {}).get("text", "")
            output += delta
            print(delta, end="", flush=True)
        print("\nStreamed Output Complete.")
    except Exception as e:
        print("❌ Error:", e)

def test_claude_v2_stream(bedrock_runtime_client):
    print("\n--- Test: anthropic.claude-v2 / invoke-with-response-stream ---")
    invoke_model_stream_and_print(bedrock_runtime_client, "anthropic.claude-v2",
                                  [{"role": "user", "content": "Tell me about LLMs"}])

Comment on lines +274 to +289
# 5) Test anthropic.claude-v2 / invoke
print("\n--- Test: anthropic.claude-v2 / invoke ---")
try:
    response = bedrock_runtime_client.invoke_model(
        modelId="anthropic.claude-v2",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "Explain quantum computing"}]
        }),
        contentType="application/json"
    )
    result = json.loads(response["body"].read())
    print(json.dumps(result, indent=2))
except Exception as e:
    print("Error in claude-v2 invoke:", e)

medium

This block of code is duplicated from the test_claude_v2_invoke function. Refactor this into a reusable function to avoid duplication.

Comment on lines +291 to +309
# 6) Test anthropic.claude-v2 / invoke-with-response-stream
print("\n--- Test: anthropic.claude-v2 / invoke-with-response-stream ---")
try:
    response = bedrock_runtime_client.invoke_model_with_response_stream(
        modelId="anthropic.claude-v2",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "Tell me about LLMs"}]
        }),
        contentType="application/json"
    )
    for part in response["body"]:
        chunk = json.loads(part["chunk"]["bytes"].decode())
        delta = chunk.get("delta", {}).get("text", "")
        print(delta, end="", flush=True)
    print("\nStreamed Output Complete.")
except Exception as e:
    print("Error in claude-v2 stream:", e)

medium

This block of code is duplicated from the test_claude_v2_stream function. Refactor this into a reusable function to avoid duplication.

Comment on lines +311 to +326
# 7) Test anthropic.claude-3-haiku-20240307-v1:0 / invoke
print("\n--- Test: anthropic.claude-3-haiku-20240307-v1:0 / invoke ---")
try:
    response = bedrock_runtime_client.invoke_model(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "What is generative AI?"}]
        }),
        contentType="application/json"
    )
    result = json.loads(response["body"].read())
    print(json.dumps(result, indent=2))
except Exception as e:
    print("Error in haiku invoke:", e)

medium

This block of code is duplicated from the test_claude_v2_invoke function. Refactor this into a reusable function to avoid duplication.

Comment on lines +328 to +346
# 8) Test anthropic.claude-3-haiku-20240307-v1:0 / invoke-with-response-stream
print("\n--- Test: anthropic.claude-3-haiku-20240307-v1:0 / invoke-with-response-stream ---")
try:
    response = bedrock_runtime_client.invoke_model_with_response_stream(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",
        body=json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": "What are AI guardrails?"}]
        }),
        contentType="application/json"
    )
    for part in response["body"]:
        chunk = json.loads(part["chunk"]["bytes"].decode())
        delta = chunk.get("delta", {}).get("text", "")
        print(delta, end="", flush=True)
    print("\nStreamed Output Complete.")
except Exception as e:
    print("Error in haiku stream:", e)

medium

This block of code is duplicated from the test_claude_v2_stream function. Refactor this into a reusable function to avoid duplication.
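The four main()-level blocks flagged in this review differ only in model ID, prompt, and invocation mode, so they could in principle be driven from one table. A hedged sketch, not part of the PR (TEST_CASES and run_all are hypothetical names; client is assumed to be the same boto3 bedrock-runtime client the examples construct elsewhere):

```python
import json

# Model IDs and prompts taken from the diff above; the boolean selects the
# streaming invocation pattern. TEST_CASES/run_all are hypothetical names.
TEST_CASES = [
    ("anthropic.claude-v2", "Explain quantum computing", False),
    ("anthropic.claude-v2", "Tell me about LLMs", True),
    ("anthropic.claude-3-haiku-20240307-v1:0", "What is generative AI?", False),
    ("anthropic.claude-3-haiku-20240307-v1:0", "What are AI guardrails?", True),
]

def run_all(client):
    for model_id, prompt, stream in TEST_CASES:
        mode = "invoke-with-response-stream" if stream else "invoke"
        print(f"\n--- Test: {model_id} / {mode} ---")
        body = json.dumps({
            "anthropic_version": "bedrock-2023-05-31",
            "max_tokens": 100,
            "messages": [{"role": "user", "content": prompt}],
        })
        try:
            if stream:
                response = client.invoke_model_with_response_stream(
                    modelId=model_id, body=body, contentType="application/json")
                # Concatenate text deltas as they arrive, as in the diff above.
                for part in response["body"]:
                    chunk = json.loads(part["chunk"]["bytes"].decode())
                    print(chunk.get("delta", {}).get("text", ""), end="", flush=True)
                print("\nStreamed Output Complete.")
            else:
                response = client.invoke_model(
                    modelId=model_id, body=body, contentType="application/json")
                print(json.dumps(json.loads(response["body"].read()), indent=2))
        except Exception as e:
            # Error message now carries the model ID and mode, addressing the
            # generic-error finding as well.
            print(f"Error in {model_id} / {mode}: {e}")
```

Adding a new model then means appending one tuple rather than copying a 16-line block.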

@dhruvj07 merged commit e33436f into main on Apr 10, 2025
6 of 7 checks passed