Create a ShuttleAIModel for chatcompletions. #68
The test file will require parameterization and should include all free APIs.
An example of parameterization: to run a pytest test multiple times while changing one variable on each run, the best approach is parameterization. Pytest lets you run a test function with different sets of arguments using the @pytest.mark.parametrize decorator, so you can easily vary one or more variables across test runs. Here is a step-by-step example:
import pytest

def function_to_test(variable):
    # Your function logic here
    return variable * 2  # Example logic

@pytest.mark.parametrize("variable", [1, 2, 3, 4, 5])
def test_function_to_test(variable):
    result = function_to_test(variable)
    assert result == variable * 2  # Example assertion
Handling multiple variables:

@pytest.mark.parametrize("variable1, variable2", [(1, 2), (3, 4), (5, 6)])
def test_function_with_two_variables(variable1, variable2):
    result = function_to_test(variable1 + variable2)  # Example call; replace with the real logic under test
    assert result == (variable1 + variable2) * 2  # Replace with the actual expected value

This will run the test function once for each pair of (variable1, variable2) values.
We will parameterize in a later version.
This will be for the
standard/llms/concrete/ShuttleAIModel
The API KEY name in the test file should be:
SHUTTLEAI_API_KEY
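For example, the test file could read the key from the environment and skip when it is not set. This is a minimal sketch; the fixture name and skip behavior are assumptions, not established project conventions:

import os
import pytest

@pytest.fixture(scope="module")
def shuttleai_api_key():
    # Assumed convention: read the key from the environment and skip the tests if it is absent
    key = os.getenv("SHUTTLEAI_API_KEY")
    if not key:
        pytest.skip("SHUTTLEAI_API_KEY environment variable not set")
    return key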
Here is the documentation:
https://docs.shuttleai.app/getting-started/introduction
Our focus will only be on the chat completions endpoint:
https://docs.shuttleai.app/api-reference/endpoint/chat-completion
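For reference, a raw call to the chat completions endpoint might look like the sketch below. This is only an illustration: the base URL, payload fields, and response shape are assumptions based on the OpenAI-style chat completions format and should be verified against the documentation above.

import os
import requests

def shuttle_chat_completion(model: str, prompt: str) -> str:
    # Assumed OpenAI-style chat completions request; confirm the URL and fields in the ShuttleAI docs
    response = requests.post(
        "https://api.shuttleai.app/v1/chat/completions",  # assumed base URL
        headers={"Authorization": f"Bearer {os.environ['SHUTTLEAI_API_KEY']}"},
        json={
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]  # assumed response shape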
The allowed models should include the following (a parameterized test sketch follows this list):
shuttle-2-turbo
shuttle-turbo
gpt-4o-2024-05-13
gpt-4-turbo-2024-04-09
gpt-4-0125-preview
gpt-4-1106-preview
gpt-4-1106-vision-preview
gpt-4-0613
gpt-4-bing
gpt-4-turbo-bing
gpt-4-32k-0613
gpt-3.5-turbo-0125
gpt-3.5-turbo-1106
claude-3-opus-20240229
claude-3-sonnet-20240229
claude-3-haiku-20240307
claude-2.1
claude-2.0
claude-instant-1.2
claude-instant-1.1
claude-instant-1.0
meta-llama-3-70b-instruct
meta-llama-3-8b-instruct
llama-3-sonar-large-32k-online
llama-3-sonar-small-32k-online
llama-3-sonar-large-32k-chat
llama-3-sonar-small-32k-chat
blackbox
blackbox-code
wizardlm-2-8x22b
wizardlm-2-70b
dolphin-2.6-mixtral-8x7b
codestral-latest
mistral-large
mistral-next
mistral-medium
mistral-small
mistral-tiny
mixtral-8x7b-instruct-v0.1
mixtral-8x22b-instruct-v0.1
mistral-7b-instruct-v0.2
mistral-7b-instruct-v0.1
nous-hermes-2-mixtral-8x7b
gemini-1.5-pro-latest
gemini-1.0-pro-latest
gemini-1.0-pro-vision
lzlv-70b
figgs-rp
cinematika-7b
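Tying the parameterization example above to this list, the eventual test file could look roughly like the sketch below. The import path, constructor arguments, and predict method are assumptions about the concrete class and will need to match the actual implementation in standard/llms/concrete/ShuttleAIModel:

import os
import pytest

from standard.llms.concrete.ShuttleAIModel import ShuttleAIModel  # assumed import path based on the location above

ALLOWED_MODELS = [
    "shuttle-2-turbo",
    "gpt-4o-2024-05-13",
    "claude-3-haiku-20240307",
    # ... plus the rest of the allowed models listed above
]

@pytest.mark.parametrize("model_name", ALLOWED_MODELS)
def test_chat_completion(model_name):
    api_key = os.getenv("SHUTTLEAI_API_KEY")
    if not api_key:
        pytest.skip("SHUTTLEAI_API_KEY environment variable not set")
    # Assumed constructor and method names; adjust once the concrete class is implemented
    llm = ShuttleAIModel(api_key=api_key, name=model_name)
    result = llm.predict("Hello")
    assert isinstance(result, str)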