
Better selection of main input/output #938

Merged
merged 4 commits into main
Feb 28, 2024

Conversation

joshreini1
Contributor

@joshreini1 joshreini1 commented Feb 26, 2024

This change better extracts the content from main_input and main_output, both for display in the TruLens UI and for use by helper methods such as .on_input() and .on_output(). In particular, it looks for content following the OpenAI pattern, e.g.

Inputs such as:

{
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7
}

And outputs such as:

{
    "id": "chatcmpl-abc123",
    "object": "chat.completion",
    "created": 1677858242,
    "model": "gpt-3.5-turbo-0613",
    "usage": {
        "prompt_tokens": 13,
        "completion_tokens": 7,
        "total_tokens": 20
    },
    "choices": [
        {
            "message": {
                "role": "assistant",
                "content": "\n\nThis is a test!"
            },
            "logprobs": null,
            "finish_reason": "stop",
            "index": 0
        }
    ]
}

Notice that in both cases, the text we actually care about is the value of the content key. This pattern is widely used, adopted by frameworks such as Pinecone's Canopy and middleware such as LiteLLM.
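The extraction described above can be sketched as a recursive search for the content key in a nested request or response structure. This is a hypothetical illustration of the pattern, not the actual TruLens implementation (the helper name find_content is invented for this sketch):

```python
def find_content(obj):
    """Recursively collect string values stored under a "content" key
    anywhere in a nested dict/list structure (hypothetical sketch)."""
    found = []
    if isinstance(obj, dict):
        for key, value in obj.items():
            if key == "content" and isinstance(value, str):
                found.append(value)
            else:
                found.extend(find_content(value))
    elif isinstance(obj, list):
        for item in obj:
            found.extend(find_content(item))
    return found

# OpenAI-style request: the user message text is nested under "content".
request = {
    "model": "gpt-3.5-turbo",
    "messages": [{"role": "user", "content": "Say this is a test!"}],
    "temperature": 0.7,
}
print(find_content(request))  # ['Say this is a test!']
```

The same helper works on the response shape, where the assistant text sits under choices → message → content.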

See an example of this change below:

Example 1: Pinecone Canopy

Before:
This warning appears in the notebook:

Unsure what the main input string is for the call to chat with args [[UserMessage(role=<Role.USER: 'user'>, content='How can you get started with Pinecone and TruLens?')]].
Unsure what the main output string is for the call to chat with return type <class 'canopy.models.api_models.ChatResponse'>.

And, in the UI:
[screenshot]

After:
[screenshot]

Example 2: OpenAI

Before:
[screenshot]

After:
[screenshot]

@joshreini1 joshreini1 marked this pull request as ready for review February 26, 2024 22:14
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Feb 26, 2024
@@ -731,7 +749,7 @@ def main_input(
         callable_name(func), all_args
     )

-    return None
+    return ""

Perhaps this can fall back to just returning str(all_args), maybe with a prefix: "Could not determine main input/output of {str(all_args)}."?
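The reviewer's suggested fallback could look like the following. This is a sketch of the suggestion only, not the merged code, and the function name is hypothetical:

```python
def main_input_fallback(all_args) -> str:
    # Reviewer-suggested fallback: instead of returning an empty string
    # when the main input cannot be determined, return the stringified
    # arguments with an explanatory prefix so the UI shows something useful.
    return f"Could not determine main input/output of {str(all_args)}."

print(main_input_fallback([{"query": "hello"}]))
```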

@dosubot dosubot bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Feb 27, 2024
@dosubot dosubot bot added the lgtm This PR has been approved by a maintainer label Feb 28, 2024
print -> logger
@joshreini1 joshreini1 merged commit c147bbc into main Feb 28, 2024
8 checks passed