
Show me the prompt! #731

Open
prescod opened this issue Apr 29, 2024 · 2 comments

Comments


prescod commented Apr 29, 2024

As discussed on Discord, we need to know what prompts you are serving to the evaluation LLM.

https://hamel.dev/blog/posts/prompt/

I need to see the prompt to help debug failures in the framework, or even to debug my own bugs.

Once I passed `evaluation_steps = steps.split()` when I meant `evaluation_steps = steps.split("\n")`.

The former turns every word into a step, and the latter turns every line into a step. My error was obvious as soon as I looked at the LLM prompt.
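
To make the difference concrete (the contents of `steps` here are just made up for illustration):

```python
# Illustrative only: `steps` stands in for a multi-line string of evaluation steps.
steps = "Check factual accuracy.\nCheck tone.\nCheck formatting."

# What I accidentally wrote: splits on any whitespace, so every word becomes a "step".
evaluation_steps = steps.split()
# ['Check', 'factual', 'accuracy.', 'Check', 'tone.', 'Check', 'formatting.']

# What I meant: splits on newlines, so every line becomes a step.
evaluation_steps = steps.split("\n")
# ['Check factual accuracy.', 'Check tone.', 'Check formatting.']
```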

The "true meaning" of various built-in metrics is also easier to understand when you read the prompt.

lbux (Contributor) commented Apr 30, 2024

I think it would make sense to be able to return/print the prompt for the respective metric.

Perhaps even having a getter/setter would work (the setter for when we want to make minor changes to the built-in prompt without having to write a whole new metric).
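
Something along these lines, as a rough sketch (the class and attribute names here are hypothetical, not deepeval's actual API):

```python
# Rough sketch only; the class and attribute names are hypothetical, not the library's actual API.
class ExampleMetric:
    _default_prompt_template = (
        "You are an evaluator. Score the output against these steps:\n"
        "{evaluation_steps}\n\nOutput:\n{actual_output}"
    )

    def __init__(self) -> None:
        self._prompt_template = self._default_prompt_template

    @property
    def prompt_template(self) -> str:
        # Getter: inspect or print the exact template the metric will use.
        return self._prompt_template

    @prompt_template.setter
    def prompt_template(self, template: str) -> None:
        # Setter: tweak the built-in prompt without writing a whole new metric.
        self._prompt_template = template
```

Then `print(metric.prompt_template)` would cover the "show me the prompt" half, and plain assignment would cover small edits.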

prescod (Author) commented Apr 30, 2024

@lbux: I like your idea, but just to be clear, what I'm asking for is to see the literal input and output of the LLM at runtime.

I also believe that that's what the unit of caching should be.
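
Roughly what I have in mind for the runtime part, as a sketch (the `generate` callable and the names here are placeholders, nothing from the framework):

```python
import hashlib
import logging
from typing import Callable

logger = logging.getLogger("llm_eval")
_cache: dict[str, str] = {}

def call_llm(prompt: str, generate: Callable[[str], str]) -> str:
    """Log the literal prompt and response, and cache keyed on the prompt text."""
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key in _cache:
        logger.debug("cache hit for prompt:\n%s", prompt)
        return _cache[key]
    logger.debug("LLM input:\n%s", prompt)
    response = generate(prompt)
    logger.debug("LLM output:\n%s", response)
    _cache[key] = response
    return response
```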

I do also like the idea of being able to read and write the prompt, however.

Or subclass.
