Feature Request: Server #99

Closed
JohnPeng47 opened this issue Aug 3, 2023 · 4 comments

JohnPeng47 commented Aug 3, 2023

Use case: multiple non-technical teammates have an interest in viewing the output of promptfoo. Exposing the output via a web server interface would be extremely helpful in this regard.

Additionally, a server would open a path to a lot of other interesting features, such as storing prompt history, easier sharing, eval history, eval regressions (big feature), etc. Although maybe at this point this is starting to sound like a full-featured, paid service (a la https://openpipe.ai/). Big ask, but it would be helpful to know if you are planning on moving in this direction, because the number of internal stakeholders asking for this feature is growing.

JohnPeng47 (Author) commented

Also, the nice thing about running a server right now is that you can expose APIs, which IMO are a lot easier to work with than text files. For example, I currently have a use case that requires a lot of jank under the text-file regime. Most of my LLM usage is for static content generated in sequential fashion: essentially a chain of multiple-choice prompts, where at each stage of the chain the user selects from a set of statically generated LLM outputs. I have generated all permutations of the chain and now want to evaluate random paths through it.

To accomplish this, I'm basically thinking of writing a script to generate n custom Python script providers, each reading the nth column of a CSV file I have (with each row containing a random path of length n). Then I'm thinking of using a blank vars.txt file with a single variable to pass the row number to each of the provider files ... lol
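
Roughly what I have in mind for each generated provider (just a sketch, assuming the Python script provider hook is something like a call_api(prompt, options, context) function returning a dict with an "output" key; the file name, column index, and the row variable are placeholders from my setup):

```python
# provider_step_3.py -- one of the n generated providers (hypothetical name).
# Assumes promptfoo invokes call_api(prompt, options, context) and expects {"output": ...}.
import csv

COLUMN = 3              # which step of the chain this provider serves
CSV_PATH = "paths.csv"  # placeholder: one random path through the chain per row


def call_api(prompt, options, context):
    # The test case only passes a row number, e.g. vars: { row: 17 }
    row_index = int(context["vars"]["row"])
    with open(CSV_PATH, newline="") as f:
        rows = list(csv.reader(f))
    # Return the pre-generated LLM output for this step of the chosen path.
    return {"output": rows[row_index][COLUMN]}
```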

Maybe this is kind of a psycho use case for promptfoo, since it's mostly static content I'm dealing with (btw, if there is a better solution, please suggest one), but I think I'd still prefer to use promptfoo, because I can reuse the flexibility of the custom script providers for a more dynamic setup in the future.

Anyway, the point is I'm having to generate a lot of text files as part of the test script, and I feel like an API would be simpler to plug into my existing code. Would love to hear your thoughts on this, and I'd be down to help make this happen in some limited capacity, although I'm not a TypeScript guy.

typpo (Collaborator) commented Aug 8, 2023

Hi @JohnPeng47,

Thanks for the suggestion. Definitely planning to move in this direction, including a self-hosted server - I also work with a team that would benefit greatly.

Have you tried using promptfoo share (docs)? It generates a shareable URL, for example: https://app.promptfoo.dev/eval/f:3756cd5e-9ae9-4e91-9a57-cad229cd646f. It won't solve all the use cases you listed, but it at least makes it easier to share with non-technical people.

Roughly speaking, what would your ideal API look like?

JohnPeng47 (Author) commented Aug 9, 2023

Yes, the share feature is actually great haha, I love the live server integration.

About the API design ...

Just spitballing, but I'm thinking maybe an API design that separates:
a) defining the run configurations
b) actually running the test suite
c) some kind of pull/push/webhook-based interface for the custom providers?

Separating a) and b) makes sense to me, especially for a web UI that would, presumably, let you define and persist run configurations and see their results. Maybe also introduce a test suite abstraction over single tests, so you can run/view tests in batches.
c) seems like it would be the hardest to do? But IMO, if it could be implemented it would be super clutch. Not sure what test execution would look like with a custom webhook inside the user's codebase ... but if you can design this nicely it would be super sweet, because it would effectively hook right into their CI/CD. I think as LLM sophistication increases, custom providers are a no-brainer (custom model, custom post-processing, custom pre-processing), and CI/CD LLM evals will be absolutely critical.
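
To make a) and b) a bit more concrete, here's a totally hypothetical client-side sketch; none of these endpoints, paths, or field names exist in promptfoo today, they're only meant to illustrate the define-config-then-run split:

```python
# Hypothetical client for an imagined promptfoo server API; nothing here exists today.
import requests

BASE = "http://localhost:15500/api"  # made-up base URL for a self-hosted server

# (a) Define and persist a run configuration, separate from executing it.
config = {
    "prompts": ["Answer the question: {{question}}"],
    "providers": ["openai:gpt-3.5-turbo"],
    "tests": [{"vars": {"question": "What is promptfoo?"}}],
}
config_id = requests.post(f"{BASE}/configs", json=config).json()["id"]

# (b) Kick off an eval run against the stored configuration...
run = requests.post(f"{BASE}/configs/{config_id}/runs").json()

# ...and later fetch its results for teammates to view in the web UI.
results = requests.get(f"{BASE}/runs/{run['id']}/results").json()
print(results)
```
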
Anyway, my two cents; would love to know what you think.

typpo added a commit that referenced this issue Aug 27, 2023
Notable changes:

    Foundations for a shared database, so that a team can collaborate on evals
    Enable self-hosting, so that sharing is possible without relying on the www.promptfoo.dev host
    Users can run evals directly through the hosted web UI without running promptfoo locally (good for non-technical prompt writers)

Lots of tangentially related cleanup along the way, including:

    Making providers consistently handle config, and adding support for config.apiKey across providers
    Improving the formatting and verbosity of errors in the table view

Related to #99
typpo (Collaborator) commented Jun 5, 2024

Closing this out as we implemented a server long ago

typpo closed this as completed Jun 5, 2024