Async Eval Functions #605
Conversation
Client side:

```python
async def the_parent_function():
    async with AsyncClient(app=fake_app, base_url="http://localhost:8000") as client:
        headers = {}
        if span := get_current_span():
            headers.update(span.to_headers())
        return await client.post("/fake-route", headers=headers)
```

Server side:

```python
@fake_app.post("/fake-route")
async def fake_route(request: Request):
    with tracing_context(headers=request.headers):
        fake_function()
    return {"message": "Fake route response"}
```

If people like, we could add some fun middleware, but it's probably not necessary.
lgtm
```python
>>> results = asyncio.run(
...     aevaluate(
...         apredict,
...         data=dataset_name,
...         evaluators=[helpfulness],
...         summary_evaluators=[precision],
...         experiment_prefix="My Helpful Experiment",
...     )
... )  # doctest: +ELLIPSIS
View the evaluation results for experiment:...
```
nit: use await instead of asyncio.run
That breaks doctests, since this is in the Python REPL.
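For reference, the distinction being discussed: `asyncio.run` only works when no event loop is already running, which is exactly the situation in a doctest or plain REPL; inside an already-async context you must `await` instead. A minimal sketch with an invented stand-in coroutine (`aevaluate_stub` is not the real API):

```python
import asyncio

# Stand-in for aevaluate: any awaitable coroutine.
async def aevaluate_stub():
    await asyncio.sleep(0)
    return "experiment results"

# In a synchronous context (script, REPL doctest), asyncio.run drives
# the coroutine to completion on a fresh event loop:
results = asyncio.run(aevaluate_stub())

# Inside an already-running loop (e.g. a notebook cell with top-level
# await), you would write `results = await aevaluate_stub()` instead;
# calling asyncio.run there raises RuntimeError ("cannot be called from
# a running event loop").
```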
```python
        RunEvaluator,
    ]
],
evaluators: Sequence[Union[EVALUATOR_T, AEVALUATOR_T]],
```
why not `RunEvaluator`?
That's contained within `EVALUATOR_T`.
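To make that reply concrete, here is a plausible (hypothetical) shape for the `EVALUATOR_T` alias, in which `RunEvaluator` is one member of the union alongside bare callables; the actual alias in the codebase may differ.

```python
from typing import Callable, Union, get_args

# Minimal stand-in for the RunEvaluator interface.
class RunEvaluator:
    def evaluate_run(self, run, example=None):
        raise NotImplementedError

# Hypothetical alias: an evaluator is either a RunEvaluator instance
# or a plain callable returning a result dict.
EVALUATOR_T = Union[RunEvaluator, Callable[..., dict]]

# RunEvaluator is indeed "contained within" the alias:
run_evaluator_included = RunEvaluator in get_args(EVALUATOR_T)
```

So annotating the parameter as `Sequence[Union[EVALUATOR_T, AEVALUATOR_T]]` still accepts `RunEvaluator` instances without naming the class in the signature.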
```
@@ -101,6 +102,9 @@ async def aevaluate_run(
)

_RUNNABLE_OUTPUT = Union[EvaluationResult, EvaluationResults, dict]

class DynamicRunEvaluator(RunEvaluator):
```
Not related to this PR, but the docstring suggests we need to use the `@run_evaluator` decorator. I don't think that's true anymore?
You could initialize it directly or via the `run_evaluator` decorator. We wrap user functions with this inside the `evaluate()` function to promote them to `RunEvaluator`s.
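Both construction paths described in this reply can be sketched as follows. This is a simplified stand-in, not the library's actual implementation; the class bodies here are invented for illustration.

```python
from typing import Callable

# Minimal stand-in for the RunEvaluator interface.
class RunEvaluator:
    def evaluate_run(self, run, example=None):
        raise NotImplementedError

class DynamicRunEvaluator(RunEvaluator):
    """Promotes a plain function to a RunEvaluator (sketch)."""
    def __init__(self, func: Callable):
        self.func = func

    def evaluate_run(self, run, example=None):
        return self.func(run, example)

def run_evaluator(func):
    # Decorator form: just wraps the function in DynamicRunEvaluator,
    # so both paths below yield equivalent evaluators.
    return DynamicRunEvaluator(func)

# Path 1: initialize directly.
direct = DynamicRunEvaluator(lambda run, example: {"score": 1})

# Path 2: via the decorator.
@run_evaluator
def helpfulness(run, example):
    return {"score": 1}
```

Either way, `evaluate()` can then treat user functions and hand-written `RunEvaluator` subclasses uniformly.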