
Async Eval Functions #605

Merged · 8 commits merged into main · Apr 15, 2024
Conversation

hinthornw (Collaborator):

Client Side:

```
# Imports assumed for a self-contained example; get_current_span is the
# tracing helper referenced in this PR.
from httpx import AsyncClient


async def the_parent_function():
    async with AsyncClient(app=fake_app, base_url="http://localhost:8000") as client:
        headers = {}
        # If we're inside a traced span, propagate its trace headers so the
        # server can attach its runs to the same trace.
        if span := get_current_span():
            headers.update(span.to_headers())
        return await client.post("/fake-route", headers=headers)

```

Server Side:

```
# Imports assumed for a self-contained example; tracing_context is the
# tracing helper referenced in this PR.
from fastapi import Request


@fake_app.post("/fake-route")
async def fake_route(request: Request):
    # Re-enter the caller's trace from the propagated headers, so runs
    # created inside the handler land on the parent trace.
    with tracing_context(headers=request.headers):
        fake_function()
    return {"message": "Fake route response"}

```

If people like this, we could add some fun middleware, but it's probably not necessary.
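For illustration only, a minimal sketch of what such middleware might look like — this is hypothetical and not part of the PR, and it assumes Starlette's BaseHTTPMiddleware plus the tracing_context helper from the server example above:

```
# Hypothetical sketch, not part of this PR: middleware that enters the
# tracing context for every request, so individual handlers don't have to.
from starlette.middleware.base import BaseHTTPMiddleware


class TracingContextMiddleware(BaseHTTPMiddleware):
    async def dispatch(self, request, call_next):
        # Same idea as the handler above: re-enter the caller's trace
        # from the propagated headers before running the endpoint.
        with tracing_context(headers=request.headers):
            return await call_next(request)


# fake_app.add_middleware(TracingContextMiddleware)
```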
langchain-infra (Contributor) left a comment:

lgtm

hinthornw merged commit 662ce5d into main on Apr 15, 2024 (7 checks passed).
hinthornw deleted the wfh/async_evaluators branch on April 15, 2024 at 23:09.
Comment on lines +201 to +210:

```
>>> results = asyncio.run(
...     aevaluate(
...         apredict,
...         data=dataset_name,
...         evaluators=[helpfulness],
...         summary_evaluators=[precision],
...         experiment_prefix="My Helpful Experiment",
...     )
... )  # doctest: +ELLIPSIS
View the evaluation results for experiment:...
```
(Contributor):

nit: use await instead of asyncio.run

hinthornw (Collaborator, Author):

That breaks doctests, since this is run in the Python REPL.
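To make the tradeoff concrete, here is a sketch of the two invocation styles, reusing the names from the doctest above:

```
# At the top level of a script or doctest there is no running event loop,
# so the coroutine must be driven with asyncio.run (as the doctest does):
results = asyncio.run(
    aevaluate(
        apredict,
        data=dataset_name,
        evaluators=[helpfulness],
        summary_evaluators=[precision],
        experiment_prefix="My Helpful Experiment",
    )
)

# Inside an already-async context (e.g. an async function), asyncio.run
# raises "RuntimeError: asyncio.run() cannot be called from a running
# event loop", so there you await the coroutine directly instead:
results = await aevaluate(
    apredict,
    data=dataset_name,
    evaluators=[helpfulness],
    summary_evaluators=[precision],
    experiment_prefix="My Helpful Experiment",
)
```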

```
            RunEvaluator,
        ]
    ],
    evaluators: Sequence[Union[EVALUATOR_T, AEVALUATOR_T]],
```
(Contributor):

why not RunEvaluator?

hinthornw (Collaborator, Author):

That's contained within EVALUATOR_T.
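For readers following along, a rough sketch of the shape such a union alias takes; the SDK's actual definition may differ:

```
# Rough sketch, not the SDK's literal definition: the alias covers
# RunEvaluator instances alongside plain callables, which is why the
# signature doesn't need to list RunEvaluator separately.
EVALUATOR_T = Union[
    RunEvaluator,
    Callable[[Run, Optional[Example]], Union[EvaluationResult, dict]],
]
```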

```
@@ -101,6 +102,9 @@ async def aevaluate_run(
    )


_RUNNABLE_OUTPUT = Union[EvaluationResult, EvaluationResults, dict]


class DynamicRunEvaluator(RunEvaluator):
```
(Contributor):

Not related to this PR, but the docstring suggests we need to use the @run_evaluator decorator; I don't think that's true anymore?

hinthornw (Collaborator, Author):

You can initialize it directly or via the run_evaluator decorator.

We wrap user functions with this inside the evaluate() function to promote them to RunEvaluators.
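A sketch of the two styles, with illustrative import locations (assumed, not verified against the SDK):

```
# Sketch only; assumes run_evaluator and DynamicRunEvaluator are
# importable from langsmith.evaluation (exact paths/signatures may vary).

# A plain user function returning an EvaluationResult-shaped dict:
def helpfulness(run, example):
    return {"key": "helpfulness", "score": 1.0}

# Option 1: promote it to a RunEvaluator explicitly.
helpfulness_evaluator = DynamicRunEvaluator(helpfulness)

# Option 2: use the decorator.
@run_evaluator
def precision(run, example):
    return {"key": "precision", "score": 0.5}

# Either way, evaluate()/aevaluate() also accept the bare function and
# wrap it in a DynamicRunEvaluator internally.
```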
