
.Net: ADR: OTel LLM requests #5963

Merged: 10 commits merged into microsoft:main on May 2, 2024

Conversation

TaoChenOSU (Contributor)

Motivation and Context

Observing LLM applications has been a huge ask from customers and the community. This work aims to ensure that SK provides the best developer experience while complying with the industry standards for observability in generative-AI-based applications.

Description

This ADR outlines the options we can use to trace LLM requests from applications built with SK.

Contribution Checklist

@TaoChenOSU TaoChenOSU added PR: in progress Under development and/or addressing feedback .NET Issue or Pull requests regarding .NET code kernel Issues or pull requests impacting the core kernel documentation labels Apr 22, 2024
@TaoChenOSU TaoChenOSU self-assigned this Apr 22, 2024
@github-actions github-actions bot changed the title ADR: OTel LLM requests .Net: ADR: OTel LLM requests Apr 22, 2024
@markwallace-microsoft markwallace-microsoft removed .NET Issue or Pull requests regarding .NET code kernel Issues or pull requests impacting the core kernel labels Apr 22, 2024
@TaoChenOSU TaoChenOSU changed the title .Net: ADR: OTel LLM requests [WIP] .Net: ADR: OTel LLM requests Apr 22, 2024
TaoChenOSU (Contributor, Author) commented Apr 26, 2024

The main open question is whether all AI connectors must be uniform, i.e., produce identical telemetry with a consistent data structure regardless of the underlying libraries or implementations. A drawback of enforcing uniformity is cost: the underlying libraries may already emit telemetry for LLM requests, so users could pay to store redundant data. The advantage is convenience: users would not have to reconfigure telemetry each time they swap connectors, and the output would stay consistent.
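The "consistent data structure" idea can be sketched as a single schema that every connector maps its own response shape into. This is a minimal Python sketch, not SK's actual API: `LlmTelemetry`, `from_openai`, and the sample response dict are all hypothetical, and the attribute names only loosely follow the OpenTelemetry `gen_ai.*` semantic conventions.

```python
# Hypothetical sketch: each connector adapts its library-specific response
# into one shared telemetry schema, so the emitted attributes are identical
# no matter which connector produced them. Names are illustrative only.
from dataclasses import dataclass


@dataclass
class LlmTelemetry:
    """One schema every connector maps its response into."""
    system: str             # e.g. "openai", "azure_openai"
    request_model: str
    prompt_tokens: int
    completion_tokens: int

    def as_attributes(self) -> dict:
        # Attribute names loosely follow the gen_ai semantic conventions.
        return {
            "gen_ai.system": self.system,
            "gen_ai.request.model": self.request_model,
            "gen_ai.usage.prompt_tokens": self.prompt_tokens,
            "gen_ai.usage.completion_tokens": self.completion_tokens,
        }


def from_openai(resp: dict) -> LlmTelemetry:
    # Each connector owns a small adapter; only the adapter is
    # connector-specific, the emitted shape is shared.
    return LlmTelemetry(
        system="openai",
        request_model=resp["model"],
        prompt_tokens=resp["usage"]["prompt_tokens"],
        completion_tokens=resp["usage"]["completion_tokens"],
    )
```

The storage-cost concern above follows directly from this design: if the underlying client library also emits a span for the same request, both copies of these attributes get exported.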

TaoChenOSU (Contributor, Author)

An alternative is to establish uniformity at the connector level: by default, all connectors generate telemetry for LLM requests, but developers can disable it if they choose. This balances consistency across connectors against customization for individual developer preferences.
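The opt-out behavior described above can be sketched as a switch that is on by default. This is a hypothetical Python sketch, not Semantic Kernel's real API: `ConnectorTelemetry`, its `enabled` flag, and the in-memory event list (standing in for an OTel exporter) are all illustrative assumptions.

```python
# Hypothetical sketch of connector-level opt-out: telemetry for LLM
# requests is emitted by default, but a developer may disable it.
class ConnectorTelemetry:
    def __init__(self, enabled: bool = True):
        self.enabled = enabled          # on by default, for consistency
        self.events: list[dict] = []    # stand-in for a real OTel exporter

    def record(self, attributes: dict) -> None:
        if not self.enabled:            # developer opted out: emit nothing
            return
        self.events.append(attributes)


# Default connector records the request; the opted-out one stays silent.
default_telemetry = ConnectorTelemetry()
default_telemetry.record({"gen_ai.request.model": "model-x"})

muted_telemetry = ConnectorTelemetry(enabled=False)
muted_telemetry.record({"gen_ai.request.model": "model-x"})
```

Defaulting the flag to on preserves the consistency benefit discussed earlier, while the opt-out addresses the redundant-storage concern for users whose libraries already emit equivalent telemetry.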

@TaoChenOSU TaoChenOSU force-pushed the taochen/adr-otel-llm-requests branch from 318407c to 0de0ca0 Compare April 30, 2024 05:23
@TaoChenOSU TaoChenOSU changed the title [WIP] .Net: ADR: OTel LLM requests .Net: ADR: OTel LLM requests May 1, 2024
stephentoub (Member) left a comment


High level looks good. Will provide more detailed feedback on implementation.

@TaoChenOSU TaoChenOSU added this pull request to the merge queue May 2, 2024
Merged via the queue into microsoft:main with commit dd95583 May 2, 2024
12 checks passed
@TaoChenOSU TaoChenOSU deleted the taochen/adr-otel-llm-requests branch May 2, 2024 17:46
Labels: documentation; PR: in progress (Under development and/or addressing feedback)

5 participants