.Net: ADR: OTel LLM requests #5963
Conversation
The primary open question is whether all AI connectors must produce identical telemetry with a consistent data structure, regardless of the underlying libraries or implementations. The drawback of enforcing this uniformity is cost: users could pay to store redundant data, since the underlying libraries may already emit their own telemetry for LLM requests. The advantage is convenience: users would not need to reconfigure telemetry each time they swap connectors, and the data would stay consistent.
An alternative is to establish uniformity at the connector level: by default, every connector generates telemetry for LLM requests, but developers can disable it if they choose. This balances consistency across connectors with the flexibility to accommodate individual developer preferences.
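The connector-level option above could look something like the following from an application's point of view. This is a minimal sketch, not the final SK surface: the source name is an assumption, and the key idea is that connectors emit spans through an `ActivitySource` by default, so an application "enables" the telemetry by subscribing to that source and effectively disables it by not subscribing (or by filtering the source out of its `TracerProvider`).

```csharp
using OpenTelemetry;
using OpenTelemetry.Trace;

// Telemetry configuration sketch (assumed source name, for illustration only).
// Each connector would expose its own ActivitySource; an application opts in
// by adding the source(s) it cares about to its TracerProvider.
using var tracerProvider = Sdk.CreateTracerProviderBuilder()
    // Hypothetical wildcard; OpenTelemetry .NET supports wildcard source names.
    .AddSource("Microsoft.SemanticKernel.Connectors.*")
    .AddConsoleExporter()
    .Build();

// Connectors that emit LLM-request spans through a matching ActivitySource
// are now exported; leaving AddSource out suppresses them entirely.
```

Because `ActivitySource` listeners are opt-in in .NET, a connector emitting spans unconditionally adds near-zero overhead when no listener is attached, which keeps the "on by default" design cheap for users who never configure telemetry.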
High level looks good. Will provide more detailed feedback on implementation.
Co-authored-by: Liudmila Molkova <limolkova@microsoft.com>
Motivation and Context
Observability for LLM applications has been a frequent request from customers and the community. This work aims to ensure that SK provides the best developer experience while complying with industry standards for observability in generative-AI-based applications.
Description
This ADR outlines the options for tracing LLM requests from applications built with SK.
Contribution Checklist