@JonasKunz JonasKunz commented Feb 3, 2025

Closes #499.
Adds a muzzle check for our instrumentations. This plugin verifies that the methods and classes used from the instrumented libraries exist and are compatible across a selected version range.

In the initial commit of this PR I intentionally configured the muzzle check of our openai-client-instrumentation to cover the currently unsupported, newer client versions. This fails as expected: CI-run.

I think it is best practice to set a fixed upper version bound when using muzzle-check and have that bound upgraded by renovate on a separate branch. Using muzzle without an upper bound would cause our build on main to suddenly fail whenever a new, not yet supported version of an instrumented library is released. With my approach, it instead fails only on the renovate branch.
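For illustration, a minimal sketch of such a muzzle pass with a fixed upper bound (assuming `openaiClientLib` is the version-catalog entry for the client library; the `group`/`module`/`versions` properties appear in the snippet quoted later in this review, while the surrounding `muzzle { pass { } }` wrapper is an assumption about the plugin DSL):

```kotlin
muzzle {
  pass {
    group.set(openaiClientLib.group)
    module.set(openaiClientLib.name)
    // Open lower bound, fixed upper bound: renovate bumps
    // openaiClientLib.version on its own branch, so a newly released,
    // not-yet-supported client version fails the muzzle check there
    // instead of breaking the build on main.
    versions.set("(,${openaiClientLib.version}]")
  }
}
```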

Had some fun doing this, because unfortunately muzzle-check is surprisingly difficult to set up.
There is some logic in the plugin that checks the executed command and simply does nothing if it doesn't match.

E.g. running `./gradlew clean :instrumentation:muzzle` works, while `./gradlew clean muzzle` doesn't (even though it reports success).

@JonasKunz JonasKunz marked this pull request as ready for review February 3, 2025 15:33
@JonasKunz JonasKunz requested a review from a team February 3, 2025 15:33

@codefromthecrypt codefromthecrypt left a comment


Thanks for the failing test, as I was curious whether 0.20 worked or not.

Copying @anuraaga for a look, as I think he was in the middle of this discussion.

```diff
 group.set(openaiClientLib.group)
 module.set(openaiClientLib.name)
-versions.set("(,0.20.0]")
+versions.set("(,${openaiClientLib.version}]")
```
ahah, that's where this is set.

@JonasKunz (author) replied:

Btw, the failure of 0.20.0 is already detected via renovate opening a bump PR:
#513

I see muzzle more as insurance that we don't break the code for older versions. E.g. if we now rewrote the instrumentation for 0.20.0, all tests would be green, but older OpenAI client versions would stop working. Muzzle prevents this, at least for what is detectable using static code analysis.
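To make that concrete, here is a purely hypothetical sketch; `ChatClient` and `streamCompletion()` are invented placeholders, not the real OpenAI client API:

```kotlin
// Hypothetical: assume streamCompletion() was only added to the client
// in 0.20.0.
interface ChatClient {
    fun streamCompletion(): AutoCloseable
}

// If the instrumentation were rewritten against 0.20.0 to call the new
// method unconditionally, unit tests (which run against 0.20.0) would stay
// green, but this reference would not resolve on older client versions.
// Muzzle's static check verifies the reference against every version in the
// configured range, so the break surfaces at build time rather than as a
// NoSuchMethodError in users' applications.
fun onChatRequest(client: ChatClient) {
    client.streamCompletion().close()
}
```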


@anuraaga anuraaga left a comment


Since almost all failures on newer versions can be detected by muzzle, this seems reasonable to me.


@SylvainJuge SylvainJuge left a comment


LGTM. Do you think we could leverage this to implement a weekly job that tests against the latest versions (thus without an upper bound), to ensure that we are properly notified when a new unsupported version of a supported library is introduced?

@JonasKunz (author) replied:

> do you think we could leverage this to implement a weekly job that tests against the latest versions (thus without an upper bound), to ensure that we are properly notified when a new unsupported version of a supported library is introduced

No need to do this: when a new version is released, renovate will bump the version in a separate PR, and on that PR muzzle will run with the corresponding latest version.

@JonasKunz JonasKunz merged commit 21912b6 into elastic:main Feb 4, 2025
18 checks passed
@JonasKunz JonasKunz deleted the muzzle-check branch February 4, 2025 09:38