Refactor: encapsulate annotators used in safe tests #560
Conversation
MLCommons CLA bot: All contributors have signed the MLCommons CLA ✍️ ✅
@dhosterman @bkorycki @wpietri for feedback please. I'm aware some of the tests are failing. :)
Could you say a bit more about how this code fits into your overall cleanup plan? E.g., is this step 1, or all the steps?
This looks great so far. It definitely makes sense to extract annotator configuration out of the test file. I don't have any high-level feedback, but let me know when it's ready for a full review! |
This is step 2 of potentially 6. I think this will enable faster iterations and integration of different models and evaluation functions, and I'd be OK leaving it like this if that's all we need. But if we do have additional needs, here are the steps.

Step 1 was making the private annotators' public interface more consistent across annotators, to make it easier and less risky to add new ones to modelgauge as they were being developed and iterated on in the private repo. This was done loosely, i.e. there's no actual API contract with ABCs or interfaces the annotators need to implement; just enough to make it easier for engineering to integrate into modelgauge quickly and safely enough, and easy enough for non-engineers to adhere to without requiring more advanced Python features like ABCs or refactoring their existing annotators.

Step 2 (this) is to hide the internals of annotators and expose a simple interface to client code like the safe tests, so that engineering can program to that interface. This keeps our client code cleaner, more readable, and testable, and easier to reason about. More interestingly, it allows people to create their own arbitrary groups of annotators and ensemble evaluation functions without engineering having to do much work to add them. They just create an AnnotatorSet (see the sketch below).

Possible future steps if we find they are needed: […]
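To make the step-2 idea concrete, here is a minimal sketch of what such an interface could look like. All names and signatures here are hypothetical, not the actual modelgauge API, and an ABC is just one way to express the contract:

```python
# A minimal sketch of the step-2 encapsulation, with hypothetical names;
# the actual modelgauge classes and signatures may differ.
from abc import ABC, abstractmethod
from typing import Any, Dict


class AnnotatorSet(ABC):
    """Bundles a group of annotators with the ensemble evaluation function
    that combines their judgments, hiding both from client code."""

    @abstractmethod
    def configure(self) -> None:
        """Instantiate and configure the member annotators."""

    @abstractmethod
    def evaluate(self, item: Any) -> Dict[str, float]:
        """Combine the member annotators' judgments into one result."""


class MyEnsembleAnnotatorSet(AnnotatorSet):
    """A contributor's custom grouping: pick annotators and an evaluation
    function; the safe test never sees these internals."""

    def configure(self):
        self.annotators = {"annotator_a": ..., "annotator_b": ...}

    def evaluate(self, item):
        # e.g. a majority vote across the member annotators' verdicts
        return {"is_safe": 1.0}
```

Under this shape, the test only holds an `AnnotatorSet` and calls `evaluate`, so swapping ensembles requires no changes to the test itself.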
Another thing we may consider: instead of

```python
MISTRAL_8x22B_CONFIG.llm_config.api_key = self.configuration["together_api_key"]
LLAMA_3_70B_CONFIG.llm_config.api_key = self.configuration["together_api_key"]
```

we could make the Mistral and Llama3 clients implement a Together mixin, and likewise have a HuggingFace mixin for services hitting models on HF, a VLLM mixin for models we host, etc.
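A rough sketch of that mixin idea; the class names, config shape, and model ids below are illustrative assumptions, not the actual modelgauge code:

```python
# Hypothetical sketch of the hosting-service mixin idea.
from dataclasses import dataclass


@dataclass
class LlmConfig:
    model_name: str
    api_key: str = ""


class TogetherMixin:
    """Anything hosted on Together gets its API key wired up one way."""

    def configure_secrets(self, configuration: dict):
        self.llm_config.api_key = configuration["together_api_key"]


class Mistral8x22BClient(TogetherMixin):
    def __init__(self):
        self.llm_config = LlmConfig("mistralai/Mixtral-8x22B-Instruct-v0.1")


class Llama3_70BClient(TogetherMixin):
    def __init__(self):
        self.llm_config = LlmConfig("meta-llama/Llama-3-70b-chat-hf")
```

A HuggingFace or VLLM mixin would look the same with a different key name, and the annotator set could call `configure_secrets(...)` on every member instead of reaching into each client's config.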
Force-pushed from aeca2c9 to e2f1728.
wpietri left a comment:
Looks like good progress, but I'd like to take it a bit further if we can.
```python
HUGGINGFACE_KEY = InjectSecret(HuggingFaceKey)  # was: os.getenv("HF_TOKEN", "")

# private annotators, if available
try:
    from modelgauge.private_ensemble_annotator_set import EnsembleAnnotatorSet
```
Could all of this be moved to private code? If we have to have private stuff in public files, I'd at least like to keep it to the minimum.
Yes, I'll give it a go.
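One possible shape for that move: the public test file keeps a single optional import and no other private details. The private import below comes from the diff above; the fallback module and class names are assumptions for illustration:

```python
try:
    # the private package configures its own annotators and secrets
    from modelgauge.private_ensemble_annotator_set import EnsembleAnnotatorSet

    ANNOTATOR_SET = EnsembleAnnotatorSet()
except ImportError:
    # public fallback when the private annotators aren't installed
    from modelgauge.default_annotator_set import DefaultAnnotatorSet

    ANNOTATOR_SET = DefaultAnnotatorSet()
```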
Force-pushed from ba94a65 to 075fcbf.
Note: the unit tests still don't all pass. I will fix them. The lifecycle of the injected secrets has made it necessary to instantiate the annotator set inside the test constructor, which requires passing all the necessary secrets to that constructor. Using references to injectable secrets […] This means we're almost back to square one: […]

This solution is more generic and does insulate the test class from the internals of the annotators, but it's pretty gross, and the annotators are no longer completely self-configuring: the client has to interrogate the […]

@bkorycki @wpietri feedback appreciated! Thanks to @bkorycki for the assist!
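For readers following along, a rough illustration of the constraint described above. The base class, import paths, and constructor shape are assumptions for the sketch, not the PR's actual code:

```python
# InjectSecret references are hydrated only when they appear directly as
# test-constructor arguments, so the annotator set must be built inside
# the constructor, after the secrets have become real values.
from modelgauge.base_test import PromptResponseTest  # path may differ
from modelgauge.private_ensemble_annotator_set import EnsembleAnnotatorSet


class SafeTest(PromptResponseTest):
    def __init__(self, uid, together_api_key, vllm_api_key, huggingface_key):
        super().__init__(uid)
        # by this point the secrets are hydrated, so the set can be
        # instantiated here and configure itself with them
        self.annotator_set = EnsembleAnnotatorSet(
            secrets={
                "together_api_key": together_api_key,
                "vllm_api_key": vllm_api_key,
                "huggingface_key": huggingface_key,
            }
        )
```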
I should add that the original design here isn't Barbara's, but something we all inherited. Is it possible we could improve things here by separating concerns? One of my ongoing struggles with the Tests is that they do all of these, and maybe more: […]

For a while, I've found the secrets stuff especially annoying, which is why ModelBench has the ModelGaugeSut, a wrapper that allows us to refer to a ModelGauge SUT without having to fully instantiate it. But over time, I'm finding the operational issues at least as much of a headache. For example, if many SUTs are calling the same Together API, to get maximum speed I need to centrally manage the connections to respect rate limits. Or looking at the HuggingFace stuff, having to manage instances and possibly wait many minutes for them shouldn't be hidden away in a variety of Test subcomponents; that's something better handled at a high level. That will become even more of an issue once we're using our own annotator VMs in earnest.

I'm not sure what the right breakdown is, but I'm hoping we can keep the Test objects themselves to doing the defining and the straightforward calculations. And it would be great if we could make things more composable, too. Although we do eventually want a locked-down set of Test objects to go with a locked-down set of Benchmarks, for now we want to iterate quickly. Does that inspire any thoughts?
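As a sketch of how the ModelGaugeSut-style deferral could extend to annotators: the registry name and `make_instance` call below are assumptions about the API, not confirmed code:

```python
# A ModelGaugeSut-style lazy wrapper applied to annotators (hypothetical).
from modelgauge.annotator_registry import ANNOTATORS  # name is an assumption


class LazyAnnotator:
    """Refer to an annotator by uid without paying for construction,
    secrets, or remote instance spin-up until it is actually needed."""

    def __init__(self, uid: str):
        self.uid = uid
        self._instance = None

    def instance(self, secrets):
        if self._instance is None:
            # construction (and any minutes-long HF endpoint wait) happens
            # here, where a central scheduler could manage it
            self._instance = ANNOTATORS.make_instance(self.uid, secrets=secrets)
        return self._instance
```

That would let a high-level runner decide when to pay connection and spin-up costs, rather than burying them in Test subcomponents.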
…st from the internals of its annotator(s). No functional change.
…t constructor; they are not hydrated if they are inside an object passed in to the test constructor. So we restructured the AnnotatorSet classes and the annotator registration to pass the AnnotatorSet class and references to injectable secrets to the test constructor, having the test constructor instantiate the AnnotatorSet object with the injected secrets.
Force-pushed from 4911c3f to 8ab1461.
@wpietri thank you for your comments. They are helpful in suggesting next steps to look at. The tests are now passing.

Question for the group: should we (1) merge this in its current state, which partially addresses the initial goal to move some of the internal config logic out of the test, then add William's suggested improvements after that? […]

I do agree with William that more separation of concerns is desirable, and I do think these changes are in line with that idea, even if they don't completely achieve the goal they set out to meet due to the intricacies of secrets. And they will make it easier to create arbitrary ensembles and evaluator functions for those ensembles, which I believe is something Shaona was hoping for. So I'm in favor of either option (1) or (3) above. Please let me know what you think @bkorycki @wpietri @bollacker
If you think this is a step in the right direction, I'm all for merging it now.
@wpietri I do, but let's see what Barbara thinks.
bkorycki left a comment:
Thanks for adding the private annotator tests! I agree that this is good to merge.
#557
Extracts the internals of annotator configuration out of the safe test to better separate concerns.