[Core]Integrate the accelerator‑aware runtime into the InferenceService con…#289
Conversation
Summary of Changes (Gemini Code Assist): This PR introduces accelerator-aware runtime selection for InferenceServices, enabling automatic matching of workloads to GPUs.
Code Review
This pull request introduces significant new functionality for accelerator-aware runtime selection, which is a great addition. The code is generally well-structured, but I've identified several critical issues related to error handling and potential nil pointer dereferences that could cause the controller to panic. There are also some areas where code clarity and maintainability could be improved. Please review the detailed comments, especially the ones marked as critical and high severity.
```go
type AcceleratorCompatibilityError struct {
	AcceleratorClassName string
	Component            string
	Reason               string
	DetailedError        error
}
```
The AcceleratorCompatibilityError struct is defined but doesn't implement the error interface because it's missing an Error() method. This prevents it from being used as a standard Go error, which seems to be the intention. To ensure it can be properly handled as an error throughout the codebase, you should add an Error() method.
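A minimal sketch of the missing method, using the field names from the snippet above (the message format and the `Unwrap` method are illustrative, not taken from the PR):

```go
package main

import "fmt"

// AcceleratorCompatibilityError mirrors the struct from the diff.
type AcceleratorCompatibilityError struct {
	AcceleratorClassName string
	Component            string
	Reason               string
	DetailedError        error
}

// Error implements the error interface so the struct can be returned
// as a plain error and matched with errors.As in the controller.
func (e *AcceleratorCompatibilityError) Error() string {
	return fmt.Sprintf("accelerator class %q is incompatible with component %q: %s",
		e.AcceleratorClassName, e.Component, e.Reason)
}

// Unwrap exposes the underlying error for errors.Is/errors.As chains.
func (e *AcceleratorCompatibilityError) Unwrap() error {
	return e.DetailedError
}

func main() {
	var err error = &AcceleratorCompatibilityError{
		AcceleratorClassName: "nvidia-a100",
		Component:            "engine",
		Reason:               "driver version too old",
	}
	fmt.Println(err.Error())
}
```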
```go
// ResourceRequests that should be applied to pods
ResourceRequests map[string]string
```
The ResourceRequests field is defined as map[string]string. For Kubernetes resource management, it's standard and safer to use corev1.ResourceList, which is map[corev1.ResourceName]resource.Quantity. Using map[string]string bypasses the validation and parsing that resource.Quantity provides, making it prone to errors and inconsistencies. Please change the type to corev1.ResourceList and add the necessary imports (k8s.io/api/core/v1 and k8s.io/apimachinery/pkg/api/resource).
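The suggested change would look like this fragment (assuming the standard `corev1` import alias for `k8s.io/api/core/v1`):

```go
// ResourceRequests that should be applied to pods.
// corev1.ResourceList is map[corev1.ResourceName]resource.Quantity,
// so values are parsed and validated rather than carried as raw strings.
ResourceRequests corev1.ResourceList
```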
Force-pushed from 6aaeffa to 73f92f4
What type of PR is this?
/kind feature
/kind design
What this PR does / why we need it:
Integrates accelerator-aware runtime selection into the InferenceService controller to automatically match workloads with appropriate GPU/accelerator hardware.
Adds an accelerator class selector in pkg/acceleratorclassselector:
Selector logic: implements priority-based accelerator class selection for inference components (engine/decoder)
Priority: component override → InferenceService selector → runtime requirements
Fetcher: queries cluster-scoped AcceleratorClass resources from the Kubernetes API
Type definitions: interfaces and data structures for accelerator selection
Error handling: custom error types for not-found and compatibility issues
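The priority chain above can be sketched as a first-non-empty lookup (the type and field names below are illustrative, not the actual ones from pkg/acceleratorclassselector):

```go
package main

import "fmt"

// selectionInput holds the three candidate sources for an accelerator
// class name; names here are hypothetical stand-ins.
type selectionInput struct {
	ComponentOverride string // per-component override (highest priority)
	ServiceSelector   string // InferenceService-level selector
	RuntimeDefault    string // derived from runtime requirements (lowest priority)
}

// selectAcceleratorClass applies the documented priority:
// component override → InferenceService selector → runtime requirements.
// It reports false when no class is specified, in which case selection
// falls back to the policy.
func selectAcceleratorClass(in selectionInput) (string, bool) {
	for _, candidate := range []string{in.ComponentOverride, in.ServiceSelector, in.RuntimeDefault} {
		if candidate != "" {
			return candidate, true
		}
	}
	return "", false
}

func main() {
	name, ok := selectAcceleratorClass(selectionInput{ServiceSelector: "nvidia-a100"})
	fmt.Println(name, ok)
}
```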
Controller Integration:
Added AcceleratorClassSelector to InferenceServiceReconciler
Selects accelerator classes for engine and decoder components during reconciliation
Passes accelerator specs to component builders
Component Enhancements and Spec Merging Logic:
Base/Engine/Decoder: Extended to accept accelerator class specs and names
New utilities to merge runtime and InferenceService specs
Handles accelerator requirements, resource requests, and node selectors
The main merging logic is:
args: customer's args followed by the runtime's
env: customer's env followed by the runtime's
resources: the maximum of the customer's setting, the runtime's, and the AcceleratorClass's (AC)
nodeSelector: union of the customer's selector and the AC's
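The merge rules above can be sketched with plain types (the real code operates on PodSpec fields and resource.Quantity; these function names and the int64 simplification are illustrative):

```go
package main

import "fmt"

// mergeArgs keeps the customer's args first, then appends the runtime's.
func mergeArgs(customer, runtime []string) []string {
	return append(append([]string{}, customer...), runtime...)
}

// maxResources keeps, per resource name, the largest of the customer,
// runtime, and AcceleratorClass requests (quantities simplified to int64).
func maxResources(customer, runtime, ac map[string]int64) map[string]int64 {
	out := map[string]int64{}
	for _, src := range []map[string]int64{customer, runtime, ac} {
		for k, v := range src {
			if v > out[k] {
				out[k] = v
			}
		}
	}
	return out
}

// mergeNodeSelector unions the customer's selector with the AC's;
// here the customer's entries win on key conflicts (an assumption).
func mergeNodeSelector(customer, ac map[string]string) map[string]string {
	out := map[string]string{}
	for k, v := range ac {
		out[k] = v
	}
	for k, v := range customer {
		out[k] = v
	}
	return out
}

func main() {
	fmt.Println(maxResources(
		map[string]int64{"nvidia.com/gpu": 1},
		map[string]int64{"nvidia.com/gpu": 2},
		map[string]int64{"nvidia.com/gpu": 4},
	)) // map[nvidia.com/gpu:4]
}
```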
Removes the default value for AcceleratorSelectionPolicy
The current main AC selection logic:
Implements priority-based accelerator class selection for inference components (engine/decoder)
If no AC is specified, selection is based on the policy.
A follow-up improvement for policy selection is planned.
Does this PR introduce a user-facing change?
None. All new fields are optional; customers who don't set them will see no change in behavior.