Replace AggregateObjective with ScalarizedObjective, drop normalization #26
Merged
Replaces `AggregateObjective` with `ScalarizedObjective`, dropping the flowboost-side outcome-normalization layer entirely. Ax's default transform stack already handles outcome scale (Winsorize → BilogY → StandardizeY at the modelbridge layer, plus BoTorch's `Standardize` on the GP); our layer was redundant, and because it ran post-scalarization it discarded the per-metric information Ax's modelbridge uses.

`AxBackend` defers to Ax's native `ax.core.objective.ScalarizedObjective`, so each inner metric gets its own surrogate and StandardizeY; other backends would compute the weighted sum at the flowboost layer if they lack an equivalent primitive.

Addresses the concrete `AggregateObjective` complaints in #8: mixed minimize/maximize inner objectives, different-scale objectives, the absent normalization story, and the `batch_process` crash reported in the comment. Leaves the broader Pareto-MOO discussion (weighting / thresholds for separate objectives) open.

## Migration
Drop any `normalization_step=` or `attach_post_processing_step(...)` from `Objective` definitions; the backend handles outcome scale. For domain transforms (e.g. log-distributed measurements), use `Objective(static_transform=math.log, ...)`.

## Validation
`ScalarizedObjective.__init__` rejects at construction:

- non-`Objective` inner terms (nested `ScalarizedObjective`, `Constraint`, etc.)
- `Objective.threshold` on an inner term (MOO-only; pointless under scalarization)
- direction mismatches (each inner term's `minimize` must agree with the direction implied by `outer.minimize` and the weight's sign)

A single zero weight warns and skips the direction check for that term, so users can temporarily mute a contribution during tuning.
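A minimal sketch of the construction-time checks described above. The `Objective` dataclass here is a hypothetical stand-in with only the fields the validation needs; field and class names are assumptions for illustration, not the actual flowboost API:

```python
import warnings
from dataclasses import dataclass
from typing import Optional, Sequence

@dataclass
class Objective:
    # Hypothetical stand-in for flowboost's Objective: just the fields
    # the validation sketch below needs.
    name: str
    minimize: bool = True
    threshold: Optional[float] = None

class ScalarizedObjective:
    def __init__(self, objectives: Sequence[Objective],
                 weights: Sequence[float], minimize: bool = True):
        for obj, w in zip(objectives, weights):
            # Only plain Objective inner terms; nesting/constraints are rejected.
            if type(obj) is not Objective:
                raise TypeError(f"inner term {obj!r} is not a plain Objective")
            # threshold is MOO-only; pointless under scalarization.
            if obj.threshold is not None:
                raise ValueError(f"{obj.name}: threshold is MOO-only")
            if w == 0:
                # Zero weight: warn and skip the direction check, so a term
                # can be temporarily muted during tuning.
                warnings.warn(f"{obj.name}: zero weight, direction check skipped")
                continue
            # A positive weight keeps the outer direction; a negative one flips it.
            implied_minimize = minimize if w > 0 else not minimize
            if obj.minimize != implied_minimize:
                raise ValueError(
                    f"{obj.name}: minimize={obj.minimize} disagrees with the "
                    f"direction implied by the outer objective and weight sign")
        self.objectives = list(objectives)
        self.weights = list(weights)
        self.minimize = minimize
```

Note the negative-weight rule: an inner maximize term with weight `-1.0` is valid under a minimizing outer objective, since the sign flip reconciles the directions.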
## Tests

Unit tests cover every validation branch. A new convergence canary runs BO on scale-mismatched inner metrics and fails if per-metric `StandardizeY` or the Ax `ScalarizedObjective` post-create swap regresses. Existing normalization canaries still pass.
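For reference, the weighted-sum fallback mentioned for backends without a native scalarization primitive can be as simple as collapsing per-metric observations before they reach the backend. This is an illustrative sketch (the function and argument names are assumptions, not actual flowboost code):

```python
from typing import Dict, Sequence, Tuple

def scalarize(observation: Dict[str, float],
              terms: Sequence[Tuple[str, float]]) -> float:
    """Collapse per-metric observations into one scalar via a weighted sum.

    `terms` pairs each inner metric name with its weight; a negative weight
    flips that metric's direction relative to the outer objective. The
    trade-off discussed above applies: the backend's surrogate then sees
    only this scalar, never the individual metrics.
    """
    return sum(weight * observation[name] for name, weight in terms)
```

For example, `scalarize({"latency": 12.0, "throughput": 3.0}, [("latency", 1.0), ("throughput", -0.5)])` yields `12.0 - 1.5 = 10.5`.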