
Replace AggregateObjective with ScalarizedObjective, drop normalization#26

Merged
499602D2 merged 1 commit into main from refactor/scalarized-objective on Apr 24, 2026

Conversation

499602D2 (Owner) commented Apr 22, 2026

Replaces AggregateObjective with ScalarizedObjective, dropping the flowboost-side outcome-normalization layer entirely. Ax's default transform stack already handles outcome scale (Winsorize → BilogY → StandardizeY at the modelbridge layer + BoTorch's Standardize on the GP); our layer was redundant, and by running post-scalarization it discarded the per-metric information Ax's modelbridge uses.

AxBackend defers to Ax's native ax.core.objective.ScalarizedObjective so each inner metric gets its own surrogate and StandardizeY; other backends would compute the weighted sum at the flowboost layer if they lack an equivalent primitive.
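For backends without a native scalarization primitive, the flowboost-layer fallback mentioned above could look like this. A minimal sketch: the function name and data shapes are illustrative, not the shipped API.

```python
# Hedged sketch (illustrative names): a backend lacking an equivalent of
# Ax's ScalarizedObjective can collapse per-metric outcomes into a single
# training target at the flowboost layer, at the cost of per-metric
# standardization on the backend side.
def scalarize(outcomes: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum of raw metric values; signed weights encode direction."""
    return sum(weights[name] * value for name, value in outcomes.items())

# lift contributes positively (maximized), drag negatively (penalized):
# 0.7 * 120.0 + (-0.3) * 8.0 = 81.6
scalarize({"lift": 120.0, "drag": 8.0}, {"lift": 0.7, "drag": -0.3})
```

Note the trade-off this PR is about: once scalarized, the backend sees only the combined value, so per-metric surrogates and StandardizeY are no longer possible.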

Addresses the concrete AggregateObjective complaints in #8: mixed minimize/maximize inner objectives, different-scale objectives, the absent normalization story, and the batch_process crash reported in the comment. Leaves the broader Pareto-MOO discussion (weighting / thresholds for separate objectives) open.

Migration

# Before
agg = AggregateObjective(
    name="LiftToDrag", minimize=False,
    objectives=[lift, drag], weights=[0.7, 0.3], threshold=None,
)

# After: signed weights encode direction; negative flips drag's contribution
agg = ScalarizedObjective(
    name="LiftToDrag", minimize=False,
    objectives=[lift, drag], weights=[0.7, -0.3],
)

Drop any normalization_step= or attach_post_processing_step(...) from Objective definitions; the backend handles outcome scale. For domain transforms (e.g. log-distributed measurements), use Objective(static_transform=math.log, ...).
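To illustrate the distinction (a sketch only; `Objective`'s real signature lives in the flowboost codebase, and `apply_static_transform` is a hypothetical helper): a static domain transform reshapes each raw measurement before it reaches the model, whereas outcome standardization is now entirely the backend's job.

```python
import math

# Hedged sketch: a static domain transform is applied per raw measurement
# before the value ever reaches the surrogate; it is NOT a normalization
# step (the backend standardizes outcomes itself).
def apply_static_transform(raw_values, transform=math.log):
    return [transform(v) for v in raw_values]

# log-distributed measurements spanning three orders of magnitude
apply_static_transform([1.0, 10.0, 1000.0])
```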

Validation

ScalarizedObjective.__init__ rejects at construction:

  • Non-finite weights (NaN, ±inf)
  • All-zero weight vectors (no signal to fit)
  • Duplicate inner objective names (would collide in Ax's metric registry)
  • Non-Objective inner terms (nested ScalarizedObjective, Constraint, etc.)
  • Inner Objective.threshold (MOO-only; pointless under scalarization)
  • Direction/weight-sign mismatch (inner minimize must agree with the direction implied by outer.minimize and the weight's sign)

A single zero weight warns and skips the direction check for that term, so users can temporarily mute a contribution during tuning.
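The direction/weight-sign rule and the zero-weight escape hatch can be sketched as follows. This is an illustrative subset of the checks listed above, not the real `ScalarizedObjective.__init__`; all names are assumptions.

```python
import math

def validate_scalarized(weights, names, inner_minimize, outer_minimize):
    """Illustrative subset of the construction-time checks (hypothetical
    helper, not flowboost's actual implementation)."""
    if any(not math.isfinite(w) for w in weights):
        raise ValueError("weights must be finite (no NaN or +/-inf)")
    if all(w == 0 for w in weights):
        raise ValueError("all-zero weight vector: no signal to fit")
    if len(set(names)) != len(names):
        raise ValueError("duplicate inner objective names")
    for w, name, inner_min in zip(weights, names, inner_minimize):
        if w == 0:
            continue  # muted term: skip the direction check
        # inner direction must equal the outer direction, flipped when
        # the weight is negative
        implied_min = outer_minimize if w > 0 else not outer_minimize
        if inner_min != implied_min:
            raise ValueError(f"direction/weight-sign mismatch for {name!r}")
```

For example, with an outer maximize and weights `[0.7, -0.3]`, the positive-weight term must itself be maximized and the negative-weight term minimized; anything else fails at construction rather than silently fitting the wrong sign.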

Tests

Unit tests cover every validation branch. A new convergence canary runs BO on scale-mismatched inner metrics and fails if per-metric StandardizeY or the AxScalarizedObjective post-create swap regresses. Existing normalization canaries still pass.

499602D2 changed the base branch from fix/ci-speed-and-test-coverage to main on April 24, 2026 09:46
499602D2 force-pushed the refactor/scalarized-objective branch 2 times, most recently from 9d540d9 to 400f5ac on April 24, 2026 10:32
499602D2 force-pushed the refactor/scalarized-objective branch from 400f5ac to b72c173 on April 24, 2026 10:38
@499602D2 499602D2 merged commit 1f9b385 into main Apr 24, 2026
6 checks passed