
Refactor RCActor props overload #795

Merged
hellolittlej merged 2 commits into master from ref-rca-prop
Sep 26, 2025

Conversation

@hellolittlej
Collaborator

Context

We don't need to overload the props because the only 3 usages are all in tests; we can just pass null.

The duplicated props make it hard to add new params to the ResourceClusterActor because we need to duplicate them across multiple constructors.
The extra overload was introduced in e57a596.
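
For illustration, a minimal sketch of the shape of this refactor; the names and parameter list here are hypothetical, and the real ResourceClusterActor constructor takes more arguments:

```java
import akka.actor.ActorRef;
import akka.actor.Props;
import java.time.Clock;

// Illustrative only: shows the overload removal, not the actual signatures.
class ResourceClusterActorProps {
    // Before: a second overload existed only so the three test call sites
    // could omit the last argument, so every new constructor parameter had
    // to be threaded through both factories:
    //
    //   static Props props(String clusterId, Clock clock) {
    //       return props(clusterId, clock, /* jobMessageRouter */ null);
    //   }

    // After: a single factory method; the test-only call sites pass null.
    static Props props(String clusterId, Clock clock, ActorRef jobMessageRouter) {
        return Props.create(ResourceClusterActor.class, clusterId, clock, jobMessageRouter);
    }
}
```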

Checklist

  • ./gradlew build compiles code correctly
  • Added new tests where applicable
  • ./gradlew test passes all tests
  • Extended README or added javadocs where applicable

@github-actions

github-actions bot commented Sep 25, 2025

Test Results

152 files ±0  152 suites ±0  8m 59s ⏱️ −10s
661 tests ±0  650 ✅ +1  11 💤 ±0  0 ❌ −1
661 runs −1  650 ✅ ±0  11 💤 ±0  0 ❌ −1

Results for commit ef1ffd1. Comparison against base commit 795e2b5.

♻️ This comment has been updated with latest results.

@hellolittlej hellolittlej merged commit b0ad6e4 into master Sep 26, 2025
6 of 7 checks passed
@hellolittlej hellolittlej deleted the ref-rca-prop branch September 26, 2025 19:04
andresgalindo-stripe pushed a commit to andresgalindo-stripe/mantis that referenced this pull request Oct 30, 2025
Co-authored-by: ggao <ggao@netflix.com>
andresgalindo-stripe added a commit that referenced this pull request Nov 6, 2025
* Add variety of cleanups, fix warnings, improve code/performance (#771)

* More fixes

* Review feedback, add more

* Update nebula.netflixoss use sonatype central portal (#774)

* Use com.netflix.nebula.netflixoss 11.6.0 to move publishing to Sonatype Central Portal from Sonatype Legacy OSSRH

* Github action: checkout v4

* Introduce batching into worker discovery during scaling (#773)

* Fix worker state filtering and scheduling update gaps during scaling. This reduces scaling update storms from N individual updates to 1-3 batched updates.
  - Filter JobSchedulingInfo to only include Started workers, preventing downstream connection failures
  - Add smart refresh batching with pending worker detection to avoid premature flag resets
  - Implement WorkerState.isPendingState() helper for consistent state checking (see the sketch after this item)
  - Add comprehensive tests covering scaling scenarios and flag reset edge cases
  - Include detailed context and analysis documentation of connection mechanisms and scaling optimizations
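
A hypothetical sketch of the helper named above; the actual Mantis WorkerState enum values and spelling may differ:

```java
// Illustrative only: a pending worker has been requested but has not yet
// reached Started, so scheduling-info refreshes can be batched until no
// workers remain in a pending state.
public enum WorkerState {
    ACCEPTED, LAUNCHED, START_INITIATED, STARTED, FAILED, COMPLETED;

    public boolean isPendingState() {
        return this == ACCEPTED || this == LAUNCHED || this == START_INITIATED;
    }

    public boolean isStarted() {
        return this == STARTED;
    }
}
```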

* try stabilize flaky UT

* add analysis context doc

* remove refresh discovery trigger on scaleup request

* Fix Worker Request flow to properly use batching (#775)

* Support default tag config as fallback on artifact loading failure (#778)

* increase max stage concurrency (#779)

* Fix a typo in the Group By docs (#783)

* Fix a typo in the Group By docs

* Fix broken link to heartbeat documentation

* Handle out of sync restarted TE (#784)

* Handle out of sync restarted TE

* use terminate event on heartbeat

* clean up + tests

* Revert "Fix Worker Request flow to properly use batching (#775)" (#785)

This reverts commit 3b0c92f.

* Move common code to utils and cleanup (#789)

Co-authored-by: ggao <ggao@netflix.com>

* Add job id to log and add running worker failure metrics (#790)

Co-authored-by: ggao <ggao@netflix.com>

* add job clusters update metrics (#791)

* Update worker failure metric (#792)

Co-authored-by: ggao <ggao@netflix.com>

* Refactor RCActor props overload (#795)

Co-authored-by: ggao <ggao@netflix.com>

* Add log to check #TE archived was not in disabled state (#793)

Co-authored-by: ggao <ggao@netflix.com>

* Update CODEOWNERS (#796)

* Cleanup autoscaler metric subscriptions on shutdown (#798)

* fix leaked auto scaler instance (#801)

* Fix test race condition (#803)

When disabling a job cluster, the response would sometimes return before
the associated jobs were killed. The Delete action would then fail
because the job was still active. I was able to reliably reproduce this by
adding a 200ms sleep in JobActor.onJobKill.

To fix, we check whether the response returns that error. If so, we
retry (sketched below); otherwise, we perform the standard checks.

[CI Example](https://github.com/Netflix/mantis/pull/797/checks?check_run_id=52633941202)
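
A hypothetical sketch of that retry; the names are illustrative, not the actual Mantis test code:

```java
import java.util.function.Predicate;
import java.util.function.Supplier;

// Retry the delete only while it fails with the "job still active" error
// caused by the disable response returning before the kill completes; the
// caller performs the standard checks on the final response.
class DeleteRetry {
    static <T> T retryWhileJobStillActive(Supplier<T> deleteCall,
                                          Predicate<T> isJobStillActiveError,
                                          int maxAttempts) throws InterruptedException {
        T response = deleteCall.get();
        for (int i = 1; i < maxAttempts && isJobStillActiveError.test(response); i++) {
            Thread.sleep(200); // give JobActor.onJobKill time to finish
            response = deleteCall.get();
        }
        return response;
    }
}
```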

* Fixed up test

* Debugging

* Validating breakage is from rate limiting

* Updating rate limit

---------

Co-authored-by: Michael Braun <n3ca88@gmail.com>
Co-authored-by: OdysseusLives <achipman@netflix.com>
Co-authored-by: Andy Zhang <87735571+Andyz26@users.noreply.github.com>
Co-authored-by: Daniel Trager <43889268+dtrager02@users.noreply.github.com>
Co-authored-by: eliot-stripe <58606410+eliot-stripe@users.noreply.github.com>
Co-authored-by: Gigi Gao <ggjbetty@gmail.com>
Co-authored-by: ggao <ggao@netflix.com>
Co-authored-by: timmartin-stripe <131782471+timmartin-stripe@users.noreply.github.com>