Update nebula.netflixoss use sonatype central portal #774

Merged
Andyz26 merged 2 commits into master from update-nebula.netflixoss-use-sonatype-central-portal on Jun 18, 2025

Conversation

@OdysseusLives
Contributor

Context

Update nebula.netflixoss to use the Sonatype Central Portal rather than the legacy OSSRH portal.
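For reference, a change like this typically amounts to bumping the publishing plugin version in the Gradle build. A minimal sketch, assuming the plugin is applied via the plugins block (the actual files touched in this repository may differ):

```groovy
// build.gradle (root) -- illustrative sketch only
plugins {
    // 11.6.0 moves publishing from the legacy Sonatype OSSRH service
    // to the Sonatype Central Portal (see the commit message below)
    id 'com.netflix.nebula.netflixoss' version '11.6.0'
}
```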

Checklist

  • ./gradlew build compiles code correctly
  • Added new tests where applicable (none)
  • ./gradlew test passes all tests
  • Extended README or added javadocs where applicable (none)

@github-actions

Test Results

150 files ±0   150 suites ±0   8m 51s ⏱️ -55s
650 tests ±0   639 ✅ +1   11 💤 ±0   0 ❌ -1
650 runs -1    639 ✅ ±0   11 💤 ±0   0 ❌ -1

Results for commit ec570f3, compared against base commit 573980e.

Andyz26 merged commit 160a480 into master on Jun 18, 2025
9 of 10 checks passed
Andyz26 deleted the update-nebula.netflixoss-use-sonatype-central-portal branch on June 18, 2025 at 18:29
andresgalindo-stripe pushed a commit to andresgalindo-stripe/mantis that referenced this pull request on Oct 30, 2025
* Use com.netflix.nebula.netflixoss 11.6.0 to move publishing to Sonatype Central Portal from Sonatype Legacy OSSRH

* Github action: checkout v4
andresgalindo-stripe added a commit that referenced this pull request on Nov 6, 2025
* Add variety of cleanups, fix warnings, improve code/performance (#771)

* More fixes

* Review feedback, add more

* Update nebula.netflixoss use sonatype central portal (#774)

* Use com.netflix.nebula.netflixoss 11.6.0 to move publishing to Sonatype Central Portal from Sonatype Legacy OSSRH

* Github action: checkout v4

* Introduce batching into worker discovery during scaling (#773)

* Fix worker state filtering and scheduling update gaps during scaling. This reduces scaling update storms from N individual updates to 1-3 batched updates.
  - Filter JobSchedulingInfo to only include Started
  workers, preventing downstream connection failures
  - Add smart refresh batching with pending worker
  detection to avoid premature flag resets
  - Implement WorkerState.isPendingState() helper for
   consistent state checking
  - Add comprehensive tests covering scaling
  scenarios and flag reset edge cases
  - Include detailed context and analysis documentation of
  connection mechanisms and scaling optimizations
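  For orientation, a minimal Java sketch of the pending-state helper and Started-only filtering described in the bullets above (the enum values, the WorkerInfo record, and the class names here are illustrative assumptions, not the actual Mantis types):

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// Illustrative only -- not the actual Mantis types.
enum WorkerState {
    ACCEPTED, LAUNCHED, START_INITIATED, STARTED, FAILED, COMPLETED;

    // Mirrors the isPendingState() idea: requested but not yet serving traffic.
    boolean isPendingState() {
        return this == ACCEPTED || this == LAUNCHED || this == START_INITIATED;
    }
}

record WorkerInfo(int workerIndex, WorkerState state) {}

class SchedulingInfoFilter {
    // Only Started workers are exposed to downstream connections,
    // per the filtering described in the commit message above.
    static List<WorkerInfo> startedOnly(Collection<WorkerInfo> workers) {
        return workers.stream()
                .filter(w -> w.state() == WorkerState.STARTED)
                .collect(Collectors.toList());
    }
}
```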

* try to stabilize flaky UT

* add analysis context doc

* remove refresh discovery trigger on scaleup request

* Fix Worker Request flow to properly use batching (#775)

* Support default tag config as fallback on artifact loading failure (#778)

* increase max stage concurrency (#779)

* Fix a typo in the Group By docs (#783)

* Fix a typo in the Group By docs

* Fix broken link to heartbeat documentation

* Handle out of sync restarted TE (#784)

* Handle out of sync restarted TE

* use terminate event on heartbeat

* clean up + tests

* Revert "Fix Worker Request flow to properly use batching (#775)" (#785)

This reverts commit 3b0c92f.

* Move common code to utils and cleanup (#789)

Co-authored-by: ggao <ggao@netflix.com>

* Add job id to log and add running worker failure metrics (#790)

Co-authored-by: ggao <ggao@netflix.com>

* add job clusters update metrics (#791)

* Update worker failure metric (#792)

Co-authored-by: ggao <ggao@netflix.com>

* Refactor RCActor props overload (#795)

Co-authored-by: ggao <ggao@netflix.com>

* Add log to check #TE archived was not in disabled state (#793)

Co-authored-by: ggao <ggao@netflix.com>

* Update CODEOWNERS (#796)

* Cleanup autoscaler metric subscriptions on shutdown (#798)

* fix leaked auto scaler instance (#801)

* Fix test race condition (#803)

When disabling a job cluster, the response would sometimes return before the associated jobs were killed. The Delete action would then fail because the job was still active. I was able to reproduce this reliably by adding a 200ms sleep in JobActor.onJobKill.

To fix this, we check whether the response returns that error; if so, we retry, otherwise we perform the standard checks.

[CI Example](https://github.com/Netflix/mantis/pull/797/checks?check_run_id=52633941202)
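
For illustration, a minimal sketch of that retry-on-error pattern (the client and response interfaces are hypothetical stand-ins, not the actual test utilities):

```java
// Hypothetical minimal interfaces for illustration only.
interface DeleteResponse {
    boolean isSuccess();
    boolean errorMentionsActiveJobs(); // transient "job still active" case
    String message();
}

interface JobClusterClient {
    DeleteResponse delete(String jobCluster);
}

class DeleteRetryHelper {
    // Retry the delete while the transient "job still active" error is seen,
    // mirroring the fix described above.
    static void deleteWithRetry(JobClusterClient client, String cluster)
            throws InterruptedException {
        for (int attempt = 0; attempt < 5; attempt++) {
            DeleteResponse resp = client.delete(cluster);
            if (resp.isSuccess()) {
                return;                 // proceed with the standard checks
            }
            if (!resp.errorMentionsActiveJobs()) {
                throw new AssertionError("delete failed: " + resp.message());
            }
            Thread.sleep(200);          // give the kill time to propagate
        }
        throw new AssertionError("delete still failing after retries");
    }
}
```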

* Fixed up test

* Debugging

* Validating breakage is from rate limiting

* Updating rate limit

---------

Co-authored-by: Michael Braun <n3ca88@gmail.com>
Co-authored-by: OdysseusLives <achipman@netflix.com>
Co-authored-by: Andy Zhang <87735571+Andyz26@users.noreply.github.com>
Co-authored-by: Daniel Trager <43889268+dtrager02@users.noreply.github.com>
Co-authored-by: eliot-stripe <58606410+eliot-stripe@users.noreply.github.com>
Co-authored-by: Gigi Gao <ggjbetty@gmail.com>
Co-authored-by: ggao <ggao@netflix.com>
Co-authored-by: timmartin-stripe <131782471+timmartin-stripe@users.noreply.github.com>