[GOBBLIN-1996] Add ability for Yarn app to terminate on finishing of temporal flow#3865

Merged
Will-Lo merged 6 commits intoapache:masterfrom
homatthew:mh-terminate-on-finish
Feb 13, 2024
Conversation

@homatthew homatthew commented Jan 24, 2024

Dear Gobblin maintainers,

Please accept this PR. I understand that it will not be reviewed until I have checked off all the steps below!

JIRA

Description

  • Here are some details about my PR, including screenshots (if applicable):
    [diagram: Temporal workflow lifecycle in the Yarn application]
    In the diagram above, the Yarn application terminates when it finishes executing the submitted Temporal workflow, and the monitoring Azkaban job terminates when it sees the Yarn application finish.

Tests

  • My PR adds the following unit tests OR does not need testing for this extremely good reason:

Commits

  • My commits all reference JIRA issues in their subject lines, and I have squashed multiple commits if they address the same issue. In addition, my commits follow the guidelines from "How to write a good git commit message":
    1. Subject is separated from body by a blank line
    2. Subject is limited to 50 characters
    3. Subject does not end with a period
    4. Subject uses the imperative mood ("add", not "adding")
    5. Body wraps at 72 characters
    6. Body explains "what" and "why", not "how"

@homatthew homatthew force-pushed the mh-terminate-on-finish branch from 6666bdd to f479355 Compare January 29, 2024 18:42
@homatthew homatthew changed the title Mh terminate on finish [GOBBLIN-1996] Add ability for Yarn app to terminate on finishing of temporal flow Jan 30, 2024
@homatthew homatthew marked this pull request as ready for review January 30, 2024 02:31
// Start the application
this.serviceManager.startAsync().awaitHealthy();
try {
this.serviceManager.startAsync().awaitHealthy(startupTimeout.getSeconds(), TimeUnit.SECONDS);
homatthew (Contributor, author) commented:

One of the services, the FSConfigurationManager, has main-thread-interrupting behavior that causes the start call to block indefinitely.

This prevents users from being able to call the shutdown method, because this method is synchronized. I'm adding a configuration key to make the startup timeout configurable, with the default being to wait indefinitely.

A reviewer (Contributor) commented:

what does it mean for the "start call [to] last indefinitely"?

  • the startAsync (it's not actually async)?

  • or the awaitHealthy (e.g. the FSConfigMgr never actually becomes healthy)?

also, if awaitHealthy never returns does that impede our ability to fail-fast when there's a legit problem, like the deployment is DOA?

homatthew (Contributor, author) replied:

In this case, startAsync is not really working as expected when using the FSConfigurationManager. The cluster's actual work is driven by the FSConfigurationManager, which takes the config and produces a source from the .conf file.

The awaitHealthy call here waits indefinitely and suspends the current thread in the current implementation. It does not impede our ability to fail fast, because today the call often just hangs when there's an issue. If anything, we can fail faster: a configurable timeout lets us unblock this thread and trigger the shutdown when the workflow finishes (successfully or unsuccessfully).
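The bounded-wait pattern the diff introduces (Guava's `ServiceManager.awaitHealthy(long, TimeUnit)`, which throws `TimeoutException` instead of blocking forever) can be sketched with plain `java.util.concurrent` primitives. This is a hypothetical stand-in, not the PR's code: `startupCompleted` and the latch model a service that may or may not report healthy before the timeout.

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.TimeUnit;

// Minimal sketch (hypothetical names) of a bounded startup wait: the caller's
// thread is released after the timeout instead of hanging, so a later
// synchronized shutdown() call cannot be blocked by a stuck startup.
public class BoundedStartup {
  // Returns true if the service signalled "healthy" before the timeout,
  // false if we gave up waiting.
  static boolean startupCompleted(boolean serviceSignalsHealthy, long timeoutMillis)
      throws InterruptedException {
    CountDownLatch healthySignal = new CountDownLatch(1);
    if (serviceSignalsHealthy) {
      healthySignal.countDown();  // service became healthy immediately
    }
    // await(timeout, unit) returns false on timeout rather than hanging forever
    return healthySignal.await(timeoutMillis, TimeUnit.MILLISECONDS);
  }
}
```

The real change keeps the same shape: catch the `TimeoutException` from `awaitHealthy(startupTimeout.getSeconds(), TimeUnit.SECONDS)` and proceed, rather than letting the main thread suspend indefinitely.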

properties.setProperty(ServiceBasedAppLauncher.APP_STOP_TIME_SECONDS, Long.toString(300));
}
this.applicationLauncher = new ServiceBasedAppLauncher(properties, this.clusterName);
this.applicationLauncher = new ServiceBasedAppLauncherWithoutMetrics(properties, this.clusterName);
homatthew (Contributor, author) commented:

Metrics in these pipelines are causing noisy logging, and the metrics here are currently not used by Temporal in any meaningful way, so I've disabled them here.

Disabling metrics via the Gobblin metrics key is not desirable because we still need GTE emission to work.


LOGGER.info("Stopping the Gobblin Cluster Manager");

if (this.idleProcessThread != null) {
homatthew (Contributor, author) commented:

Temporal needs to run in Yarn mode, so we do not need the standalone cluster / bare-metal code.

phet (Contributor) left a comment:

nice crisp solution... I like it!

Comment thread gobblin-temporal/src/main/java/org/apache/gobblin/temporal/yarn/YarnService.java Outdated
@homatthew homatthew force-pushed the mh-terminate-on-finish branch 2 times, most recently from ba5e03e to f1ae369 Compare February 1, 2024 18:26
@homatthew homatthew force-pushed the mh-terminate-on-finish branch from f1ae369 to 87a478b Compare February 6, 2024 17:56
@homatthew homatthew force-pushed the mh-terminate-on-finish branch from 87a478b to 4d2f1c7 Compare February 6, 2024 17:56
public static class Factory {
private static final ActivityOptions DEFAULT_OPTS = ActivityOptions.newBuilder().build();
private static final ActivityOptions DEFAULT_OPTS = ActivityOptions.newBuilder()
.setStartToCloseTimeout(Duration.ofHours(24))
phet (Contributor) commented on Feb 9, 2024:

what are the ramifications of this? a job could still run >24 hours right? does this merely constrain the activity that attempts to send GTEs to finish performing the send within 24 hours?

homatthew (Contributor, author) replied on Feb 13, 2024:

Correct. The job itself can still run for more than 24 hours. But if for some reason Kafka is down, this operation can take an arbitrary amount of time, so let's cap it at a reasonably high number (24 hours).

There are two things to consider here:

  1. GaaS won't be able to detect that a job has finished until this activity succeeds. If Kafka is down, we want to be able to retry this task until Kafka is back.
  2. If the job itself has completed but this timer is unable to send the message, I think 24 hours is a reasonable amount of time to retry before giving up. We ideally don't want to have to redo the work just because the timer failed for a brief period of time (e.g. a few hours of downtime).

phet (Contributor) replied:

Sure, that's fine. For this, even 8 or 12 hours should be more than sufficient. Please add a clarifying comment that this is the max time to attempt the send, potentially waiting out a Kafka outage.
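The capped-retry reasoning in this thread (keep retrying the send until a generous deadline, then give up rather than retry forever) can be sketched in plain Java. This is an illustrative analogue with hypothetical names, not the PR's mechanism: the actual change expresses the cap declaratively via Temporal's `ActivityOptions.setStartToCloseTimeout(Duration.ofHours(24))` and lets the Temporal server enforce it.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.function.Supplier;

// Sketch: retry `attempt` until it succeeds or the cap elapses. Returns the
// number of attempts made on success, or -1 if the cap expired first (the
// analogue of Temporal timing the activity out after 24h).
public class CappedRetry {
  static int retryUntil(Supplier<Boolean> attempt, Duration cap) {
    Instant deadline = Instant.now().plus(cap);
    int attempts = 0;
    while (Instant.now().isBefore(deadline)) {
      attempts++;
      if (attempt.get()) {
        return attempts;  // e.g. Kafka came back and the GTE send succeeded
      }
    }
    return -1;  // cap exhausted; give up instead of retrying forever
  }
}
```

The trade-off mirrors the discussion: a cap long enough to wait out a Kafka outage, short enough that a permanently failing send does not pin the activity forever.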

@homatthew homatthew force-pushed the mh-terminate-on-finish branch from 0c53cca to 40153a4 Compare February 13, 2024 04:18
phet (Contributor) left a comment:

I think we're there, or at least super close!

codecov-commenter commented Feb 13, 2024

Codecov Report

Attention: Patch coverage is 15.68627% with 43 lines in your changes missing coverage. Please review.

Project coverage is 46.72%. Comparing base (aec3401) to head (6239d42).
Report is 156 commits behind head on master.

Files with missing lines Patch % Lines
...obblin/temporal/ddm/NoWorkUnitsInfiniteSource.java 0.00% 12 Missing ⚠️
...emporal/cluster/GobblinTemporalClusterManager.java 0.00% 9 Missing ⚠️
...e/gobblin/runtime/app/ServiceBasedAppLauncher.java 66.66% 3 Missing and 1 partial ⚠️
...mporal/joblauncher/GobblinTemporalJobLauncher.java 0.00% 4 Missing ⚠️
...cluster/ServiceBasedAppLauncherWithoutMetrics.java 0.00% 3 Missing ⚠️
...bblin/temporal/joblauncher/GobblinJobLauncher.java 0.00% 3 Missing ⚠️
...temporal/workflows/metrics/TemporalEventTimer.java 0.00% 3 Missing ⚠️
.../org/apache/gobblin/temporal/yarn/YarnService.java 0.00% 2 Missing ⚠️
...oral/ddm/launcher/ProcessWorkUnitsJobLauncher.java 0.00% 1 Missing ⚠️
.../loadgen/launcher/GenArbitraryLoadJobLauncher.java 0.00% 1 Missing ⚠️
... and 1 more
Additional details and impacted files
@@              Coverage Diff              @@
##             master    #3865       +/-   ##
=============================================
+ Coverage          0   46.72%   +46.72%     
- Complexity        0    11131    +11131     
=============================================
  Files             0     2212     +2212     
  Lines             0    87430    +87430     
  Branches          0     9613     +9613     
=============================================
+ Hits              0    40855    +40855     
- Misses            0    42884    +42884     
- Partials          0     3691     +3691     


@Will-Lo Will-Lo merged commit 64bdc70 into apache:master Feb 13, 2024