MLCOMPUTE-949 | Pick spark ui port from a preferred port range #128
Conversation
Force-pushed e78ff28 to f975cc9
minor comment re: typo/possible upstreaming, but otherwise seems fine :)
service_configuration_lib/utils.py
Outdated
@@ -28,3 +35,52 @@ def load_spark_srv_conf(preset_values=None) -> Tuple[Mapping, Mapping, Mapping,
     except Exception as e:
         log.warning(f'Failed to load {DEFAULT_SPARK_RUN_CONFIG}: {e}')
         raise e


def ephemeral_port_reserve_range(preffered_port_start: int, preferred_port_end: int, ip='127.0.0.1') -> int:
it'd be nice to upstream this into the ephemeral_port_reserve repo, but we can wait to see how this works for y'all's use case first if you'd like :)
Sure! I added comments explaining the purpose of this function and linked the upstream repository for reference. Let's leave it here for now and test it out first.
Pick the Spark UI (API) port from a preferred port range, so that metrics can be scraped from multiple Spark sessions running in a single Jupyter server.
Tested in a Jupyter server by launching 11 Spark sessions; the port of the 11th session was a random ephemeral port currently available on the host/pod.
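The diff above only shows the signature of `ephemeral_port_reserve_range`, so the body below is a hypothetical sketch, not the PR's actual implementation. It assumes the usual ephemeral-port-reserve approach: try to bind each port in the preferred range in order, and if they are all taken, bind port 0 to let the OS hand back a random available ephemeral port (matching the 11th-session behavior described in the test notes):

```python
import socket
from contextlib import closing


def ephemeral_port_reserve_range(preferred_port_start: int, preferred_port_end: int, ip='127.0.0.1') -> int:
    """Sketch: return a free port, preferring [preferred_port_start, preferred_port_end]."""
    # Try each port in the preferred range first.
    for port in range(preferred_port_start, preferred_port_end + 1):
        try:
            with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
                s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
                s.bind((ip, port))
                s.listen(1)
                return s.getsockname()[1]
        except OSError:
            # Port in use (or not bindable); try the next one in the range.
            continue
    # Preferred range exhausted: bind port 0 so the OS picks a random
    # ephemeral port that is currently available on the host/pod.
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        s.bind((ip, 0))
        s.listen(1)
        return s.getsockname()[1]
```

Note that a sketch like this only *reserves* the port briefly (the socket is closed before returning), so there is an inherent race between reserving the port and Spark binding its UI to it; the upstream ephemeral-port-reserve project mitigates this by leaving the closed socket in TIME_WAIT so well-behaved binders with SO_REUSEADDR can still claim it.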