[NuclioRuntime] Always pass serving graph to pod using config-map #5598

Merged
4 commits merged into mlrun:development from the ML-6223 branch on May 29, 2024

Conversation

alxtkr77 (Member)

alxtkr77 force-pushed the ML-6223 branch 3 times, most recently from 27bf99d to fdae2e6 on May 21, 2024 10:30
assaf758 requested a review from gtopper on May 22, 2024 20:40
assaf758 changed the title from "[Nuclio] Always pass graph as a CM file" to "[Nuclio runtime] Always pass serving graph to pod using config-map" on May 22, 2024
gtopper (Collaborator) left a comment

Looks good, but I'm wondering if we can keep the flexibility so we're not married to a hard-coded path under /tmp.
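For illustration only, one way to keep that flexibility would be to resolve the mount path at runtime instead of hard-coding it. The environment variable name and default path in the sketch below are hypothetical and not taken from this PR or the MLRun codebase:

import os

# Hypothetical sketch: allow the config-map mount path to be overridden via an
# environment variable, falling back to a default under /tmp.
DEFAULT_SERVING_SPEC_MOUNT_PATH = "/tmp/mlrun/serving-spec"


def resolve_serving_spec_mount_path() -> str:
    return os.environ.get(
        "MLRUN_SERVING_SPEC_MOUNT_PATH", DEFAULT_SERVING_SPEC_MOUNT_PATH
    )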

mlrun/utils/helpers.py (outdated review comment, resolved)
tests/serving/test_serving.py (two outdated review comments, resolved)
liranbg changed the title from "[Nuclio runtime] Always pass serving graph to pod using config-map" to "[NuclioRuntime] Always pass serving graph to pod using config-map" on May 23, 2024
theSaarco (Member) left a comment

@alxtkr77 - looking at this, I actually think there is a backward-compatibility issue here. If the user is running a function with an older mlrun client version that doesn't support passing the spec as a CM (i.e. reading it from a file), then with this change there is no way to make it work other than upgrading the client version.
Before this change, most serving graphs would pass as-is in an env variable, and the issue could only occur for large serving graphs (which arguably wouldn't work anyway due to their size). Now we're introducing a breaking change that will hit every serving function deployed with an older client.
We need to give the user an option to keep using env variables for the serving spec, via a configuration on the runtime or something similar. Trying to automatically deduce whether the client is old is problematic, since we don't always know which mlrun version is included in the image being used.
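As an illustration of the kind of opt-out being suggested, the sketch below gates the behavior on a flag. The parameter name, env variable name, and config-map key are hypothetical and are not taken from this PR or the MLRun API:

import json

# Hypothetical sketch of a backward-compatibility opt-out; the parameter and
# env variable names below do not come from the actual MLRun codebase.
def build_serving_spec_delivery(serving_spec: dict, use_env_fallback: bool = False):
    """Return (env_vars, config_map_data) describing how the graph reaches the pod."""
    spec_json = json.dumps(serving_spec)
    if use_env_fallback:
        # old behavior: the (small) graph travels inline in an env variable
        return {"SERVING_SPEC_ENV": spec_json}, None
    # new behavior: the graph is written to a config map and mounted as a file
    return {}, {"serving-spec.json": spec_json}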

alxtkr77 force-pushed the ML-6223 branch 3 times, most recently from 409e356 to 847477e on May 26, 2024 09:12
alxtkr77 requested a review from theSaarco on May 26, 2024 13:47
theSaarco (Member) left a comment

LGTM - one small fix needed in the test.

Comment on lines 488 to 495
# replace the real Kubernetes API call with a mock that returns the prepared config map
server.api.utils.singletons.k8s.get_k8s_helper().v1api.list_namespaced_config_map = unittest.mock.Mock(
    return_value=config_map
)
Review comment (Member):

You need to revert these changes at the end of the test. Since you have monkeypatch available, it's best to use it to perform this patching, as it will take care of reverting automatically.
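For reference, here is a minimal sketch of the same patching done through pytest's monkeypatch fixture, which restores the patched attribute automatically when the test finishes. The test name is illustrative, and config_map stands in for whatever object the real test prepares:

import unittest.mock

import server.api.utils.singletons.k8s


def test_serving_spec_config_map(monkeypatch):
    # stand-in for the config-map list object the real test builds
    config_map = unittest.mock.Mock()
    k8s_helper = server.api.utils.singletons.k8s.get_k8s_helper()
    # monkeypatch reverts this patch after the test, so no manual cleanup is needed
    monkeypatch.setattr(
        k8s_helper.v1api,
        "list_namespaced_config_map",
        unittest.mock.Mock(return_value=config_map),
    )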

assaf758 merged commit 9263dc2 into mlrun:development on May 29, 2024
11 checks passed