[SPIKE] cases: list of ideas (related to prod envs) #2490
Comments
And a side question: is this direction a higher priority than Experiments-related use cases atm? (see #2270)
Some thoughts:
Some ideas for this list:
OK, we're going to try to make this into a spike to come up with actionable items within 7 days or less, hopefully. Please help if you can, guys. I'll tag people via chat... ⌛
It might help to start with thesis statements instead of topics. Thesis statements would be like single-sentence use cases arguing for the utility of the products in given scenarios. Use cases are more persuasive writing compared to the explanatory writing of other docs, so a topic may not clarify what we plan to say about it. This will probably take more time and debate, but hopefully we will have more clarity in deciding which use cases to pursue and in writing the use cases. What do you think?
Because we also have cases: Experiments #2270. That seemed like a totally different direction from all the previous ideas summarized here (mainly from #820), which I think at least somewhat relate to MLOps? Happy to change the title, but this is not a comprehensive list of use case ideas in all possible product directions.
I have a feeling that model registries aren't different enough from data registries to write another full use case on that. But maybe it can be part of a Model Mgmt/Lifecycle use case. I like that idea! It could also cover or mention some of the topics above (training remotely, deployment, real-time predictions).
Re "The way I initially understood the title": the title for that ticket you mention about experiments was about one specific use case, to my mind.
It's a matter of what we are optimizing here. I would not try to generalize by sacrificing the initial goal - more people come and see the high-level title that resonates with them. It's fine that they will overlap internally. In this specific case, I think a model registry can be significantly different.
Sure, it all connects. But here I'm thinking mostly about solutions for deploying and using ML models via DVC/CML, e.g. production environments, model deployment, etc. Sorry for the confusion... So it looks like, so far, the better-defined scenarios are:
Thanks, @jorgeorpinel! I didn't mean to suggest that you should bear responsibility for developing each thesis statement, or that each one needs to be perfected.
We have a few use case ideas around "production" and/or "deployment," and it's not clear to me what they mean. There are different scenarios that I have seen described as production deployments. I'd probably vote to focus on:
As @shcheklein has mentioned, this can either be about a single model or many models, which might be different use cases. For a single model: track, visualize, and analyze everything about your experiment, including code, parameters, metrics, plots, data, training DAG, and any other artifacts included in your repo. For many models: try many different experiments and track them, enabling you to compare, select, reproduce, and iterate on any experiments.
This could or could not be considered related to "in production". Training somewhere seems more like a prerequisite. I think it has more to do with CI/CD (which can be part of a prod deployment workflow, so there's overlap). This can probably be covered initially in #2404, indeed. Cc @casperdcl
Is this basically ETL, where E = get a chunk of data, T = run a pre-trained model, and L = store/upload the scores? That could be part of a use case but may still not be high-level enough.
Not sure I get how DVC plays a part in this. Probably just in the way the model is deployed (e.g. via the DVC API, which would be similar to this -- going back to the "model registry" idea; see the sketch after this comment). Still not high-level enough IMO, but b and c definitely seem related.
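To make the batch scoring idea concrete, here's a minimal Python sketch, assuming an sklearn-style model tracked in a DVC repo; the repo URL, file paths, and feature names are made up for illustration:

```python
import pickle

import dvc.api
import pandas as pd

# Load a pre-trained model tracked in a DVC repository
# (hypothetical repo URL and model path).
with dvc.api.open(
    "models/model.pkl",
    repo="https://github.com/example/ml-project",
    mode="rb",
) as f:
    model = pickle.load(f)

# E: extract a chunk of new data to score (hypothetical file).
batch = pd.read_csv("new_data.csv")

# T: transform = run the pre-trained model over the chunk.
batch["score"] = model.predict(batch[["feature_a", "feature_b"]])

# L: load/store the scores wherever downstream consumers expect them.
batch.to_csv("scores.csv", index=False)
```

The DVC-specific part is only the model fetch; the E and L steps would be whatever storage the team already uses.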
Hmmm... By many models do you mean actually different models with different goals (which would relate to "model registry"), or multiple versions of the same model in development? I usually assume the typical ML pipeline/project ends up in a single model. BTW, can we clarify what we mean by "model lifecycle"? Maybe training, active, inactive (related to "in production"), or planning, data eng, modeling (a much broader topic). Cc @shcheklein
Going back to this (which is why titles are important too), I think "DVC in Production" is a really good umbrella concept to begin with, keeping in mind it would be the first use case in this direction. It can have a story (maybe sections) that covers several of the scenarios we've discussed above. Later on we could split it into multiple use cases if that's better. WDYT? UPDATE: See quick draft (idea) in #2506
Yup, although T could include other things in your pipeline (feature engineering).
Right, other than the model registry idea, there's not much of a clear pattern here for how to use DVC.
Sorry, I meant many experiments from the same pipeline.
More feedback (from https://iterativeai.slack.com/archives/C6YHPP2TB/p1621617453043300): From @mnrozhkov
From @dmpetrov
☝️ From these comments I take it that 1) there's support for covering the "batch scoring" scenario, 2) there's interest in certain integrations, specifically Airflow (I need to play with it ⌛ -- see the rough sketch after this comment) -- maybe also MLflow? and 3) an e2e case could be a meaningful way to present some of these topics. Also, @shcheklein shared https://neptune.ai/blog/model-registry-makes-mlops-work with me (on the "model registry" idea). I think this answers the Q of how model registries relate to MLOps/"in production". Summary:
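As a starting point for the Airflow idea, here's a rough sketch of what orchestrating a DVC pipeline from an Airflow DAG could look like; this is untested, and the DAG id, schedule, and project path are assumptions:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

# Hypothetical nightly batch-scoring DAG: Airflow handles scheduling,
# while the pipeline logic itself stays in the project's dvc.yaml.
with DAG(
    dag_id="dvc_batch_scoring",
    start_date=datetime(2021, 5, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Fetch the latest DVC-tracked inputs (data, pre-trained model).
    pull = BashOperator(
        task_id="dvc_pull",
        bash_command="cd /opt/ml-project && dvc pull",
    )
    # Reproduce the scoring pipeline and push the resulting outputs.
    repro = BashOperator(
        task_id="dvc_repro",
        bash_command="cd /opt/ml-project && dvc repro && dvc push",
    )
    pull >> repro
```

The appealing part of this split is that Airflow only does orchestration, so the same `dvc repro` works locally and in the DAG.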
Nice, @jorgeorpinel! The comments on batch scoring and model registry use cases look good to me.
Yes to Airflow, since it is the default choice for pipeline orchestration, although it might be worth looking into some alternatives like Prefect (see https://neptune.ai/blog/best-workflow-and-pipeline-orchestration-tools). MLflow is probably better left for the experiment management use case, since its focus is on tracking and comparing experiments rather than executing pipelines.
Summary (again): here's a list proposal with 4 big ideas that group most of the concepts we've discussed (with overlaps):
1. DVC in Production (rel. #2506) (intro to MLOps)
2. ML Model Registry
3. Production Integrations
4. End-to-end scenario with a combination from above, e.g.:
Thanks @jorgeorpinel! Sounds good. What/where can we get the full list of use cases that we write/consider writing, etc.? (I assume that this ticket is still about "prod envs"?) E.g. where should we put the "Experiments tracking/management" / "ML bookkeeping" case, for example?
Resulting list of ideas: #2544. Closing spike.
- CI/CD for ML (WIP): docs: use cases: add CI/CD for ML #2404 is a bit broad so far but in the process of being narrowed down. Not clear what it will cover exactly, but likely at least continuous integration using DVC + CML mentions. Cc @casperdcl
- Zoo / registry: api: high level dvc.api import/export meets data catalog dvc#2719 (comment). Whether using a DVC repo to centralize ML model management is fundamentally different from general data registries (possibly our most popular UC) has been a recurrent question. I think that probably yes, but not sure how exactly (a rough dvc.api sketch follows this list). Cc @shcheklein

UPDATE: Jump to #2490 (comment)
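On the zoo/registry idea, here's a minimal sketch of what consuming a versioned model from a DVC repo-as-registry could look like with dvc.api; the repo URL, file path, and tag are hypothetical:

```python
import pickle

import dvc.api

# Hypothetical "model registry" access pattern: a Git+DVC repo acts as
# the registry, and Git tags act as model versions.
with dvc.api.open(
    "models/classifier.pkl",
    repo="https://github.com/example/model-registry",
    rev="v1.2.0",  # Git tag pinning the model version
    mode="rb",
) as f:
    model = pickle.load(f)
```

Here the Git tag doubles as the model version, which would keep the registry on top of the existing Git + DVC machinery rather than adding a new service.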