Add additional tutorials to OpenAI flavor docs #10700
Conversation
Signed-off-by: Ben Wilson <benjamin.wilson@databricks.com>
Documentation preview for 54ed1d5 will be available here when this CircleCI job completes successfully. More info
```python
# Save the base OpenAI model with the included instruction set (prompt)
mlflow.openai.save_model(
```
Can we use `log_model`?
Good call. Updated, and removed the directory 'cleanup' script in the cell above.
```python
def cosine_similarity(embedding1, embedding2):
```
Do we need to define these functions? Can we import them from sklearn?
Great point! Adjusted the text, removed the hand-rolled implementation, and provided a much simpler one :)
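For reference, the simpler implementation being discussed could look something like this (a sketch using plain NumPy; `sklearn.metrics.pairwise.cosine_similarity` is the import-from-sklearn alternative the comment suggests):

```python
import numpy as np

def cosine_similarity(embedding1, embedding2):
    """Cosine similarity between two 1-D embedding vectors."""
    v1 = np.asarray(embedding1, dtype=float)
    v2 = np.asarray(embedding2, dtype=float)
    return float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))

def euclidean_distance(embedding1, embedding2):
    """Euclidean distance between two 1-D embedding vectors."""
    v1 = np.asarray(embedding1, dtype=float)
    v2 = np.asarray(embedding2, dtype=float)
    return float(np.linalg.norm(v1 - v2))
```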
Thanks Ben for the PR!
I took a pass over the openai-code-helper notebook, which is amazingly good! Left some minor comments on nits and style.
> By the end of this tutorial, you will:
>
> 1. **Master OpenAI's GPT-4 for Code Assistance**: Understand how to leverage OpenAI's GPT-4 model for providing real-time coding assistance. Learn to harness its capabilities for generating code suggestions, explanations, and improving overall coding efficiency.
> 2. **Utilize MLflow for Enhanced Model Tracking**: Delve into MLflow's powerful tracking systems to manage machine learning experiments. Learn how to adapt the `Python Model` from within MLflow to control how the output of an LLM is displayed from within an interactive coding environment.
Highlighting `Python Model` looks a bit strange, shall we say `pyfunc model` instead?
Changed in one place, added additional context in another. In my opinion, the 'pyfunc' references everywhere are confusing when, in order to craft one, you have to subclass `PythonModel`. I added an additional parenthetical reference to point out that they're synonyms.
> ### Key Concepts Covered
>
> 1. **MLflow's Model Management**: Explore MLflow's features for tracking experiments, packaging code into reproducible runs, and managing and deploying models.
> 2. **Custom Python Model**: Learn how to adapt the OpenAI flavor with MLflow's Custom `Python Model` implementation to provide customized formatting to the output text returned from an OpenAI Model.
This is a bit hard to understand for readers without MLflow context. I would prefer calling out things like "Learn how to wrap OpenAI model in your custom function and save your function as an MLflow model", so that the purpose is more clear.
reworded and simplified
> 2. **Custom Python Model**: Learn how to adapt the OpenAI flavor with MLflow's Custom `Python Model` implementation to provide customized formatting to the output text returned from an OpenAI Model.
> 3. **Python Decorators and Functional Programming**: Learn about advanced Python concepts like decorators and functional programming for efficient code evaluation and enhancement.
>
> ### MLflow's Significance
optional - I kinda prefer "Why MLflow" as the subtitle.
good idea
```python
# Run a quick validation that we have an entry for the OPENAI_API_KEY within environment variables
assert "OPENAI_API_KEY" in os.environ, "OPENAI_API_KEY environment variable must be set"
```
Can we just inline the code to set up the OpenAI key?

```python
from getpass import getpass

openai_api_key = getpass("Please enter your OpenAI API key: ")
os.environ["OPENAI_API_KEY"] = openai_api_key
```
Just maintaining consistency across all of our other tutorials that use any sort of API Key. They all have this similar warning and instruction set in them now.
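The two approaches could also be combined; a minimal sketch (the `ensure_openai_key` helper name is hypothetical, not from the PR): check the environment first and fall back to an interactive prompt only when the key is missing.

```python
import os
from getpass import getpass

def ensure_openai_key():
    """Make sure OPENAI_API_KEY is set, prompting interactively as a fallback."""
    if "OPENAI_API_KEY" not in os.environ:
        os.environ["OPENAI_API_KEY"] = getpass("Please enter your OpenAI API key: ")
    # Keep the explicit check so a missing key fails fast with a clear message
    assert "OPENAI_API_KEY" in os.environ, "OPENAI_API_KEY environment variable must be set"
    return os.environ["OPENAI_API_KEY"]
```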
```python
mlflow.set_experiment("Code Helper")
```
We are skipping the process of connecting to the tracking server, can we include a cell with `mlflow.login()`?
Ah, good catch. I'll add in the updated commentary that points to the CE docs
```python
model_path = "/tmp/code-helper"

# This path cleanup is used to remove the model path if it already exists, provided in case you need to re-run this notebook in its entirety.
```
nit: remove the extra empty line
entire cell deleted
>    - The `params` schema includes two parameters: `max_tokens` and `temperature`, each with a default value and data type defined.
>
> 2. **Saving the Base OpenAI Model**:
>    - Using `mlflow.openai.save_model`, we save the base OpenAI model (`gpt-4`) along with the `instruction` set we defined earlier.
not related to the doc, but I am actually confused about this - should we recommend using `save_model` or `log_model`?
Changed to `log_model` for this instance. They both work for the purposes of this tutorial, but technically we should recommend `log_model` when interfacing with a native instance of a model.
> In this section, we introduce a custom Python Model, `CodeHelper`, which significantly improves the user experience when interacting with the OpenAI model in an interactive development environment like Jupyter Notebook. The `CodeHelper` class is designed to format the output from the OpenAI model, making it more readable and visually appealing, similar to a chat interface. Here's how it works:
>
> 1. **Initialization and Model Loading**:
This is my personal preference, but put it here for discussion and reference: instead of going through the implementation details before showing the code, I prefer briefly discussing the purpose. For this example, we can say something like "We define the `CodeHelper` class to load the OpenAI model and format the request and response".
+1, if some code is complex enough, I'd add minimal inline comments and keep the leading paragraph simple :)
Yep, understood. Vetoed. ;) Sticking with the Manning style guidelines for technical teaching in our notebooks: explain, show, recap. There's an entire arm of educational research showing that this methodology helps complex topics sink in for first-time readers.
The preface-show-explain methodology usually means people skip the text and don't retain the information.
```python
# Define the location of the
artifacts = {"model_path": model_path}

with mlflow.start_run():
```
Can we show a screenshot of what users would see on MLflow UI?
good idea!
> ## Building a Code Assistant with OpenAI & MLflow
Can we add a `Budget` section to give users an estimated cost of executing this notebook? GPT-4 is pretty expensive; as a user, I would be interested in that.
added
Awesome content and fun apps, thanks!! Left a few minor comments on the format and structure.
docs/source/llms/index.rst (Outdated)
Could you change the tentative link in L265 to the new OpenAI guide?
ah GREAT catch. Updated!!
```python
# Define the model signature that will be used for both the base model and the eventual custom pyfunc implementation later.
signature = ModelSignature(
```
nit: Can we use schema inference here, or at least show it as an alternative option? (iirc we also set schema automatically based on task type for openai)
Added an explicit note about schema inference and how we're showing a manual definition here for educational purposes only
> ### Conclusion: Harnessing the Power of MLflow in AI-Assisted Development
As we received similar feedback recently, what about adding a "What's next?" section to every tutorial?
I know the inline link doesn't work with nbsphinx, but just a plain URL or navigation-only text could be better than nothing.
E.g.

> What's next?
> - Improve your tool with prompt engineering: see https:XXX for how to leverage the MLflow Prompt Engineering UI to .....
The links existed at the bottom, but they weren't really legible. Added a new h3 header section
> 1. **Set MLflow Experiment**: We begin by setting the experiment context in MLflow, specifically for document similarity, using `mlflow.set_experiment("Documentation Similarity")`.
>
> 2. **Logging the Model in MLflow**: We initiate an MLflow run and log the OpenAI model. The model in focus is "text-embedding-ada-002", chosen for its robust embedding capabilities. During this step, we detail the model, the embedding task, input/output schemas, and parameters like batch size.
nit: Probably it's worth mentioning that we don't load/store the model weights locally; it just logs metadata and the environment. This may lead to the misconception that it stores a local model and is thus free.
Added words to this effect :) good call
> This section of the tutorial introduces functions designed to extract and prepare text from webpages, a crucial step before applying embedding models for analysis.
>
> #### Overview of Functions:
I feel this section (and similar ones in other cells) is redundant with the docstrings, can we remove either one?
Removed the actual functions in favor of using sklearn's implementations :)
Adding to this one - can we also simplify the function description and move Detailed Workflow after the code? My first impression is that reading this text is a bit hard without seeing the code first. Non-blocking tho.
Reviewed the openai-embedding notebook, great work Ben!
Left some minor comments on the style.
> Welcome to this advanced guide on implementing OpenAI embeddings within the MLflow framework. This tutorial delves into the configuration and utilization of OpenAI's powerful embeddings, a key component in modern machine learning models.
>
> ### Understanding Embeddings:
nit: shall we delete the `:` after the title?
good point. Now that you bring that up, it's pretty annoying. Removed them from all of the H4 headings.
> #### Key Steps:
>
> 1. **Set MLflow Experiment**: We begin by setting the experiment context in MLflow, specifically for document similarity, using `mlflow.set_experiment("Documentation Similarity")`.
nit: Set MLflow Experiment => Setting MLflow Experiment for consistency.
> ### Webpage Text Extraction for Embedding Analysis
optional - Can we put up an overview cell that summarizes what we are doing in this notebook?
Added a few sentences to the 'in this tutorial' section to explain what this use case is really for: SEO.
```python
    # Find the div with class 'section' and id 'llms'
    target_div = soup.find("div", {"class": "section", "id": id})

    if target_div:
```
minor nit: can we check the reverse side to reduce the level of control flow? Now we have 4 levels.

```python
if target_div is None:
    return "Target element not found."

insert_space_after_tags(target_div, ["strong", "a"])
...
```
good call. Simplified the implementation :)
```python
        content_tags.append(tag)

    return "\n".join(
        tag.get_text(separator=" ", strip=True)
```
optional - I feel this one-line code is a bit too long for one-line, would prefer writing it as a for-loop block.
switched to a comprehension
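The comprehension being referred to could look roughly like this (a sketch; `FakeTag` is a hypothetical stand-in for a BeautifulSoup tag, since the real `content_tags` come from the parsed page):

```python
class FakeTag:
    """Hypothetical stand-in for a BeautifulSoup tag."""
    def __init__(self, text):
        self._text = text

    def get_text(self, separator=" ", strip=True):
        return self._text.strip() if strip else self._text

content_tags = [FakeTag("  First paragraph "), FakeTag("Second paragraph")]

# Join the visible text of every collected tag, one tag per line
page_text = "\n".join(
    tag.get_text(separator=" ", strip=True) for tag in content_tags
)
```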
> In this next part of the tutorial, we utilize two functions from `sklearn` to measure the similarity and distance between document embeddings, essential for evaluating and comparing text-based machine learning models.
>
> #### Function Overviews:
nit: remove the trailing `:`
LGTM!
> ### Important Cost Considerations for GPT-4 Usage
I guess this is supposed to be at a higher position (not between the signature description and its code)?
OOF yeah I had intended to move that up to the intro section :)
nit: can we use the official mlflow logo instead of the drawn logo?
If we're going to do that, I'd like to change all of them in a separate PR. There are quite a few usages of the drawn logo throughout diagrams now. I'll solicit some feedback about it from the designers and see what they think and if they feel strongly, we'll handle in a separate PR that fixes all of them to be consistent.
```python
artifacts = {"model_path": model_info.model_uri}

with mlflow.start_run():
    mlflow.pyfunc.save_model(
```
Suggested change:

```diff
-    mlflow.pyfunc.save_model(
+    mlflow.pyfunc.log_model(
```

because `save_model` doesn't upload artifacts.
ah, forgot that one. Thanks!
```python
        self.model = mlflow.pyfunc.load_model(context.artifacts["model_path"])

    @staticmethod
    def _format_response(response):
```
Curious why we need to wrap long lines.
Jupyter notebooks don't auto-wrap long text on display when rendered as HTML (long lines just create a horizontal scroll bar).
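A minimal sketch of the kind of wrapping being discussed (stdlib `textwrap`; the notebook's actual `_format_response` may differ):

```python
import textwrap

def format_response(text, width=80):
    """Hard-wrap each paragraph of `text` so notebook HTML output stays readable."""
    paragraphs = text.split("\n")
    return "\n".join(textwrap.fill(p, width=width) for p in paragraphs)
```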
> ### Understanding the `code_inspector` Decorator Function
>
> The `code_inspector` function is a Python decorator designed to augment functions with automatic code review capabilities using an MLflow pyfunc model. Here's a breakdown of how it works:
>
> 1. **Decorator Function Setup**:
>    - `code_inspector` takes an MLflow model as an argument. This model is used to evaluate the code of any function it decorates.
>    - Inside, it defines `decorator_check_my_function`, a function that creates the actual decorator.
>
> 2. **Wrapper Function**:
>    - `decorator_check_my_function` further defines `wrapper`, which will wrap around the original function.
>    - `wrapper` accepts arbitrary arguments and keyword arguments, allowing it to decorate any function.
>    - It uses `inspect.getsource` to extract the source code of the decorated function.
>
> 3. **Code Analysis and Feedback**:
>    - The source code is then analyzed by the MLflow model using `model.predict`.
>    - The model's feedback, which may include code improvements, error identification, or suggestions, is printed out.
>    - In case of exceptions during model prediction or formatting, the error is printed.
>    - After printing the feedback, `wrapper` executes the original function and returns its result.
>
> 4. **Application**:
>    - Apply `code_inspector` as a decorator to functions for real-time code quality checks and feedback.
>    - This is particularly useful for learning and improving coding practices, as it provides insights into code quality and best practices.
>
> This decorator enhances the functionality of functions, allowing them to be automatically reviewed for code quality and correctness using an MLflow pyfunc model, thereby enriching the development and learning experience.
I think we can insert comments in the code instead of explaining what's happening in the code here.
> `decorator_check_my_function` further defines `wrapper`, which will wrap around the original function.

Do we need this explanation?
removed all of the explanations in that block and did some brief inline comments
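A sketch of what the decorator with brief inline comments might look like (names follow the notebook's explanation; the model is assumed to be any object with a pyfunc-style `predict` method, so this is not the exact implementation from the PR):

```python
import functools
import inspect

def code_inspector(model):
    """Decorator factory: review a function's source with `model` whenever it's called."""
    def decorator_check_my_function(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            try:
                # Send the decorated function's source code to the model for review
                feedback = model.predict(inspect.getsource(func))
                print(feedback)
            except Exception as e:
                # Surface review errors without blocking execution
                print(f"Error during code review: {e}")
            # Always run the original function and return its result
            return func(*args, **kwargs)
        return wrapper
    return decorator_check_my_function
```

Usage would be `@code_inspector(model)` above any function you want reviewed on each call.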
```python
def code_inspector(model):
```
Do we need to use a decorator? Can we use the following code?

```python
def review(f):
    return model.predict(inspect.getsource(f))

review(my_func)
```
Great idea :) Added this as the first utilization example and wrote a section comparing the two approaches to show the benefits of each (the `review` function, while simple, doesn't execute the code, which can be helpful during evaluation of the results from the LLM, and is more straightforward for quick manual access).
The top-left diagram might give the wrong impression that API keys are logged/saved as part of the model (as the other two - prompt and config - are saved).
good point. Updated the image to make it clear that we're just using the key when accessing the model (on both sides)
What changes are proposed in this pull request?
Follow-up work to #10622 with the remaining tutorials.
How is this PR tested?
Does this PR require documentation update?
Release Notes
Is this a user-facing change?
Added additional advanced tutorials to the OpenAI flavor documentation, covering Custom Python Model implementations for applied use of OpenAI flavor models.
What component(s), interfaces, languages, and integrations does this PR affect?

Components
- `area/artifacts`: Artifact stores and artifact logging
- `area/build`: Build and test infrastructure for MLflow
- `area/deployments`: MLflow Deployments client APIs, server, and third-party Deployments integrations
- `area/docs`: MLflow documentation pages
- `area/examples`: Example code
- `area/model-registry`: Model Registry service, APIs, and the fluent client calls for Model Registry
- `area/models`: MLmodel format, model serialization/deserialization, flavors
- `area/recipes`: Recipes, Recipe APIs, Recipe configs, Recipe Templates
- `area/projects`: MLproject format, project running backends
- `area/scoring`: MLflow Model server, model deployment tools, Spark UDFs
- `area/server-infra`: MLflow Tracking server backend
- `area/tracking`: Tracking Service, tracking client APIs, autologging

Interface
- `area/uiux`: Front-end, user experience, plotting, JavaScript, JavaScript dev server
- `area/docker`: Docker use across MLflow's components, such as MLflow Projects and MLflow Models
- `area/sqlalchemy`: Use of SQLAlchemy in the Tracking Service or Model Registry
- `area/windows`: Windows support

Language
- `language/r`: R APIs and clients
- `language/java`: Java APIs and clients
- `language/new`: Proposals for new client languages

Integrations
- `integrations/azure`: Azure and Azure ML integrations
- `integrations/sagemaker`: SageMaker integrations
- `integrations/databricks`: Databricks integrations

How should the PR be classified in the release notes? Choose one:
- `rn/none` - No description will be included. The PR will be mentioned only by the PR number in the "Small Bugfixes and Documentation Updates" section
- `rn/breaking-change` - The PR will be mentioned in the "Breaking Changes" section
- `rn/feature` - A new user-facing feature worth mentioning in the release notes
- `rn/bug-fix` - A user-facing bug fix worth mentioning in the release notes
- `rn/documentation` - A user-facing documentation change worth mentioning in the release notes