diff --git a/docs/source/deep-learning/index.rst b/docs/source/deep-learning/index.rst
index cec4e6c2f93ce..3ea423d6c54e4 100644
--- a/docs/source/deep-learning/index.rst
+++ b/docs/source/deep-learning/index.rst
@@ -47,27 +47,27 @@ The officially supported integrations for deep learning libraries in MLflow enco
- pytorch Logo
+ pytorch Logo
- keras Logo
+ keras Logo
- TensorFlow Logo
+ TensorFlow Logo
- spaCy Logo
+ spaCy Logo
- fast.ai Logo
+ fast.ai Logo
diff --git a/docs/source/deployment/deploy-model-to-kubernetes/index.rst b/docs/source/deployment/deploy-model-to-kubernetes/index.rst
index c03dc5a5efcb3..80d961497f90e 100644
--- a/docs/source/deployment/deploy-model-to-kubernetes/index.rst
+++ b/docs/source/deployment/deploy-model-to-kubernetes/index.rst
@@ -126,6 +126,7 @@ Next, use the MLflow UI to compare the models that you have produced. In the sam
 as the one that contains the ``mlruns`` run:

 .. code-section::
+
     .. code-block:: shell

         mlflow ui
diff --git a/docs/source/getting-started/index.rst b/docs/source/getting-started/index.rst
index d99945f0d550c..03e741e09fceb 100644
--- a/docs/source/getting-started/index.rst
+++ b/docs/source/getting-started/index.rst
@@ -30,7 +30,7 @@ If you would like to get started immediately by interactively running the notebo
 .. raw:: html

- Download the Notebook
+ Download the Notebook
 Quickstart elements
 ^^^^^^^^^^^^^^^^^^^
@@ -76,7 +76,7 @@ If you would like to get started immediately by interactively running the notebo
 .. raw:: html

- Download the Notebook
+ Download the Notebook
 Guide sections
 ^^^^^^^^^^^^^^
diff --git a/docs/source/getting-started/intro-quickstart/index.rst b/docs/source/getting-started/intro-quickstart/index.rst
index a8302ddb82379..035c4bf7ca0d5 100644
--- a/docs/source/getting-started/intro-quickstart/index.rst
+++ b/docs/source/getting-started/intro-quickstart/index.rst
@@ -41,6 +41,7 @@ Step 1 - Get MLflow
 MLflow is available on PyPI. If you don't already have it installed on your system, you can install it with:

 .. code-section::
+
     .. code-block:: bash
         :name: download-mlflow
@@ -53,6 +54,7 @@ We're going to start a local MLflow Tracking Server, which we will connect to fo
 From a terminal, run:

 .. code-section::
+
     .. code-block:: bash
         :name: tracking-server-start
@@ -72,6 +74,7 @@ In this section, we're going to log a model with MLflow. A quick overview of the

 .. code-section::
+
     .. code-block:: python
         :name: train-model
@@ -132,6 +135,7 @@ The steps that we will take are:
    to ensure that the loggable content (parameters, metrics, artifacts, and the model) are fully materialized prior to logging.

 .. code-section::
+
     .. code-block:: python
         :name: log-model
@@ -177,6 +181,7 @@ After logging the model, we can perform inference by:
    below.

 .. code-section::
+
     .. code-block:: python
         :name: load-model
diff --git a/docs/source/getting-started/logging-first-model/step1-tracking-server.rst b/docs/source/getting-started/logging-first-model/step1-tracking-server.rst
index 0b102cd6d8a65..52e2713bbec6f 100644
--- a/docs/source/getting-started/logging-first-model/step1-tracking-server.rst
+++ b/docs/source/getting-started/logging-first-model/step1-tracking-server.rst
@@ -15,6 +15,7 @@ Step 1: Install MLflow from PyPI
 MLflow is conveniently available on PyPI. Installing it is as simple as running a pip command.

 .. code-section::
+
     .. code-block:: bash
         :name: download-mlflow
@@ -27,6 +28,7 @@ To begin, you'll need to initiate the MLflow Tracking Server. Remember to keep t
 running during the tutorial, as closing it will shut down the server.

 .. code-section::
+
     .. code-block:: bash
         :name: tracking-server-start
diff --git a/docs/source/getting-started/logging-first-model/step2-mlflow-client.rst b/docs/source/getting-started/logging-first-model/step2-mlflow-client.rst
index e57d35cee4fd1..0f70e76bdd4cd 100644
--- a/docs/source/getting-started/logging-first-model/step2-mlflow-client.rst
+++ b/docs/source/getting-started/logging-first-model/step2-mlflow-client.rst
@@ -18,6 +18,7 @@ Importing Dependencies
 In order to use the MLflowClient API, the initial step involves importing the necessary modules.

 .. code-section::
+
     .. code-block:: python
         :name: imports
         :emphasize-lines: 1
@@ -43,6 +44,7 @@ assigned the server when we started it. The two components that we submitted as
 ``host`` and the ``port``. Combined, these form the ``tracking_uri`` argument that we will specify to start an instance of the client.

 .. code-section::
+
     .. code-block:: python
         :name: client
@@ -70,6 +72,7 @@ The first thing that we're going to do is to view the metadata associated with t
 use of the :py:func:`mlflow.client.MlflowClient.search_experiments` API. Let's issue a search query to see what the results are.

 .. code-section::
+
     .. code-block:: python

         all_experiments = client.search_experiments()
@@ -91,6 +94,7 @@ To get familiar with accessing elements from returned collections from MLflow AP
 query and extract these attributes into a dict.

 .. code-section::
+
     .. code-block:: python

         default_experiment = [
diff --git a/docs/source/getting-started/logging-first-model/step3-create-experiment.rst b/docs/source/getting-started/logging-first-model/step3-create-experiment.rst
index 131e5991478e7..c73cc7da240f6 100644
--- a/docs/source/getting-started/logging-first-model/step3-create-experiment.rst
+++ b/docs/source/getting-started/logging-first-model/step3-create-experiment.rst
@@ -105,6 +105,7 @@ Creating the Apples Experiment with Meaningful tags
 ---------------------------------------------------

 .. code-section::
+
     .. code-block:: python

         # Provide an Experiment description that will appear in the UI
diff --git a/docs/source/getting-started/logging-first-model/step4-experiment-search.rst b/docs/source/getting-started/logging-first-model/step4-experiment-search.rst
index 3567a9b6a3e8e..d73cd8ea1b6c0 100644
--- a/docs/source/getting-started/logging-first-model/step4-experiment-search.rst
+++ b/docs/source/getting-started/logging-first-model/step4-experiment-search.rst
@@ -51,6 +51,7 @@ tags, note the particular syntax used. The custom tag names are wrapped with bac
 condition is wrapped in single quotes.

 .. code-section::
+
     .. code-block:: python

         # Use search_experiments() to search on the project_name tag key
diff --git a/docs/source/getting-started/logging-first-model/step5-synthetic-data.rst b/docs/source/getting-started/logging-first-model/step5-synthetic-data.rst
index 8c420512df629..8d3e90d075bc7 100644
--- a/docs/source/getting-started/logging-first-model/step5-synthetic-data.rst
+++ b/docs/source/getting-started/logging-first-model/step5-synthetic-data.rst
@@ -21,6 +21,7 @@ We can introduce this correlation by crafting a relationship between our feature
 The random elements of some of the factors will handle the unexplained variance portion.

 .. code-section::
+
     .. code-block:: python

         import pandas as pd
diff --git a/docs/source/getting-started/logging-first-model/step6-logging-a-run.rst b/docs/source/getting-started/logging-first-model/step6-logging-a-run.rst
index 81b51d83897bd..55b87b4595257 100644
--- a/docs/source/getting-started/logging-first-model/step6-logging-a-run.rst
+++ b/docs/source/getting-started/logging-first-model/step6-logging-a-run.rst
@@ -76,6 +76,7 @@ using MLflow to tracking a training iteration.
 To start with, we will need to import our required modules.

 .. code-section::
+
     .. code-block:: python

         import mlflow
@@ -94,6 +95,7 @@ In order to use the ``fluent`` API, we'll need to set the global reference to th
 address. We do this via the following command:

 .. code-section::
+
     .. code-block:: python

         mlflow.set_tracking_uri("http://127.0.0.1:8080")
@@ -104,6 +106,7 @@ to log runs to. The parent-child relationship of Experiments to Runs and its uti
 clear once we start iterating over some ideas and need to compare the results of our tests.

 .. code-section::
+
     .. code-block:: python

         # Sets the current active experiment to the "Apple_Models" experiment and
@@ -123,6 +126,7 @@ Firstly, let's look at what we're going to be running. Following the code displa
 an annotated version of the code.

 .. code-section::
+
     .. code-block:: python

         # Split the data into features and target and drop irrelevant date field and target field
diff --git a/docs/source/getting-started/quickstart-1/index.rst b/docs/source/getting-started/quickstart-1/index.rst
index e96a1f5fcaa6c..01386695e8227 100644
--- a/docs/source/getting-started/quickstart-1/index.rst
+++ b/docs/source/getting-started/quickstart-1/index.rst
@@ -86,6 +86,7 @@ In addition, or if you are using a library for which ``autolog`` is not yet supp
 This example demonstrates the use of these functions:

 .. code-section::
+
     .. code-block:: python

         import os
diff --git a/docs/source/getting-started/quickstart-2/index.rst b/docs/source/getting-started/quickstart-2/index.rst
index 7a7002ab719bd..8b419ce91848f 100644
--- a/docs/source/getting-started/quickstart-2/index.rst
+++ b/docs/source/getting-started/quickstart-2/index.rst
@@ -195,7 +195,7 @@ Choose **Chart view**. Choose the **Parallel coordinates** graph and configure i
         class="align-center"
         id="chart-view"
         alt="Screenshot of MLflow tracking UI parallel coordinates graph showing runs"
-    >
+    />

 The red graphs on this graph are runs that fared poorly. The lowest one is a baseline run with both **lr** and **momentum** set to 0.0. That baseline run has an RMSE of ~0.89. The other red lines show that high **momentum** can also lead to poor results with this problem and architecture.
diff --git a/docs/source/llms/custom-pyfunc-for-llms/index.rst b/docs/source/llms/custom-pyfunc-for-llms/index.rst
index dbaa694acbe78..65de43cb7e620 100644
--- a/docs/source/llms/custom-pyfunc-for-llms/index.rst
+++ b/docs/source/llms/custom-pyfunc-for-llms/index.rst
@@ -27,7 +27,7 @@ Explore the Tutorial
 .. raw:: html

- View the Custom Pyfunc for LLMs Tutorial
+ View the Custom Pyfunc for LLMs Tutorial
 .. toctree::
     :maxdepth: 1
diff --git a/docs/source/llms/custom-pyfunc-for-llms/notebooks/index.rst b/docs/source/llms/custom-pyfunc-for-llms/notebooks/index.rst
index 3eb14978ee03b..13717fa8e229e 100644
--- a/docs/source/llms/custom-pyfunc-for-llms/notebooks/index.rst
+++ b/docs/source/llms/custom-pyfunc-for-llms/notebooks/index.rst
@@ -64,7 +64,7 @@ If you'd like to run a copy of the notebooks locally in your environment, you ca
 .. raw:: html

- Download the LLM Custom Pyfunc notebook
+ Download the LLM Custom Pyfunc notebook
 .. note:: To execute the notebooks, ensure you either have a local MLflow Tracking Server running or adjust the ``mlflow.set_tracking_uri()`` to point to an active MLflow Tracking Server instance.
diff --git a/docs/source/llms/gateway/guides/step1-create-gateway.rst b/docs/source/llms/gateway/guides/step1-create-gateway.rst
index 734fa5374fa36..c8b52447b470d 100644
--- a/docs/source/llms/gateway/guides/step1-create-gateway.rst
+++ b/docs/source/llms/gateway/guides/step1-create-gateway.rst
@@ -8,6 +8,7 @@ dependencies, including ``uvicorn`` and ``fastapi``. Note that direct dependenci
 unnecessary, as all supported providers are abstracted from the developer.

 .. code-section::
+
     .. code-block:: bash
         :name: install-gateway
@@ -22,6 +23,7 @@ of leaking the token in code. The AI Gateway, when started, will read the value
 variable without any additional action required.

 .. code-section::
+
     .. code-block:: bash
         :name: token
@@ -37,6 +39,7 @@ service restart is not required for changes to take effect and can instead be do
 configuration file that is defined at server start, permitting dynamic route creation without downtime of the service.

 .. code-section::
+
     .. code-block:: yaml
         :name: configure-gateway
@@ -85,6 +88,7 @@ the URL: ``http://localhost:5000``. To modify these default settings, use the
 ``mlflow gateway --help`` command to view additional configuration options.

 .. code-section::
+
     .. code-block:: bash
         :name: start-gateway
diff --git a/docs/source/llms/gateway/guides/step2-query-gateway.rst b/docs/source/llms/gateway/guides/step2-query-gateway.rst
index 1e4204052ed2f..3226877192e3d 100644
--- a/docs/source/llms/gateway/guides/step2-query-gateway.rst
+++ b/docs/source/llms/gateway/guides/step2-query-gateway.rst
@@ -22,6 +22,7 @@ Setup
 First, import the necessary functions and define the gateway URI.

 .. code-section::
+
     .. code-block:: python
         :name: setup
@@ -39,6 +40,7 @@ which is the string the Language Model (LLM) will respond to. The gateway also a
 various other parameters. For detailed information, please refer to the documentation.

 .. code-section::
+
     .. code-block:: python
         :name: completions
@@ -72,6 +74,7 @@ takes a list of dictionaries formatted as follows:
 For further details, please consult the documentation.

 .. code-section::
+
     .. code-block:: python
         :name: chat
@@ -103,6 +106,7 @@ string or a list of strings. The gateway then processes these strings and return
 respective numerical vectors. Let's proceed with an example...

 .. code-section::
+
     .. code-block:: python
         :name: embeddings
diff --git a/docs/source/llms/gateway/index.rst b/docs/source/llms/gateway/index.rst
index 55392bcfd2cb5..2e395ae0e46dc 100644
--- a/docs/source/llms/gateway/index.rst
+++ b/docs/source/llms/gateway/index.rst
@@ -42,7 +42,7 @@ as fast as possible, the guides below will be your best first stop.
 .. raw:: html

- View the AI Gateway Getting Started Guide
+ View the AI Gateway Getting Started Guide
 .. _gateway-quickstart:
diff --git a/docs/source/llms/index.rst b/docs/source/llms/index.rst
index 8a353126c5f7c..ba6754ef4ee14 100644
--- a/docs/source/llms/index.rst
+++ b/docs/source/llms/index.rst
@@ -67,47 +67,47 @@ configuration and management of your LLM serving needs, select the provider that
- OpenAI Logo
+ OpenAI Logo
- MosaicML Logo
+ MosaicML Logo
- Anthropic Logo
+ Anthropic Logo
- Cohere Logo
+ Cohere Logo
- MLflow Logo
+ MLflow Logo
- AWS Logo
+ AWS Logo
- PaLM Logo
+ PaLM Logo
- ai21Labs Logo
+ ai21Labs Logo
- Hugging Face Logo
+ Hugging Face Logo
@@ -246,25 +246,25 @@ Select the integration below to read the documentation on how to leverage MLflow
- HuggingFace Logo
+ HuggingFace Logo
- Sentence Transformers Logo
+ Sentence Transformers Logo
- LangChain Logo
+ LangChain Logo
- OpenAI Logo
+ OpenAI Logo
diff --git a/docs/source/llms/llm-evaluate/index.rst b/docs/source/llms/llm-evaluate/index.rst
index 67463fe80d060..20f8033a988d7 100644
--- a/docs/source/llms/llm-evaluate/index.rst
+++ b/docs/source/llms/llm-evaluate/index.rst
@@ -23,7 +23,7 @@ functionality for LLMs, please navigate to the notebook collection below:
 .. raw:: html

- View the Notebook Guides
+ View the Notebook Guides
 Quickstart
 ----------
diff --git a/docs/source/llms/llm-evaluate/notebooks/index.rst b/docs/source/llms/llm-evaluate/notebooks/index.rst
index 82c4af9f14a18..db43cfef8c989 100644
--- a/docs/source/llms/llm-evaluate/notebooks/index.rst
+++ b/docs/source/llms/llm-evaluate/notebooks/index.rst
@@ -21,13 +21,13 @@ If you would like a copy of this notebook to execute in your environment, downlo
 .. raw:: html

- Download the notebook
+ Download the notebook
 To follow along and see the sections of the notebook guide, click below:

 .. raw:: html

- View the Notebook
+ View the Notebook
 RAG Evaluation Notebook
@@ -37,12 +37,12 @@ If you would like a copy of this notebook to execute in your environment, downlo
 .. raw:: html

- Download the notebook
+ Download the notebook
 To follow along and see the sections of the notebook guide, click below:

 .. raw:: html

- View the Notebook
+ View the Notebook
diff --git a/docs/source/llms/rag/index.rst b/docs/source/llms/rag/index.rst
index b40351ea0a6f4..f82d88bf4ba47 100644
--- a/docs/source/llms/rag/index.rst
+++ b/docs/source/llms/rag/index.rst
@@ -41,7 +41,7 @@ Explore the Tutorial
 .. raw:: html

- View the RAG Question Generation Tutorial
+ View the RAG Question Generation Tutorial
 .. toctree::
     :maxdepth: 1
diff --git a/docs/source/llms/rag/notebooks/index.rst b/docs/source/llms/rag/notebooks/index.rst
index 6737f769e7bc7..0fc5aa0a6c687 100644
--- a/docs/source/llms/rag/notebooks/index.rst
+++ b/docs/source/llms/rag/notebooks/index.rst
@@ -22,10 +22,10 @@ If you would like a copy of this notebook to execute in your environment, downlo
 .. raw:: html

- Download the notebook
+ Download the notebook
 To follow along and see the sections of the notebook guide, click below:

 .. raw:: html

- View the Notebook
+ View the Notebook
diff --git a/docs/source/traditional-ml/creating-custom-pyfunc/notebooks/index.rst b/docs/source/traditional-ml/creating-custom-pyfunc/notebooks/index.rst
index 85d4feba57ef3..3c2990bdccb53 100644
--- a/docs/source/traditional-ml/creating-custom-pyfunc/notebooks/index.rst
+++ b/docs/source/traditional-ml/creating-custom-pyfunc/notebooks/index.rst
@@ -177,9 +177,9 @@ clicking the respective links to each notebook in this guide:
 .. raw:: html

- Download the Introduction notebook
- Download the Basic Pyfunc notebook
- Download the Predict Override notebook
+ Download the Introduction notebook
+ Download the Basic Pyfunc notebook
+ Download the Predict Override notebook
 .. note:: In order to run the notebooks, please ensure that you either have a local MLflow Tracking Server started or modify the
diff --git a/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/notebooks/index.rst b/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/notebooks/index.rst
index ce6b31e16a563..e5e9d09fc5770 100644
--- a/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/notebooks/index.rst
+++ b/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/notebooks/index.rst
@@ -70,9 +70,9 @@ clicking the respective links to each notebook in this guide:
 .. raw:: html

- Download the main notebook
- Download the Parent-Child Runs notebook
- Download the Plot Logging in MLflow notebook
+ Download the main notebook
+ Download the Parent-Child Runs notebook
+ Download the Plot Logging in MLflow notebook
 .. note:: In order to run the notebooks, please ensure that you either have a local MLflow Tracking Server started or modify the
diff --git a/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part1-child-runs.rst b/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part1-child-runs.rst
index abdc887d36fe1..0b195d673e568 100644
--- a/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part1-child-runs.rst
+++ b/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part1-child-runs.rst
@@ -105,6 +105,7 @@ relatively performance amongst our iterative trials.
 If we were to use each iteration as its own MLflow run, our code might look something like this:

 .. code-section::
+
     .. code-block:: python

         import random
@@ -162,6 +163,7 @@ What happens when we need to run this again with some slight modifications?
 Our code might change in-place with the values being tested:

 .. code-section::
+
     .. code-block:: python

         def log_run(run_name, test_no):
@@ -204,6 +206,7 @@ Adapting for Parent and Child Runs
 The code below demonstrates these modifications to our original hyperparameter tuning example.

 .. code-section::
+
     .. code-block:: python

         import random
@@ -276,6 +279,7 @@ The real benefit of this nested architecture becomes much more apparent when we
 with different conditions of hyperparameter selection criteria.

 .. code-section::
+
     .. code-block:: python

         # Execute modified hyperparameter tuning runs with custom parameter choices
@@ -291,6 +295,7 @@ with different conditions of hyperparameter selection criteria.
 ... and even more runs ...

 .. code-section::
+
     .. code-block:: python

         param_1_values = ["b", "c"]
diff --git a/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part2-logging-plots.rst b/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part2-logging-plots.rst
index e9c2afb6fe41b..fff4293a53435 100644
--- a/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part2-logging-plots.rst
+++ b/docs/source/traditional-ml/hyperparameter-tuning-with-child-runs/part2-logging-plots.rst
@@ -105,6 +105,7 @@ materialized plots, which, if not regenerated after data modification, can lead
 and errors in data representation.

 .. code-section::
+
     .. code-block:: python

         def plot_box_weekend(df, style="seaborn", plot_size=(10, 8)):
@@ -152,6 +153,7 @@ remains seamlessly compatible with MLflow, ensuring the same level of organizati
 with additional flexibility in plot access and usage.

 .. code-section::
+
     .. code-block:: python

         def plot_correlation_matrix_and_save(
@@ -213,6 +215,7 @@ the more generic artifact writer (it supports any file type) ``mlflow.log_artifa

 .. code-section::
+
     .. code-block:: python

         mlflow.set_tracking_uri("http://127.0.0.1:8080")
diff --git a/docs/source/traditional-ml/index.rst b/docs/source/traditional-ml/index.rst
index 72ad577ef7047..bcfafa7b78a02 100644
--- a/docs/source/traditional-ml/index.rst
+++ b/docs/source/traditional-ml/index.rst
@@ -32,37 +32,37 @@ The officially supported integrations for traditional ML libraries include:
- scikit learn
+ scikit learn
- XGBoost Logo
+ XGBoost Logo
- Spark Logo
+ Spark Logo
- LightGBM Logo
+ LightGBM Logo
- CatBoost Logo
+ CatBoost Logo
- Statsmodels Logo
+ Statsmodels Logo
- Prophet Logo
+ Prophet Logo