Adding LLM artifact docs
Signed-off-by: Sunish Sheth <sunishsheth2009@gmail.com>
sunishsheth2009 committed Apr 14, 2023
1 parent 2bac047 commit 0bef6de
Showing 5 changed files with 81 additions and 1 deletion.
1 change: 1 addition & 0 deletions docs/source/index.rst
@@ -26,6 +26,7 @@ Get started using the :ref:`quickstart` or by reading about the :ref:`key concep
tutorials-and-examples/index
concepts
tracking
llm-tracking
projects
models
model-registry
72 changes: 72 additions & 0 deletions docs/source/llm-tracking.rst
@@ -0,0 +1,72 @@
.. _llm-tracking:

=====================
MLflow LLM Tracking
=====================

The MLflow LLM Tracking component is an API and UI for logging LLM inputs, outputs, and prompts
when running your machine learning code, and for later visualizing the results.
MLflow LLM Tracking lets you log evaluation results using :py:func:`mlflow.llm.log_predictions`.

.. contents:: Table of Contents
:local:
:depth: 2

.. _llm-tracking-concepts:

Concepts
==========

MLflow LLM Tracking is organized around the concept of *runs*, which are executions of some piece of
data science code. Each run records the following information:

Parameters
    Key-value input parameters of your choice. Both keys and values are strings. These could be
    LLM parameters such as ``top_k`` and ``temperature``.

Metrics
Key-value metrics, where the value is numeric. Each metric can be updated throughout the
course of the run (for example, to track how your model's loss function is converging), and
MLflow records and lets you visualize the metric's full history.

Predictions
    For offline evaluation, you can log predictions for your model by passing in inputs, outputs,
    and prompts. These predictions are logged as a CSV file as part of the MLflow artifacts.

Artifacts
    Output files in any format. For example, you can record images (for example, PNGs), models
    (for example, a pickled OpenAI model), and data files (for example, a
    `Parquet <https://parquet.apache.org/>`_ file) as artifacts.

You can optionally organize runs into *experiments*, which group and compare together runs for a
specific task. You can create an experiment using the ``mlflow experiments`` CLI, with
:py:func:`mlflow.create_experiment`, or using the corresponding REST parameters. The MLflow API and
UI let you create and search for experiments.

Once your runs have been recorded, you can query them and compare predictions using the :ref:`tracking_ui`.

.. _how_llm_predictions_recorded:

How LLM Tracking Information Is Recorded
========================================

Parameters: :py:func:`mlflow.log_param` logs a single key-value parameter in the currently active run. The key and
value are both strings. Use :py:func:`mlflow.log_params` to log multiple parameters at once.

Metrics: :py:func:`mlflow.log_metric` logs a single key-value metric. The value must always be a number.
MLflow remembers the history of values for each metric. Use :py:func:`mlflow.log_metrics` to log
multiple metrics at once.

Predictions: :py:func:`mlflow.llm.log_predictions` logs inputs, outputs, and prompts. Inputs and prompts can each
be a list of strings or a list of dictionaries, whereas outputs must be a list of strings.

Artifacts: :py:func:`mlflow.log_artifact` logs a local file or directory as an artifact, optionally taking an
``artifact_path`` to place it within the run's artifact URI. Run artifacts can be organized into
directories, so you can place an artifact in a directory this way.

.. _where_llm_tracking_information_are_recorded:

Where LLM Tracking Information Is Recorded
==========================================
All of the tracking information is recorded as part of an MLflow Experiment run.


6 changes: 6 additions & 0 deletions docs/source/python_api/mlflow.llm.rst
@@ -0,0 +1,6 @@
mlflow.llm
============

.. automodule:: mlflow.llm
:members:
:undoc-members:
1 change: 1 addition & 0 deletions mlflow/__init__.py
@@ -97,6 +97,7 @@
"pmdarima",
"diviner",
"transformers",
"llm",
]
except ImportError as e:
# We are conditional loading these commands since the skinny client does
2 changes: 1 addition & 1 deletion mlflow/llm.py
@@ -1,5 +1,5 @@
"""
-The ``mlflow.llm`` module provides a utility for Large Language Models (LLMs).
+The ``mlflow.llm`` module provides utilities for Large Language Models (LLMs).
"""

from mlflow.tracking.llm_utils import log_predictions
