Improve docs for Azure OpenAI environment vars #10441

Merged
merged 2 commits on Nov 16, 2023
7 changes: 7 additions & 0 deletions docs/source/python_api/mlflow.metrics.rst
@@ -204,3 +204,10 @@ You can also create your own generative AI :py:class:`EvaluationMetric <mlflow.metrics.EvaluationMetric>`\s.
When using generative AI :py:class:`EvaluationMetric <mlflow.metrics.EvaluationMetric>`\s, it is important to pass in an :py:class:`EvaluationExample <mlflow.metrics.genai.EvaluationExample>`

.. autoclass:: mlflow.metrics.genai.EvaluationExample
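
A minimal sketch of constructing an :py:class:`EvaluationExample <mlflow.metrics.genai.EvaluationExample>`; the question, answer, score, and grading context below are illustrative placeholders, not values from the MLflow documentation:

.. code-block:: python

    from mlflow.metrics.genai import EvaluationExample

    # Hypothetical example record used to ground an LLM-judged metric.
    example = EvaluationExample(
        input="What is MLflow?",
        output="MLflow is an open source platform for managing the ML lifecycle.",
        score=4,
        justification="The answer is accurate and concise.",
        grading_context={"targets": "MLflow is an open source MLOps platform."},
    )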

You must set the appropriate environment variables for the LLM service used for evaluation. For
example, if you are using OpenAI's API, you must set the ``OPENAI_API_KEY`` environment variable.
If you are using Azure OpenAI, you must also set the ``OPENAI_API_TYPE``, ``OPENAI_API_VERSION``,
``OPENAI_API_BASE``, and ``OPENAI_DEPLOYMENT_NAME`` environment variables. See the
`Azure OpenAI documentation <https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/switching-endpoints>`_
for details on these settings. These environment variables are not required if you are using a gateway route.
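
As a hedged illustration, the sketch below sets these variables from Python before running evaluation. The endpoint, API version, and deployment values are placeholders, not real defaults:

.. code-block:: python

    import os

    # Plain OpenAI: only the API key is required.
    os.environ["OPENAI_API_KEY"] = "<your-openai-api-key>"

    # Azure OpenAI: the additional variables below are also required.
    # All values shown here are placeholders for illustration only.
    os.environ["OPENAI_API_TYPE"] = "azure"
    os.environ["OPENAI_API_VERSION"] = "2023-05-15"
    os.environ["OPENAI_API_BASE"] = "https://<your-resource-name>.openai.azure.com/"
    os.environ["OPENAI_DEPLOYMENT_NAME"] = "<your-deployment-name>"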