Conversation
**Walkthrough**

Adds a new Evidently solution guide, updates Featureform storage guidance, and introduces an Evidently quickstart bundle: two Python examples (LLM evaluation and data/model checks), a requirements.txt, and a shell script for environment setup.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Script as llm_evaluation.py
    participant Evidently as Evidently SDK
    participant LLM as LLM Provider
    participant WS as Evidently RemoteWorkspace
    User->>Script: Run script
    Script->>Script: Setup logging, load env
    Script->>Script: Prepare DataFrame (Q&A samples)
    Script->>Evidently: Build Dataset(DataDefinition + Descriptors)
    Note right of Evidently: Sentiment, TextLength, DeclineLLMEval
    Evidently->>LLM: Evaluate answer (provider/model options)
    LLM-->>Evidently: Eval results
    Script->>Evidently: Generate Report (TextEvals)
    Script->>WS: Connect (URL, secret)
    WS-->>Script: Workspace handle
    Script->>WS: Get or create project "llm_evaluation"
    Script->>WS: Save run (exclude raw data)
    WS-->>Script: Run saved
    Script-->>User: Success & cleanup (close litellm)
```
```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Script as data_and_ml_checks.py
    participant Evidently as Evidently SDK
    participant WS as Evidently RemoteWorkspace
    User->>Script: Run script
    Script->>Script: Setup logging, load env
    Script->>Script: Load OpenML "adult" data
    Script->>Script: Split into reference/production
    Script->>Evidently: Build Datasets (DataDefinition)
    Script->>Evidently: Generate Report (DataDriftPreset)
    Script->>WS: Connect (URL, secret)
    WS-->>Script: Workspace handle
    Script->>WS: Get or create project
    Script->>WS: Save run (exclude raw data)
    WS-->>Script: Run saved
    Script-->>User: Success
```
**Estimated code review effort**

🎯 3 (Moderate) | ⏱️ ~25 minutes

**Possibly related PRs**
**Pre-merge checks (3 passed)**

✅ Passed checks (3 passed)
Actionable comments posted: 6
🧹 Nitpick comments (10)
**docs/public/evidently/quickstart/requirements.txt (1)**

`1-4`: Consider loosening the Evidently pin to allow patch updates, and bump the NumPy lower bound for newer Python. 0.7.x patch bumps often include fixes; NumPy 1.22 is EOL and not built for Python 3.12. Up to you, but this can reduce environment friction.

Example:

```diff
-evidently[llm]==0.7.14
-numpy>=1.22.0,<2.0.0
+evidently[llm]>=0.7.14,<0.8
+numpy>=1.24.0,<2.0.0
```

If you prefer to stay pinned, at least confirm compatibility:

```bash
#!/bin/bash
python -V
python - <<'PY'
import sys
print("OK: Python", sys.version)
PY
pip install 'evidently[llm]==0.7.14' 'numpy>=1.24,<2' 'pandas>=1.5,<3' 'scikit-learn>=1.2' 'litellm>=1.70'
python - <<'PY'
import pandas, sklearn, evidently
from evidently.presets import TextEvals, DataDriftPreset
print("Imports OK", pandas.__version__)
PY
```

**docs/en/solutions/How_to_Install_and_use_Featureform.md (1)**
`218-218`: Tighten phrasing: "pre-provisioned" instead of "pre-prepared". Minor English polish and clarity.

```diff
- The cluster needs to have CSI pre-installed or `PersistentVolume` pre-prepared.
+ Ensure a CSI driver is installed in the cluster, or pre-provision a `PersistentVolume`.
```

**docs/public/evidently/quickstart/setup-env.sh (1)**
`8-12`: Align defaults with the docs and add guidance for the API URL. The docs say the default provider is "openai" while this file sets "deepseek". Either is fine—just be consistent, and hint at API URLs for hosted providers.

```diff
-export LLM_PROVIDER="deepseek"
-export LLM_API_KEY="your-api-key"
-export LLM_API_URL=""
-export LLM_MODEL="deepseek-chat"
+export LLM_PROVIDER="openai"       # or keep "deepseek" but match the docs
+export LLM_API_KEY="your-api-key"  # required for most hosted providers
+# For DeepSeek: https://api.deepseek.com ; for OpenAI: https://api.openai.com/v1
+export LLM_API_URL=""
+export LLM_MODEL="gpt-4o-mini"     # or "deepseek-chat" if using DeepSeek
```

**docs/en/solutions/How_to_Install_and_use_Evidently.md (2)**
`196-201`: Keep version mentions consistent with requirements. Docs reference `evidently.ALL.v0.7.14-1.tgz` while requirements pin `evidently[llm]==0.7.14`. Make sure these stay in sync to avoid user confusion.

Would you like me to scan the repo for other Evidently version references and open a follow-up issue if mismatches are found?
`339-351`: Note about secrets and UI mode. Since the quickstart scripts use RemoteWorkspace, explicitly warn that calls will fail if the UI is deployed with a secret and `EVIDENTLY_SECRET` is unset.

```diff
 export EVIDENTLY_SECRET="your-secret"
 export DEBUG="false"
+
+Note: If the UI is deployed with a secret, `EVIDENTLY_SECRET` must be set; otherwise SDK operations (create project, add run) will fail.
```

**docs/public/evidently/quickstart/data_and_ml_checks.py (2)**
`85-87`: Use `logging.exception()` to keep tracebacks in logs. Preserves stack traces for easier debugging without changing control flow.

```diff
-    except Exception as e:
-        logger.error(f"Dataset creation failed: {e}")
+    except Exception:
+        logger.exception("Dataset creation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Report generation failed: {e}")
+    except Exception:
+        logger.exception("Report generation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Workspace preparation failed: {e}")
+    except Exception:
+        logger.exception("Workspace preparation failed")
         raise
@@
-except Exception as e:
-    logger.error(f"Execution failed: {e}")
+except Exception:
+    logger.exception("Execution failed")
     raise
```

Also applies to: 104-106, 133-135, 159-161
`1-1`: Shebang present but file may not be executable. Either make it executable (`chmod +x`) or drop the shebang, since the docs invoke it via `python file.py`.

**docs/public/evidently/quickstart/llm_evaluation.py (3)**
`63-70`: Don’t silently swallow cleanup errors. Log at debug level when litellm cleanup fails so issues are observable without being noisy.

```diff
-        except Exception:
-            pass
+        except Exception as err:
+            logging.getLogger(__name__).debug("litellm cleanup skipped: %s", err)
```
`137-139`: Use `logging.exception()` for richer error logs. Same rationale as the other script.

```diff
-    except Exception as e:
-        logger.error(f"Dataset creation failed: {e}")
+    except Exception:
+        logger.exception("Dataset creation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Report generation failed: {e}")
+    except Exception:
+        logger.exception("Report generation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Workspace preparation failed: {e}")
+    except Exception:
+        logger.exception("Workspace preparation failed")
         raise
@@
-except Exception as e:
-    logger.error(f"Execution failed: {e}")
+except Exception:
+    logger.exception("Execution failed")
     raise
```

Also applies to: 156-158, 185-187, 211-213
`1-1`: Shebang present but file may not be executable. Either make it executable, or keep running via `python llm_evaluation.py` and remove the shebang.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (6)
- `docs/en/solutions/How_to_Install_and_use_Evidently.md` (1 hunks)
- `docs/en/solutions/How_to_Install_and_use_Featureform.md` (1 hunks)
- `docs/public/evidently/quickstart/data_and_ml_checks.py` (1 hunks)
- `docs/public/evidently/quickstart/llm_evaluation.py` (1 hunks)
- `docs/public/evidently/quickstart/requirements.txt` (1 hunks)
- `docs/public/evidently/quickstart/setup-env.sh` (1 hunks)
🧰 Additional context used
🪛 Ruff (0.12.2)
docs/public/evidently/quickstart/data_and_ml_checks.py
1-1: Shebang is present but file is not executable
(EXE001)
83-83: Consider moving this statement to an else block
(TRY300)
86-86: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
102-102: Consider moving this statement to an else block
(TRY300)
105-105: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
131-131: Consider moving this statement to an else block
(TRY300)
134-134: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
160-160: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
docs/public/evidently/quickstart/llm_evaluation.py
1-1: Shebang is present but file is not executable
(EXE001)
68-69: try-except-pass detected, consider logging the exception
(S110)
68-68: Do not catch blind exception: Exception
(BLE001)
135-135: Consider moving this statement to an else block
(TRY300)
138-138: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
154-154: Consider moving this statement to an else block
(TRY300)
157-157: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
183-183: Consider moving this statement to an else block
(TRY300)
186-186: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
212-212: Use logging.exception instead of logging.error
Replace with exception
(TRY400)
> **Python Version Requirements:**
> - Supports Python 3.10
>
> ```bash
> pip install -r requirements.txt
> ```
🛠️ Refactor suggestion
Clarify working directory before pip install.
Without a `cd`, `pip install -r requirements.txt` may run from the wrong directory.
````diff
-```bash
-pip install -r requirements.txt
-```
+```bash
+cd docs/public/evidently/quickstart
+pip install -r requirements.txt
+```
````

🤖 Prompt for AI Agents
+```🤖 Prompt for AI Agents
In docs/en/solutions/How_to_Install_and_use_Evidently.md around lines 318 to
325, the installation step runs pip install -r requirements.txt without
specifying the working directory; update the markdown code block to first change
directory to docs/public/evidently/quickstart (or the correct project quickstart
path) then run pip install -r requirements.txt so users execute the command from
the intended directory, and ensure the fenced bash block includes both commands
in order.
```python
try:
    # Initialize workspace connection
    ws = RemoteWorkspace(
        base_url=os.getenv("EVIDENTLY_URL", "http://localhost:8000"),
        secret=os.getenv("EVIDENTLY_SECRET")
    )
    logger.debug("Workspace connection established")

    projects = ws.search_project(PROJECT_NAME)
```
🛠️ Refactor suggestion
Fail fast if EVIDENTLY_SECRET is required but missing.
Avoids confusing 401 errors later.
```diff
 try:
-    # Initialize workspace connection
-    ws = RemoteWorkspace(
-        base_url=os.getenv("EVIDENTLY_URL", "http://localhost:8000"),
-        secret=os.getenv("EVIDENTLY_SECRET")
-    )
+    # Initialize workspace connection
+    secret = os.getenv("EVIDENTLY_SECRET")
+    if not secret:
+        logger.warning("EVIDENTLY_SECRET is not set. If the UI is secured, SDK calls will fail.")
+    ws = RemoteWorkspace(
+        base_url=os.getenv("EVIDENTLY_URL", "http://localhost:8000"),
+        secret=secret,
+    )
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
try:
    # Initialize workspace connection
    secret = os.getenv("EVIDENTLY_SECRET")
    if not secret:
        logger.warning("EVIDENTLY_SECRET is not set. If the UI is secured, SDK calls will fail.")
    ws = RemoteWorkspace(
        base_url=os.getenv("EVIDENTLY_URL", "http://localhost:8000"),
        secret=secret,
    )
    logger.debug("Workspace connection established")

    projects = ws.search_project(PROJECT_NAME)
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/data_and_ml_checks.py around lines 113 to
121, the code initializes RemoteWorkspace using EVIDENTLY_SECRET without
validating it; add an explicit check after reading os.getenv("EVIDENTLY_SECRET")
to fail fast if the secret is required and missing by logging an error and
raising SystemExit (or ValueError) with a clear message (or return/exit), so the
script terminates immediately with a helpful message instead of later producing
confusing 401 errors when the workspace is used.
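The fail-fast idea can be sketched in isolation (variable names echo the quickstart, but this is an illustrative stand-in, not the PR's code):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing fast if unset or empty."""
    value = os.getenv(name)
    if not value:
        raise SystemExit(f"{name} is required but not set. Export it before running the script.")
    return value

# Demo: a variable we set is returned; a missing one aborts with a clear message.
os.environ["EVIDENTLY_URL_DEMO"] = "http://localhost:8000"
print(require_env("EVIDENTLY_URL_DEMO"))  # http://localhost:8000

try:
    require_env("EVIDENTLY_SECRET_DEMO")
except SystemExit as exc:
    print(exc)  # EVIDENTLY_SECRET_DEMO is required but not set. ...
```

Raising `SystemExit` (or `ValueError`) at startup turns a later opaque 401 into an immediate, actionable error.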
```python
llm_provider = os.getenv("LLM_PROVIDER", "openai")
llm_model = os.getenv("LLM_MODEL", "gpt-4o-mini")
llm_api_key = os.getenv("LLM_API_KEY")
llm_api_url = os.getenv("LLM_API_URL")

logger.info(f"LLM Configuration: {llm_provider} - {llm_model}")
logger.debug(f"API Key: {'Set' if llm_api_key else 'Not set'}")
logger.debug(f"API URL: {llm_api_url or 'Not set'}")
```
🛠️ Refactor suggestion
Validate LLM API key when a hosted provider is selected.
Prevents confusing provider/auth errors later.
```diff
 llm_provider = os.getenv("LLM_PROVIDER", "openai")
 llm_model = os.getenv("LLM_MODEL", "gpt-4o-mini")
 llm_api_key = os.getenv("LLM_API_KEY")
 llm_api_url = os.getenv("LLM_API_URL")

 logger.info(f"LLM Configuration: {llm_provider} - {llm_model}")
+if llm_provider not in {"ollama"} and not llm_api_key:
+    raise ValueError(f"LLM_API_KEY is required for provider '{llm_provider}'. Set it in setup-env.sh.")
```

📝 Committable suggestion
```python
llm_provider = os.getenv("LLM_PROVIDER", "openai")
llm_model = os.getenv("LLM_MODEL", "gpt-4o-mini")
llm_api_key = os.getenv("LLM_API_KEY")
llm_api_url = os.getenv("LLM_API_URL")

logger.info(f"LLM Configuration: {llm_provider} - {llm_model}")
if llm_provider not in {"ollama"} and not llm_api_key:
    raise ValueError(f"LLM_API_KEY is required for provider '{llm_provider}'. Set it in setup-env.sh.")
logger.debug(f"API Key: {'Set' if llm_api_key else 'Not set'}")
logger.debug(f"API URL: {llm_api_url or 'Not set'}")
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/llm_evaluation.py around lines 99 to 107,
the code logs LLM config but does not validate that an API key is provided when
a hosted provider is selected; add a check after reading LL
M_PROVIDER/LLM_API_KEY that, for hosted providers (e.g., "openai", "anthropic",
etc. — anything other than local/mock), ensures llm_api_key is present; if
missing, log an explicit error with provider name and exit/raise (process exit 1
or raise RuntimeError) so the script fails fast with a clear message rather than
producing confusing auth/provider errors later.
```python
provider_options = {
    "deepseek": DeepSeekOptions,
    "anthropic": AnthropicOptions,
    "gemini": GeminiOptions,
    "vertex_ai": VertexAIOptions,
    "mistral": MistralOptions,
    "ollama": OllamaOptions,
    "nebius": NebiusOptions,
}
# Choose appropriate options based on provider
options_class = provider_options.get(llm_provider, LLMOptions)
options = options_class(api_key=llm_api_key, api_url=llm_api_url)
```
💡 Verification agent
🧩 Analysis chain
Double-check provider option class names for Evidently 0.7.14.
If any of these moved/renamed, imports will fail. Safer to verify against the installed version.
Generate web check:
If any differ, switch to the correct import path or fallback to LLMOptions with provider hints.
🌐 Web query:
In Evidently 0.7.14, are DeepSeekOptions, AnthropicOptions, GeminiOptions, VertexAIOptions, MistralOptions, OllamaOptions, and NebiusOptions available under evidently.llm.utils.wrapper?
💡 Result:
Short answer: No — AnthropicOptions, GeminiOptions, VertexAIOptions, MistralOptions and OllamaOptions are exposed via the LLM wrapper, but DeepSeekOptions and NebiusOptions are not present in Evidently v0.7.14.
Sources:
- Evidently docs showing Options import and examples (Anthropic, Gemini, Vertex, Mistral, Ollama providers). [1]
- GitHub issue referencing GeminiOptions in the wrapper/examples. [2]
- PyPI release page for evidently 0.7.14 (version reference). [3]
Replace unsupported DeepSeekOptions and NebiusOptions with the generic fallback
In Evidently 0.7.14 the LLM wrapper does not expose DeepSeekOptions or NebiusOptions (imports will fail); update the provider_options mapping in docs/public/evidently/quickstart/llm_evaluation.py (lines 111–123) to use LLMOptions for "deepseek" and "nebius".
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/llm_evaluation.py around lines 111 to 123,
the provider_options mapping references DeepSeekOptions and NebiusOptions which
are not available in Evidently 0.7.14; update the mapping so that the keys
"deepseek" and "nebius" use the generic LLMOptions instead of DeepSeekOptions
and NebiusOptions respectively, leaving the other providers unchanged.
```text
# Evidently LLM Demo Requirements
evidently[llm]==0.7.14
litellm>=1.70.0
numpy>=1.22.0,<2.0.0
```
🛠️ Refactor suggestion
Add missing deps needed by quickstarts (pandas, scikit-learn).
Both Python examples import pandas, and data_and_ml_checks.py uses scikit-learn. Add them to avoid runtime import errors.
Apply:

```diff
 # Evidently LLM Demo Requirements
 evidently[llm]==0.7.14
 litellm>=1.70.0
 numpy>=1.22.0,<2.0.0
+Pillow>=9.0.0  # frequently required by matplotlib in Evidently reports
+pandas>=1.5,<3.0
+scikit-learn>=1.2
```

📝 Committable suggestion
```text
# Evidently LLM Demo Requirements
evidently[llm]==0.7.14
litellm>=1.70.0
numpy>=1.22.0,<2.0.0
Pillow>=9.0.0  # frequently required by matplotlib in Evidently reports
pandas>=1.5,<3.0
scikit-learn>=1.2
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/requirements.txt lines 1-4, the requirements
file is missing packages required by the quickstarts (pandas and scikit-learn),
causing runtime import errors; update the file to include pandas and
scikit-learn (pin or set compatible minimum versions consistent with the other
deps) by adding entries for pandas and scikit-learn so both examples and
data_and_ml_checks.py can import them successfully.
```diff
@@ -0,0 +1,12 @@
+#! /bin/bash
```
🛠️ Refactor suggestion
Fix shebang and add safe bash flags.
There’s an extra space after #!. Also add strict mode for safer env loading.
```diff
-#! /bin/bash
+#!/usr/bin/env bash
+set -euo pipefail
```

📝 Committable suggestion

```bash
#!/usr/bin/env bash
set -euo pipefail
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/setup-env.sh around line 1, the shebang has
an extra space and the script lacks safe bash flags; remove the space so it
reads #!/bin/bash and enable strict mode by adding e.g. set -euo pipefail and
IFS=$'\n\t' near the top to fail fast on errors, treat unset variables as
errors, propagate pipe failures, and set a safer IFS.
**Summary by CodeRabbit**

- **New Features**
- **Documentation**