
add document for Evidently #58

Merged
davidwtf merged 1 commit into main from add/evidently on Sep 10, 2025

Conversation

@davidwtf
Contributor

@davidwtf davidwtf commented Sep 10, 2025

Summary by CodeRabbit

  • New Features

    • Added Evidently quickstart assets: runnable examples for LLM evaluation and data drift monitoring, plus environment setup and dependency requirements to streamline local trials.
    • Enabled workspace integration and result saving in the UI for the demos.
  • Documentation

    • Introduced a comprehensive “How to Install and use Evidently” guide covering architecture, core concepts, Local UI deployment, configuration, and step-by-step quickstarts.
    • Expanded guidance on descriptors, presets, tracing, and advanced LLM evaluation.
    • Updated Featureform installation docs: simplified storage preparation text and removed the local storage subsection for clarity.

@coderabbitai
Contributor

coderabbitai Bot commented Sep 10, 2025

Walkthrough

Adds a new Evidently solution guide, updates Featureform storage guidance, and introduces an Evidently quickstart bundle: two Python examples (LLM evaluation and data/model checks), a requirements.txt, and a shell script for environment setup.

Changes

  • Evidently solution guide: docs/en/solutions/How_to_Install_and_use_Evidently.md
    New documentation detailing Evidently architecture, concepts, data formats, UI deployment, quickstarts (LLM evaluation, data drift), configs, and usage steps.
  • Featureform docs update: docs/en/solutions/How_to_Install_and_use_Featureform.md
    Revised storage preparation text; removed the “Creating Local Storage” subsection and related PV YAML guidance.
  • Evidently quickstart scripts: docs/public/evidently/quickstart/llm_evaluation.py, docs/public/evidently/quickstart/data_and_ml_checks.py
    Added runnable examples: LLM response evaluation with descriptors and TextEvals; data drift detection using DataDriftPreset. Both connect to RemoteWorkspace and persist runs.
  • Evidently quickstart env/deps: docs/public/evidently/quickstart/requirements.txt, docs/public/evidently/quickstart/setup-env.sh
    Added dependencies (evidently[llm], litellm, numpy) and an env setup script exporting Evidently and LLM config variables.

Sequence Diagram(s)

```mermaid
sequenceDiagram
  autonumber
  actor User
  participant Script as llm_evaluation.py
  participant Evidently as Evidently SDK
  participant LLM as LLM Provider
  participant WS as Evidently RemoteWorkspace

  User->>Script: Run script
  Script->>Script: Setup logging, load env
  Script->>Script: Prepare DataFrame (Q&A samples)
  Script->>Evidently: Build Dataset(DataDefinition + Descriptors)
  Note right of Evidently: Sentiment, TextLength, DeclineLLMEval
  Evidently->>LLM: Evaluate answer (provider/model options)
  LLM-->>Evidently: Eval results
  Script->>Evidently: Generate Report (TextEvals)
  Script->>WS: Connect (URL, secret)
  WS-->>Script: Workspace handle
  Script->>WS: Get or create project "llm_evaluation"
  Script->>WS: Save run (exclude raw data)
  WS-->>Script: Run saved
  Script-->>User: Success & cleanup (close litellm)
```
```mermaid
sequenceDiagram
  autonumber
  actor User
  participant Script as data_and_ml_checks.py
  participant Evidently as Evidently SDK
  participant WS as Evidently RemoteWorkspace

  User->>Script: Run script
  Script->>Script: Setup logging, load env
  Script->>Script: Load OpenML "adult" data
  Script->>Script: Split into reference/production
  Script->>Evidently: Build Datasets (DataDefinition)
  Script->>Evidently: Generate Report (DataDriftPreset)
  Script->>WS: Connect (URL, secret)
  WS-->>Script: Workspace handle
  Script->>WS: Get or create project
  Script->>WS: Save run (exclude raw data)
  WS-->>Script: Run saved
  Script-->>User: Success
```
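The drift step in the diagrams above can be illustrated with a toy two-sample check. This is a hypothetical sketch of the idea behind drift scoring, not Evidently's DataDriftPreset implementation:

```python
import numpy as np

def psi(reference, production, bins=10):
    """Population Stability Index: one simple way to score distribution drift."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference) + 1e-6
    prod_pct = np.histogram(production, bins=edges)[0] / len(production) + 1e-6
    return float(np.sum((prod_pct - ref_pct) * np.log(prod_pct / ref_pct)))

rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 5000)   # "reference" split
same = rng.normal(0.0, 1.0, 5000)        # production data, no drift
shifted = rng.normal(0.5, 1.0, 5000)     # production data with a mean shift

print(psi(reference, same) < 0.05)    # no drift: PSI stays near zero
print(psi(reference, shifted) > 0.1)  # drift: PSI exceeds a common alert threshold
```

Evidently's preset bundles many such per-column statistics and renders them in the report; the point here is only that "reference vs. production" comparison is what gets saved to the workspace.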

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Possibly related PRs

  • update featureform docs #50 — Also updates docs/en/solutions/How_to_Install_and_use_Featureform.md; likely overlapping storage guidance adjustments.

Pre-merge checks (3 passed)

✅ Passed checks (3 passed)

  • Description Check: Passed. Check skipped; CodeRabbit’s high-level summary is enabled.
  • Title Check: Passed. The title “add document for Evidently” clearly indicates the primary purpose of the pull request, which is to introduce new documentation related to Evidently. While it could be slightly more precise, it accurately conveys the main change without misleading readers about the pull request’s intent.
  • Docstring Coverage: Passed. Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

Poem

A rabbit taps keys with delighted might,
New docs bloom bright, and scripts take flight.
Drift is detected, LLMs appraised—
Secrets set, reports all saved.
In verdant fields of YAML and py,
We hop to the UI and watch metrics fly. 🐇✨




@coderabbitai Bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (10)
docs/public/evidently/quickstart/requirements.txt (1)

1-4: Consider loosening Evidently pin to allow patch updates, and bump NumPy lower bound for newer Python.

0.7.x patch bumps often include fixes; NumPy 1.22 is EOL and not built for Python 3.12. Up to you, but this can reduce environment friction.

Example:

```diff
-evidently[llm]==0.7.14
-numpy>=1.22.0,<2.0.0
+evidently[llm]>=0.7.14,<0.8
+numpy>=1.24.0,<2.0.0
```

If you prefer to stay pinned, at least confirm compatibility:

```bash
#!/bin/bash
python -V
python - <<'PY'
import sys
print("OK: Python", sys.version)
PY
pip install 'evidently[llm]==0.7.14' 'numpy>=1.24,<2' 'pandas>=1.5,<3' 'scikit-learn>=1.2' 'litellm>=1.70'
python - <<'PY'
import pandas, sklearn, evidently
from evidently.presets import TextEvals, DataDriftPreset
print("Imports OK", pandas.__version__)
PY
```
docs/en/solutions/How_to_Install_and_use_Featureform.md (1)

218-218: Tighten phrasing: “pre-provisioned” instead of “pre-prepared”.

Minor English polish and clarity.

```diff
-  The cluster needs to have CSI pre-installed or `PersistentVolume` pre-prepared.
+  Ensure a CSI driver is installed in the cluster, or pre-provision a `PersistentVolume`.
```
docs/public/evidently/quickstart/setup-env.sh (1)

8-12: Align defaults with docs and add guidance for API URL.

Docs say default provider is “openai” while this file sets “deepseek”. Either is fine—just be consistent, and hint API URLs for hosted providers.

```diff
-export LLM_PROVIDER="deepseek"
-export LLM_API_KEY="your-api-key"
-export LLM_API_URL=""
-export LLM_MODEL="deepseek-chat"
+export LLM_PROVIDER="openai"        # or keep "deepseek" but match the docs
+export LLM_API_KEY="your-api-key"   # required for most hosted providers
+# For DeepSeek: https://api.deepseek.com ; for OpenAI: https://api.openai.com/v1
+export LLM_API_URL=""
+export LLM_MODEL="gpt-4o-mini"      # or "deepseek-chat" if using DeepSeek
```
docs/en/solutions/How_to_Install_and_use_Evidently.md (2)

196-201: Keep version mentions consistent with requirements.

Docs reference evidently.ALL.v0.7.14-1.tgz while requirements pin evidently[llm]==0.7.14. Make sure these stay in sync to avoid user confusion.

Would you like me to scan the repo for other Evidently version references and open a follow-up issue if mismatches are found?


339-351: Note about secrets and UI mode.

Since the quickstart scripts use RemoteWorkspace, explicitly warn that calls will fail if the UI is deployed with a secret and EVIDENTLY_SECRET is unset.

```diff
 export EVIDENTLY_SECRET="your-secret"
 export DEBUG="false"
+
+Note: If the UI is deployed with a secret, `EVIDENTLY_SECRET` must be set; otherwise SDK operations (create project, add run) will fail.
```
docs/public/evidently/quickstart/data_and_ml_checks.py (2)

85-87: Use logging.exception() to keep tracebacks in logs.

Preserves stack traces for easier debugging without changing control flow.

```diff
-    except Exception as e:
-        logger.error(f"Dataset creation failed: {e}")
+    except Exception:
+        logger.exception("Dataset creation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Report generation failed: {e}")
+    except Exception:
+        logger.exception("Report generation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Workspace preparation failed: {e}")
+    except Exception:
+        logger.exception("Workspace preparation failed")
         raise
@@
-except Exception as e:
-    logger.error(f"Execution failed: {e}")
+except Exception:
+    logger.exception("Execution failed")
     raise
```

Also applies to: 104-106, 133-135, 159-161
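The difference is easy to see in a standalone snippet (generic Python, not tied to these scripts): `logging.exception()` records the message plus the active traceback, while `logging.error()` records the message alone.

```python
import io
import logging

# Capture log output in memory so we can inspect it
buf = io.StringIO()
logger = logging.getLogger("demo")
logger.addHandler(logging.StreamHandler(buf))
logger.setLevel(logging.DEBUG)

try:
    1 / 0
except Exception:
    logger.exception("Dataset creation failed")  # message + full traceback

log_text = buf.getvalue()
print("Dataset creation failed" in log_text)  # True
print("ZeroDivisionError" in log_text)        # True: the traceback is preserved
```

With `logger.error(f"... {e}")` only the exception's string representation would appear, and the stack trace showing where it was raised would be lost.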


1-1: Shebang present but file may not be executable.

Either make it executable (chmod +x) or drop the shebang since it’s invoked via python file.py in docs.

docs/public/evidently/quickstart/llm_evaluation.py (3)

63-70: Don’t silently swallow cleanup errors.

Log at debug when litellm cleanup fails so issues are observable without being noisy.

```diff
-    except Exception:
-        pass
+    except Exception as err:
+        logging.getLogger(__name__).debug("litellm cleanup skipped: %s", err)
```

137-139: Use logging.exception() for richer error logs.

Same rationale as the other script.

```diff
-    except Exception as e:
-        logger.error(f"Dataset creation failed: {e}")
+    except Exception:
+        logger.exception("Dataset creation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Report generation failed: {e}")
+    except Exception:
+        logger.exception("Report generation failed")
         raise
@@
-    except Exception as e:
-        logger.error(f"Workspace preparation failed: {e}")
+    except Exception:
+        logger.exception("Workspace preparation failed")
         raise
@@
-except Exception as e:
-    logger.error(f"Execution failed: {e}")
+except Exception:
+    logger.exception("Execution failed")
     raise
```

Also applies to: 156-158, 185-187, 211-213


1-1: Shebang present but file may not be executable.

Either make it executable, or keep running via python llm_evaluation.py and remove the shebang.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 3a325d5 and e6b1f80.

📒 Files selected for processing (6)
  • docs/en/solutions/How_to_Install_and_use_Evidently.md (1 hunks)
  • docs/en/solutions/How_to_Install_and_use_Featureform.md (1 hunks)
  • docs/public/evidently/quickstart/data_and_ml_checks.py (1 hunks)
  • docs/public/evidently/quickstart/llm_evaluation.py (1 hunks)
  • docs/public/evidently/quickstart/requirements.txt (1 hunks)
  • docs/public/evidently/quickstart/setup-env.sh (1 hunks)
🧰 Additional context used
🪛 Ruff (0.12.2)
docs/public/evidently/quickstart/data_and_ml_checks.py

1-1: Shebang is present but file is not executable

(EXE001)


83-83: Consider moving this statement to an else block

(TRY300)


86-86: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


102-102: Consider moving this statement to an else block

(TRY300)


105-105: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


131-131: Consider moving this statement to an else block

(TRY300)


134-134: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


160-160: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

docs/public/evidently/quickstart/llm_evaluation.py

1-1: Shebang is present but file is not executable

(EXE001)


68-69: try-except-pass detected, consider logging the exception

(S110)


68-68: Do not catch blind exception: Exception

(BLE001)


135-135: Consider moving this statement to an else block

(TRY300)


138-138: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


154-154: Consider moving this statement to an else block

(TRY300)


157-157: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


183-183: Consider moving this statement to an else block

(TRY300)


186-186: Use logging.exception instead of logging.error

Replace with exception

(TRY400)


212-212: Use logging.exception instead of logging.error

Replace with exception

(TRY400)

Comment on lines +318 to +325

**Python Version Requirements:**
- Supports Python 3.10

```bash
pip install -r requirements.txt
```


🛠️ Refactor suggestion

Clarify working directory before pip install.

Without cd, pip install -r requirements.txt may run from the wrong directory.

-```bash
-pip install -r requirements.txt
-```
+```bash
+cd docs/public/evidently/quickstart
+pip install -r requirements.txt
+```
🤖 Prompt for AI Agents
In docs/en/solutions/How_to_Install_and_use_Evidently.md around lines 318 to
325, the installation step runs pip install -r requirements.txt without
specifying the working directory; update the markdown code block to first change
directory to docs/public/evidently/quickstart (or the correct project quickstart
path) then run pip install -r requirements.txt so users execute the command from
the intended directory, and ensure the fenced bash block includes both commands
in order.

Comment on lines +113 to +121
```python
    try:
        # Initialize workspace connection
        ws = RemoteWorkspace(
            base_url=os.getenv("EVIDENTLY_URL", "http://localhost:8000"),
            secret=os.getenv("EVIDENTLY_SECRET")
        )
        logger.debug("Workspace connection established")

        projects = ws.search_project(PROJECT_NAME)
```

🛠️ Refactor suggestion

Fail fast if EVIDENTLY_SECRET is required but missing.

Avoids confusing 401 errors later.

```diff
     try:
-        # Initialize workspace connection
-        ws = RemoteWorkspace(
-            base_url=os.getenv("EVIDENTLY_URL", "http://localhost:8000"),
-            secret=os.getenv("EVIDENTLY_SECRET")
-        )
+        # Initialize workspace connection
+        secret = os.getenv("EVIDENTLY_SECRET")
+        if not secret:
+            logger.warning("EVIDENTLY_SECRET is not set. If the UI is secured, SDK calls will fail.")
+        ws = RemoteWorkspace(
+            base_url=os.getenv("EVIDENTLY_URL", "http://localhost:8000"),
+            secret=secret,
+        )
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/data_and_ml_checks.py around lines 113 to
121, the code initializes RemoteWorkspace using EVIDENTLY_SECRET without
validating it; add an explicit check after reading os.getenv("EVIDENTLY_SECRET")
to fail fast if the secret is required and missing by logging an error and
raising SystemExit (or ValueError) with a clear message (or return/exit), so the
script terminates immediately with a helpful message instead of later producing
confusing 401 errors when the workspace is used.
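The fail-fast pattern the comment describes can be sketched generically. The `require_env` helper below is a hypothetical illustration, not part of the quickstart scripts:

```python
import os

def require_env(name: str) -> str:
    """Return the variable's value, or raise immediately with an actionable message."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it (see setup-env.sh) before running.")
    return value

os.environ["EVIDENTLY_SECRET"] = "demo-secret"
print(require_env("EVIDENTLY_SECRET"))  # → demo-secret

os.environ.pop("EVIDENTLY_URL", None)
try:
    require_env("EVIDENTLY_URL")
except RuntimeError as err:
    print(f"fails fast: {err}")
```

Raising at startup turns a vague HTTP 401 deep inside the SDK into a one-line message at the top of the log.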

Comment on lines +99 to +107
```python
    llm_provider = os.getenv("LLM_PROVIDER", "openai")
    llm_model = os.getenv("LLM_MODEL", "gpt-4o-mini")
    llm_api_key = os.getenv("LLM_API_KEY")
    llm_api_url = os.getenv("LLM_API_URL")

    logger.info(f"LLM Configuration: {llm_provider} - {llm_model}")
    logger.debug(f"API Key: {'Set' if llm_api_key else 'Not set'}")
    logger.debug(f"API URL: {llm_api_url or 'Not set'}")
```


🛠️ Refactor suggestion

Validate LLM API key when a hosted provider is selected.

Prevents confusing provider/auth errors later.

```diff
     llm_provider = os.getenv("LLM_PROVIDER", "openai")
     llm_model = os.getenv("LLM_MODEL", "gpt-4o-mini")
     llm_api_key = os.getenv("LLM_API_KEY")
     llm_api_url = os.getenv("LLM_API_URL")

     logger.info(f"LLM Configuration: {llm_provider} - {llm_model}")
+    if llm_provider not in {"ollama"} and not llm_api_key:
+        raise ValueError(f"LLM_API_KEY is required for provider '{llm_provider}'. Set it in setup-env.sh.")
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/llm_evaluation.py around lines 99 to 107,
the code logs LLM config but does not validate that an API key is provided when
a hosted provider is selected; add a check after reading LLM_PROVIDER/LLM_API_KEY that, for hosted providers (e.g., "openai", "anthropic",
etc. — anything other than local/mock), ensures llm_api_key is present; if
missing, log an explicit error with provider name and exit/raise (process exit 1
or raise RuntimeError) so the script fails fast with a clear message rather than
producing confusing auth/provider errors later.

Comment on lines +111 to +123
```python
    provider_options = {
        "deepseek": DeepSeekOptions,
        "anthropic": AnthropicOptions,
        "gemini": GeminiOptions,
        "vertex_ai": VertexAIOptions,
        "mistral": MistralOptions,
        "ollama": OllamaOptions,
        "nebius": NebiusOptions,
    }
    # Choose appropriate options based on provider
    options_class = provider_options.get(llm_provider, LLMOptions)
    options = options_class(api_key=llm_api_key, api_url=llm_api_url)
```


💡 Verification agent

🧩 Analysis chain

Double-check provider option class names for Evidently 0.7.14.

If any of these moved/renamed, imports will fail. Safer to verify against the installed version.

Generate web check:

If any differ, switch to the correct import path or fallback to LLMOptions with provider hints.


🌐 Web query:

In Evidently 0.7.14, are DeepSeekOptions, AnthropicOptions, GeminiOptions, VertexAIOptions, MistralOptions, OllamaOptions, and NebiusOptions available under evidently.llm.utils.wrapper?

💡 Result:

Short answer: No — AnthropicOptions, GeminiOptions, VertexAIOptions, MistralOptions and OllamaOptions are exposed via the LLM wrapper, but DeepSeekOptions and NebiusOptions are not present in Evidently v0.7.14.

Sources:

  • Evidently docs showing Options import and examples (Anthropic, Gemini, Vertex, Mistral, Ollama providers). [1]
  • GitHub issue referencing GeminiOptions in the wrapper/examples. [2]
  • PyPI release page for evidently 0.7.14 (version reference). [3]

Replace unsupported DeepSeekOptions and NebiusOptions with the generic fallback
In Evidently 0.7.14 the LLM wrapper does not expose DeepSeekOptions or NebiusOptions (imports will fail); update the provider_options mapping in docs/public/evidently/quickstart/llm_evaluation.py (lines 111–123) to use LLMOptions for "deepseek" and "nebius".

🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/llm_evaluation.py around lines 111 to 123,
the provider_options mapping references DeepSeekOptions and NebiusOptions which
are not available in Evidently 0.7.14; update the mapping so that the keys
"deepseek" and "nebius" use the generic LLMOptions instead of DeepSeekOptions
and NebiusOptions respectively, leaving the other providers unchanged.
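The fallback behavior of `dict.get` that the fix relies on can be sanity-checked with stand-in classes (toy placeholders, not the real Evidently option types):

```python
# Toy stand-ins for the Evidently LLM option classes
class LLMOptions:
    pass

class AnthropicOptions(LLMOptions):
    pass

class OllamaOptions(LLMOptions):
    pass

provider_options = {
    "anthropic": AnthropicOptions,
    "ollama": OllamaOptions,
    # "deepseek" and "nebius" intentionally omitted: they fall back to LLMOptions
}

for provider in ("anthropic", "deepseek", "nebius"):
    options_class = provider_options.get(provider, LLMOptions)
    print(provider, "->", options_class.__name__)
# anthropic -> AnthropicOptions
# deepseek -> LLMOptions
# nebius -> LLMOptions
```

Dropping the unavailable classes from the mapping (rather than wrapping the imports in try/except) keeps the module importable on Evidently 0.7.14 while leaving every supported provider unchanged.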

Comment on lines +1 to +4
```text
# Evidently LLM Demo Requirements
evidently[llm]==0.7.14
litellm>=1.70.0
numpy>=1.22.0,<2.0.0
```

🛠️ Refactor suggestion

Add missing deps needed by quickstarts (pandas, scikit-learn).

Both Python examples import pandas, and data_and_ml_checks.py uses scikit-learn. Add them to avoid runtime import errors.

Apply:

```diff
 # Evidently LLM Demo Requirements
 evidently[llm]==0.7.14
 litellm>=1.70.0
 numpy>=1.22.0,<2.0.0
+Pillow>=9.0.0           # frequently required by matplotlib in Evidently reports
+pandas>=1.5,<3.0
+scikit-learn>=1.2
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/requirements.txt lines 1-4, the requirements
file is missing packages required by the quickstarts (pandas and scikit-learn),
causing runtime import errors; update the file to include pandas and
scikit-learn (pin or set compatible minimum versions consistent with the other
deps) by adding entries for pandas and scikit-learn so both examples and
data_and_ml_checks.py can import them successfully.

@@ -0,0 +1,12 @@
#! /bin/bash

🛠️ Refactor suggestion

Fix shebang and add safe bash flags.

There’s an extra space after #!. Also add strict mode for safer env loading.

```diff
-#! /bin/bash
+#!/usr/bin/env bash
+set -euo pipefail
```
🤖 Prompt for AI Agents
In docs/public/evidently/quickstart/setup-env.sh around line 1, the shebang has
an extra space and the script lacks safe bash flags; remove the space so it
reads #!/bin/bash and enable strict mode by adding e.g. set -euo pipefail and
IFS=$'\n\t' near the top to fail fast on errors, treat unset variables as
errors, propagate pipe failures, and set a safer IFS.
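Strict mode's effect can be seen in a standalone snippet (illustrative only, not the actual setup-env.sh):

```shell
#!/usr/bin/env bash
set -euo pipefail

# Under `set -u`, expanding an unset variable is a hard error, so a typo'd
# variable name fails immediately instead of exporting an empty value.
LLM_PROVIDER="deepseek"
echo "provider: ${LLM_PROVIDER}"

# `${VAR:-default}` is the escape hatch when a variable is legitimately optional:
echo "api url: ${LLM_API_URL:-<unset>}"
```

Note that `set -e` in a script meant to be sourced (as setup-env.sh may be) will also terminate the calling shell on error, so some authors apply strict mode only when the script is executed directly.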

@davidwtf davidwtf merged commit f5d3bd2 into main Sep 10, 2025
2 checks passed
@davidwtf davidwtf deleted the add/evidently branch September 10, 2025 14:49
changluyi pushed a commit to changluyi/knowledge that referenced this pull request Apr 23, 2026
