add How_to_Use_Pipeline_to_Evaluate_AI_Modes.md #67
**Walkthrough**

Adds a new English documentation page describing an end-to-end Alauda DevOps pipeline to evaluate AI models (example: YOLOv5) using Volcano jobs, COCO-style validation, and Evidently integration; includes prerequisites, RBAC/helpers, YAML and VolcanoJob examples, parameters, triggers, and monitoring steps.
**Sequence Diagram(s)**

```mermaid
sequenceDiagram
    autonumber
    actor User
    participant Pipeline as Alauda DevOps Pipeline
    participant Volcano as Volcano Scheduler
    participant Job as Eval Container
    participant YOLO as YOLOv5 val.py
    participant Evidently as Evidently Service
    rect rgb(235,245,255)
        User->>Pipeline: create PipelineRun (repos, params)
        Pipeline->>Volcano: submit VolcanoJob (YAML, env, volumes)
    end
    rect rgb(245,255,235)
        Volcano->>Job: schedule GPU task
        Job->>Job: clone repos, prepare COCO data (LFS handling)
        Job->>YOLO: run validation -> produce COCO metrics & artifacts
        YOLO-->>Job: metrics + artifacts
    end
    rect rgb(255,245,235)
        Job->>Evidently: upload report / create project (API key)
        Evidently-->>Job: report URL / status
        Job-->>Pipeline: attach artifacts & completion status
        Pipeline-->>User: PipelineRun status + report link
    end
    alt Failure
        Job-->>Pipeline: failure + logs
        Pipeline-->>User: failure notification
    end
```
**Estimated code review effort:** 🎯 2 (Simple) | ⏱️ ~10 minutes
Actionable comments posted: 1
🧹 Nitpick comments (1)
docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Modes.md (1)
50-56: Add a language hint to this code fence
Our docs tooling (markdownlint MD040) requires every fenced block to declare a language. Tag this one as `text` (or another appropriate lexer) to unblock the lint step.

````diff
-```
+```text
 images/
 val2017/       # val2017.zip extracted content
 annotations/   # annotations_trainval2017.zip extracted content
 val2017.txt    # ...
````
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
`docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Modes.md` (1 hunks)
🧰 Additional context used
🪛 markdownlint-cli2 (0.18.1)
docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Modes.md
50-50: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🪛 GitHub Actions: Build and Update
docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Modes.md
[error] 1-1: MDX compile error: Dead link found. The link ".." points to a non-existent HTML page (How_to_Use_Pipeline_to_Train_AI_Models.html).
Actionable comments posted: 4
🧹 Nitpick comments (2)
docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Modes.md (2)
90-100: Clean up Dockerfile deps (duplicate numpy; apt cleanup).

- Remove duplicate/conflicting numpy pins.
- Prefer `apt-get clean`.

```diff
-RUN apt-get update && \
-    export DEBIAN_FRONTEND=noninteractive && \
-    apt-get install -yq --no-install-recommends git git-lfs unzip curl ffmpeg libfreetype6-dev && \
-    apt clean && rm -rf /var/lib/apt/lists/*
+RUN apt-get update && \
+    export DEBIAN_FRONTEND=noninteractive && \
+    apt-get install -yq --no-install-recommends git git-lfs unzip curl ffmpeg libfreetype6-dev && \
+    apt-get clean && rm -rf /var/lib/apt/lists/*
 ...
-    "numpy<2.0.0" \
+    "numpy<2.0.0" \
     "opencv-python<4.12.0" \
-    "numpy>=1.18.5" \
     "PyYAML>=5.3.1" \
```

Also applies to: 95-96
1145-1147: Wording nit: “Check” instead of “Checkout”.

Minor grammar polish for the section header.

```diff
-### Checkout PipelineRun status and logs
+### Check PipelineRun status and logs
```
📒 Files selected for processing (1)
`docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Modes.md` (1 hunks)
🔇 Additional comments (4)
docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Modes.md (4)
50-56: Good fix: code fence language added.

The directory layout fence now declares `text`, satisfying markdownlint MD040.
302-305: Weights default may be incompatible with YOLOv5 val.py.

Default `models/model.torchscript` may not be supported by `val.py` (expects .pt). Please verify or clarify in the doc.

If needed, adjust defaults and copy step:

```diff
-    default: "models/model.torchscript"
+    default: "models/model.pt"
 ...
-    cp -f "/mnt/workspace/output/${OUTPUT_MODEL_PATH}" "/mnt/workspace/model/${EVALUATE_ARG_WEIGHTS}"
+    cp -f "/mnt/workspace/output/${OUTPUT_MODEL_PATH}" "/mnt/workspace/model/${EVALUATE_ARG_WEIGHTS}"
+    # Ensure weights format matches YOLOv5 val.py expectations (.pt by default)
```

Also applies to: 471-476
42-43: Add a valid cross-link to the training guide (fix docs build and navigation).

The text references the training doc but doesn’t link it. Please link to the actual page and ensure the path exists to avoid link-check failures.

Apply this diff:

```diff
-The model to be evaluated comes from the output of the `yolov5-training` pipeline. Refer to the **How to Use Pipeline to Train AI Models** for details on how to train models and obtain the trained model files.
+The model to be evaluated comes from the output of the `yolov5-training` pipeline. Refer to **[How to Use Pipeline to Train AI Models](How_to_Use_Pipeline_to_Train_AI_Models.md)** for details on training and obtaining model files.
```

To verify the target exists, run:

```shell
#!/bin/bash
fd -a 'How_to_Use_Pipeline_to_Train_AI_Models.md' docs | nl -ba
fd -a 'How_to_Install_and_use_Evidently.md' docs | nl -ba
```
609-621: Avoid disabling TLS verification for Git (security posture).

Cloning with `-c http.sslVerify=false` undermines TLS. Use default verification and make any bypass opt-in if absolutely required.

```diff
-    git -c http.sslVerify=false -c lfs.activitytimeout=36000 lfs pull
+    git -c lfs.activitytimeout=36000 lfs pull
 ...
-    GIT_LFS_SKIP_SMUDGE=1 git -c http.sslVerify=false -c lfs.activitytimeout=36000 clone -b $branch "$clone_url" .
+    GIT_LFS_SKIP_SMUDGE=1 git -c lfs.activitytimeout=36000 clone -b $branch "$clone_url" .
 ...
-    GIT_LFS_SKIP_SMUDGE=1 git -c http.sslVerify=false -c lfs.activitytimeout=36000 clone "$clone_url" .
+    GIT_LFS_SKIP_SMUDGE=1 git -c lfs.activitytimeout=36000 clone "$clone_url" .
 ...
-    git -c http.sslVerify=false -c lfs.activitytimeout=36000 lfs pull
+    git -c lfs.activitytimeout=36000 lfs pull
```

Also applies to: 614-617, 621-621
⛔ Skipped due to learnings
Learnt from: davidwtf
PR: alauda/knowledge#60
File: docs/en/solutions/How_to_Use_Pipeline_to_Train_AI_Models.md:565-583
Timestamp: 2025-09-23T02:29:55.305Z
Learning: In enterprise environments using Alauda DevOps pipelines for AI model training, self-signed certificates are commonly used for internal Git repositories, requiring the `-c http.sslVerify=false` option to be supported rather than completely removed.
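Given the self-signed-certificate constraint in the learning above, one middle ground is to trust the internal CA explicitly rather than disable verification. A minimal sketch (all paths and the CA file are hypothetical; a real setup would point `http.sslCAInfo` at the organization's actual root certificate):

```shell
# Hypothetical alternative to sslVerify=false: trust the internal CA bundle.
# Use an isolated HOME so the demo does not touch the real global gitconfig.
export HOME=/tmp/git-ca-demo
mkdir -p "$HOME"

# Placeholder standing in for the organization's root certificate
printf '%s\n' '# placeholder for the internal root CA certificate' > "$HOME/internal-ca.pem"

# Point git at the CA bundle instead of turning verification off
git config --global http.sslCAInfo "$HOME/internal-ca.pem"

# Confirm the setting took effect
git config --global --get http.sslCAInfo
```

With this in place, clones of internal HTTPS remotes verify against the internal CA and the `-c http.sslVerify=false` flags can be dropped.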
Actionable comments posted: 5
🧹 Nitpick comments (3)
docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Models.md (3)
574-586: Avoid embedding credentials in clone URLs; prefer helpers.

Current approach URL-encodes user/token and places them in the URL, which can leak via process lists and error messages. Use a credential helper or HTTP extra headers.
Example (no URL mutation):
```shell
# Prior to clone:
AUTH="$(printf '%s:%s' "$GIT_USER" "$GIT_TOKEN" | base64 -w0)"
git -c http.extraheader="AUTHORIZATION: Basic ${AUTH}" clone -b "$branch" "$url" .
git -c lfs.activitytimeout=36000 lfs pull
```

Or use GIT_ASKPASS to supply credentials on demand.
Also applies to: 592-627
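The GIT_ASKPASS route mentioned above can be sketched as follows: git runs the named program whenever it needs a username or password, so credentials stay out of URLs and process listings. `GIT_USER`/`GIT_TOKEN` are placeholder variable names carried over from the example; the values below are invented for the demo.

```shell
# Sketch of a GIT_ASKPASS helper. Git invokes it with a prompt string such as
# "Username for 'https://...': " or "Password for 'https://...': " and reads
# the credential from stdout.
cat > /tmp/git-askpass.sh <<'EOF'
#!/bin/sh
case "$1" in
  Username*) printf '%s\n' "$GIT_USER" ;;
  Password*) printf '%s\n' "$GIT_TOKEN" ;;
esac
EOF
chmod +x /tmp/git-askpass.sh

export GIT_ASKPASS=/tmp/git-askpass.sh
export GIT_USER=ci-bot GIT_TOKEN=example-token   # placeholders

# git clone -b "$branch" "$url" .   # would now authenticate on demand

# Simulate the prompts git would issue:
/tmp/git-askpass.sh "Username for 'https://git.example.com':"
/tmp/git-askpass.sh "Password for 'https://git.example.com':"
```

Since the token never appears in a URL, it cannot leak through `ps` output or error messages that echo the remote.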
90-106: Consolidate duplicate numpy constraints and pin compatible versions.

You list `numpy<2.0.0` and `numpy>=1.18.5` separately; combine to a single spec. Also ensure versions are compatible with the base PyTorch image.

Apply:

```diff
-    "numpy<2.0.0" \
-    "opencv-python<4.12.0" \
-    "numpy>=1.18.5" \
+    "numpy>=1.18.5,<2.0.0" \
+    "opencv-python<4.12.0" \
```
46-47: Prefer HTTPS for dataset downloads.

Switch to https to avoid MITM risks.

Apply:

```diff
-Download the validation images from [val2017.zip](http://images.cocodataset.org/zips/val2017.zip) and the annotation information from [annotations_trainval2017.zip](http://images.cocodataset.org/annotations/annotations_trainval2017.zip).
+Download the validation images from [val2017.zip](https://images.cocodataset.org/zips/val2017.zip) and the annotation information from [annotations_trainval2017.zip](https://images.cocodataset.org/annotations/annotations_trainval2017.zip).
```
📒 Files selected for processing (1)
`docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Models.md` (1 hunks)
🔇 Additional comments (5)
docs/en/solutions/How_to_Use_Pipeline_to_Evaluate_AI_Models.md (5)
524-524: Do not print secrets to logs (EVIDENTLY_API_KEY leakage).

Remove/redact the API key echo to avoid credential exposure in PipelineRun/Pod logs.

Apply:

```diff
-    echo "Evidently api key: $EVIDENTLY_API_KEY"
+    # Do not log secrets
+    echo "Evidently api key: [REDACTED]"
```
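If the step still needs to confirm the key is configured, a presence check avoids writing the value anywhere. A minimal sketch (the variable name comes from the pipeline; the sample assignment is an illustrative placeholder for what a Secret would inject):

```shell
# Illustrative placeholder; in the pipeline this arrives via a Secret/env var
EVIDENTLY_API_KEY="example-key"

# Confirm the secret is set without echoing its value
if [ -n "${EVIDENTLY_API_KEY:-}" ]; then
  echo "Evidently api key: [SET]"
else
  echo "Evidently api key: [MISSING]" >&2
fi
```

This keeps PipelineRun/Pod logs useful for debugging ("was the key mounted at all?") without ever exposing the credential itself.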
977-977: Fix mAP_0_75 parsing (wrong awk field).

Use field 2 like the others; field 3 yields empty.

Apply:

```diff
-    mAP_0_75=$(grep -E '^mAP_0_75:' /tmp/report.log | awk '{print $3}')
+    mAP_0_75=$(grep -E '^mAP_0_75:' /tmp/report.log | awk '{print $2}')
```
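To see why field 2 is the right one, here is a quick check against a mocked `/tmp/report.log` (the metric values are invented for the demo; the real file is produced by the evaluation step):

```shell
# Mock of the two-field summary lines the pipeline greps
cat > /tmp/report.log <<'EOF'
mAP_0_5: 0.684
mAP_0_75: 0.492
EOF

# On "mAP_0_75: 0.492", awk sees $1="mAP_0_75:" and $2="0.492";
# there is no $3, so '{print $3}' would yield an empty string.
mAP_0_75=$(grep -E '^mAP_0_75:' /tmp/report.log | awk '{print $2}')
echo "mAP_0_75=${mAP_0_75}"   # → mAP_0_75=0.492
```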
993-1000: Tidy metric descriptions (remove stray “)”).

Unbalanced punctuation in two descriptions.

Apply:

```diff
-    "description": "Average Precision, IoU=0.50:0.95 area=all maxDets=100)",
+    "description": "Average Precision, IoU=0.50:0.95 area=all maxDets=100",
@@
-    "description": "Average Precision, IoU=0.50 area=all maxDets=100)",
+    "description": "Average Precision, IoU=0.50 area=all maxDets=100",
```
609-617: Avoid disabling TLS verification in git commands.

`-c http.sslVerify=false` weakens security and risks MITM. Remove it or document trusted CA configuration instead.

Apply:

```diff
-    git -c http.sslVerify=false -c lfs.activitytimeout=36000 lfs pull
+    git -c lfs.activitytimeout=36000 lfs pull
@@
-    GIT_LFS_SKIP_SMUDGE=1 git -c http.sslVerify=false -c lfs.activitytimeout=36000 clone -b \$branch \"\$clone_url\" .
+    GIT_LFS_SKIP_SMUDGE=1 git -c lfs.activitytimeout=36000 clone -b \$branch \"\$clone_url\" .
@@
-    GIT_LFS_SKIP_SMUDGE=1 git -c http.sslVerify=false -c lfs.activitytimeout=36000 clone \"\$clone_url\" .
+    GIT_LFS_SKIP_SMUDGE=1 git -c lfs.activitytimeout=36000 clone \"\$clone_url\" .
@@
-    git -c http.sslVerify=false -c lfs.activitytimeout=36000 lfs pull
+    git -c lfs.activitytimeout=36000 lfs pull
```

Also applies to: 621-621
⛔ Skipped due to learnings
Learnt from: davidwtf
PR: alauda/knowledge#60
File: docs/en/solutions/How_to_Use_Pipeline_to_Train_AI_Models.md:565-583
Timestamp: 2025-09-23T02:29:55.305Z
Learning: In enterprise environments using Alauda DevOps pipelines for AI model training, self-signed certificates are commonly used for internal Git repositories, requiring the `-c http.sslVerify=false` option to be supported rather than completely removed.
629-639: Incorrect prediction filename assumption
YOLOv5 v7.x saves the JSON as `<weights_stem>_predictions.json` in `runs/val/exp` (e.g. `best_predictions.json`), not a bare `predictions.json`. Using `${MODEL_NAME}_predictions.json` is correct if `MODEL_NAME` matches the weight filename stem.

Likely an incorrect or invalid review comment.
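One way to keep the two names in sync is to derive `MODEL_NAME` from the weights path itself, so the expected JSON name always follows the weight file stem. A small sketch (the weights path and output directory are hypothetical; `EVALUATE_ARG_WEIGHTS` is the parameter name used in the doc):

```shell
# Hypothetical weights path; val.py names the predictions JSON after the
# weight file's stem, e.g. best.pt -> best_predictions.json
EVALUATE_ARG_WEIGHTS="models/best.pt"

# Strip the directory and the .pt suffix to get the stem
MODEL_NAME="$(basename "$EVALUATE_ARG_WEIGHTS" .pt)"
PRED_JSON="runs/val/exp/${MODEL_NAME}_predictions.json"
echo "$PRED_JSON"   # → runs/val/exp/best_predictions.json
```

Deriving the name this way means renaming the weights file cannot silently break the artifact-collection step.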
Force-pushed from 0ff0096 to 4f073d0.
* add How_to_Use_Pipeline_to_Evaluate_AI_Modes.md
* update
* update
* update
* update
* fix typo HAMi