Conversation
Caution: Review failed. The pull request is closed.
ℹ️ Recent review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID:
📒 Files selected for processing (6)
Walkthrough
Model artifact path resolution utility and environment variable (
Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Env as Env (AI_MODEL_WEIGHTS_DIR)
    participant Config as Trading Config
    participant Artifact as Artifact Resolver
    participant Pipeline as Pipeline / Model Manager
    participant Wrapper as Model Wrapper
    Config->>Artifact: resolve_model_artifacts(model_name, mode, config_weights_dir)
    Env->>Artifact: (optional) AI_MODEL_WEIGHTS_DIR present
    Artifact->>Artifact: select root (env -> config_weights_dir -> default)
    Artifact-->>Pipeline: return ModelArtifactPaths(model_path, scaler_path, metadata_path, model_dir)
    Pipeline->>Wrapper: initialize with resolved paths
    Wrapper->>Pipeline: load model, scaler, metadata (or report missing)
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 min
Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 10
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
AI/pipelines/daily_routine.py (1)
112-119: ⚠️ Potential issue | 🟠 Major
Handle model-initialization exceptions at the call site to control pipeline shutdown.
At line 118, passing run_mode increases the dependency on the artifact resolution path, yet exceptions from initialize_models currently propagate and can terminate the entire routine immediately. Handling FileNotFoundError/ValueError at the call site leaves a clear operational log and shuts down safely.
Suggested fix
```diff
-    model_wrappers = initialize_models(
-        loader=loader,
-        data_config=trading_config.data,
-        model_config=trading_config.model,
-        feature_columns=feature_columns,
-        active_models=active_models,
-        run_mode=mode,
-    )
+    try:
+        model_wrappers = initialize_models(
+            loader=loader,
+            data_config=trading_config.data,
+            model_config=trading_config.model,
+            feature_columns=feature_columns,
+            active_models=active_models,
+            run_mode=mode,
+        )
+    except (FileNotFoundError, ValueError) as model_init_error:
+        print(f"[DailyRoutine] Model initialization failed: {model_init_error}")
+        return
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI/pipelines/daily_routine.py` around lines 112 - 119, Wrap the initialize_models(...) call (the expression assigned to model_wrappers) in a try/except that catches FileNotFoundError and ValueError, logs a clear operational message including the exception (use the existing logger/process_logger), and performs a safe shutdown/early return of the pipeline instead of letting the exception propagate; ensure you reference initialize_models, model_wrappers, loader, trading_config (data and model), feature_columns and mode/run_mode so the handler can log context, and re-raise or let other unexpected exceptions propagate after logging.
🧹 Nitpick comments (4)
AI/modules/signal/models/PatchTST/wrapper.py (2)
107-108: Possible duplicate batch-dimension addition
Line 108 adds a batch dimension via np.expand_dims(window, axis=0) before calling predict(). However, predict() itself (lines 76-77) also adds a batch dimension for 2D input.
window is already 2D (seq_len, features) and becomes 3D (1, seq_len, features) after expand_dims. predict() processes 3D input as-is, so the current code works correctly, but the redundant expand_dims call can be confusing.
♻️ Suggested change for clarity
```diff
     window = df[self.features].iloc[-self.seq_len :].to_numpy(dtype=np.float32)
-    probs = self.predict(np.expand_dims(window, axis=0)).reshape(-1)
+    # predict() handles batch dimension internally for 2D input
+    probs = self.predict(window).reshape(-1)
     score = float(probs[0]) if probs.size else 0.5
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI/modules/signal/models/PatchTST/wrapper.py` around lines 107 - 108, The call site is redundantly adding a batch axis before calling predict: variable window (shape (seq_len, features)) is expanded via np.expand_dims(window, axis=0) and then passed into predict(), but predict() already handles 2D inputs by adding the batch dim internally. Remove the np.expand_dims call and pass window directly to predict(window) (referencing the variable window and the predict method) so the input batching logic is only in one place and the code is clearer.
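The shape behavior discussed above can be verified with a small stand-in. The predict function below only mimics the wrapper's batch-dimension handling; it is not the actual model code, and the mean reduction is a placeholder for the real forward pass:

```python
import numpy as np


def predict(x: np.ndarray) -> np.ndarray:
    # Mimic the wrapper: 2D input (seq_len, features) gains a batch dimension
    if x.ndim == 2:
        x = x[None, ...]
    # Stand-in for the model forward pass: one score per batch element
    return x.mean(axis=(1, 2))


window = np.zeros((16, 7), dtype=np.float32)  # 2D window, as in the call site
```

Calling predict(window) and predict(np.expand_dims(window, axis=0)) yields the same result, which is why the outer expand_dims is redundant.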
91-96: Potential non-determinism in automatic feature selection
When features is unset, the first N numeric columns of the DataFrame are selected automatically. Column order can depend on how the DataFrame was constructed, so different features may be selected between runs. Since this is a fallback path it is not a critical problem, but explicit feature configuration is recommended for production. Consider adding a warning log.
♻️ Suggested warning log
```diff
 if not self.features:
     numeric_columns = [col for col in df.columns if pd.api.types.is_numeric_dtype(df[col])]
     self.features = numeric_columns[: int(self.config.get("enc_in", 7))]
+    import warnings
+    warnings.warn(
+        f"PatchTST features not configured, auto-selected: {self.features}",
+        UserWarning,
+    )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI/modules/signal/models/PatchTST/wrapper.py` around lines 91 - 96, The code path that auto-selects features when self.features is empty can yield non-deterministic column order; update the fallback in the PatchTST wrapper to (1) emit a warning via the component logger (e.g., self.logger.warning or module logger) that features are being auto-selected and recommend explicit configuration, and (2) choose features deterministically (e.g., sort numeric_columns before slicing or otherwise enforce a stable order) when using self.config.get("enc_in", 7) so repeated runs pick the same columns; reference the existing self.features assignment and the config key "enc_in" when making the change.
AI/modules/signal/models/TCN/train.py (1)
210-214: No fallback when config loading fails at module load time
The default_output_dir computation runs at module load time. If load_trading_config() fails, importing the module itself fails. PatchTST/train.py provides a try-except fallback, but this file does not.
This is only used as a CLI script, so it is not a critical problem, but consider applying the same pattern for consistency.
♻️ Suggested fallback pattern
```diff
 def parse_args() -> argparse.Namespace:
+    try:
+        default_output_dir = resolve_model_artifacts(
+            model_name="tcn",
+            config_weights_dir=load_trading_config().model.weights_dir,
+        ).model_dir
+    except Exception:
+        default_output_dir = resolve_model_artifacts(model_name="tcn").model_dir
-    default_output_dir = resolve_model_artifacts(
-        model_name="tcn",
-        config_weights_dir=load_trading_config().model.weights_dir,
-    ).model_dir
     parser = argparse.ArgumentParser(description="Train TCN signal model.")
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI/modules/signal/models/TCN/train.py` around lines 210 - 214, The module currently computes default_output_dir at import time using load_trading_config() which can raise and break imports; move this computation into parse_args() and add a fallback pattern: inside parse_args(), call resolve_model_artifacts(...) and load_trading_config() within a try-except, and on exception set default_output_dir to a safe fallback (e.g., a temp or relative "artifacts/tcn" directory) before building the argparse.Namespace defaults; refer to the parse_args function and symbols default_output_dir, resolve_model_artifacts, and load_trading_config when making the change.
AI/modules/signal/models/PatchTST/train.py (1)
13-21: Exception handling at module load time and use of a bare Exception
_default_model_save_path() is called at module load time (line 31). Falling back to a default path when load_trading_config() fails is reasonable, but catching a bare Exception can also silence unexpected, unrelated errors.
Static analysis (Ruff BLE001) flags this as well. Using more specific exception types is recommended.
♻️ Suggested specific exception types
```diff
 def _default_model_save_path() -> str:
     try:
         trading_config = load_trading_config()
         return resolve_model_artifacts(
             model_name="patchtst",
             config_weights_dir=trading_config.model.weights_dir,
         ).model_path
-    except Exception:
+    except (FileNotFoundError, KeyError, AttributeError, ValueError):
         return resolve_model_artifacts(model_name="patchtst").model_path
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI/modules/signal/models/PatchTST/train.py` around lines 13 - 21, The function _default_model_save_path currently swallows all exceptions by catching bare Exception when calling load_trading_config and resolve_model_artifacts; update it to catch only expected errors (e.g., FileNotFoundError, OSError, ValueError, or your config-specific exception thrown by load_trading_config) so that signals like KeyboardInterrupt or other unexpected errors are not suppressed, and keep the existing fallback to resolve_model_artifacts(model_name="patchtst") in the except block; reference the functions _default_model_save_path, load_trading_config, and resolve_model_artifacts when making this change.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AI/config/trading.py`:
- Line 120: The code reads model_weights_dir from
os.getenv(MODEL_WEIGHTS_DIR_ENV_VAR, raw["model"]["weights_dir"]) but doesn't
handle empty or whitespace-only env values; update the logic so you retrieve the
env var, trim it, and if it's None/empty after strip() fall back to
raw["model"]["weights_dir"] (apply the same guard to the analogous env fallback
at line 146); reference model_weights_dir and MODEL_WEIGHTS_DIR_ENV_VAR when
making this change.
In `@AI/modules/signal/core/artifact_paths.py`:
- Around line 99-107: The patchtst branch in artifact_paths.py treats model_dir
differently than other models; update the normalized_model == "patchtst" case to
compute resolved_model_dir = _resolve_absolute(model_dir) if model_dir else
(root_dir / "patchtst") so it follows the same root_dir/"patchtst" convention as
itransformer and tcn, and keep returning a ModelArtifactPaths with model_path
built from resolved_model_dir / "PatchTST_best.pt" and scaler_path/metadata_path
as before.
In `@AI/modules/signal/models/itransformer/wrapper.py`:
- Around line 65-68: The scaler_path and metadata_path overrides are applied but
load_scaler()/load_metadata() are not invoked afterward, so custom artifact
layouts are ignored; update the code in
AI/modules/signal/models/itransformer/wrapper.py (the blocks around the
assignments of self.scaler_path and self.metadata_path and the other occurrences
around lines referenced 87-96 and 162-176) to call the corresponding loader
methods (e.g., self.load_scaler() and self.load_metadata() or whatever the class
methods are named) immediately after you overwrite
self.scaler_path/self.metadata_path so the scaler and metadata are actually
reloaded before get_signals() or build() run, ensuring feature counts and
metadata-driven build logic use the overridden artifacts.
- Around line 33-38: The model currently produces a single logit via self.head
but get_signals() duplicates that same value across all horizon keys (making
itransformer_1d/3d/5d/7d identical); change the output to match multi-horizon
usage by making self.head produce len(self.horizons) outputs (e.g., replace
self.head = nn.Linear(d_model * num_variates, 1) with nn.Linear(d_model *
num_variates, len(self.horizons))) and update get_signals() to map each horizon
to its corresponding output slice instead of replicating one value;
alternatively, if this class is meant to be single-horizon, ensure get_signals()
only returns the single horizon key rather than duplicating it.
- Around line 87-91: The load_scaler method currently uses pickle.load and
elsewhere torch.load without integrity checks (see load_scaler and torch.load
usage), which allows execution of malicious payloads from mounted volumes; fix
by enforcing one or more mitigations: restrict write permissions on the artifact
directory, add checksum or signature verification before deserializing (validate
file hash or signature prior to calling pickle.load or torch.load), or upgrade
to PyTorch >=2.13 and use torch.load(..., weights_only=True) for model tensors;
implement the chosen validation step(s) so pickle.load(self.scaler) and
torch.load calls only run after successful integrity/authenticity verification.
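The checksum mitigation described above can be sketched as a small gate in front of deserialization, assuming a known-good SHA-256 is distributed alongside the artifact. The helper load_pickle_verified and its signature are hypothetical, not part of the PR:

```python
import hashlib
import pickle


def load_pickle_verified(path: str, expected_sha256: str):
    # Read the raw bytes first; refuse to deserialize unless the hash matches
    with open(path, "rb") as fh:
        payload = fh.read()
    digest = hashlib.sha256(payload).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"checksum mismatch for {path}: {digest}")
    # Only now is it (conditionally) safe to unpickle the trusted payload
    return pickle.loads(payload)
```

The same gating idea applies to torch.load; for model tensors, torch.load(..., weights_only=True) avoids arbitrary-object deserialization entirely on recent PyTorch versions.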
In `@AI/modules/trader/strategies/rl_agent.py`:
- Around line 8-13: The file is missing an import for the os module causing a
NameError when calling os.path.exists(model_path) in
AI/modules/trader/strategies/rl_agent.py; add import os to the top import block
(alongside numpy and stable_baselines3 imports) so uses of os.path.exists in the
RL agent initialization (e.g., in any __init__ or load_model functions that
reference model_path) work correctly, then run lint/tests to confirm no other
missing imports.
- Around line 15-20: In _default_rl_model_path(), avoid swallowing all
exceptions from load_trading_config(); instead catch specific errors (e.g.,
FileNotFoundError, ValueError, KeyError) raised by load_trading_config() and log
a warning with the exception details before falling back to
resolve_artifact_file("rl_agent_ppo.zip"); keep the normal return path that uses
trading_config.model.weights_dir when load_trading_config() succeeds and
reference the functions load_trading_config(), _default_rl_model_path, and
resolve_artifact_file when making the change.
In `@AI/modules/trader/train/train_ppo.py`:
- Around line 24-29: The helper _resolve_rl_save_dir currently swallows all
exceptions; change it to only handle expected config-related errors (e.g.,
FileNotFoundError, KeyError, ValueError, json.JSONDecodeError) when calling
load_trading_config and accessing trading_config.model.weights_dir, log a
warning with the exception details via the project's logger, and fall back to
resolve_artifact_root() only for those handled exceptions; do not catch
Exception broadly—let unexpected exceptions propagate (or re-raise) so real
failures aren’t hidden.
In `@AI/scripts/run_xai_cron.sh`:
- Around line 4-6: The script currently matches container names using
regex-based grep (grep -x/grep -E) against the overrideable
XAI_CONTAINER_NAME/CONTAINER_NAME which can contain regex metacharacters like
'.'; update the checks that reference CONTAINER_NAME so they use fixed-string
matching or Docker-native filtering: replace grep -x/grep -E usage with grep -Fx
(or better, use docker ps/docker inspect with --filter
"name=^${CONTAINER_NAME}$" or docker inspect "${CONTAINER_NAME}" to test
existence/status) to ensure literal name matching and avoid false positives
during start/stop/cleanup.
- Around line 4-5: IMAGE and IMAGE_REPO are currently configured independently, so overriding only XAI_IMAGE can make the cleanup loop prune the wrong repository (the default IMAGE_REPO). Either derive the cleanup target repository from the IMAGE value, or skip the cleanup loop (the IMAGE/IMAGE_REPO-related code and the block at lines 67-75) when XAI_IMAGE_REPO is not explicitly set: if IMAGE has the form "registry/.../name:tag", extract IMAGE_REPO from IMAGE (e.g., strip the tag and digest), and add a condition that skips the repository cleanup step when the XAI_IMAGE_REPO environment variable is empty, so the case is handled safely.
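The two shell findings above can be sketched together. The CONTAINER_NAME value and the strip_image_repo helper are hypothetical illustrations, not the script's actual code:

```shell
#!/bin/sh
# Fixed-string container matching: '.' in a name must not act as a regex wildcard
CONTAINER_NAME="xai.cron"     # hypothetical override value
running='xaiXcron'            # a name that a regex dot would falsely match
printf '%s\n' "$running" | grep -xq  -- "$CONTAINER_NAME" && echo "regex -x: false positive"
printf '%s\n' "$running" | grep -Fxq -- "$CONTAINER_NAME" || echo "fixed -Fx: no match"

# Derive the cleanup repository from IMAGE instead of a separate IMAGE_REPO default
strip_image_repo() {
    repo="${1%%@*}"                # drop any @sha256:... digest
    base="${repo##*/}"             # last path component, e.g. name:tag
    case "$base" in
        *:*) repo="${repo%:*}" ;;  # strip :tag only when the last component has one
    esac
    printf '%s\n' "$repo"
}
strip_image_repo "registry.example.com:5000/team/xai:1.2.3"
```

The case guard keeps a registry port like `registry:5000` intact, since the `:` there sits before the last `/`, not in the final path component.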
---
Outside diff comments:
In `@AI/pipelines/daily_routine.py`:
- Around line 112-119: Wrap the initialize_models(...) call (the expression
assigned to model_wrappers) in a try/except that catches FileNotFoundError and
ValueError, logs a clear operational message including the exception (use the
existing logger/process_logger), and performs a safe shutdown/early return of
the pipeline instead of letting the exception propagate; ensure you reference
initialize_models, model_wrappers, loader, trading_config (data and model),
feature_columns and mode/run_mode so the handler can log context, and re-raise
or let other unexpected exceptions propagate after logging.
---
Nitpick comments:
In `@AI/modules/signal/models/PatchTST/train.py`:
- Around line 13-21: The function _default_model_save_path currently swallows
all exceptions by catching bare Exception when calling load_trading_config and
resolve_model_artifacts; update it to catch only expected errors (e.g.,
FileNotFoundError, OSError, ValueError, or your config-specific exception thrown
by load_trading_config) so that signals like KeyboardInterrupt or other
unexpected errors are not suppressed, and keep the existing fallback to
resolve_model_artifacts(model_name="patchtst") in the except block; reference
the functions _default_model_save_path, load_trading_config, and
resolve_model_artifacts when making this change.
In `@AI/modules/signal/models/PatchTST/wrapper.py`:
- Around line 107-108: The call site is redundantly adding a batch axis before
calling predict: variable window (shape (seq_len, features)) is expanded via
np.expand_dims(window, axis=0) and then passed into predict(), but predict()
already handles 2D inputs by adding the batch dim internally. Remove the
np.expand_dims call and pass window directly to predict(window) (referencing the
variable window and the predict method) so the input batching logic is only in
one place and the code is clearer.
- Around line 91-96: The code path that auto-selects features when self.features
is empty can yield non-deterministic column order; update the fallback in the
PatchTST wrapper to (1) emit a warning via the component logger (e.g.,
self.logger.warning or module logger) that features are being auto-selected and
recommend explicit configuration, and (2) choose features deterministically
(e.g., sort numeric_columns before slicing or otherwise enforce a stable order)
when using self.config.get("enc_in", 7) so repeated runs pick the same columns;
reference the existing self.features assignment and the config key "enc_in" when
making the change.
In `@AI/modules/signal/models/TCN/train.py`:
- Around line 210-214: The module currently computes default_output_dir at
import time using load_trading_config() which can raise and break imports; move
this computation into parse_args() and add a fallback pattern: inside
parse_args(), call resolve_model_artifacts(...) and load_trading_config() within
a try-except, and on exception set default_output_dir to a safe fallback (e.g.,
a temp or relative "artifacts/tcn" directory) before building the
argparse.Namespace defaults; refer to the parse_args function and symbols
default_output_dir, resolve_model_artifacts, and load_trading_config when making
the change.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: be185df8-9178-413b-a366-ff0e115b9178
📒 Files selected for processing (21)
- .gitignore
- AI/config/__init__.py
- AI/config/trading.py
- AI/modules/signal/core/__init__.py
- AI/modules/signal/core/artifact_paths.py
- AI/modules/signal/models/PatchTST/train.py
- AI/modules/signal/models/PatchTST/wrapper.py
- AI/modules/signal/models/TCN/train.py
- AI/modules/signal/models/TCN/wrapper.py
- AI/modules/signal/models/__init__.py
- AI/modules/signal/models/itransformer/wrapper.py
- AI/modules/signal/models/transformer/train.py
- AI/modules/signal/workflows/inference.py
- AI/modules/signal/workflows/optimize_thresholds.py
- AI/modules/signal/workflows/train_calibrator.py
- AI/modules/trader/strategies/rl_agent.py
- AI/modules/trader/train/train_ppo.py
- AI/pipelines/components/model_manager.py
- AI/pipelines/daily_routine.py
- AI/scripts/run_xai_cron.sh
- docker-compose.yml
💤 Files with no reviewable changes (3)
- AI/modules/signal/workflows/train_calibrator.py
- AI/modules/signal/workflows/inference.py
- AI/modules/signal/workflows/optimize_thresholds.py
AI/config/trading.py
Outdated

```python
def _build_config(raw: dict[str, Any]) -> TradingConfig:
    model_weights_dir = os.getenv(MODEL_WEIGHTS_DIR_ENV_VAR, raw["model"]["weights_dir"])
```
Missing handling of blank environment-variable values can cause the weights path to be resolved incorrectly.
At line 120 the env var is used even when it is an empty string, so at line 146 it may resolve to an unintended path such as the project root. Fall back to the config value for blank/empty values.
Suggested fix

```diff
 def _build_config(raw: dict[str, Any]) -> TradingConfig:
-    model_weights_dir = os.getenv(MODEL_WEIGHTS_DIR_ENV_VAR, raw["model"]["weights_dir"])
+    env_weights_dir = os.getenv(MODEL_WEIGHTS_DIR_ENV_VAR)
+    model_weights_dir = (env_weights_dir or "").strip() or raw["model"]["weights_dir"]
```

Also applies to: 146-146
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AI/config/trading.py` at line 120, The code reads model_weights_dir from
os.getenv(MODEL_WEIGHTS_DIR_ENV_VAR, raw["model"]["weights_dir"]) but doesn't
handle empty or whitespace-only env values; update the logic so you retrieve the
env var, trim it, and if it's None/empty after strip() fall back to
raw["model"]["weights_dir"] (apply the same guard to the analogous env fallback
at line 146); reference model_weights_dir and MODEL_WEIGHTS_DIR_ENV_VAR when
making this change.
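The guard described in this prompt can be sketched as a tiny helper; the function name weights_dir_from_env is hypothetical, while the env var name matches the one used elsewhere in the PR:

```python
import os

MODEL_WEIGHTS_DIR_ENV_VAR = "AI_MODEL_WEIGHTS_DIR"


def weights_dir_from_env(raw_weights_dir: str) -> str:
    # Unset, empty, and whitespace-only env values all fall back to the config value
    env_value = os.getenv(MODEL_WEIGHTS_DIR_ENV_VAR)
    return (env_value or "").strip() or raw_weights_dir
```

The `(value or "").strip() or fallback` idiom collapses None, "", and "   " into the same fallback branch in one expression.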
```python
def _resolve_rl_save_dir() -> str:
    try:
        trading_config = load_trading_config()
        return resolve_artifact_root(trading_config.model.weights_dir)
    except Exception:
        return resolve_artifact_root()
```
🧩 Analysis chain
🏁 Script executed:

```shell
# First, find the file and explore its context
find . -type f -name "train_ppo.py" | head -5
```

Repository: SISC-IT/sisc-web
Length of output: 98

🏁 Script executed:

```shell
# Get the directory structure to understand the project layout
git ls-files | grep -E "(train_ppo|trading_config|artifact)" | head -20
```

Length of output: 137

🏁 Script executed:

```shell
# Look at the specific file and surrounding context
git ls-files | grep -E "^AI/modules/trader" | head -20
```

Length of output: 406

🏁 Script executed:

```shell
# Read the train_ppo.py file to see the function in context
wc -l AI/modules/trader/train/train_ppo.py
```

Length of output: 99

🏁 Script executed:

```shell
# Read the specific function and surrounding context
cat -n AI/modules/trader/train/train_ppo.py | head -50
```

Length of output: 1813
🏁 Script executed:

```shell
# Search for load_trading_config function definition
rg "def load_trading_config" -A 10
```

Length of output: 720

🏁 Script executed:

```shell
# Search for resolve_artifact_root function definition
rg "def resolve_artifact_root" -A 10
```

Length of output: 1085

🏁 Script executed:

```shell
# Read the full load_trading_config function
cat -n AI/config/trading.py | grep -A 30 "def load_trading_config"
```

Length of output: 767

🏁 Script executed:

```shell
# Read the full trading.py to understand the exceptions
wc -l AI/config/trading.py
```

Length of output: 84

🏁 Script executed:

```shell
# Read artifact_paths.py completely
cat -n AI/modules/signal/core/artifact_paths.py
```

Length of output: 4817

🏁 Script executed:

```shell
# Check if there are any logging imports or patterns used in train_ppo.py
cat AI/modules/trader/train/train_ppo.py | grep -E "(import|logger|log\.)"
```

Length of output: 344

🏁 Script executed:

```shell
# Check logging patterns in the codebase
rg "logger\.|logging\.warning|logging\.error" AI/modules/trader --max-count=10
```

Length of output: 42

🏁 Script executed:

```shell
# Check for logging patterns in the entire codebase
rg "import logging|from logging|logger = |logger\." --max-count=20
```

Length of output: 42

🏁 Script executed:

```shell
# Check what exceptions are raised by checking _read_json and _build_config
cat -n AI/config/trading.py | head -100
```

Length of output: 3038

🏁 Script executed:

```shell
# Look at the full trading.py to understand what exceptions can be raised
cat -n AI/config/trading.py
```

Length of output: 6750

🏁 Script executed:

```shell
# Check how exception handling is done in similar config loading scenarios
rg "except.*Error" AI/config --max-count=10 -B 2 -A 2
```

Length of output: 42

🏁 Script executed:

```shell
# Verify if json.JSONDecodeError could be raised
rg "json\.load|json\.JSONDecodeError" --max-count=10
```

Length of output: 386

🏁 Script executed:

```shell
# Check how other similar fallback patterns are handled
rg "try:.*except" AI/modules/trader --max-count=15 -A 1
```

Length of output: 42
🏁 Script executed:

```shell
# Check what exceptions could be raised from the functions called
# Let's see what exceptions json.JSONDecodeError covers
python3 << 'EOF'
import json
import inspect

# Check json.JSONDecodeError hierarchy
print("json.JSONDecodeError bases:", json.JSONDecodeError.__bases__)

# Check what exceptions could be raised
exceptions_to_check = [
    FileNotFoundError,
    json.JSONDecodeError,
    KeyError,
    ValueError,
    TypeError,
    AttributeError,
]
print("\nException hierarchy:")
for exc in exceptions_to_check:
    print(f"  {exc.__name__}: {exc.__bases__}")
EOF
```

Length of output: 369
🏁 Script executed:

```shell
# Check if there are any try-except patterns in the codebase that handle config loading
rg "load_trading_config|load_config" -A 5 -B 1 | grep -E "(try|except)" -A 2 -B 2 | head -30
```

Length of output: 1833

🏁 Script executed:

```shell
# Check the artifact_paths.py _resolve_absolute function more carefully
cat -n AI/modules/signal/core/artifact_paths.py | sed -n '22,26p'
```

Length of output: 265

🏁 Script executed:

```shell
# Check if there are any comments or documentation about this pattern
rg "fallback|graceful|degrade" AI/modules/trader AI/modules/signal --max-count=10
```

Length of output: 42

🏁 Script executed:

```shell
# Verify the exact line numbers and see if there are any other related calls
cat -n AI/modules/trader/train/train_ppo.py | sed -n '20,35p'
```

Length of output: 635

🏁 Script executed:

```shell
# Check if there's any explicit exception handling documentation
rg "Exception|error handling|config" AI/modules/trader/train/train_ppo.py
```

Length of output: 326
🏁 Script executed:

```shell
# See if _resolve_absolute could raise any exception with Path.resolve()
python3 << 'EOF'
from pathlib import Path
import os

# Test what exceptions Path operations can raise
test_cases = [
    "/valid/path",
    "relative/path",
    "",
]
for test_path in test_cases:
    try:
        p = Path(test_path)
        resolved = p.resolve()
        print(f"Path({test_path!r}).resolve() -> OK")
    except Exception as e:
        print(f"Path({test_path!r}).resolve() -> {type(e).__name__}: {e}")
EOF
```

Length of output: 158
A broad except Exception hides configuration errors.
Swallowing every exception at line 28 means JSON parse errors, missing required keys, and config validation failures go undetected, and the model can be saved to an unintended path. Handle only specific exceptions and emit a warning.
Suggested fix

```diff
 def _resolve_rl_save_dir() -> str:
     try:
         trading_config = load_trading_config()
         return resolve_artifact_root(trading_config.model.weights_dir)
-    except Exception:
+    except (FileNotFoundError, ValueError, KeyError) as exc:
+        print(f"[RL][Warning] Failed to resolve config-based save path, falling back to default: {exc}")
         return resolve_artifact_root()
```

📝 Committable suggestion
return resolve_artifact_root()📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
def _resolve_rl_save_dir() -> str:
    try:
        trading_config = load_trading_config()
        return resolve_artifact_root(trading_config.model.weights_dir)
    except (FileNotFoundError, ValueError, KeyError) as exc:
        print(f"[RL][Warning] Failed to resolve config-based save path, falling back to default: {exc}")
        return resolve_artifact_root()
```
🧰 Tools
🪛 Ruff (0.15.6)
[warning] 28-28: Do not catch blind exception: Exception
(BLE001)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AI/modules/trader/train/train_ppo.py` around lines 24 - 29, The helper
_resolve_rl_save_dir currently swallows all exceptions; change it to only handle
expected config-related errors (e.g., FileNotFoundError, KeyError, ValueError,
json.JSONDecodeError) when calling load_trading_config and accessing
trading_config.model.weights_dir, log a warning with the exception details via
the project's logger, and fall back to resolve_artifact_root() only for those
handled exceptions; do not catch Exception broadly—let unexpected exceptions
propagate (or re-raise) so real failures aren’t hidden.
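The narrow-except pattern this prompt asks for can be sketched like the following; resolve_save_dir, the injected load_config callable, and DEFAULT_ROOT are hypothetical stand-ins for the real _resolve_rl_save_dir/load_trading_config, not the repository's code:

```python
import json
import logging

logger = logging.getLogger(__name__)
DEFAULT_ROOT = "AI/artifacts"  # hypothetical fallback root


def resolve_save_dir(load_config) -> str:
    # Catch only expected config-related errors; let anything else propagate
    try:
        config = load_config()
        return config["model"]["weights_dir"]
    except (FileNotFoundError, KeyError, ValueError, json.JSONDecodeError) as exc:
        logger.warning("Config-based save path resolution failed, using default: %s", exc)
        return DEFAULT_ROOT
```

Note that json.JSONDecodeError already subclasses ValueError, so listing both is redundant but harmless; the important property is that unexpected errors (e.g., a RuntimeError from a real bug) still propagate.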
Actionable comments posted: 1
🧹 Nitpick comments (2)
AI/libs/llm/ollama.py (1)
53-56: Prefer logging over print().
In production, using the logging module instead of print() makes log-level management and output-format control easier.
♻️ Suggested change

```diff
+import logging
+
+logger = logging.getLogger(__name__)
+
 # inside the _ensure_model_available method
-        print(
-            f"[OllamaClient][Warning] Default model '{self.model_name}' is unavailable. "
-            f"Using '{fallback_model}' instead."
-        )
+        logger.warning(
+            "Default model '%s' is unavailable. Using '%s' instead.",
+            self.model_name,
+            fallback_model,
+        )
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI/libs/llm/ollama.py` around lines 53 - 56, Replace the print call in OllamaClient (the warning that default model self.model_name is unavailable and fallback_model is used) with the logging module: obtain a logger (module-level logger or self.logger) and call logger.warning with the same formatted message so production logging levels and handlers are respected; ensure the module imports logging and that any existing logger convention in this file/class is followed.
AI/libs/llm/__init__.py (1)
16-21: Sorting __all__ to match the lint rule reduces CI noise.
The current order can trigger a Ruff RUF022 warning. Sorting alphabetically is recommended.
Sorting example

```diff
 __all__ = [
     "BaseLLMClient",
+    "GeminiClient",
     "GroqClient",
     "OllamaClient",
-    "GeminiClient"
 ]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@AI/libs/llm/__init__.py` around lines 16 - 21, Sorting issue: rearrange the __all__ list entries (BaseLLMClient, GeminiClient, GroqClient, OllamaClient) alphabetically to remove the lint warning (Ruff RUF022); i.e., sort the strings in __all__ lexicographically (e.g., "BaseLLMClient", "GeminiClient", "GroqClient", "OllamaClient") and do not change the existing variable names or capitalization.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@AI/libs/llm/__init__.py`:
- Around line 6-14: The current import wrapper catches all Exception and binds
gemini_import_error only in the except block, causing a NameError later when
GeminiClient.__init__ references it; narrow the except to ImportError and
preserve the caught exception for later use (for example assign it to a
class-level variable like GeminiClient._import_error or capture it as a default
argument to __init__) so that GeminiClient.__init__ can raise ImportError(... )
from that preserved exception; update the try/except around "from .gemini import
GeminiClient" and adjust the fallback GeminiClient class to reference the stored
import error instead of the out-of-scope gemini_import_error.
---
Nitpick comments:
In `@AI/libs/llm/__init__.py`:
- Around line 16-21: Sorting issue: rearrange the __all__ list entries (BaseLLMClient, GeminiClient, GroqClient, OllamaClient) alphabetically to remove the lint warning (Ruff RUF022); i.e., sort the strings in __all__ lexicographically (e.g., "BaseLLMClient", "GeminiClient", "GroqClient", "OllamaClient") and do not change the existing variable names or capitalization.
In `@AI/libs/llm/ollama.py`:
- Around line 53-56: Replace the print call in OllamaClient (the warning that
default model self.model_name is unavailable and fallback_model is used) with
the logging module: obtain a logger (module-level logger or self.logger) and
call logger.warning with the same formatted message so production logging levels
and handlers are respected; ensure the module imports logging and that any
existing logger convention in this file/class is followed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: 94eed60e-6655-476b-8505-bb489a44bde8
📒 Files selected for processing (4)
- AI/libs/llm/__init__.py
- AI/libs/llm/ollama.py
- AI/tests/run_backfill.py
- AI/tests/run_backtest.py
✅ Files skipped from review due to trivial changes (2)
- AI/tests/run_backtest.py
- AI/tests/run_backfill.py
try:
    from .gemini import GeminiClient
except Exception as gemini_import_error:
    class GeminiClient:  # type: ignore[no-redef]
        def __init__(self, *args, **kwargs):
            raise ImportError(
                "GeminiClient requires the `google-genai` package. "
                "Install it with `pip install -U google-genai`."
            ) from gemini_import_error
🧩 Analysis chain
🏁 Script executed:
cd AI/libs/llm && pwd && cat -n __init__.py

Repository: SISC-IT/sisc-web
Length of output: 840
🏁 Script executed:
python3 << 'EOF'
# Verify exception-variable binding scope per PEP 3110
try:
    # exception binding test
    try:
        raise ImportError("test error")
    except ImportError as e:
        stored_error = e  # save to an outer variable
        exception_in_block = e
        print(f"inside the except block: {e}")

    # after the except block has ended
    try:
        print(f"accessing exception_in_block outside the block: {exception_in_block}")
    except NameError as ne:
        print(f"NameError raised: {ne}")

    # the outer variable remains accessible
    print(f"accessing stored_error outside the block: {stored_error}")
except Exception as err:
    print(f"unexpected error: {err}")
EOF

Repository: SISC-IT/sisc-web
Length of output: 162
🏁 Script executed:
python3 << 'EOF'
# Test patterns for referencing an exception variable from a class method
test_results = []

# Pattern 1: direct reference (the issue the review flagged)
try:
    try:
        raise ImportError("direct reference test")
    except ImportError as my_error:
        class TestClass1:
            def __init__(self):
                # the except block has already exited by the time this method runs
                raise ImportError("test") from my_error

    # try instantiating the class after the except block has ended
    try:
        TestClass1()
    except Exception as e:
        test_results.append(f"Pattern 1 (direct reference): {type(e).__name__} - {e}")
except Exception as e:
    test_results.append(f"Pattern 1 error: {type(e).__name__} - {e}")

# Pattern 2: saved variable (the review's suggestion)
try:
    try:
        raise ImportError("saved variable test")
    except ImportError as my_error:
        saved_error = my_error

        class TestClass2:
            def __init__(self):
                raise ImportError("test") from saved_error

    try:
        TestClass2()
    except Exception as e:
        test_results.append(f"Pattern 2 (saved): {type(e).__name__} - {e}")
except Exception as e:
    test_results.append(f"Pattern 2 error: {type(e).__name__} - {e}")

for result in test_results:
    print(result)
EOF

Repository: SISC-IT/sisc-web
Length of output: 143
🏁 Script executed:
python3 << 'EOF'
# Verify access via a default parameter value (the review's suggestion)
try:
    try:
        raise ImportError("parameter default test")
    except ImportError as my_error:
        class TestClass3:
            def __init__(self, *args, _cause=my_error, **kwargs):
                raise ImportError("test") from _cause

    try:
        TestClass3()
    except Exception as e:
        print(f"Pattern 3 (default value): {type(e).__name__} - {e}")
except Exception as e:
    print(f"Pattern 3 error: {type(e).__name__} - {e}")
EOF

Repository: SISC-IT/sisc-web
Length of output: 89
Instantiating GeminiClient raises a NameError, and the exception handling is also overly broad.
The `except Exception as gemini_import_error` on line 8 can hide any exception raised inside `.gemini`. More importantly, once the except block ends, the bound variable is removed from scope, so when the method runs on line 14, referencing `from gemini_import_error` raises a NameError. Catch only `ImportError` explicitly, and preserve the exception in a separate variable or as a method default.
Suggested fix
 try:
     from .gemini import GeminiClient
-except Exception as gemini_import_error:
+except ImportError as gemini_import_error:
+    _gemini_import_error = gemini_import_error
     class GeminiClient:  # type: ignore[no-redef]
-        def __init__(self, *args, **kwargs):
+        def __init__(self, *args, _cause=_gemini_import_error, **kwargs):
             raise ImportError(
                 "GeminiClient requires the `google-genai` package. "
                 "Install it with `pip install -U google-genai`."
-            ) from gemini_import_error
+            ) from _cause

🧰 Tools
🪛 Ruff (0.15.6)
[warning] 8-8: Do not catch blind exception: Exception
(BLE001)
[error] 14-14: Undefined name gemini_import_error
(F821)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@AI/libs/llm/__init__.py` around lines 6 - 14, The current import wrapper
catches all Exception and binds gemini_import_error only in the except block,
causing a NameError later when GeminiClient.__init__ references it; narrow the
except to ImportError and preserve the caught exception for later use (for
example assign it to a class-level variable like GeminiClient._import_error or
capture it as a default argument to __init__) so that GeminiClient.__init__ can
raise ImportError(... ) from that preserved exception; update the try/except
around "from .gemini import GeminiClient" and adjust the fallback GeminiClient
class to reference the stored import error instead of the out-of-scope
gemini_import_error.
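The prompt above lists two ways to preserve the caught exception; the suggested diff earlier in the thread shows the default-argument form, so here is a sketch of the other option, a class-level attribute. The failing import is simulated with a deliberately nonexistent module name — it is a stand-in, not the project's real `.gemini` import:

```python
try:
    from nonexistent_gemini_module import GeminiClient  # stand-in for `from .gemini import GeminiClient`
except ImportError as gemini_import_error:

    class GeminiClient:  # type: ignore[no-redef]
        # The class body executes while the except block is still active,
        # so this attribute captures the exception before Python unbinds
        # `gemini_import_error` at the end of the block (PEP 3110).
        _import_error = gemini_import_error

        def __init__(self, *args, **kwargs):
            raise ImportError(
                "GeminiClient requires the `google-genai` package. "
                "Install it with `pip install -U google-genai`."
            ) from self._import_error
```

Either form avoids the NameError; the class attribute has the small advantage of keeping the original exception inspectable as `GeminiClient._import_error`.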
Summary by CodeRabbit
New features
Improvements
Removals
Miscellaneous