diff --git a/README.md b/README.md index 544c67c..f1282ee 100644 --- a/README.md +++ b/README.md @@ -1,40 +1,68 @@ # porjectx -## Dependency compatibility +## Reproducible environment bootstrap -The `toptek/requirements-lite.txt` file pins the scientific stack to keep it -compatible with the bundled scikit-learn release: +The numeric stack is pinned via [`constraints.txt`](constraints.txt) to avoid the +ABI/runtime mismatches that previously caused NumPy/SciPy import errors. The +root [`requirements.txt`](requirements.txt) includes that constraint file and +pulls in the toolkit's dependencies from `toptek/requirements-lite.txt`. -- `scikit-learn==1.3.2` -- `numpy>=1.21.6,<1.28` -- `scipy>=1.7.3,<1.12` +On Windows, run the helper script to recreate a clean environment and verify the +stack: -These ranges follow the support window published by scikit-learn 1.3.x and are -also consumed transitively by `toptek/requirements-streaming.txt` through its -`-r requirements-lite.txt` include. Installing within these bounds avoids the -ABI mismatches that occur with the NumPy/SciPy wheels when using newer major -releases. In particular, upgrading NumPy beyond `<1.28` causes SciPy to raise -its "compiled against NumPy 1.x" `ImportError`, mirroring the guidance already -documented in `toptek/README.md`. +```powershell +.\scripts\setup_env.ps1 +``` -## Verifying the environment +The script rebuilds `.venv`, installs from `requirements.txt`, then prints +`STACK_OK` followed by the resolved versions in JSON form. The check ensures the +runtime matches `numpy==1.26.4`, `scipy==1.10.1`, and `scikit-learn==1.3.2` +exactly alongside compatible `pandas`, `joblib`, and `threadpoolctl` wheels. -Use Python **3.10 or 3.11**—matching the guidance in `toptek/README.md`'s -quickstart—to stay within the wheel support window for SciPy and -scikit-learn 1.3.x. Python 3.12 is currently unsupported because prebuilt -SciPy/scikit-learn wheels for that interpreter depend on NumPy ≥1.28 and -SciPy ≥1.12, which exceed this project's pinned ranges. Create and activate a -compatible virtual environment, then install and check for dependency issues: +For POSIX shells the equivalent manual steps are: ```bash python -m venv .venv source .venv/bin/activate pip install --upgrade pip -pip install -r toptek/requirements-lite.txt -pip check +pip install -r requirements.txt ``` -The final `pip check` call should report "No broken requirements found", -confirming that the pinned dependency set resolves without conflicts. Users on -Python 3.12 should downgrade to Python 3.10/3.11 or wait for a dependency -refresh that supports NumPy ≥1.28 and SciPy ≥1.12 before proceeding. +## Runtime telemetry and guardrails + +The entry point now executes `toptek.core.utils.assert_numeric_stack()` and +`toptek.core.utils.set_seeds(42)` during startup. Version validation writes a +structured report to `reports/run_stack.json` so crash reports include the exact +Python and numeric-library versions. Structured logging is initialised via +`logging.basicConfig` with a rotating file handler targeting +`logs/toptek_YYYYMMDD.log` alongside console output, keeping telemetry for both +CLI and GUI sessions. + +## UI configuration surface + +The manual trading shell and Tkinter dashboard read defaults from +[`configs/ui.yml`](configs/ui.yml). The file ships with sensible demo values so +the GUI renders without external data sources: + +- `appearance` — theme token (currently `dark`) and accent family used by the + style registry. +- `shell` — defaults for the research symbol/timeframe, training lookback, + calibration flag, simulated backtest window, and preferred playbook. +- `chart` — LiveChart refresh cadence (`fps`), point budget, and price + precision used by streaming widgets. +- `status` — copy for the status banners shown in the Login, Train, Backtest, + and Guard tabs so product teams can retune messaging without touching code. + +Operators can override the YAML at runtime with environment variables or CLI +flags: + +- Environment variables follow the `TOPTEK_UI_*` convention, e.g. + `TOPTEK_UI_SYMBOL`, `TOPTEK_UI_INTERVAL`, `TOPTEK_UI_LOOKBACK_BARS`, + `TOPTEK_UI_CALIBRATE`, `TOPTEK_UI_FPS`, and `TOPTEK_UI_THEME`. +- CLI switches (`--symbol`, `--timeframe`, `--lookback`, `--model`, `--fps`) + apply the same overrides for one-off runs and are reflected back into the GUI + when it launches. + +These controls keep the default Topstep demo intact while making it easy to +point the toolkit at alternative markets or stress-test higher frequency charts +without editing source files. diff --git a/configs/ui.yml b/configs/ui.yml new file mode 100644 index 0000000..0d33ecf --- /dev/null +++ b/configs/ui.yml @@ -0,0 +1,31 @@ +appearance: + theme: dark + accent: violet +shell: + symbol: ES=F + interval: 5m + research_bars: 240 + lookback_bars: 480 + calibrate: true + model: logistic + simulation_bars: 720 + playbook: momentum +chart: + fps: 12 + max_points: 180 + price_decimals: 2 +status: + login: + idle: "Awaiting verification" + saved: "Saved. Run verification to confirm access." + verified: "All keys present. Proceed to Research ▶" + training: + idle: "Awaiting training run" + success: "Model artefact refreshed. Continue to Backtest ▶" + backtest: + idle: "No simulations yet" + success: "Sim complete. If expectancy holds, draft a manual trade plan ▶" + guard: + pending: "Topstep Guard: pending review" + intro: "Manual execution only. Awaiting guard refresh..." + defensive_warning: "DEFENSIVE_MODE active. Stand down and review your journal before trading." diff --git a/constraints.txt b/constraints.txt new file mode 100644 index 0000000..f833a3b --- /dev/null +++ b/constraints.txt @@ -0,0 +1,6 @@ +numpy==1.26.4 +scipy==1.10.1 +scikit-learn==1.3.2 +joblib>=1.3,<2 +threadpoolctl>=3,<4 +pandas>=1.5,<2.3 diff --git a/pyproject.toml b/pyproject.toml new file mode 100644 index 0000000..4a85092 --- /dev/null +++ b/pyproject.toml @@ -0,0 +1,3 @@ +[build-system] +requires = ["setuptools>=61", "wheel"] +build-backend = "setuptools.build_meta" diff --git a/requirements.txt b/requirements.txt new file mode 100644 index 0000000..531f036 --- /dev/null +++ b/requirements.txt @@ -0,0 +1,2 @@ +-c constraints.txt +-r toptek/requirements-lite.txt diff --git a/scripts/setup_env.ps1 b/scripts/setup_env.ps1 new file mode 100644 index 0000000..592dff2 --- /dev/null +++ b/scripts/setup_env.ps1 @@ -0,0 +1,33 @@ +param( + [string]$Python = "py -3.11" +) + +$ErrorActionPreference = "Stop" + +if (Test-Path ".venv") { + Remove-Item ".venv" -Recurse -Force +} + +& $Python -m venv .venv + +$venvPath = Join-Path (Resolve-Path ".venv").Path "Scripts" +$venvPython = Join-Path $venvPath "python.exe" + +& $venvPython -m pip install --upgrade pip +& $venvPython -m pip install -r requirements.txt + +$stackCheck = @" +import importlib +import json +import platform + +modules = ["numpy", "scipy", "sklearn", "pandas", "joblib", "threadpoolctl"] +versions = {name: importlib.import_module(name).__version__ for name in modules} +print("STACK_OK") +print(json.dumps({ + "python": platform.python_version(), + "versions": versions, +}, indent=2)) +"@ + +& $venvPython -c $stackCheck diff --git a/tests/test_training_pipeline_integration.py b/tests/test_training_pipeline_integration.py index 7b6d0b0..4bf81ac 100644 --- a/tests/test_training_pipeline_integration.py +++ b/tests/test_training_pipeline_integration.py @@ -65,11 +65,26 @@ def fake_train_classifier(X, y, **kwargs): monkeypatch.setattr("toptek.main.build_features", fake_build_features) monkeypatch.setattr("toptek.main.model.train_classifier", fake_train_classifier) - monkeypatch.setattr("toptek.main.data.sample_dataframe", lambda: _sample_dataframe(140)) + monkeypatch.setattr( + "toptek.main.data.sample_dataframe", lambda: _sample_dataframe(140) + ) - args = argparse.Namespace(cli="train", model="logistic", symbol="ES", timeframe="5m", lookback="90d", start=None) + args = argparse.Namespace( + cli="train", + model="logistic", + symbol="ES", + timeframe="5m", + lookback="90d", + start=None, + ) configs = {"risk": {}, "app": {}, "features": {}} - paths = utils.AppPaths(root=tmp_path, cache=tmp_path / "cache", models=tmp_path / "models") + paths = utils.AppPaths( + root=tmp_path, + cache=tmp_path / "cache", + models=tmp_path / "models", + logs=tmp_path / "logs", + reports=tmp_path / "reports", + ) run_cli(args, configs, paths) @@ -122,8 +137,12 @@ def fake_train_classifier(X, y, **kwargs): ) monkeypatch.setattr("toptek.gui.widgets.build_features", fake_build_features) - monkeypatch.setattr("toptek.gui.widgets.sample_dataframe", lambda rows: _sample_dataframe(rows)) - monkeypatch.setattr("toptek.gui.widgets.model.train_classifier", fake_train_classifier) + monkeypatch.setattr( + "toptek.gui.widgets.sample_dataframe", lambda rows: _sample_dataframe(rows) + ) + monkeypatch.setattr( + "toptek.gui.widgets.model.train_classifier", fake_train_classifier + ) monkeypatch.setattr("tkinter.messagebox.showwarning", lambda *args, **kwargs: None) monkeypatch.setattr("tkinter.messagebox.showinfo", lambda *args, **kwargs: None) @@ -131,7 +150,13 @@ def fake_train_classifier(X, y, **kwargs): notebook.pack() configs: dict[str, dict[str, object]] = {} - paths = utils.AppPaths(root=tmp_path, cache=tmp_path / "cache", models=tmp_path / "models") + paths = utils.AppPaths( + root=tmp_path, + cache=tmp_path / "cache", + models=tmp_path / "models", + logs=tmp_path / "logs", + reports=tmp_path / "reports", + ) tab = TrainTab(notebook, configs, paths) tab.calibrate_var.set(False) diff --git a/tests/test_ui_config.py b/tests/test_ui_config.py new file mode 100644 index 0000000..9339de4 --- /dev/null +++ b/tests/test_ui_config.py @@ -0,0 +1,46 @@ +from __future__ import annotations +from pathlib import Path + +import pytest + +from core import ui_config + + +def test_load_ui_config_defaults(tmp_path: Path) -> None: + path = tmp_path / "ui.yml" + path.write_text("{}\n", encoding="utf-8") + cfg = ui_config.load_ui_config(path, env={}) + assert cfg.shell.symbol == "ES=F" + assert cfg.shell.interval == "5m" + assert cfg.shell.research_bars == 240 + assert cfg.chart.fps == 12 + assert cfg.status.login.idle == "Awaiting verification" + assert cfg.appearance.theme == "dark" + + +def test_load_ui_config_env_overrides(tmp_path: Path) -> None: + path = tmp_path / "ui.yml" + path.write_text( + "shell:\n symbol: ES=F\n calibrate: true\nchart:\n fps: 8\n", + encoding="utf-8", + ) + env = { + "TOPTEK_UI_SYMBOL": "NQ=F", + "TOPTEK_UI_CALIBRATE": "false", + "TOPTEK_UI_LOOKBACK_BARS": "960", + "TOPTEK_UI_FPS": "24", + "TOPTEK_UI_THEME": "dark", + } + cfg = ui_config.load_ui_config(path, env=env) + assert cfg.shell.symbol == "NQ=F" + assert cfg.shell.calibrate is False + assert cfg.shell.lookback_bars == 960 + assert cfg.chart.fps == 24 + assert cfg.appearance.theme == "dark" + + +def test_load_ui_config_validation(tmp_path: Path) -> None: + path = tmp_path / "ui.yml" + path.write_text("chart:\n fps: 0\n", encoding="utf-8") + with pytest.raises(ValueError): + ui_config.load_ui_config(path, env={}) diff --git a/tests/test_utils_stack.py b/tests/test_utils_stack.py new file mode 100644 index 0000000..46dbcf6 --- /dev/null +++ b/tests/test_utils_stack.py @@ -0,0 +1,68 @@ +"""Tests for numeric stack validation and logging utilities.""" + +import json +import logging +from logging.handlers import RotatingFileHandler +from pathlib import Path + +import numpy as np +import pytest + +from toptek.core import utils + + +def test_assert_numeric_stack_writes_report(tmp_path: Path) -> None: + reports_dir = tmp_path / "reports" + versions = utils.assert_numeric_stack(reports_dir=reports_dir) + + report_path = reports_dir / "run_stack.json" + assert report_path.exists() + + payload = json.loads(report_path.read_text(encoding="utf-8")) + assert payload["status"] == "ok" + assert payload["required"]["numpy"] == versions["numpy"] + assert payload["expected"]["scipy"] == utils.STACK_REQUIREMENTS["scipy"] + + +def test_assert_numeric_stack_raises_on_mismatch( + monkeypatch: pytest.MonkeyPatch, tmp_path: Path +) -> None: + monkeypatch.setitem(utils.STACK_REQUIREMENTS, "numpy", "0.0.0") + + with pytest.raises(RuntimeError) as excinfo: + utils.assert_numeric_stack(reports_dir=tmp_path) + + assert "scripts/setup_env.ps1" in str(excinfo.value) + + report = json.loads((tmp_path / "run_stack.json").read_text(encoding="utf-8")) + assert report["status"] == "error" + + +def test_set_seeds_reproducible() -> None: + utils.set_seeds(123) + first = np.random.random(3) + utils.set_seeds(123) + second = np.random.random(3) + + assert np.allclose(first, second) + + +def test_configure_logging_installs_rotating_handler(tmp_path: Path) -> None: + root_logger = logging.getLogger() + original_handlers = list(root_logger.handlers) + for handler in original_handlers: + root_logger.removeHandler(handler) + + try: + log_path = utils.configure_logging(tmp_path, level="INFO") + assert log_path.exists() + assert any( + isinstance(handler, RotatingFileHandler) + for handler in logging.getLogger().handlers + ) + finally: + for handler in logging.getLogger().handlers: + handler.close() + logging.getLogger().handlers.clear() + for handler in original_handlers: + logging.getLogger().addHandler(handler) diff --git a/toptek/README.md b/toptek/README.md index d1ee4ea..2dd870e 100644 --- a/toptek/README.md +++ b/toptek/README.md @@ -10,14 +10,12 @@ Toptek is a Windows-friendly starter kit for working with the ProjectX Gateway ( ```powershell # Windows, Python 3.11 -py -3.11 -m venv .venv +.\scripts\setup_env.ps1 .venv\Scripts\activate -pip install --upgrade pip -pip install -r requirements-lite.txt -copy .env.example .env +copy toptek\.env.example .env # edit PX_* in .env OR use GUI Settings -python main.py +python toptek\main.py ``` ## CLI usage examples @@ -62,7 +60,9 @@ Configuration defaults live under the `config/` folder and are merged with value ## Requirements profiles -- `requirements-lite.txt`: minimal dependencies for polling workflows. NumPy is capped below 1.28 so the bundled SciPy wheels stay importable; installing NumPy 2.x triggers a SciPy `ImportError` about missing manylinux-compatible binaries. +- `../constraints.txt`: pins NumPy 1.26.4, SciPy 1.10.1, scikit-learn 1.3.2 plus compatible `pandas`, `joblib`, and `threadpoolctl` wheels. +- `../requirements.txt`: references the constraint file and pulls in the lite dependency set. +- `requirements-lite.txt`: minimal dependencies for polling workflows (consumed via the root requirements). - `requirements-streaming.txt`: extends the lite profile with optional SignalR streaming support. ## Development notes diff --git a/toptek/config/app.yml b/toptek/config/app.yml index dfbb07f..ec62b29 100644 --- a/toptek/config/app.yml +++ b/toptek/config/app.yml @@ -2,3 +2,5 @@ polling_interval_seconds: 5 cache_directory: data/cache models_directory: models log_level: INFO +logs_directory: logs +reports_directory: reports diff --git a/toptek/core/ui_config.py b/toptek/core/ui_config.py new file mode 100644 index 0000000..6ffd405 --- /dev/null +++ b/toptek/core/ui_config.py @@ -0,0 +1,336 @@ +"""UI configuration parsing utilities with environment overrides.""" + +from __future__ import annotations + +import os +from dataclasses import asdict, dataclass, field, replace +from pathlib import Path +from typing import Any, Dict, Mapping + +from . import utils + + +def _coerce_str(value: Any, field_name: str) -> str: + if value is None: + raise ValueError(f"{field_name} cannot be null") + return str(value) + + +def _coerce_bool(value: Any, field_name: str) -> bool: + if isinstance(value, bool): + return value + if isinstance(value, str): + lowered = value.strip().lower() + if lowered in {"1", "true", "yes", "on"}: + return True + if lowered in {"0", "false", "no", "off"}: + return False + raise ValueError(f"{field_name} must be a boolean or boolean-like string") + + +def _coerce_int(value: Any, field_name: str, *, minimum: int | None = None) -> int: + try: + coerced = int(value) + except (TypeError, ValueError) as exc: # pragma: no cover - defensive + raise ValueError(f"{field_name} must be an integer") from exc + if minimum is not None and coerced < minimum: + raise ValueError(f"{field_name} must be >= {minimum}") + return coerced + + +@dataclass(frozen=True) +class ShellSettings: + """Configuration for CLI shell defaults.""" + + symbol: str = "ES=F" + interval: str = "5m" + research_bars: int = 240 + lookback_bars: int = 480 + calibrate: bool = True + model: str = "logistic" + simulation_bars: int = 720 + playbook: str = "momentum" + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "ShellSettings": + calibrate_raw = data.get("calibrate", cls.calibrate) + if isinstance(calibrate_raw, str): + calibrate_value = _coerce_bool(calibrate_raw, "shell.calibrate") + elif isinstance(calibrate_raw, bool): + calibrate_value = calibrate_raw + elif calibrate_raw is None: + calibrate_value = cls.calibrate + else: + calibrate_value = bool(calibrate_raw) + return cls( + symbol=_coerce_str(data.get("symbol", cls.symbol), "shell.symbol"), + interval=_coerce_str(data.get("interval", cls.interval), "shell.interval"), + research_bars=_coerce_int( + data.get("research_bars", cls.research_bars), + "shell.research_bars", + minimum=60, + ), + lookback_bars=_coerce_int( + data.get("lookback_bars", cls.lookback_bars), + "shell.lookback_bars", + minimum=120, + ), + calibrate=calibrate_value, + model=_coerce_str(data.get("model", cls.model), "shell.model"), + simulation_bars=_coerce_int( + data.get("simulation_bars", cls.simulation_bars), + "shell.simulation_bars", + minimum=120, + ), + playbook=_coerce_str(data.get("playbook", cls.playbook), "shell.playbook"), + ) + + def apply_environment(self, env: Mapping[str, str]) -> "ShellSettings": + updates: Dict[str, Any] = {} + if env.get("TOPTEK_UI_SYMBOL"): + updates["symbol"] = env["TOPTEK_UI_SYMBOL"] + if env.get("TOPTEK_UI_INTERVAL"): + updates["interval"] = env["TOPTEK_UI_INTERVAL"] + if env.get("TOPTEK_UI_RESEARCH_BARS"): + updates["research_bars"] = _coerce_int( + env["TOPTEK_UI_RESEARCH_BARS"], + "env.TOPTEK_UI_RESEARCH_BARS", + minimum=60, + ) + if env.get("TOPTEK_UI_LOOKBACK_BARS"): + updates["lookback_bars"] = _coerce_int( + env["TOPTEK_UI_LOOKBACK_BARS"], + "env.TOPTEK_UI_LOOKBACK_BARS", + minimum=120, + ) + if env.get("TOPTEK_UI_CALIBRATE"): + updates["calibrate"] = _coerce_bool( + env["TOPTEK_UI_CALIBRATE"], "env.TOPTEK_UI_CALIBRATE" + ) + if env.get("TOPTEK_UI_MODEL"): + updates["model"] = env["TOPTEK_UI_MODEL"] + if env.get("TOPTEK_UI_SIMULATION_BARS"): + updates["simulation_bars"] = _coerce_int( + env["TOPTEK_UI_SIMULATION_BARS"], + "env.TOPTEK_UI_SIMULATION_BARS", + minimum=120, + ) + if env.get("TOPTEK_UI_PLAYBOOK"): + updates["playbook"] = env["TOPTEK_UI_PLAYBOOK"] + return replace(self, **updates) if updates else self + + +@dataclass(frozen=True) +class ChartSettings: + """Chart refresh and theming parameters.""" + + fps: int = 12 + max_points: int = 180 + price_decimals: int = 2 + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "ChartSettings": + return cls( + fps=_coerce_int(data.get("fps", cls.fps), "chart.fps", minimum=1), + max_points=_coerce_int( + data.get("max_points", cls.max_points), "chart.max_points", minimum=10 + ), + price_decimals=_coerce_int( + data.get("price_decimals", cls.price_decimals), + "chart.price_decimals", + minimum=0, + ), + ) + + def apply_environment(self, env: Mapping[str, str]) -> "ChartSettings": + updates: Dict[str, Any] = {} + if env.get("TOPTEK_UI_FPS"): + updates["fps"] = _coerce_int( + env["TOPTEK_UI_FPS"], "env.TOPTEK_UI_FPS", minimum=1 + ) + if env.get("TOPTEK_UI_CHART_POINTS"): + updates["max_points"] = _coerce_int( + env["TOPTEK_UI_CHART_POINTS"], "env.TOPTEK_UI_CHART_POINTS", minimum=10 + ) + return replace(self, **updates) if updates else self + + +@dataclass(frozen=True) +class AppearanceSettings: + """High-level UI theming choices.""" + + theme: str = "dark" + accent: str = "violet" + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "AppearanceSettings": + return cls( + theme=_coerce_str(data.get("theme", cls.theme), "appearance.theme"), + accent=_coerce_str(data.get("accent", cls.accent), "appearance.accent"), + ) + + def apply_environment(self, env: Mapping[str, str]) -> "AppearanceSettings": + updates: Dict[str, Any] = {} + if env.get("TOPTEK_UI_THEME"): + updates["theme"] = env["TOPTEK_UI_THEME"] + if env.get("TOPTEK_UI_ACCENT"): + updates["accent"] = env["TOPTEK_UI_ACCENT"] + return replace(self, **updates) if updates else self + + +@dataclass(frozen=True) +class LoginStatus: + idle: str = "Awaiting verification" + saved: str = "Saved. Run verification to confirm access." + verified: str = "All keys present. Proceed to Research ▶" + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "LoginStatus": + return cls( + idle=_coerce_str(data.get("idle", cls.idle), "status.login.idle"), + saved=_coerce_str(data.get("saved", cls.saved), "status.login.saved"), + verified=_coerce_str( + data.get("verified", cls.verified), "status.login.verified" + ), + ) + + +@dataclass(frozen=True) +class TrainingStatus: + idle: str = "Awaiting training run" + success: str = "Model artefact refreshed. Continue to Backtest ▶" + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "TrainingStatus": + return cls( + idle=_coerce_str(data.get("idle", cls.idle), "status.training.idle"), + success=_coerce_str( + data.get("success", cls.success), "status.training.success" + ), + ) + + +@dataclass(frozen=True) +class BacktestStatus: + idle: str = "No simulations yet" + success: str = "Sim complete. If expectancy holds, draft a manual trade plan ▶" + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "BacktestStatus": + return cls( + idle=_coerce_str(data.get("idle", cls.idle), "status.backtest.idle"), + success=_coerce_str( + data.get("success", cls.success), "status.backtest.success" + ), + ) + + +@dataclass(frozen=True) +class GuardStatus: + pending: str = "Topstep Guard: pending review" + intro: str = "Manual execution only. Awaiting guard refresh..." + defensive_warning: str = ( + "DEFENSIVE_MODE active. Stand down and review your journal before trading." + ) + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "GuardStatus": + return cls( + pending=_coerce_str( + data.get("pending", cls.pending), "status.guard.pending" + ), + intro=_coerce_str(data.get("intro", cls.intro), "status.guard.intro"), + defensive_warning=_coerce_str( + data.get("defensive_warning", cls.defensive_warning), + "status.guard.defensive_warning", + ), + ) + + +@dataclass(frozen=True) +class StatusMessages: + login: LoginStatus = field(default_factory=LoginStatus) + training: TrainingStatus = field(default_factory=TrainingStatus) + backtest: BacktestStatus = field(default_factory=BacktestStatus) + guard: GuardStatus = field(default_factory=GuardStatus) + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "StatusMessages": + return cls( + login=LoginStatus.from_mapping(data.get("login", {})), + training=TrainingStatus.from_mapping(data.get("training", {})), + backtest=BacktestStatus.from_mapping(data.get("backtest", {})), + guard=GuardStatus.from_mapping(data.get("guard", {})), + ) + + +@dataclass(frozen=True) +class UIConfig: + """Top-level structure for UI settings.""" + + appearance: AppearanceSettings = field(default_factory=AppearanceSettings) + shell: ShellSettings = field(default_factory=ShellSettings) + chart: ChartSettings = field(default_factory=ChartSettings) + status: StatusMessages = field(default_factory=StatusMessages) + + @classmethod + def from_mapping(cls, data: Mapping[str, Any]) -> "UIConfig": + return cls( + appearance=AppearanceSettings.from_mapping(data.get("appearance", {})), + shell=ShellSettings.from_mapping(data.get("shell", {})), + chart=ChartSettings.from_mapping(data.get("chart", {})), + status=StatusMessages.from_mapping(data.get("status", {})), + ) + + def apply_environment(self, env: Mapping[str, str]) -> "UIConfig": + return replace( + self, + appearance=self.appearance.apply_environment(env), + shell=self.shell.apply_environment(env), + chart=self.chart.apply_environment(env), + ) + + def with_updates( + self, + *, + appearance: Dict[str, Any] | None = None, + shell: Dict[str, Any] | None = None, + chart: Dict[str, Any] | None = None, + ) -> "UIConfig": + """Return a copy of the config with provided section overrides.""" + + updates: Dict[str, Any] = {} + if appearance: + updates["appearance"] = replace(self.appearance, **appearance) + if shell: + updates["shell"] = replace(self.shell, **shell) + if chart: + updates["chart"] = replace(self.chart, **chart) + return replace(self, **updates) if updates else self + + def as_dict(self) -> Dict[str, Any]: + return { + "appearance": asdict(self.appearance), + "shell": asdict(self.shell), + "chart": asdict(self.chart), + "status": asdict(self.status), + } + + +def load_ui_config(path: Path, *, env: Mapping[str, str] | None = None) -> UIConfig: + """Load :class:`UIConfig` from *path* applying environment overrides.""" + + env_mapping = os.environ if env is None else env + data = utils.load_yaml(path) + config = UIConfig.from_mapping(data) + return config.apply_environment(env_mapping) + + +__all__ = [ + "AppearanceSettings", + "ShellSettings", + "ChartSettings", + "StatusMessages", + "UIConfig", + "load_ui_config", +] diff --git a/toptek/core/utils.py b/toptek/core/utils.py index 42916db..0a0c528 100644 --- a/toptek/core/utils.py +++ b/toptek/core/utils.py @@ -1,8 +1,11 @@ """Utility helpers for configuration, logging, time conversions, and JSON handling. -This module centralises convenience helpers shared across the project. It loads -configuration files, initialises structured logging, and provides a few small -wrappers for timezone-aware timestamps and JSON serialisation. +This module centralises convenience helpers shared across the project. It +loads configuration files, initialises structured logging backed by rotating +file handlers, and provides a few small wrappers for timezone-aware timestamps +and JSON serialisation. It also exposes deterministic seeding helpers and +numeric stack validation utilities that fail fast when the runtime drifts away +from the supported SciPy/NumPy/sklearn matrix. Example: >>> from core import utils @@ -12,18 +15,31 @@ from __future__ import annotations +import importlib +import importlib.util import json import logging import os +import platform +import random from dataclasses import dataclass from datetime import datetime, timezone +from logging.handlers import RotatingFileHandler from pathlib import Path -from typing import Any, Dict +from typing import Any, Dict, Iterable import yaml DEFAULT_TIMEZONE = timezone.utc +REPO_ROOT = Path(__file__).resolve().parents[2] + +STACK_REQUIREMENTS: Dict[str, str] = { + "numpy": "1.26.4", + "scipy": "1.10.1", + "sklearn": "1.3.2", +} +STACK_OPTIONAL: Iterable[str] = ("pandas", "joblib", "threadpoolctl") @dataclass @@ -34,35 +50,74 @@ class AppPaths: root: Base directory for the project. cache: Directory path for cached data files. models: Directory path for persisted models. + logs: Directory path for rotating log files. + reports: Directory path for diagnostic reports. """ root: Path cache: Path models: Path + logs: Path + reports: Path def build_logger(name: str, level: str = "INFO") -> logging.Logger: - """Create a structured logger configured for console output. - - Args: - name: Logger name. - level: Log level name. + """Return a logger configured against the global logging policy. - Returns: - A configured :class:`logging.Logger` instance. + The global logging configuration is expected to be installed via + :func:`configure_logging`. When imported in isolation (e.g. within unit + tests) the helper falls back to ``logging.basicConfig`` with sane defaults + so that callers always receive a functional logger. """ + root_logger = logging.getLogger() + if not root_logger.handlers: + logging.basicConfig( + level=level.upper(), + format="%(asctime)s | %(levelname)s | %(name)s | %(message)s", + datefmt="%Y-%m-%d %H:%M:%S", + ) logger = logging.getLogger(name) - if not logger.handlers: - handler = logging.StreamHandler() + logger.setLevel(level.upper()) + return logger + + +def configure_logging(log_dir: Path, level: str = "INFO") -> Path: + """Initialise rotating file + console logging in ``log_dir``. + + Returns the path to the active log file. + """ + + log_dir.mkdir(parents=True, exist_ok=True) + filename = f"toptek_{datetime.now(tz=DEFAULT_TIMEZONE).strftime('%Y%m%d')}.log" + log_path = log_dir / filename + + root_logger = logging.getLogger() + if not any( + isinstance(handler, RotatingFileHandler) for handler in root_logger.handlers + ): formatter = logging.Formatter( "%(asctime)s | %(levelname)s | %(name)s | %(message)s", datefmt="%Y-%m-%d %H:%M:%S", ) - handler.setFormatter(formatter) - logger.addHandler(handler) - logger.setLevel(level.upper()) - return logger + file_handler = RotatingFileHandler( + log_path, + maxBytes=5 * 1024 * 1024, + backupCount=5, + encoding="utf-8", + ) + file_handler.setFormatter(formatter) + + stream_handler = logging.StreamHandler() + stream_handler.setFormatter(formatter) + + logging.basicConfig( + level=level.upper(), + handlers=[file_handler, stream_handler], + ) + else: + root_logger.setLevel(level.upper()) + return log_path def load_yaml(path: Path) -> Dict[str, Any]: @@ -84,7 +139,7 @@ def load_yaml(path: Path) -> Dict[str, Any]: def ensure_directories(paths: AppPaths) -> None: """Ensure application directories exist.""" - for directory in (paths.cache, paths.models): + for directory in (paths.cache, paths.models, paths.logs, paths.reports): directory.mkdir(parents=True, exist_ok=True) @@ -119,4 +174,83 @@ def build_paths(root: Path, app_config: Dict[str, Any]) -> AppPaths: cache_dir = root / app_config.get("cache_directory", "data/cache") models_dir = root / app_config.get("models_directory", "models") - return AppPaths(root=root, cache=cache_dir, models=models_dir) + log_dir = root / app_config.get("logs_directory", "logs") + reports_dir = root / app_config.get("reports_directory", "reports") + return AppPaths( + root=root, cache=cache_dir, models=models_dir, logs=log_dir, reports=reports_dir + ) + + +def _collect_versions(modules: Iterable[str]) -> Dict[str, str | None]: + versions: Dict[str, str | None] = {} + for module_name in modules: + spec = importlib.util.find_spec(module_name) + if spec is None: + versions[module_name] = None + continue + module = importlib.import_module(module_name) + versions[module_name] = getattr(module, "__version__", None) + return versions + + +def assert_numeric_stack(*, reports_dir: Path | None = None) -> Dict[str, str]: + """Verify the numeric stack exactly matches the supported versions. + + Writes a diagnostic report to ``reports_dir`` (defaulting to + ``reports/run_stack.json`` at the repository root) describing the current + environment and raises :class:`RuntimeError` when a mismatch is detected. + """ + + reports_path = reports_dir or (REPO_ROOT / "reports") + reports_path.mkdir(parents=True, exist_ok=True) + + required_versions = _collect_versions(STACK_REQUIREMENTS.keys()) + optional_versions = _collect_versions(STACK_OPTIONAL) + + mismatches = { + name: version + for name, version in required_versions.items() + if version is not None and version != STACK_REQUIREMENTS[name] + } + missing = [name for name, version in required_versions.items() if version is None] + + stack_report = { + "timestamp": datetime.now(tz=DEFAULT_TIMEZONE).isoformat(), + "python": platform.python_version(), + "required": required_versions, + "expected": STACK_REQUIREMENTS, + "optional": optional_versions, + "status": "ok" if not (mismatches or missing) else "error", + } + + report_path = reports_path / "run_stack.json" + with report_path.open("w", encoding="utf-8") as handle: + json.dump(stack_report, handle, indent=2) + + if missing or mismatches: + problems: list[str] = [] + if missing: + problems.append("missing: " + ", ".join(sorted(missing))) + if mismatches: + issues = ", ".join( + f"{name}=={version} (expected {STACK_REQUIREMENTS[name]})" + for name, version in mismatches.items() + ) + problems.append(f"mismatched: {issues}") + joined = "; ".join(problems) + raise RuntimeError( + "Numeric stack mismatch detected (" + joined + "). " + "Run scripts/setup_env.ps1 to recreate the supported environment." + ) + + return {name: version or "" for name, version in required_versions.items()} + + +def set_seeds(seed: int) -> None: + """Set deterministic seeds for Python and NumPy.""" + + random.seed(seed) + os.environ["PYTHONHASHSEED"] = str(seed) + import numpy as np # Imported lazily to avoid top-level dependency during import + + np.random.seed(seed) diff --git a/toptek/gui/widgets.py b/toptek/gui/widgets.py index a3f8995..88b52a2 100644 --- a/toptek/gui/widgets.py +++ b/toptek/gui/widgets.py @@ -5,7 +5,7 @@ import os import tkinter as tk from tkinter import messagebox, ttk -from typing import Dict +from typing import Any, Dict, TypeVar, cast import numpy as np @@ -17,6 +17,9 @@ from . import DARK_PALETTE, TEXT_WIDGET_DEFAULTS +T = TypeVar("T") + + class BaseTab(ttk.Frame): """Base class providing convenience utilities for tabs.""" @@ -28,6 +31,7 @@ def __init__( ) -> None: super().__init__(master, style="DashboardBackground.TFrame") self.configs = configs + self._ui_config = configs.get("ui", {}) self.paths = paths self.logger = utils.build_logger(self.__class__.__name__) @@ -52,6 +56,18 @@ def update_section(self, section: str, updates: Dict[str, object]) -> None: self.configs.setdefault(section, {}).update(updates) + def ui_setting(self, *keys: str, default: T) -> T: + """Retrieve a nested UI configuration value with a fallback.""" + + data: Any = self._ui_config + for key in keys: + if not isinstance(data, dict): + return default + data = data.get(key) + if data is None: + return default + return cast(T, data) + class DashboardTab(BaseTab): """Mission control overview with themed dashboard cards.""" @@ -277,9 +293,10 @@ def _build(self) -> None: style="Neutral.TButton", command=self._verify_env, ).pack(side=tk.LEFT) - self.status = ttk.Label( - actions, text="Awaiting verification", style="StatusInfo.TLabel" + login_idle = self.ui_setting( + "status", "login", "idle", default="Awaiting verification" ) + self.status = ttk.Label(actions, text=login_idle, style="StatusInfo.TLabel") self.status.pack(side=tk.LEFT, padx=12) def _env_value(self, key: str) -> str: @@ -292,10 +309,13 @@ def _save_env(self) -> None: handle.write(f"{key}={var.get()}\n") messagebox.showinfo("Settings", f"Saved credentials to {env_path}") self.update_section("login", {"saved": True, "verified": False}) - self.status.config( - text="Saved. Run verification to confirm access.", - foreground=DARK_PALETTE["warning"], + saved_msg = self.ui_setting( + "status", + "login", + "saved", + default="Saved. Run verification to confirm access.", ) + self.status.config(text=saved_msg, foreground=DARK_PALETTE["warning"]) def _verify_env(self) -> None: missing = [key for key, var in self.vars.items() if not var.get().strip()] @@ -308,10 +328,13 @@ def _verify_env(self) -> None: messagebox.showwarning("Verification", f"Provide values for: {details}") return self.update_section("login", {"saved": True, "verified": True}) - self.status.config( - text="All keys present. Proceed to Research ▶", - foreground=DARK_PALETTE["success"], + verified_msg = self.ui_setting( + "status", + "login", + "verified", + default="All keys present. Proceed to Research ▶", ) + self.status.config(text=verified_msg, foreground=DARK_PALETTE["success"]) messagebox.showinfo( "Verification", "Environment entries look complete. Continue to the next tab.", @@ -346,9 +369,12 @@ def _build(self) -> None: style="Surface.TLabel", ).grid(row=0, column=0, columnspan=4, sticky=tk.W, pady=(0, 8)) - self.symbol_var = tk.StringVar(value="ES=F") - self.timeframe_var = tk.StringVar(value="5m") - self.bars_var = tk.IntVar(value=240) + shell_symbol = self.ui_setting("shell", "symbol", default="ES=F") + shell_interval = self.ui_setting("shell", "interval", default="5m") + shell_bars = int(self.ui_setting("shell", "research_bars", default=240)) + self.symbol_var = tk.StringVar(value=shell_symbol) + self.timeframe_var = tk.StringVar(value=shell_interval) + self.bars_var = tk.IntVar(value=shell_bars) ttk.Label(controls, text="Symbol", style="Surface.TLabel").grid( row=1, column=0, sticky=tk.W, padx=(0, 6) @@ -473,9 +499,12 @@ def _build(self) -> None: style="Surface.TLabel", ).grid(row=0, column=0, columnspan=4, sticky=tk.W) - self.model_type = tk.StringVar(value="logistic") - self.calibrate_var = tk.BooleanVar(value=True) - self.lookback_var = tk.IntVar(value=480) + default_model = self.ui_setting("shell", "model", default="logistic") + default_calibrate = bool(self.ui_setting("shell", "calibrate", default=True)) + default_lookback = int(self.ui_setting("shell", "lookback_bars", default=480)) + self.model_type = tk.StringVar(value=default_model) + self.calibrate_var = tk.BooleanVar(value=default_calibrate) + self.lookback_var = tk.IntVar(value=default_lookback) ttk.Label(config, text="Model", style="Surface.TLabel").grid( row=1, column=0, sticky=tk.W, pady=(8, 0) @@ -525,8 +554,11 @@ def _build(self) -> None: self.output = tk.Text(self, height=12) self.style_text_widget(self.output) self.output.pack(fill=tk.BOTH, expand=True, padx=10, pady=(6, 4)) + training_idle = self.ui_setting( + "status", "training", "idle", default="Awaiting training run" + ) self.status = ttk.Label( - self, text="Awaiting training run", anchor=tk.W, style="StatusInfo.TLabel" + self, text=training_idle, anchor=tk.W, style="StatusInfo.TLabel" ) self.status.pack(fill=tk.X, padx=12, pady=(0, 12)) @@ -670,8 +702,14 @@ def _train_model(self) -> None: self.output.insert(tk.END, json_dumps(payload)) self.update_section("training", payload) if not calibration_failed: + success_msg = self.ui_setting( + "status", + "training", + "success", + default="Model artefact refreshed. Continue to Backtest ▶", + ) self.status.config( - text="Model artefact refreshed. Continue to Backtest ▶", + text=success_msg, foreground=DARK_PALETTE["accent_alt"], ) @@ -704,8 +742,10 @@ def _build(self) -> None: style="Surface.TLabel", ).grid(row=0, column=0, columnspan=4, sticky=tk.W) - self.sample_var = tk.IntVar(value=720) - self.strategy_var = tk.StringVar(value="momentum") + default_sample = int(self.ui_setting("shell", "simulation_bars", default=720)) + default_strategy = self.ui_setting("shell", "playbook", default="momentum") + self.sample_var = tk.IntVar(value=default_sample) + self.strategy_var = tk.StringVar(value=default_strategy) ttk.Label(controls, text="Sample bars", style="Surface.TLabel").grid( row=1, column=0, sticky=tk.W, pady=(8, 0) @@ -742,8 +782,11 @@ def _build(self) -> None: self.output = tk.Text(self, height=14) self.style_text_widget(self.output) self.output.pack(fill=tk.BOTH, expand=True, padx=10, pady=(6, 4)) + backtest_idle = self.ui_setting( + "status", "backtest", "idle", default="No simulations yet" + ) self.status = ttk.Label( - self, text="No simulations yet", anchor=tk.W, style="StatusInfo.TLabel" + self, text=backtest_idle, anchor=tk.W, style="StatusInfo.TLabel" ) self.status.pack(fill=tk.X, padx=12, pady=(0, 12)) @@ -771,10 +814,13 @@ def _run_backtest(self) -> None: } self.output.delete("1.0", tk.END) self.output.insert(tk.END, json_dumps(payload)) - self.status.config( - text="Sim complete. If expectancy holds, draft a manual trade plan ▶", - foreground=DARK_PALETTE["accent_alt"], + success_msg = self.ui_setting( + "status", + "backtest", + "success", + default="Sim complete. If expectancy holds, draft a manual trade plan ▶", ) + self.status.config(text=success_msg, foreground=DARK_PALETTE["accent_alt"]) self.update_section("backtest", payload) @@ -787,7 +833,10 @@ def __init__( configs: Dict[str, Dict[str, object]], paths: utils.AppPaths, ) -> None: - self.guard_status = tk.StringVar(value="Topstep Guard: pending review") + guard_pending = self.ui_setting( + "status", "guard", "pending", default="Topstep Guard: pending review" + ) + self.guard_status = tk.StringVar(value=guard_pending) self.guard_label: ttk.Label | None = None super().__init__(master, configs, paths) self._build() @@ -824,10 +873,15 @@ def _build(self) -> None: self.output = tk.Text(self, height=12) self.style_text_widget(self.output) self.output.pack(fill=tk.BOTH, expand=True, padx=10, pady=(6, 12)) + guard_intro = self.ui_setting( + "status", + "guard", + "intro", + default="Manual execution only. Awaiting guard refresh...", + ) self.output.insert( tk.END, - "Manual execution only. Awaiting guard refresh...\n" - "Use insights from earlier tabs to justify every trade — and always log rationale.", + f"{guard_intro}\nUse insights from earlier tabs to justify every trade — and always log rationale.", ) def _show_risk(self) -> None: @@ -882,7 +936,13 @@ def _show_risk(self) -> None: if guard == "OK": messagebox.showinfo("Topstep Guard", guard_message) else: - warning_message = f"{guard_message}\n\nDEFENSIVE_MODE active. Stand down and review your journal before trading." + warning_suffix = self.ui_setting( + "status", + "guard", + "defensive_warning", + default="DEFENSIVE_MODE active. Stand down and review your journal before trading.", + ) + warning_message = f"{guard_message}\n\n{warning_suffix}" messagebox.showwarning("Topstep Guard", warning_message) diff --git a/toptek/main.py b/toptek/main.py index 34f80a0..56eb862 100644 --- a/toptek/main.py +++ b/toptek/main.py @@ -4,31 +4,54 @@ import argparse from pathlib import Path -from typing import Dict +from typing import Dict, Tuple -import numpy as np from dotenv import load_dotenv -from core import backtest, data, model, risk, utils +from core import backtest, data, model, risk, ui_config, utils from toptek.features import build_features ROOT = Path(__file__).parent -def load_configs() -> Dict[str, Dict[str, object]]: - """Load configuration files into a dictionary.""" +def load_configs() -> Tuple[Dict[str, Dict[str, object]], ui_config.UIConfig]: + """Load configuration files along with UI defaults.""" app_cfg = utils.load_yaml(ROOT / "config" / "app.yml") risk_cfg = utils.load_yaml(ROOT / "config" / "risk.yml") feature_cfg = utils.load_yaml(ROOT / "config" / "features.yml") - return {"app": app_cfg, "risk": risk_cfg, "features": feature_cfg} + ui_path = ROOT.parent / "configs" / "ui.yml" + ui_cfg = ui_config.load_ui_config(ui_path) + return ( + { + "app": app_cfg, + "risk": risk_cfg, + "features": feature_cfg, + "ui": ui_cfg.as_dict(), + }, + ui_cfg, + ) -def run_cli(args: argparse.Namespace, configs: Dict[str, Dict[str, object]], paths: utils.AppPaths) -> None: +def run_cli( + args: argparse.Namespace, + configs: Dict[str, Dict[str, object]], + paths: utils.AppPaths, +) -> None: """Dispatch CLI commands based on ``args``.""" + import numpy as np + logger = utils.build_logger("toptek") + logger.info( + "CLI mode=%s symbol=%s timeframe=%s lookback=%s fps=%s", + args.cli, + args.symbol, + args.timeframe, + args.lookback, + getattr(args, "fps", None), + ) risk_profile = risk.RiskProfile( max_position_size=configs["risk"].get("max_position_size", 1), max_daily_loss=configs["risk"].get("max_daily_loss", 1000), @@ -38,7 +61,8 @@ def run_cli(args: argparse.Namespace, configs: Dict[str, Dict[str, object]], pat cooldown_minutes=configs["risk"].get("cooldown_minutes", 30), ) - df = data.sample_dataframe() + lookback = int(args.lookback) + df = data.sample_dataframe(lookback) try: bundle = build_features(df, cache_dir=paths.cache) except ValueError as exc: @@ -56,14 +80,22 @@ def run_cli(args: argparse.Namespace, configs: Dict[str, Dict[str, object]], pat if args.cli == "train": if np.unique(y).size < 2: - logger.error("Training aborted: dataset lacks class diversity after cleaning") + logger.error( + "Training aborted: dataset lacks class diversity after cleaning" + ) return try: - result = model.train_classifier(X, y, model_type=args.model, models_dir=paths.models) + result = model.train_classifier( + X, y, model_type=args.model, models_dir=paths.models + ) except ValueError as exc: logger.error("Training failed: %s", exc) return - logger.info("Training complete: metrics=%s threshold=%.2f", result.metrics, result.threshold) + logger.info( + "Training complete: metrics=%s threshold=%.2f", + result.metrics, + result.threshold, + ) elif args.cli == "backtest": returns = np.log(df["close"]).diff().fillna(0).to_numpy() signals = (returns > 0).astype(int) @@ -84,7 +116,9 @@ def run_cli(args: argparse.Namespace, configs: Dict[str, Dict[str, object]], pat if atr_series.size: atr_value = float(atr_series[-1]) if atr_value is None: - logger.warning("ATR14 feature unavailable from bundle; unable to size position") + logger.warning( + "ATR14 feature unavailable from bundle; unable to size position" + ) return size = risk.position_size( account_balance=50000, @@ -97,28 +131,113 @@ def run_cli(args: argparse.Namespace, configs: Dict[str, Dict[str, object]], pat logger.error("Unknown CLI command: %s", args.cli) -def parse_args() -> argparse.Namespace: - """Parse command-line arguments.""" +def parse_args(settings: ui_config.UIConfig) -> argparse.Namespace: + """Parse command-line arguments with defaults sourced from ``settings``.""" parser = argparse.ArgumentParser(description="Toptek manual trading toolkit") - parser.add_argument("--cli", choices=["train", "backtest", "paper"], help="Run in CLI mode instead of GUI") - parser.add_argument("--symbol", default="ESZ5", help="Futures symbol") - parser.add_argument("--timeframe", default="5m", help="Bar timeframe") - parser.add_argument("--lookback", default="90d", help="Lookback period for CLI commands") - parser.add_argument("--model", default="logistic", choices=["logistic", "gbm"], help="Model type for training") + parser.add_argument( + "--cli", + choices=["train", "backtest", "paper"], + help="Run in CLI mode instead of GUI", + ) + parser.add_argument( + "--symbol", + help=f"Futures symbol (default: {settings.shell.symbol})", + ) + parser.add_argument( + "--timeframe", + help=f"Bar timeframe (default: {settings.shell.interval})", + ) + parser.add_argument( + "--lookback", + type=int, + help=f"Synthetic bar count for CLI flows (default: {settings.shell.lookback_bars})", + ) + parser.add_argument( + "--model", + choices=["logistic", "gbm"], + help=f"Model type for training (default: {settings.shell.model})", + ) + parser.add_argument( + "--fps", + type=int, + help=f"Live chart frames per second (default: {settings.chart.fps})", + ) parser.add_argument("--start", help="Start date for backtest") return parser.parse_args() +def _apply_cli_overrides( + settings: ui_config.UIConfig, + *, + symbol: str | None, + timeframe: str | None, + lookback: int | None, + model_name: str | None, + fps: int | None, +) -> ui_config.UIConfig: + shell_updates: Dict[str, object] = {} + chart_updates: Dict[str, object] = {} + if symbol is not None: + shell_updates["symbol"] = symbol + if timeframe is not None: + shell_updates["interval"] = timeframe + if lookback is not None: + shell_updates["lookback_bars"] = lookback + if model_name is not None: + shell_updates["model"] = model_name + if fps is not None: + chart_updates["fps"] = fps + if not shell_updates and not chart_updates: + return settings + return settings.with_updates( + shell=shell_updates if shell_updates else None, + chart=chart_updates if chart_updates else None, + ) + + def main() -> None: """Program entry point.""" load_dotenv(ROOT / ".env") - configs = load_configs() - paths = utils.build_paths(ROOT, configs["app"]) + configs, ui_settings = load_configs() + app_config = configs.get("app", {}) + paths = utils.build_paths(ROOT, app_config) utils.ensure_directories(paths) - args = parse_args() + log_level = str(app_config.get("log_level", "INFO")) + utils.configure_logging(paths.logs, level=log_level) + stack_versions = utils.assert_numeric_stack(reports_dir=paths.reports) + utils.set_seeds(42) + + args = parse_args(ui_settings) + raw_symbol = args.symbol + raw_timeframe = args.timeframe + raw_lookback = args.lookback + raw_model = args.model + raw_fps = args.fps + args.symbol = raw_symbol or ui_settings.shell.symbol + args.timeframe = raw_timeframe or ui_settings.shell.interval + args.lookback = raw_lookback or ui_settings.shell.lookback_bars + args.model = raw_model or ui_settings.shell.model + args.fps = raw_fps or ui_settings.chart.fps + ui_settings = _apply_cli_overrides( + ui_settings, + symbol=raw_symbol, + timeframe=raw_timeframe, + lookback=raw_lookback, + model_name=raw_model, + fps=raw_fps, + ) + configs["ui"] = ui_settings.as_dict() + + logger = utils.build_logger("toptek", level=log_level) + logger.info( + "Numeric stack verified: numpy=%s scipy=%s sklearn=%s", + stack_versions["numpy"], + stack_versions["scipy"], + stack_versions["sklearn"], + ) if args.cli: run_cli(args, configs, paths) return