init 0.2.9: pretty minor changes, mostly bug fixes and publishing to PyPI for easier installation #77
Conversation
📝 Walkthrough

This PR coordinates a version bump to 0.2.9 with infrastructure improvements: sccache-based CI caching, UV-based MkDocs installation, expanded Tokio features across the dependency tree, multi-stage Docker builds, Python import robustness enhancements, and build profile optimizations.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes
🚥 Pre-merge checks: ✅ Passed checks (3 passed)
Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request delivers version 0.2.9 of the project, primarily focusing on stability, performance, and ease of installation. It incorporates crucial bug fixes, enhances the Python bindings for better robustness, and optimizes the build process to reduce CI times. The update also prepares the project for broader Python compatibility and simplifies the installation experience by publishing to PyPI.
Code Review
This pull request introduces version 0.2.9, focusing on bug fixes, dependency updates, and build process improvements. The changes to use cargo-chef in the Dockerfile and optimize Rust build profiles are great for improving CI and development build times. The refactoring of Python import logic and making custom validators safer are also solid improvements. However, I found a critical issue in the tracing.py module where a refactoring seems to have accidentally removed public methods from the Logger and LogBuilder classes, breaking the logging functionality. The rest of the changes look good.
```python
class Logger:
    """
    A logger class wrapping the RustLogger functionality.

    Attributes:
        logger (RustLogger): The underlying RustLogger instance.
    """

    def __init__(self):
        RustLogger = _get_rust_attr("Logger")
        self.logger = RustLogger()


class LogBuilder:
    """
    A builder class for configuring the logs, create log layers and iterators.

    Attributes:
        builder (RustLogBuilder): The underlying RustLogBuilder instance.
    """

    def __init__(self):
        RustLogBuilder = _get_rust_attr("LogBuilder")
        self.builder = RustLogBuilder()
```
The refactoring to use _get_rust_attr is a good improvement for centralizing import logic. However, it appears that the public methods for the Logger and LogBuilder classes were accidentally removed in the process. This breaks their functionality as they no longer expose any logging or building capabilities. The wrapper classes should delegate calls to the underlying Rust objects.
```python
class Logger:
    """
    A logger class wrapping the RustLogger functionality.

    Attributes:
        logger (RustLogger): The underlying RustLogger instance.
    """

    def __init__(self):
        RustLogger = _get_rust_attr("Logger")
        self.logger = RustLogger()

    def debug(self, message):
        """
        Log a debug message.

        Args:
            message (str): The message to log.
        """
        self.logger.debug(str(message))

    def info(self, message):
        """
        Log an informational message.

        Args:
            message (str): The message to log.
        """
        self.logger.info(str(message))

    def warn(self, message):
        """
        Log a warning message.

        Args:
            message (str): The message to log.
        """
        self.logger.warn(str(message))

    def error(self, message):
        """
        Log an error message.

        Args:
            message (str): The message to log.
        """
        self.logger.error(str(message))


class LogBuilder:
    """
    A builder class for configuring the logs, create log layers and iterators.

    Attributes:
        builder (RustLogBuilder): The underlying RustLogBuilder instance.
    """

    def __init__(self):
        RustLogBuilder = _get_rust_attr("LogBuilder")
        self.builder = RustLogBuilder()

    def create_logs_iterator(self, level: str = "DEBUG", timeout: Optional[timedelta] = None) -> LogSubscription:
        """
        Create a new logs iterator with the specified level and timeout.

        Args:
            level (str): The logging level (default is "DEBUG").
            timeout (Optional[timedelta]): Optional timeout for the iterator.

        Returns:
            LogSubscription: A new LogSubscription instance that supports both asynchronous and synchronous iterators.
        """
        return LogSubscription(self.builder.create_logs_iterator(level, timeout))

    def log_file(self, path: str = "logs.log", level: str = "DEBUG") -> "LogBuilder":
        """
        Configure logging to a file.

        Args:
            path (str): The path where logs will be stored (default is "logs.log").
            level (str): The minimum log level for this file handler.
        """
        self.builder.log_file(path, level)
        return self

    def terminal(self, level: str = "DEBUG") -> "LogBuilder":
        """
        Configure logging to the terminal.

        Args:
            level (str): The minimum log level for this terminal handler.
        """
        self.builder.terminal(level)
        return self

    def build(self):
        """
        Build and initialize the logging configuration. This function should be called only once per execution.
        """
        self.builder.build()
```
Actionable comments posted: 11
🧹 Nitpick comments (3)
docker/linux/Dockerfile (1)
33-33: Consider adding `--out dist` for consistency with CI workflow.

The CI workflow (in `.github/workflows/CI.yml`) uses `--out dist` to specify the output directory. While the default `target/wheels/` works, aligning with CI conventions improves maintainability.

```diff
-RUN maturin build --release --strip --interpreter python3
+RUN maturin build --release --strip --interpreter python3 --out dist
```

Note: If you apply this, also update line 38 to `COPY --from=builder /app/dist/*.whl /app/`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docker/linux/Dockerfile` at line 33, Update the maturin build invocation in the Dockerfile: modify the RUN command that calls "maturin build --release --strip --interpreter python3" to include "--out dist" so the build artifacts land in /app/dist; then adjust the subsequent COPY step that references the wheel (the COPY --from=builder ... *.whl line) to copy from /app/dist/*.whl instead of the current target directory.

README.md (1)
116-155: Document the PyPI install path as the primary option.

This release is explicitly about easier installation via PyPI, but Option A still points users to platform-specific GitHub wheel URLs. I'd add `pip install binaryoptionstoolsv2` first and keep the direct release URLs as a fallback.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@README.md` around lines 116 - 155, Update the installation section to show the PyPI package as the primary, e.g., add a new "Option A: PyPI (Recommended)" with the command `pip install binaryoptionstoolsv2` (or replace current Option A heading with this), then demote the GitHub wheel URLs to a new "Option B: Prebuilt Wheels (Fallback)" and keep the existing platform-specific wheel examples and the existing "Option B/C: Build from Source" instructions (rename as needed) so the wheel URLs remain as fallbacks; modify the README headings and example commands accordingly to ensure users see `pip install binaryoptionstoolsv2` first while preserving the current direct wheel and build instructions.

CHANGELOG.md (1)
16-24: Make the 0.2.9 release notes more specific.

Entries like "Updated python support", "Balance returning -1 (possibly)", and "Unsafe unwraps" are too vague for a changelog. Please name the actual supported version range, affected API/failure mode, and the PyPI publishing change; "double-encoded JSON messages" also reads cleaner on Line 17.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@CHANGELOG.md` around lines 16 - 24, Update the 0.2.9 CHANGELOG entries to be specific: change "Updated python support" to name the supported Python versions and PyPI publishing change (e.g., "Python 3.8–3.11 support; publish wheel to PyPI"), change "Improved SSID parsing to prevent double encoded JSON msgs" to "Improved SSID parsing to prevent double-encoded JSON messages", replace "Balance returning -1 (possibly)" with the affected API and failure mode (e.g., "Fix: account balance API returned -1 on malformed balance responses in BalanceService"), and replace "Unsafe unwraps" with the files/functions where unwraps were fixed (reference specific symbols or modules, e.g., "Fixed unsafe unwraps in Ssid::Display and auth_handshake::parse_credentials"). Also keep the existing Ssid::Display note but ensure wording clarifies it now returns the raw auth message (`42[\"auth\",{...}]`) sent during the WebSocket handshake.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In @.github/workflows/CI.yml:
- Around line 30-33: Replace the unsafe curl|sh installer step named "Install
uv" with the official action astral-sh/setup-uv pinned to a stable version
(e.g., astral-sh/setup-uv@v7) and keep the subsequent "Install dependencies with
uv" step as-is so uv is on PATH for the next step; alternatively, if you must
keep the shell installer, explicitly export the install directory to
$GITHUB_PATH after the installer step so the "Install dependencies with uv" step
can find uv. Use the job step names "Install uv" and "Install dependencies with
uv" to locate and update the workflow.
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/config.py`:
- Around line 71-84: The _update_pyconfig method and the to_dict method are
dropping public fields terminal_logging, log_level, and extra_duration; update
_update_pyconfig to assign self._pyconfig.terminal_logging,
self._pyconfig.log_level, and self._pyconfig.extra_duration from the Config
instance (same pattern used for max_allowed_loops, sleep_interval, etc.), and
update to_dict to include "terminal_logging", "log_level", and "extra_duration"
keys so JSON/FFI round-trips preserve those values; refer to methods
_update_pyconfig, to_dict and attributes terminal_logging, log_level,
extra_duration to locate the changes.
- Around line 149-151: The update() method currently uses hasattr(self, key)
which permits overwriting methods/properties/private state (e.g., to_json,
pyconfig, _locked, _pyconfig); change update() to only accept keys that are
actual dataclass fields (same filter used by from_dict()), e.g., iterate over
the dataclass field names (self.__dataclass_fields__ or
dataclasses.fields(self)) and set attributes only when key is in that set so
methods/properties/private attrs cannot be overwritten.
- Around line 13-22: The fallback uses an invalid level-2 relative import ("from
..BinaryOptionsToolsV2 import PyConfig") which raises "attempted relative import
beyond top-level package"; update the fallback inside _get_pyconfig to use a
level-1 relative import (e.g., "from .BinaryOptionsToolsV2 import PyConfig" or
"from . import PyConfig" if PyConfig is exported in __init__.py) so that
importing PyConfig from the BinaryOptionsToolsV2 package succeeds; ensure the
import path references the same symbol names (PyConfig, BinaryOptionsToolsV2) as
in the current diff.
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/tracing.py`:
- Around line 92-96: The current try/except in start_logs swallows all
exceptions (from os.makedirs or start_tracing) by printing and returning,
contradicting the docstring; change start_logs so startup failures propagate:
either remove the broad try/except or catch only expected exceptions and
re-raise after logging; if you want a log message, use the module logger (not
print) and then raise the original exception (or a wrapped one) so callers see
the failure; ensure references to os.makedirs, start_tracing, and the start_logs
function are updated accordingly.
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/validator.py`:
- Around line 11-20: The fallback relative import in
BinaryOptionsToolsV2.validator is using a level-2 relative import which fails;
update the fallback import in the except block to use a level-1 relative import
so the sibling compiled module can be resolved: change the statement that
imports RawValidator (currently using from ..BinaryOptionsToolsV2 import
RawValidator) to a single-dot relative import (from .BinaryOptionsToolsV2 import
RawValidator) so RawValidator can be returned correctly; keep the subsequent
ImportError fallback to import BinaryOptionsToolsV2 and
getattr(BinaryOptionsToolsV2, "RawValidator") as-is.
In `@docker/linux/Dockerfile`:
- Around line 35-41: The final runner stage runs as root; create a non-root
user/group in the runner stage, chown the /app directory (and copied wheel files
from the COPY step) to that user, and set USER to that account before CMD so the
container no longer runs as root. Locate the runner stage (FROM
debian:bookworm-slim AS runner) where COPY --from=builder
/app/target/wheels/*.whl /app/ occurs, add steps to create a minimal user/group
(system or non-login), change ownership of /app and its contents, and then set
USER to that new user so subsequent commands (CMD ["ls", "-l", "/app"]) run
unprivileged.
- Line 24: The RUN instruction currently uses an unpinned package install ("pip
install maturin") which can lead to unreproducible builds; change that to pin a
specific stable maturin version (e.g., replace "pip install maturin" with "pip
install maturin==<stable_version>") or introduce a build ARG like
MATURIN_VERSION and use it in the RUN line to install
"maturin==${MATURIN_VERSION}" so builds remain reproducible and easy to update.
In `@examples/python/async/get_candles.py`:
- Around line 20-23: The example is hard-coding a 60s frame in the call to
get_candles, causing duplicate output across the time_frames loop; update the
call to use the loop variable (e.g., replace the literal 60 with the
frame/time_frame variable used in the loop) or remove the surrounding loop so
you only fetch once; ensure you modify the get_candles(...) invocation (the
symbol to change) to pass the intended frame variable instead of the constant.
In `@README.md`:
- Around line 118-135: The README and packaging metadata claim Python 3.8
compatibility but the wheels and Rust pyo3 binding use ABI3 for CPython 3.9
(`cp39-abi3`, `pyo3` `abi3-py39`), so update docs and pyproject to require
Python >=3.9: change the README text from "3.8 - 3.15" to "3.9+", remove the
Python 3.8 badge/mention, and in BinaryOptionsToolsV2/pyproject.toml update
requires-python to ">=3.9" and remove any Python 3.8 classifiers so the package
metadata matches the built wheels and pyo3 ABI configuration.
In `@tests/python/core/test_validator.py`:
- Around line 79-84: The test should avoid E712 and ARG001: rename the unused
parameter in crashing_func from msg to _msg (e.g., def crashing_func(_msg: str)
-> bool) and change the assertion to use identity comparison: assert
v.check("any message") is False; this keeps the intended behavior
(Validator.custom(crashing_func) returns False on exception) while satisfying
lint rules.
---
Nitpick comments:
In `@CHANGELOG.md`:
- Around line 16-24: Update the 0.2.9 CHANGELOG entries to be specific: change
"Updated python support" to name the supported Python versions and PyPI
publishing change (e.g., "Python 3.8–3.11 support; publish wheel to PyPI"),
change "Improved SSID parsing to prevent double encoded JSON msgs" to "Improved
SSID parsing to prevent double-encoded JSON messages", replace "Balance
returning -1 (possibly)" with the affected API and failure mode (e.g., "Fix:
account balance API returned -1 on malformed balance responses in
BalanceService"), and replace "Unsafe unwraps" with the files/functions where
unwraps were fixed (reference specific symbols or modules, e.g., "Fixed unsafe
unwraps in Ssid::Display and auth_handshake::parse_credentials"). Also keep the
existing Ssid::Display note but ensure wording clarifies it now returns the raw
auth message (`42[\"auth\",{...}]`) sent during the WebSocket handshake.
In `@docker/linux/Dockerfile`:
- Line 33: Update the maturin build invocation in the Dockerfile: modify the RUN
command that calls "maturin build --release --strip --interpreter python3" to
include "--out dist" so the build artifacts land in /app/dist; then adjust the
subsequent COPY step that references the wheel (the COPY --from=builder ...
*.whl line) to copy from /app/dist/*.whl instead of the current target
directory.
In `@README.md`:
- Around line 116-155: Update the installation section to show the PyPI package
as the primary, e.g., add a new "Option A: PyPI (Recommended)" with the command
`pip install binaryoptionstoolsv2` (or replace current Option A heading with
this), then demote the GitHub wheel URLs to a new "Option B: Prebuilt Wheels
(Fallback)" and keep the existing platform-specific wheel examples and the
existing "Option B/C: Build from Source" instructions (rename as needed) so the
wheel URLs remain as fallbacks; modify the README headings and example commands
accordingly to ensure users see `pip install binaryoptionstoolsv2` first while
preserving the current direct wheel and build instructions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 9a0314dc-48cc-406b-9f0d-694b72ddac2d
⛔ Files ignored due to path filters (1)
`bun.lock` is excluded by `!**/*.lock`
📒 Files selected for processing (20)
- .github/workflows/CI.yml
- .gitignore
- .serena/.gitignore
- .serena/project.yml
- BinaryOptionsToolsUni/Cargo.toml
- BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/config.py
- BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/tracing.py
- BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/validator.py
- BinaryOptionsToolsV2/rust/Cargo.toml
- CHANGELOG.md
- CITATION.cff
- Cargo.toml
- README.md
- crates/binary_options_tools/Cargo.toml
- crates/core-pre/Cargo.toml
- crates/core/Cargo.toml
- docker/linux/Dockerfile
- docs/examples
- examples/python/async/get_candles.py
- tests/python/core/test_validator.py
💤 Files with no reviewable changes (3)
- .serena/.gitignore
- docs/examples
- .serena/project.yml
```yaml
      - name: Install uv
        run: curl -LsSf https://astral.sh/uv/install.sh | sh
      - name: Install dependencies with uv
        run: uv pip install --system mkdocs-material "mkdocstrings[python]"
```
🧩 Analysis chain
🌐 Web query:
What is Astral's recommended way to install `uv` in GitHub Actions? Does the shell installer persist `uv` on PATH across later steps, or should a pinned `astral-sh/setup-uv` action / `$GITHUB_PATH` be used instead?
💡 Result:
Astral’s docs recommend installing uv in GitHub Actions via the official astral-sh/setup-uv action (it installs uv, adds it to PATH for later steps, and can persist/cache uv’s cache). (docs.astral.sh)
If you use the shell installer (curl … | sh) in Actions, don’t rely on it “sticking” on PATH across steps: the installer’s “put uv on PATH” behavior is primarily by modifying shell profiles, which later GHA steps won’t necessarily source. In CI, Astral instead points to using an unmanaged install to a known directory and then you must ensure that directory is on PATH yourself (e.g., via $GITHUB_PATH). (docs.astral.sh)
So: prefer a pinned astral-sh/setup-uv (e.g., @v7 or a commit SHA), or if you use the installer, add the install dir to $GITHUB_PATH explicitly. (docs.astral.sh)
Citations:
- 1: https://docs.astral.sh/uv/guides/integration/github/
- 2: https://docs.astral.sh/uv/reference/installer/
- 3: https://docs.astral.sh/uv/guides/integration/github/
Use astral-sh/setup-uv action instead of curl | sh.
This job has contents: write permission, making the piped installer a supply-chain risk. Additionally, shell-profile edits don't persist on PATH across Actions steps, so the next step may fail. Use the official astral-sh/setup-uv action (pinned to a version like @v7), or if using the shell installer, explicitly add the install directory to $GITHUB_PATH.
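A minimal sketch of the pinned-action approach, keeping the existing step names (the `@v7` tag follows the query result above and should be verified against the latest release before pinning):

```yaml
      - name: Install uv
        uses: astral-sh/setup-uv@v7
      - name: Install dependencies with uv
        run: uv pip install --system mkdocs-material "mkdocstrings[python]"
```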
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In @.github/workflows/CI.yml around lines 30 - 33, Replace the unsafe curl|sh
installer step named "Install uv" with the official action astral-sh/setup-uv
pinned to a stable version (e.g., astral-sh/setup-uv@v7) and keep the subsequent
"Install dependencies with uv" step as-is so uv is on PATH for the next step;
alternatively, if you must keep the shell installer, explicitly export the
install directory to $GITHUB_PATH after the installer step so the "Install
dependencies with uv" step can find uv. Use the job step names "Install uv" and
"Install dependencies with uv" to locate and update the workflow.
```python
    except (ImportError, AttributeError):
        # Fallback to direct import
        try:
            from ..BinaryOptionsToolsV2 import PyConfig

            return PyConfig
        except ImportError:
            import BinaryOptionsToolsV2

            return getattr(BinaryOptionsToolsV2, "PyConfig")
```
🧩 Analysis chain
🏁 Script executed:
```shell
#!/bin/bash
set -euo pipefail
python - <<'PY'
from importlib._bootstrap import _resolve_name
for level in (1, 2):
    try:
        resolved = _resolve_name("BinaryOptionsToolsV2", "BinaryOptionsToolsV2", level)
        print(f"level={level}: {resolved}")
    except Exception as exc:
        print(f"level={level}: {type(exc).__name__}: {exc}")
PY
echo
echo "Package exports:"
fd -p '__init__.py' BinaryOptionsToolsV2/python/BinaryOptionsToolsV2 -x sh -c 'echo "### $1"; sed -n "1,200p" "$1"' sh {}
```

Repository: ChipaDevTeam/BinaryOptionsTools-v2
Length of output: 2642
The second fallback import has an invalid relative import level.

Line 16 uses `from ..BinaryOptionsToolsV2 import PyConfig`, which attempts a level-2 relative import from within the BinaryOptionsToolsV2 package. However, since there is no parent package above BinaryOptionsToolsV2, this relative import fails with `ImportError: attempted relative import beyond top-level package`. This means `_get_pyconfig()` still fails in the exact scenario this fallback is meant to handle.

Change the import to level-1: `from .BinaryOptionsToolsV2 import PyConfig` (or alternatively, `from . import PyConfig` if PyConfig is already exported from the package `__init__.py`).
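The difference between the two levels can be reproduced without installing the package, using the same private `importlib` helper the verification script above relied on (a CPython implementation detail, so treat this as a sketch):

```python
from importlib._bootstrap import _resolve_name

# Level-1 resolves to the sibling compiled module inside the package.
print(_resolve_name("BinaryOptionsToolsV2", "BinaryOptionsToolsV2", 1))
# → BinaryOptionsToolsV2.BinaryOptionsToolsV2

# Level-2 walks above the top-level package and must fail.
try:
    _resolve_name("BinaryOptionsToolsV2", "BinaryOptionsToolsV2", 2)
except ImportError as exc:
    print(type(exc).__name__)  # → ImportError
```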
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/config.py` around lines 13 -
22, The fallback uses an invalid level-2 relative import ("from
..BinaryOptionsToolsV2 import PyConfig") which raises "attempted relative import
beyond top-level package"; update the fallback inside _get_pyconfig to use a
level-1 relative import (e.g., "from .BinaryOptionsToolsV2 import PyConfig" or
"from . import PyConfig" if PyConfig is exported in __init__.py) so that
importing PyConfig from the BinaryOptionsToolsV2 package succeeds; ensure the
import path references the same symbol names (PyConfig, BinaryOptionsToolsV2) as
in the current diff.
```python
    def _update_pyconfig(self):
        """Updates the internal PyConfig with current values"""
        if self._locked:
            raise RuntimeError("Configuration is locked and cannot be modified after being used")

        if self._pyconfig is None:
            self._pyconfig = _get_pyconfig()()

        self._pyconfig.max_allowed_loops = self.max_allowed_loops
        self._pyconfig.sleep_interval = self.sleep_interval
        self._pyconfig.reconnect_time = self.reconnect_time
        self._pyconfig.connection_initialization_timeout_secs = self.connection_initialization_timeout_secs
        self._pyconfig.timeout_secs = self.timeout_secs
        self._pyconfig.urls = self.urls
```
These public config fields are currently dropped.
terminal_logging, log_level, and extra_duration are part of the public Config state, but Lines 79-84 never copy them into self._pyconfig. to_dict() on Lines 119-128 also omits extra_duration, so JSON round-trips and FFI callers can see a different config than the Python object.
Also applies to: 119-128
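A sketch of the fix for `_update_pyconfig` (field names come from the finding above; it assumes the Rust `PyConfig` exposes attributes under these same names, which should be verified against the binding):

```diff
     self._pyconfig.timeout_secs = self.timeout_secs
     self._pyconfig.urls = self.urls
+    self._pyconfig.terminal_logging = self.terminal_logging
+    self._pyconfig.log_level = self.log_level
+    self._pyconfig.extra_duration = self.extra_duration
```

`to_dict()` needs the matching `"terminal_logging"`, `"log_level"`, and `"extra_duration"` keys so JSON round-trips stay lossless.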
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/config.py` around lines 71 -
84, The _update_pyconfig method and the to_dict method are dropping public
fields terminal_logging, log_level, and extra_duration; update _update_pyconfig
to assign self._pyconfig.terminal_logging, self._pyconfig.log_level, and
self._pyconfig.extra_duration from the Config instance (same pattern used for
max_allowed_loops, sleep_interval, etc.), and update to_dict to include
"terminal_logging", "log_level", and "extra_duration" keys so JSON/FFI
round-trips preserve those values; refer to methods _update_pyconfig, to_dict
and attributes terminal_logging, log_level, extra_duration to locate the
changes.
```python
        for key, value in config_dict.items():
            if hasattr(self, key):
                setattr(self, key, value)
```
Restrict update() to declared config fields.
Using hasattr(self, key) here lets callers overwrite methods (to_json), properties (pyconfig), and private state (_locked, _pyconfig). from_dict() already filters to dataclass fields; update() should do the same.
🔒 Suggested fix

```diff
-        for key, value in config_dict.items():
-            if hasattr(self, key):
-                setattr(self, key, value)
+        allowed = set(type(self).__dataclass_fields__)
+        for key, value in config_dict.items():
+            if key in allowed:
+                setattr(self, key, value)
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
        allowed = set(type(self).__dataclass_fields__)
        for key, value in config_dict.items():
            if key in allowed:
                setattr(self, key, value)
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/config.py` around lines 149
- 151, The update() method currently uses hasattr(self, key) which permits
overwriting methods/properties/private state (e.g., to_json, pyconfig, _locked,
_pyconfig); change update() to only accept keys that are actual dataclass fields
(same filter used by from_dict()), e.g., iterate over the dataclass field names
(self.__dataclass_fields__ or dataclasses.fields(self)) and set attributes only
when key is in that set so methods/properties/private attrs cannot be
overwritten.
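The field-filtering behavior is easy to check in isolation; the `Config` below is a minimal stand-in for the real class (not its actual definition), showing that only declared dataclass fields get through:

```python
from dataclasses import dataclass, field


@dataclass
class Config:
    timeout_secs: int = 30
    urls: list = field(default_factory=list)

    def __post_init__(self):
        self._locked = False  # private state, not a dataclass field

    def update(self, config_dict: dict) -> None:
        # Only accept declared dataclass fields, mirroring from_dict().
        allowed = set(type(self).__dataclass_fields__)
        for key, value in config_dict.items():
            if key in allowed:
                setattr(self, key, value)


cfg = Config()
cfg.update({"timeout_secs": 60, "_locked": True, "update": "clobbered"})
print(cfg.timeout_secs)      # → 60
print(cfg._locked)           # → False (private state untouched)
print(callable(cfg.update))  # → True (the method survived)
```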
```python
        try:
            os.makedirs(path, exist_ok=True)
            start_tracing(path, level, terminal, layers)
        except Exception as e:
            print(f"Error starting logs: {e}")
```
Don't swallow logging startup failures.
The docstring says start_logs() raises on error, but Lines 95-96 turn every exception into a print() and return successfully. That hides broken log configuration from callers and makes startup failures look like success.
🛠️ Suggested fix

```diff
-        except Exception as e:
-            print(f"Error starting logs: {e}")
+        except Exception as exc:
+            raise RuntimeError(f"Error starting logs: {exc}") from exc
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```python
        try:
            os.makedirs(path, exist_ok=True)
            start_tracing(path, level, terminal, layers)
        except Exception as exc:
            raise RuntimeError(f"Error starting logs: {exc}") from exc
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@BinaryOptionsToolsV2/python/BinaryOptionsToolsV2/tracing.py` around lines 92
- 96, The current try/except in start_logs swallows all exceptions (from
os.makedirs or start_tracing) by printing and returning, contradicting the
docstring; change start_logs so startup failures propagate: either remove the
broad try/except or catch only expected exceptions and re-raise after logging;
if you want a log message, use the module logger (not print) and then raise the
original exception (or a wrapped one) so callers see the failure; ensure
references to os.makedirs, start_tracing, and the start_logs function are
updated accordingly.
```diff
@@ -14,7 +23,19 @@ ENV PATH="$VENV_PATH/bin:$PATH"
 RUN pip install --upgrade pip && pip install maturin
```
🧩 Analysis chain
🌐 Web query:
What is the latest stable version of maturin for Python packaging?
💡 Result:
The latest stable release of maturin on PyPI is 1.12.6 (released March 1, 2026). [1]
Sources:
[1] (pypi.org)
Citations:
Pin maturin version for reproducible builds.
The pip install maturin command lacks version pinning, which may cause inconsistent builds over time. Use a specific stable version instead.
📦 Suggested fix

```diff
-RUN pip install --upgrade pip && pip install maturin
+RUN pip install --upgrade pip && pip install maturin==1.12.6
```

📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

```dockerfile
RUN pip install --upgrade pip && pip install maturin==1.12.6
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docker/linux/Dockerfile` at line 24, The RUN instruction currently uses an
unpinned package install ("pip install maturin") which can lead to
unreproducible builds; change that to pin a specific stable maturin version
(e.g., replace "pip install maturin" with "pip install
maturin==<stable_version>") or introduce a build ARG like MATURIN_VERSION and
use it in the RUN line to install "maturin==${MATURIN_VERSION}" so builds remain
reproducible and easy to update.
```dockerfile
# Stage 3: Runner (Minimal image)
FROM debian:bookworm-slim AS runner
WORKDIR /app
COPY --from=builder /app/target/wheels/*.whl /app/
# The runner stage just holds the artifacts in this case as it's a build container.
# If it were an app, we would install the wheel here.
CMD ["ls", "-l", "/app"]
```
Add a non-root user to the final stage for security hardening.
The container runs as root, which is flagged by static analysis (DS-0002). Even for a build artifact container, using a non-root user is a security best practice that prevents privilege escalation in case of container escape.
🔒 Suggested fix
```diff
 # Stage 3: Runner (Minimal image)
 FROM debian:bookworm-slim AS runner
+RUN useradd --create-home --shell /bin/bash appuser
 WORKDIR /app
 COPY --from=builder /app/target/wheels/*.whl /app/
+RUN chown -R appuser:appuser /app
+USER appuser
 # The runner stage just holds the artifacts in this case as it's a build container.
 # If it were an app, we would install the wheel here.
 CMD ["ls", "-l", "/app"]
```
📝 Committable suggestion
```dockerfile
# Stage 3: Runner (Minimal image)
FROM debian:bookworm-slim AS runner
RUN useradd --create-home --shell /bin/bash appuser
WORKDIR /app
COPY --from=builder /app/target/wheels/*.whl /app/
RUN chown -R appuser:appuser /app
USER appuser
# The runner stage just holds the artifacts in this case as it's a build container.
# If it were an app, we would install the wheel here.
CMD ["ls", "-l", "/app"]
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@docker/linux/Dockerfile` around lines 35 - 41, The final runner stage runs as
root; create a non-root user/group in the runner stage, chown the /app directory
(and copied wheel files from the COPY step) to that user, and set USER to that
account before CMD so the container no longer runs as root. Locate the runner
stage (FROM debian:bookworm-slim AS runner) where COPY --from=builder
/app/target/wheels/*.whl /app/ occurs, add steps to create a minimal user/group
(system or non-login), change ownership of /app and its contents, and then set
USER to that new user so subsequent commands (CMD ["ls", "-l", "/app"]) run
unprivileged.
```diff
 candles = await api.get_candles("EURUSD_otc", 60, time)
-# print(f"Raw Candles: {candles}")
-candles_pd = pd.DataFrame.from_dict(candles)
-print(f"Candles: {candles_pd}")
+print(f"Raw Candles: {candles}")
+# candles_pd = pd.DataFrame.from_dict(candles)
+# print(f"Candles: {candles_pd}")
```
Use `frame` in the request or remove the inner loop.
Line 20 still hard-codes `60`, so this example prints the same candle batch once for every entry in `time_frames`. Now that the script outputs raw payloads, that duplication is user-visible.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@examples/python/async/get_candles.py` around lines 20 - 23, The example is
hard-coding a 60s frame in the call to get_candles, causing duplicate output
across the time_frames loop; update the call to use the loop variable (e.g.,
replace the literal 60 with the frame/time_frame variable used in the loop) or
remove the surrounding loop so you only fetch once; ensure you modify the
get_candles(...) invocation (the symbol to change) to pass the intended frame
variable instead of the constant.
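A minimal sketch of the suggested fix — passing the loop variable instead of the literal `60` — with a stub coroutine standing in for the real `api.get_candles` (the stub's signature mirrors the call in the example, but is an assumption here):

```python
import asyncio

async def get_candles(asset: str, frame: int, offset: int) -> dict:
    # Stub standing in for the real api.get_candles call;
    # it just echoes its arguments so the fix is observable.
    return {"asset": asset, "frame": frame, "offset": offset}

async def main() -> list:
    time_frames = [60, 300, 900]
    batches = []
    for frame in time_frames:
        # Pass the loop variable instead of a hard-coded 60 so each
        # iteration actually requests a different timeframe.
        candles = await get_candles("EURUSD_otc", frame, 3600)
        batches.append(candles)
    return batches

batches = asyncio.run(main())
```

With the constant, every batch would report `frame == 60`; with the loop variable, the three batches differ as intended.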
````diff
 Install directly from our GitHub releases. Supports **Python 3.8 - 3.15**.

 **Windows**

 ```bash
-pip install "https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/download/v0.2.8/binaryoptionstoolsv2-0.2.8-cp39-abi3-win_amd64.whl"
+pip install "https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/download/v0.2.9/binaryoptionstoolsv2-0.2.9-cp39-abi3-win_amd64.whl"
 ```

 **Linux**

 ```bash
-pip install "https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/download/v0.2.8/binaryoptionstoolsv2-0.2.8-cp39-abi3-manylinux_2_28_x86_64.whl"
+pip install "https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/download/v0.2.9/binaryoptionstoolsv2-0.2.9-cp39-abi3-manylinux_2_28_x86_64.whl"
 ```

 **macOS (Apple Silicon)**

 ```bash
-pip install "https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/download/v0.2.8/binaryoptionstoolsv2-0.2.8-cp39-abi3-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl"
+pip install "https://github.com/ChipaDevTeam/BinaryOptionsTools-v2/releases/download/v0.2.9/binaryoptionstoolsv2-0.2.9-cp39-abi3-macosx_10_12_x86_64.macosx_11_0_arm64.macosx_10_12_universal2.whl"
````
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
set -e
echo "== ABI floor from Rust binding =="
rg -n 'abi3-py39' BinaryOptionsToolsV2/rust/Cargo.toml
echo
echo "== README support claims and wheel tags =="
rg -n '3\.8|3\.15|cp39-abi3' README.md
echo
echo "== Python package metadata, if present =="
fd '^pyproject\.toml$' . -x sh -c '
  file="$1"
  echo "-- $file --"
  rg -n "requires-python|Programming Language :: Python" "$file" || true
' sh {}
```

Repository: ChipaDevTeam/BinaryOptionsTools-v2
Length of output: 1619
Update Python version floor to 3.9+ across all documentation and package metadata.
The published wheels are tagged `cp39-abi3` and the Rust binding is configured with pyo3 `abi3-py39`, which requires Python 3.9+. However, the README (Lines 4 and 118) and `pyproject.toml` (Line 14) incorrectly claim Python 3.8 support. Python 3.8 users will not be able to install this package as documented. Before publishing, update:

- README Line 118: Change `3.8 - 3.15` to `3.9+`
- README Line 4: Remove Python 3.8 from the badge
- `BinaryOptionsToolsV2/pyproject.toml` Line 14: Change `requires-python = ">=3.8"` to `>=3.9`
- Remove Python 3.8 from classifiers in `pyproject.toml` (Line 23)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@README.md` around lines 118 - 135, The README and packaging metadata claim
Python 3.8 compatibility but the wheels and Rust pyo3 binding use ABI3 for
CPython 3.9 (`cp39-abi3`, `pyo3` `abi3-py39`), so update docs and pyproject to
require Python >=3.9: change the README text from "3.8 - 3.15" to "3.9+", remove
the Python 3.8 badge/mention, and in BinaryOptionsToolsV2/pyproject.toml update
requires-python to ">=3.9" and remove any Python 3.8 classifiers so the package
metadata matches the built wheels and pyo3 ABI configuration.
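The reasoning behind the 3.9 floor can be checked locally with a one-off sanity function — a sketch for illustration, not part of the project:

```python
import sys

def wheel_floor_ok(version=None) -> bool:
    # cp39-abi3 wheels resolve only on CPython 3.9 and newer;
    # a 3.8 interpreter finds no compatible wheel at install time,
    # so the documented floor must be at least (3, 9).
    version = sys.version_info if version is None else version
    return tuple(version[:2]) >= (3, 9)
```

Running it against the current interpreter (`wheel_floor_ok()`) tells you whether the documented install commands can succeed there.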
```python
def crashing_func(msg: str) -> bool:
    raise ValueError("Simulated crash")

v = Validator.custom(crashing_func)
# This should return False instead of crashing the process
assert v.check("any message") == False
```
Make the new test use a lint-safe boolean assertion.
Line 84 is flagged as E712. Renaming the unused argument on Line 79 also clears ARG001 in the same block.
🧹 Suggested fix
```diff
-    def crashing_func(msg: str) -> bool:
+    def crashing_func(_msg: str) -> bool:
         raise ValueError("Simulated crash")
@@
-    assert v.check("any message") == False
+    assert v.check("any message") is False
```
🧰 Tools
🪛 Ruff (0.15.4)
[warning] 79-79: Unused function argument: msg
(ARG001)
[warning] 80-80: Avoid specifying long messages outside the exception class
(TRY003)
[error] 84-84: Avoid equality comparisons to `False`; use `not v.check("any message")` for false checks
Replace with `not v.check("any message")`
(E712)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/python/core/test_validator.py` around lines 79 - 84, The test should
avoid E712 and ARG001: rename the unused parameter in crashing_func from msg to
_msg (e.g., def crashing_func(_msg: str) -> bool) and change the assertion to
use identity comparison: assert v.check("any message") is False; this keeps the
intended behavior (Validator.custom(crashing_func) returns False on exception)
while satisfying lint rules.
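Both lint fixes can be exercised end to end with a minimal stand-in for `Validator` — the catch-and-return-False behavior below is an assumption about how `Validator.custom` handles a crashing predicate, inferred from the test's comment:

```python
class Validator:
    """Minimal stand-in: custom() wraps a predicate and check()
    returns False when the predicate raises, instead of letting
    the exception crash the process."""

    def __init__(self, func):
        self._func = func

    @classmethod
    def custom(cls, func):
        return cls(func)

    def check(self, msg: str) -> bool:
        try:
            return bool(self._func(msg))
        except Exception:
            return False

def crashing_func(_msg: str) -> bool:  # leading underscore clears ARG001
    raise ValueError("Simulated crash")

v = Validator.custom(crashing_func)
assert v.check("any message") is False  # identity check satisfies E712
```

Using `is False` (rather than `== False`) asserts the method returns the actual boolean singleton, which is both lint-clean and slightly stricter.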
Pull Request
Overview
init 0.2.9, pretty minor changes but mostly just bug fixes and publishing to pypi for easier installation
Changes
Decreased CI build time using caching
Small bug fixes and increased error catching
Type of Change
Validation
Describe how the changes were tested.
Environment
Checklist
Screenshots (Optional)
Add relevant visuals if applicable.
Summary by CodeRabbit
Release Notes - Version 0.2.9
Bug Fixes
Changes
Documentation
Tests