feat(cli): machine-readable --json, projection-based pull/push, and serialized push commands #41
Conversation
Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Pull request overview
This PR adds machine-readable JSON output across the poly CLI, plus projection-driven init/pull/push flows and the ability to emit serialized push commands for scripting/CI use. It also refactors the output layer into poly.output and adjusts the project/handler APIs to return projection/command metadata.
Changes:
- Add global `--json` output mode and JSON payload helpers for several CLI commands.
- Add `--from-projection` support to avoid fetching projections from the API, and optionally include projection data in JSON output.
- Refactor push to queue protobuf commands, optionally returning them for "dry-run review" style workflows.
Reviewed changes
Copilot reviewed 11 out of 13 changed files in this pull request and generated 6 comments.
| File | Description |
|---|---|
| `src/poly/cli.py` | Wires `--json` / projection flags through command handlers and emits JSON responses. |
| `src/poly/project.py` | Returns projection/commands metadata and stages push operations via queued commands. |
| `src/poly/handlers/sync_client.py` | Adds projection override on pull and introduces queue/send/get/clear command-queue APIs. |
| `src/poly/handlers/interface.py` | Exposes queue/send/get/clear command-queue operations via the public handler interface. |
| `src/poly/output/json_output.py` | Adds JSON printing and protobuf→dict serialization helpers. |
| `src/poly/output/console.py` | New Rich console helper module (moved/centralized output routines). |
| `src/poly/output/__init__.py` | Initializes the new `poly.output` package. |
| `src/poly/tests/project_test.py` | Updates tests for new return shapes and push queuing behavior. |
| `.github/workflows/coverage.yml` | Prevents `uv.lock` from interfering when checking out base/PR SHAs for coverage. |
| `uv.lock` | Updates lockfile (including package version bump). |
src/poly/handlers/interface.py

```python
return False
```

`push_resources()` ends with `return False`, but all boolean combinations of `dry_run` / `queue_pushes` are already handled by the preceding if/elif chain, making this return unreachable. Consider removing it (or restructuring with an `else`) to avoid dead code and keep the control flow clear.

Suggested change: delete the trailing `return False`.
src/poly/handlers/sync_client.py

```python
    return True
try:
    logger.info(f"Sending {len(self.sdk._command_queue)} commands to {self.sdk.branch_id}")
    return self.sdk.send_command_batch()
```

`send_queued_commands()` is annotated/used as returning `bool`, but it returns `self.sdk.send_command_batch()`, which (per `SourcererSDK.send_command_batch`) returns a dict. This can propagate non-bool truthy values to callers (e.g., the CLI `success` field) and breaks the declared contract. Consider calling `send_command_batch()` for its side effects and returning `True` on success (or change the handler/interface return type everywhere to dict).

Suggested change:
```python
result = self.sdk.send_command_batch()
logger.debug("send_command_batch result: %r", result)
return True
```
src/poly/handlers/sync_client.py

```python
if projection_json:
    logger.info("Using provided projection")
    projection = projection_json
else:
    logger.info(
```

`pull_resources()` uses `if projection_json:` to decide whether to use the provided projection. An empty dict (or other falsy-but-valid payload) will incorrectly fall back to fetching from the API. Use an explicit `is not None` check (and optionally validate expected keys) to make the behavior deterministic.
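The falsy-vs-None distinction can be shown with a small standalone sketch; `choose_projection` is a hypothetical helper, not code from this PR:

```python
def choose_projection(projection_json, fetch):
    # Explicit None check: an empty dict is a valid, intentional projection
    # and must not trigger a fallback fetch.
    if projection_json is not None:
        return projection_json
    return fetch()

# An empty projection dict is respected rather than refetched
assert choose_projection({}, lambda: {"fetched": True}) == {}
# Only a true "not provided" (None) falls back to the API
assert choose_projection(None, lambda: {"fetched": True}) == {"fetched": True}
```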
src/poly/handlers/sync_client.py

```python
logger.info(
    f"Queued {len(self.sdk._command_queue)} commands command_queue={self.sdk._command_queue!r}"
)
return self.get_queued_commands()
```

This `logger.info` logs the full protobuf command queue via `!r`. The queue can be large and may embed user/project content, which is noisy and potentially sensitive for production logs. Consider logging only counts/IDs at info level and moving the full queue dump behind `logger.debug` (or truncating/redacting).

Suggested change:
```python
logger.info(f"Queued {len(self.sdk._command_queue)} commands")
logger.debug("Queued commands detail: %r", self.sdk._command_queue)
return self.get_queued_commands()
```
```
@@ -893,7 +904,7 @@ def push_resources(
    uploading
```

`queue_resources()`'s docstring still documents `dry_run`/"upload" semantics, but the parameter was removed and this method now only queues commands. Please update the Args section (and wording like "Upload") so it matches the new signature/behavior.
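A sketch of what the updated docstring might look like; the exact wording and parameter list are assumptions based on the calls visible elsewhere in this diff, since the full signature isn't shown here:

```python
def queue_resources(self, new_resources, deleted_resources, updated_resources, email):
    """Build and queue protobuf commands for the given resource changes.

    Nothing is uploaded here; sending happens later via send_queued_commands().

    Args:
        new_resources: Resources to create.
        deleted_resources: Resources to delete.
        updated_resources: Resources to update.
        email: Author email recorded on the queued commands.

    Returns:
        The commands queued for these changes.
    """
```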
```python
commands = []
if pre_changes.new or pre_changes.deleted or pre_changes.updated:
    commands.extend(
        self.api_handler.queue_resources(
            new_resources=pre_changes.new,
            deleted_resources=pre_changes.deleted,
            updated_resources=pre_changes.updated,
            email=email,
        )
    )

if new_resources or deleted_resources or updated_resources:
    commands.extend(
        self.api_handler.queue_resources(
            new_resources=new_resources,
            deleted_resources=deleted_resources,
            updated_resources=updated_resources,
            email=email,
        )
    )

if post_changes.new or post_changes.deleted or post_changes.updated:
    commands.extend(
        self.api_handler.queue_resources(
            new_resources=post_changes.new,
            deleted_resources=post_changes.deleted,
            updated_resources=post_changes.updated,
            email=email,
        )
    )

return commands
```

`_stage_commands()` accumulates commands by extending with the return value of `queue_resources()`. But `SyncClientHandler.queue_resources()` currently returns a deepcopy of the entire queued command list each time, so calling it multiple times (pre/main/post) will duplicate earlier commands in the returned `commands` list. To avoid duplicates, either have `queue_resources()` return only the commands enqueued by that call, or have `_stage_commands()` return `self.api_handler.get_queued_commands()` once at the end.

Suggested change:
```python
if pre_changes.new or pre_changes.deleted or pre_changes.updated:
    self.api_handler.queue_resources(
        new_resources=pre_changes.new,
        deleted_resources=pre_changes.deleted,
        updated_resources=pre_changes.updated,
        email=email,
    )
if new_resources or deleted_resources or updated_resources:
    self.api_handler.queue_resources(
        new_resources=new_resources,
        deleted_resources=deleted_resources,
        updated_resources=updated_resources,
        email=email,
    )
if post_changes.new or post_changes.deleted or post_changes.updated:
    self.api_handler.queue_resources(
        new_resources=post_changes.new,
        deleted_resources=post_changes.deleted,
        updated_resources=post_changes.updated,
        email=email,
    )
return self.api_handler.get_queued_commands()
```
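The duplication can be reproduced with a toy handler, assuming (per the comment) that `queue_resources()` returns a copy of the entire queue on each call; `FakeHandler` is illustrative, not the real class:

```python
import copy

class FakeHandler:
    """Toy handler whose queue_resources returns a copy of the *entire*
    queue each call, mirroring the behavior the reviewer flags."""

    def __init__(self):
        self._queue = []

    def queue_resources(self, batch):
        self._queue.extend(batch)
        return copy.deepcopy(self._queue)  # whole queue, not just this batch

    def get_queued_commands(self):
        return copy.deepcopy(self._queue)

handler = FakeHandler()
commands = []
commands.extend(handler.queue_resources(["pre"]))
commands.extend(handler.queue_resources(["main"]))
commands.extend(handler.queue_resources(["post"]))

# "pre" appears three times and "main" twice: earlier commands are duplicated
assert commands == ["pre", "pre", "main", "pre", "main", "post"]
# Reading the queue once at the end yields the correct, deduplicated list
assert handler.get_queued_commands() == ["pre", "main", "post"]
```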
src/poly/handlers/interface.py

```python
    self, projection_json: dict[str, Any] = None
) -> tuple[dict[type[Resource], dict[str, Resource]], dict[str, Any]]:
    """Fetch all resources for the specific project.

    Args:
        projection_json (dict[str, Any]): A dictionary containing the projection.
```

`projection_json` is typed as `dict[str, Any]` but defaults to `None`, which is not type-correct. Update the annotation to `dict[str, Any] | None` / `Optional[dict[str, Any]]` (and keep it consistent with other APIs that accept optional projections).

Suggested change:
```python
    self, projection_json: Optional[dict[str, Any]] = None
) -> tuple[dict[type[Resource], dict[str, Resource]], dict[str, Any]]:
    """Fetch all resources for the specific project.

    Args:
        projection_json (Optional[dict[str, Any]]): A dictionary containing the projection.
```
src/poly/handlers/sync_client.py

```python
for command in commands:
    self.sdk.add_command_to_queue(command)

logger.info(f"Queued {len(commands)} commands commands={commands!r}")
```

`queue_resources` logs the full protobuf commands list at INFO (`commands={commands!r}`). Command protos can include large payloads (e.g., function code / prompts) and potentially sensitive data, so this can leak content into logs and significantly bloat log volume. Consider logging only the count (and maybe command types/ids) at INFO, and move any full command serialization to DEBUG with redaction/size limits.

Suggested change:
```python
command_summaries = [
    {"type": command.type, "command_id": getattr(command, "command_id", None)}
    for command in commands
]
logger.info("Queued %d commands", len(commands))
logger.debug("Queued command details: %s", command_summaries)
```
```python
)
self.assert_branch_exists()
projection = self.sdk.fetch_projection(force_refresh=True)
logger.debug(f"Projection: {projection}")
```

`pull_resources` logs the entire projection dict (`logger.debug(f"Projection: {projection}")`). Projections can be very large and may contain user/project content, so dumping them verbatim can create excessive logs and risk exposing sensitive data when debug logging is enabled. Prefer logging a small summary (e.g., top-level keys, counts, or a correlation id) or gate full dumps behind an explicit trace flag with truncation/redaction.

Suggested change:
```python
projection_summary = {
    key: (len(value) if isinstance(value, (dict, list)) else type(value).__name__)
    for key, value in projection.items()
}
logger.debug("Projection summary: %s", projection_summary)
```
src/poly/cli.py

```python
else:
    error(msg)
    sys.exit(1)
return parsed.get("projection", parsed)
```

`_parse_from_projection_json` returns `parsed.get("projection", parsed)` without validating that the extracted projection value is itself a JSON object. If a user passes `{ "projection": [...] }` (or any non-dict), later code will fail with an attribute error when treating it like a dict. Validate that `projection` is a dict (and error/exit otherwise) before returning it.

Suggested change:
```python
projection = parsed.get("projection", parsed)
if not isinstance(projection, dict):
    msg = "--from-projection 'projection' field must be a JSON object (dictionary)."
    if json_errors:
        json_print({"success": False, "error": msg})
    else:
        error(msg)
    sys.exit(1)
return projection
```
```python
    format: bool = False,
) -> "AgentStudioProject":
    projection_json: dict[str, Any] = None,
) -> tuple["AgentStudioProject", dict[str, Any]]:
```

These `projection_json` parameters are typed as `dict[str, Any]` but default to `None`. This is not type-correct and will be flagged by type checkers (use `dict[str, Any] | None` / `Optional[dict[str, Any]]`). The same pattern appears in other methods in this file (e.g., `load_project`, `pull_project`, `push_project`, `switch_branch`) and should be updated consistently.
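A minimal sketch of the corrected annotation; the function body is a simplified stand-in, not the real method:

```python
from typing import Any, Optional

def pull_project(projection_json: Optional[dict[str, Any]] = None) -> dict[str, Any]:
    # None now satisfies the annotation; an explicit check keeps behavior clear
    if projection_json is None:
        projection_json = {}
    return projection_json

# Equivalent on Python 3.10+: projection_json: dict[str, Any] | None = None
assert pull_project() == {}
assert pull_project({"resources": {}}) == {"resources": {}}
```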
```python
for command in commands:
    self.sdk.add_command_to_queue(command)

logger.info(f"Queued {len(commands)} commands commands={commands!r}")
return commands
```

`queue_resources` logs full protobuf commands at INFO level (`commands={commands!r}`). These command messages can embed large payloads (e.g., function code, flow definitions) and potentially sensitive data; logging them verbatim at INFO can leak content and significantly bloat logs. Consider logging only the count plus a small summary (types/ids), and keep full serialization behind DEBUG with truncation/redaction.
src/poly/handlers/sync_client.py

```python
if not self.sdk._command_queue:
    logger.info("No commands to send")
    return True
```

This code directly inspects `self.sdk._command_queue` (a private attribute) to decide whether there are queued commands. Since `SourcererSDK` already exposes public queue helpers (`get_queue_size()`, `clear_queue()`, etc.), using the private list here increases coupling and makes future refactors riskier. Prefer using the public methods (or adding a small public `has_queued_commands()`/`get_queued_commands()` API on the SDK) and keep `_command_queue` internal.
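A sketch of the public-API shape the comment suggests; this is a minimal stand-in for `SourcererSDK`, and `has_queued_commands()` is a hypothetical addition, not an existing SDK method:

```python
class SourcererSDK:
    """Minimal stand-in illustrating a public queue-inspection API."""

    def __init__(self):
        self._command_queue = []  # stays internal

    def add_command_to_queue(self, command):
        self._command_queue.append(command)

    def get_queue_size(self) -> int:
        return len(self._command_queue)

    def has_queued_commands(self) -> bool:
        # Callers check this instead of touching _command_queue directly
        return self.get_queue_size() > 0

sdk = SourcererSDK()
assert not sdk.has_queued_commands()
sdk.add_command_to_queue({"type": "create_function"})
assert sdk.has_queued_commands()
```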
Summary
Adds a global-style `--json` mode across `poly` subcommands so stdout is a single JSON object for scripting and CI. Introduces `--from-projection` / optional projection output for `init` and `pull`, and `--output-json-commands` on `push` to include the queued Agent Studio commands (as dicts).

Moves console helpers under `poly.output` and adds `json_output` helpers (including protobuf → JSON via `MessageToDict`).

Motivation
Operators and automation need stable, parseable CLI output and the ability to drive pull/push from a captured projection (without hitting the projection API). Exposing staged push commands supports dry-run review and integration testing.
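The intended scripting flow can be sketched as below; the JSON field names (`success`, `projection`) are assumptions inferred from this description, not a confirmed output schema:

```python
import json

# Hypothetical stdout captured from `poly pull --json --output-json-projection`
captured = json.loads('{"success": true, "projection": {"resources": {}}}')
assert captured["success"] is True

# The projection can later be fed back via --from-projection
# (a JSON string, or "-" to read from stdin), avoiding a projection-API call
from_projection_arg = json.dumps(captured["projection"])
assert json.loads(from_projection_arg) == {"resources": {}}
```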
Closes #23
Changes
- `json_parent` (`--json`) wired into relevant subparsers; many code paths now emit structured JSON and exit with non-zero on failure where appropriate.
- `--from-projection` (JSON string or `-` for stdin) added to `pull` and `push`; `SyncClientHandler.pull_resources` uses an inline projection when provided instead of fetching.
- `--output-json-projection` on `init`/`pull` (and related flows) to include the projection in JSON output when requested.
- `--output-json-commands` on `push` to append serialized commands to the JSON payload; `push_project` returns `(success, message, commands)`.
- `pull_project` returns `(files_with_conflicts, projection)`; `pull_resources` returns `(resources, projection)`.
- New `poly/output/json_output.py` (`json_print`, `commands_to_dicts`); `console.py` relocated to `poly/output/console.py` with imports updated.
- `project_test` mocks/expectations updated for new return shapes; `uv.lock` updated for dependencies.

Test strategy
… (`poly <command>`)

Checklist
- `ruff check .` and `ruff format --check .` pass
- `pytest` passes
- No breaking changes to the `poly` CLI interface (or migration path documented)

Note for reviewers: The CLI remains backward compatible (new flags only).
`AgentStudioProject.pull_project`/`push_project` (and `pull_resources` on the handler) change return types vs `main`; any direct Python callers must be updated to unpack the new tuples and optional `projection_json` argument.

Screenshots / Logs