## Problem

The Nova Act SDK currently returns an `ActResult` with high-level metadata (`num_steps_executed`, `time_worked_s`, etc.), but it does not expose the detailed step-by-step execution data that appears in the console logs during execution.
Console output shows valuable structured data, for example:

```
👀 ...
💭 ...
think("The cookie popup is now closed...")
agentClick("43,695,62,743")
```
This data is currently not accessible programmatically through the Python SDK.
## Requested Feature

Expose step-level data on `ActResult`, for example:
```python
result = nova.act("Navigate to pricing page")

for step in result.steps:    # New property
    print(step.observation)  # What the agent saw
    print(step.thinking)     # Agent reasoning (think())
    print(step.action)       # Action taken (agentClick, agentType, etc.)
    print(step.timestamp)    # When it occurred
```
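To make the request concrete, the per-step records could be modeled as a small immutable dataclass. This is only a sketch of one possible shape: the `Step` class and its fields are assumptions illustrating the proposal, not part of the current SDK.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass(frozen=True)
class Step:
    """Hypothetical per-step record that ActResult.steps could yield."""

    observation: str     # what the agent saw (the 👀 line)
    thinking: str        # agent reasoning (the 💭 / think() line)
    action: str          # action taken, e.g. 'agentClick("43,695,62,743")'
    timestamp: datetime  # when the step occurred


# What an exposed result.steps might contain, using values from the
# console output shown above:
steps = [
    Step(
        observation="Cookie consent banner is visible",
        thinking="The cookie popup is now closed...",
        action='agentClick("43,695,62,743")',
        timestamp=datetime.now(timezone.utc),
    ),
]

for step in steps:
    print(step.action)
```

A frozen dataclass keeps the records hashable and read-only, which suits an execution trace that consumers should inspect but never mutate.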
## Use Cases

- **Debugging** - understanding why an automation failed at a specific step
- **Audit trails** - recording what actions were taken for compliance
- **Analytics** - analyzing agent behavior patterns
- **Building guides/tutorials** - turning automation runs into step-by-step documentation
- **Quality assurance** - validating that agent reasoning matches expected behavior
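As a sketch of the audit-trail use case: if step-level data were exposed, each step could be serialized as one JSON object per line for compliance logging. The dict fields below are hypothetical stand-ins for the proposed step attributes.

```python
import json

# Hypothetical step records, as dicts mirroring the proposed attributes.
steps = [
    {
        "thinking": "The cookie popup is now closed...",
        "action": 'agentClick("43,695,62,743")',
        "timestamp": "2024-01-01T00:00:00Z",
    },
]

# One JSON object per line (JSONL): append-friendly and easy to ship
# to a log store or retain for compliance review.
audit_lines = [json.dumps(step, sort_keys=True) for step in steps]

for line in audit_lines:
    print(line)
```

JSONL keeps the audit log append-only and line-oriented, so a run can be recorded incrementally without rewriting the file.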
## Current Workarounds

- Parsing stdout (fragile; the format is not guaranteed)
- Manually checking the output files in `logs_directory`
- Using the observability console (not programmatic)
## Additional Context

This data appears to already exist internally, since it is printed to the console and written to log files during execution; it would be valuable to expose it through the SDK's public API.