WIP: Track dwell time counter for time in area project #12
Conversation
Important: Review skipped. Draft detected. Please check the settings in the CodeRabbit UI; this status message can be disabled there.

Note: Other AI code review bot(s) detected. CodeRabbit has detected other AI code review bot(s) in this pull request and will avoid duplicating their findings in the review comments. This may lead to a less comprehensive review.

Walkthrough

Adds a complete Time-in-Area analytics solution: Telegraf input/output configs, a Starlark processor for per-track time calculation and cleanup, a threshold filter, Axis metadata consumers and test feeders, test data and utilities (recorder, heatmap viewer), documentation, and CI workflows/actions for quality and end-to-end tests.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    autonumber
    participant Camera as Axis Camera
    participant Consumer as axis_scene_detection_consumer.sh
    participant Telegraf as Telegraf (execd input)
    participant Starlark as track_duration_calculator.star
    participant Filter as Threshold Processor
    participant Output as MQTT / stdout
    Camera->>Consumer: message-broker-cli (scene_description frames)
    Consumer-->>Telegraf: per-detection JSON lines
    Telegraf->>Starlark: detection_frame metric
    Starlark-->>Telegraf: detection_frame + time_in_area_seconds
    Telegraf->>Filter: apply threshold (ALERT_THRESHOLD_SECONDS)
    alt time_in_area >= threshold
        Filter-->>Output: detection_frame (alert)
    else below threshold
        Filter--xOutput: drop metric
    end
```
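The alt/else branch above is the whole threshold step. A minimal Python sketch of that decision (hypothetical function and field names; the real implementation is Telegraf configuration plus the Starlark processor, not this code) could look like:

```python
# Hypothetical sketch of the threshold step: pass a detection metric through
# only when its accumulated time in area reaches ALERT_THRESHOLD_SECONDS.
ALERT_THRESHOLD_SECONDS = 15.0  # assumed value, configurable in the project

def threshold_filter(metric):
    """Return the metric for alerting, or None to drop it below threshold."""
    if metric.get("time_in_area_seconds", 0.0) >= ALERT_THRESHOLD_SECONDS:
        return metric
    return None

print(threshold_filter({"track_id": "track_001", "time_in_area_seconds": 20.0}))
# → {'track_id': 'track_001', 'time_in_area_seconds': 20.0}
print(threshold_filter({"track_id": "track_002", "time_in_area_seconds": 5.0}))
# → None
```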
```mermaid
sequenceDiagram
    autonumber
    participant Feeder as sample_data_feeder.sh
    participant Telegraf as Telegraf (execd + processors)
    participant Stdout as outputs.file (stdout)
    Feeder-->>Telegraf: per-detection JSON (from simple_tracks.jsonl)
    Telegraf->>Telegraf: Starlark duration calc -> threshold filter
    Telegraf-->>Stdout: JSON metrics for testing/verification
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60–90 minutes

Possibly related PRs
Suggested reviewers
8e99c0d to 7a0cb16
🎨 Code Formatting Required

Trailing Whitespace Issues

Some files have trailing whitespace (spaces/tabs at end of lines). Files with trailing whitespace:

To fix: Remove the trailing spaces/tabs at the end of lines in your favorite editor. After fixing:
❌ Time-in-Area Analytics Tests Failed

Some tests did not pass:

✅ Track Heatmap Viewer: success

What This Means

Telegraf Pipeline Issues:
TODO:
Actionable comments posted: 8
🧹 Nitpick comments (26)
project-time-in-area-analytics/test_scripts/requirements.txt (1)
1-4: Confirm compatibility with Python 3.13 used in CI

These pins look reasonable. Please verify they install cleanly under Python 3.13 (the version configured in the composite action), especially numpy/matplotlib wheels and Paramiko’s cryptography dependency.
If needed, I can update the composite action to make the Python version configurable per workflow so projects with stricter wheels can use 3.12 temporarily.
project-time-in-area-analytics/test_files/simple_tracks.jsonl (2)
9-9: Potentially unintended reuse of track_001 after deletion

After a DeleteOperation for track_001 (Line 6), Line 9 introduces a new observation with the same track_id. If this isn’t an explicit test for “ID reuse,” consider using a new ID (e.g., track_005) to avoid ambiguity in dwell-time logic and metrics attribution.
Apply this diff if reuse is unintentional:
```diff
-{"frame": {"observations": [{"bounding_box": {"bottom": 0.79, "left": 0.29, "right": 0.39, "top": 0.54}, "class": {"type": "Vehicle"}, "timestamp": "2024-01-15T10:00:12.456789Z", "track_id": "track_003"}, {"bounding_box": {"bottom": 0.68, "left": 0.28, "right": 0.38, "top": 0.48}, "class": {"type": "Human"}, "timestamp": "2024-01-15T10:00:12.456789Z", "track_id": "track_001"}], "timestamp": "2024-01-15T10:00:12.456789Z"}}
+{"frame": {"observations": [{"bounding_box": {"bottom": 0.79, "left": 0.29, "right": 0.39, "top": 0.54}, "class": {"type": "Vehicle"}, "timestamp": "2024-01-15T10:00:12.456789Z", "track_id": "track_003"}, {"bounding_box": {"bottom": 0.68, "left": 0.28, "right": 0.38, "top": 0.48}, "class": {"type": "Human"}, "timestamp": "2024-01-15T10:00:12.456789Z", "track_id": "track_005"}], "timestamp": "2024-01-15T10:00:12.456789Z"}}
```
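To make the ambiguity concrete, here is a minimal sketch (hypothetical helper names, not the project's Starlark code) of why a dwell-time calculator keyed purely on track_id must drop state on deletion, or a reused ID silently inherits the previous object's start time:

```python
# first_seen maps track_id -> timestamp of the first observation of the
# current episode; a DeleteOperation must clear it so a reused ID starts fresh.
first_seen = {}

def on_observation(track_id, ts):
    """Return the dwell time in seconds for this observation."""
    first_seen.setdefault(track_id, ts)
    return ts - first_seen[track_id]

def on_delete(track_id):
    first_seen.pop(track_id, None)  # forget state for deleted tracks

print(on_observation("track_001", 0.0))   # → 0.0
on_delete("track_001")
print(on_observation("track_001", 12.0))  # → 0.0 (reused ID starts over)
```

Without the `on_delete` cleanup, the second observation would report 12.0 seconds of dwell time for a brand-new object.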
11-11: Consider finalizing open tracks with explicit DeleteOperations

The last frame has no operations. If the dwell-time calculator awaits explicit DeleteOperation events to flush metrics, add DeleteOperations for any active tracks (e.g., track_003 and track_004) to make tests deterministic.
Apply this diff to emit explicit deletions at the final timestamp:
```diff
-{"frame": {"observations": [], "operations": [], "timestamp": "2024-01-15T10:00:25.000000Z"}}
+{"frame": {"observations": [], "operations": [{"id": "track_003", "type": "DeleteOperation"}, {"id": "track_004", "type": "DeleteOperation"}], "timestamp": "2024-01-15T10:00:25.000000Z"}}
```

project-time-in-area-analytics/test_files/sample_data_feeder.sh (2)
1-3: Enable safer shell execution flags

Use set -eu for early failure on unset vars and error propagation (portable in /bin/sh). This avoids silent misconfigurations during CI runs.
Apply this diff:
```diff
 #!/bin/sh
-# Sample Data Feeder Script
+set -eu
+
+# Sample Data Feeder Script
```
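As a quick standalone demonstration of what `set -eu` buys (the variable name here is hypothetical, not from the feeder script): expanding an unset variable aborts the script instead of silently continuing with an empty string.

```shell
# Write a tiny /bin/sh script that references an unset variable under set -eu.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
#!/bin/sh
set -eu
unset DEMO_UNSET_VAR 2>/dev/null || true
echo "dir is: $DEMO_UNSET_VAR"
echo "this line is never reached"
EOF
if sh "$tmp" 2>/dev/null; then
  result="unexpected success"
else
  result="aborted as expected"
fi
echo "$result"
rm -f "$tmp"
```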
11-21: Validate HELPER_FILES_DIR is set before path usage

The script assumes HELPER_FILES_DIR is set. With set -u (recommended), referencing an unset var will abort the script. Add an explicit check and clearer error.
Apply this diff:
```diff
 # Check if sample file is specified
 if [ -z "$SAMPLE_FILE" ]; then
   echo "ERROR: SAMPLE_FILE environment variable not set" >&2
   exit 1
 fi

+# Check if helper files directory is specified
+if [ -z "${HELPER_FILES_DIR:-}" ]; then
+  echo "ERROR: HELPER_FILES_DIR environment variable not set" >&2
+  exit 1
+fi
+
 # Check if sample file exists
 if [ ! -f "$HELPER_FILES_DIR/$SAMPLE_FILE" ]; then
   echo "ERROR: Sample file not found: $HELPER_FILES_DIR/$SAMPLE_FILE" >&2
   exit 1
 fi
```

.github/actions/python-quality-check/action.yml (6)
3-14: Make Python version configurable (default 3.13)

Different projects may have wheels that lag on 3.13 (e.g., opencv). Add a python-version input with a sensible default.
Apply this diff:
```diff
 inputs:
   working-directory:
     description: "Working directory for the Python project"
     required: true
+  python-version:
+    description: "Python version to use via actions/setup-python"
+    required: false
+    default: "3.13"
   additional-types:
     description: "Additional type packages to install (space-separated)"
     required: false
     default: ""
   pylint-options:
     description: "Additional options for pylint"
     required: false
     default: ""
```
19-23: Use the new python-version input in setup step

Wire the new input into actions/setup-python to enable per-workflow overrides.
Apply this diff:
```diff
 - name: Setup Python
   uses: actions/setup-python@v5
   with:
-    python-version: "3.13"
+    python-version: ${{ inputs.python-version }}
```
24-36: Avoid bundling types-click by default (can conflict with Click 8’s typing)

types-click 7.x can overshadow Click’s built-in annotations (8.x) and cause mypy noise. Make it opt-in via additional-types instead of default.
Apply this diff:
```diff
 - name: Install test dependencies
   shell: bash
   run: |
     python -m pip install --upgrade pip
     pip install \
       black==25.1.0 \
       flake8==7.3.0 \
       isort==6.0.1 \
       mypy==1.17.1 \
-      pylint==3.3.8 \
-      types-click==7.1.8 \
+      pylint==3.3.8 \
       ${{ inputs.additional-types }}
```
37-42: Prefer python -m pip for invoking pip in a deterministic interpreter context

Minor robustness tweak; avoids PATH/alias surprises.
Apply this diff:
```diff
 - name: Install project requirements
   shell: bash
   working-directory: ${{ inputs.working-directory }}
   run: |
-    pip install -r requirements.txt
+    python -m pip install -r requirements.txt
```
57-62: Make pylint step resilient when no top-level .py files exist

Guard against “no match” glob expansion to keep CI green for script-less directories.
Apply this diff:
```diff
 - name: Run pylint
   shell: bash
   working-directory: ${{ inputs.working-directory }}
   run: |
-    pylint *.py ${{ inputs.pylint-options }}
+    if compgen -G "*.py" > /dev/null; then
+      pylint *.py ${{ inputs.pylint-options }}
+    else
+      echo "No Python files to lint"
+    fi
```
75-79: Run doctests only if Python files are present

Prevents false negatives when a test_scripts dir only contains helpers.
Apply this diff:
```diff
 - name: Run doctests
   shell: bash
   working-directory: ${{ inputs.working-directory }}
   run: |
-    python -m doctest *.py -v
+    if compgen -G "*.py" > /dev/null; then
+      python -m doctest *.py -v
+    else
+      echo "No Python files to doctest"
+    fi
```

.github/workflows/project-timelapse-s3-python-quality.yml (1)
25-32: Nice reuse of the composite action; keep Python version override in mind

Looks good. If any of this project’s dependencies lag on Python 3.13 wheels, you can now set python-version: "3.12" via the action’s new input without changing the action itself.
Example:
```yaml
- uses: ./.github/actions/python-quality-check
  with:
    working-directory: project-timelapse-s3/test_scripts
    python-version: "3.12"
    additional-types: types-tqdm==4.67.0.20250516 types-boto3==1.40.4 types-Pillow==10.2.0.20240822
    pylint-options: --disable=E1101
```

.github/workflows/project-time-in-area-test-analytics.yml (6)
298-314: Quote $GITHUB_OUTPUT to avoid word-splitting (SC2086)

Shellcheck warns about unquoted redirections. Quote $GITHUB_OUTPUT everywhere it’s used.
Apply this diff:
```diff
-          echo "vis_result=$VIS_RESULT" >> $GITHUB_OUTPUT
-          echo "telegraf_result=$TELEGRAF_RESULT" >> $GITHUB_OUTPUT
+          echo "vis_result=$VIS_RESULT" >> "$GITHUB_OUTPUT"
+          echo "telegraf_result=$TELEGRAF_RESULT" >> "$GITHUB_OUTPUT"
@@
-          echo "all_passed=true" >> $GITHUB_OUTPUT
+          echo "all_passed=true" >> "$GITHUB_OUTPUT"
@@
-          echo "all_passed=false" >> $GITHUB_OUTPUT
+          echo "all_passed=false" >> "$GITHUB_OUTPUT"
```
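A quick standalone illustration of why the quoting matters (the path below is hypothetical): a GITHUB_OUTPUT-style target path containing a space only works as a redirection target when quoted; unquoted, bash rejects it as an ambiguous redirect.

```shell
# Simulate a runner-provided output file whose path contains a space.
dir=$(mktemp -d)
GITHUB_OUTPUT="$dir/step output.txt"
echo "vis_result=success" >> "$GITHUB_OUTPUT"   # quoted: appends as intended
echo "all_passed=true" >> "$GITHUB_OUTPUT"
cat "$GITHUB_OUTPUT"
```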
50-51: Use grep -c instead of grep|wc -l (SC2126)

Directly count matches with grep -c; it’s simpler and avoids a subshell + wc.
Apply this diff:
```diff
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -Ec "^\s+track_")
@@
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -Ec "^\s+track_")
+          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
@@
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -Ec "^\s+track_")
+          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
```

Also applies to: 81-83, 115-117
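A self-contained example of the `grep -c` form (sample input invented here; note that `\s` in `grep -E` is a GNU extension, so the portable `[[:space:]]` class is used below):

```shell
# grep -c prints the number of matching lines directly, replacing grep | wc -l.
count=$(printf '  track_001\n  track_002\nother\n' | grep -Ec '^[[:space:]]+track_')
echo "$count"   # prints 2
```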
142-147: Replace deprecated apt-key usage when adding the InfluxData repo

apt-key is deprecated. Use a signed-by keyring with gpg --dearmor.
Apply this diff:
```diff
-          wget -qO- https://repos.influxdata.com/influxdata-archive_compat.key | sudo apt-key add -
-          echo "deb https://repos.influxdata.com/ubuntu $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
+          wget -qO- https://repos.influxdata.com/influxdata-archive_compat.key | gpg --dearmor | sudo tee /usr/share/keyrings/influxdb-archive_compat.gpg >/dev/null
+          . /etc/os-release
+          echo "deb [signed-by=/usr/share/keyrings/influxdb-archive_compat.gpg] https://repos.influxdata.com/ubuntu ${VERSION_CODENAME} stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
```
156-159: Split export assignment to avoid masking return values (SC2155)

Declare and export separately to silence shellcheck and avoid masking.
Apply this diff:
```diff
-          export HELPER_FILES_DIR="$(pwd)"
-          export SAMPLE_FILE="test_files/simple_tracks.jsonl"
-          export ALERT_THRESHOLD_SECONDS="15"
+          HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+          SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+          ALERT_THRESHOLD_SECONDS="15"; export ALERT_THRESHOLD_SECONDS
@@
-          export HELPER_FILES_DIR="$(pwd)"
-          export SAMPLE_FILE="test_files/simple_tracks.jsonl"
-          export ALERT_THRESHOLD_SECONDS="2"
+          HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+          SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+          ALERT_THRESHOLD_SECONDS="2"; export ALERT_THRESHOLD_SECONDS
@@
-          export HELPER_FILES_DIR="$(pwd)"
-          export SAMPLE_FILE="test_files/simple_tracks.jsonl"
-          export ALERT_THRESHOLD_SECONDS="0"
+          HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+          SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+          ALERT_THRESHOLD_SECONDS="0"; export ALERT_THRESHOLD_SECONDS
```

Also applies to: 196-199, 242-245
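A runnable demonstration of the masking that SC2155 warns about (variable names invented for the demo): `export VAR=$(cmd)` reports export's exit status (0) even when cmd fails, while a separate assignment preserves cmd's status.

```shell
set +e   # keep going so both exit statuses can be printed

export COMBINED=$(false)
echo "combined exit: $?"   # prints 0 — the failure of `false` is masked
SEPARATE=$(false)
echo "separate exit: $?"   # prints 1 — the failure is visible
export SEPARATE
```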
366-366: Trim trailing whitespace (YAML lint)

There’s trailing whitespace on this line.
Apply this diff:
```diff
-- The time-in-area calculation or threshold filtering may not be working correctly 
+- The time-in-area calculation or threshold filtering may not be working correctly
```
34-62: Make visualization test parsing more robust (machine-readable output)

Grep-based parsing of the “Tracks with alarms” free-text output is brittle and easy to break as the script’s logging evolves. Since track_heatmap_viewer.py already prints:

```python
print(f"\nTracks with alarms (>= {alarm_threshold}s):")
for track_id in sorted(alarm_tracks):
    print(f"  {track_id}")
```

consider adding a new flag (e.g. --json-summary or --summary) that emits a concise JSON payload:

```
{ "alarms": <number>, "tracks": ["track_1", "track_2", …] }
```

Then update each workflow step in .github/workflows/project-time-in-area-test-analytics.yml (lines 34–62, 63–96, 97–130) to use jq for assertions:

```bash
OUTPUT=$(python test_scripts/track_heatmap_viewer.py ... --json-summary)
ACTUAL_ALARMS=$(echo "$OUTPUT" | jq '.alarms')
```

This will make tests far more stable and self-documenting.
• Files to update:
  - .github/workflows/project-time-in-area-test-analytics.yml
  - project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py

I’m happy to draft a PR with a JSON summary mode and updated workflow steps—just let me know!
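A minimal sketch of what such a summary emitter could look like (function and flag names are assumptions for illustration, not taken from track_heatmap_viewer.py):

```python
import json

def summarize(alarm_tracks):
    """Build the proposed {"alarms": N, "tracks": [...]} payload as JSON text."""
    tracks = sorted(alarm_tracks)
    return json.dumps({"alarms": len(tracks), "tracks": tracks})

# A CLI would print this when --json-summary is passed:
print(summarize({"track_2", "track_1"}))
# → {"alarms": 2, "tracks": ["track_1", "track_2"]}
```

CI can then assert on `jq '.alarms'` and `jq -r '.tracks[]'` instead of greping log text.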
project-time-in-area-analytics/test_scripts/README.md (3)
36-42: Use “Axis devices” to align with repo terminology

Prefer “Axis devices” to keep examples broadly applicable (even though this script targets camera analytics).
Apply this diff:
```diff
-The `record_real_data.py` script allows you to record live analytics scene description data from Axis cameras for testing and analysis purposes.
+The `record_real_data.py` script allows you to record live analytics scene description data from Axis devices (e.g., cameras) for testing and analysis purposes.
```
128-133: Add manual anchor for heading with special character (“&”)

Per docs guidelines, add an HTML anchor for headings with special characters and update the TOC link to reference it.
Apply this diff:
```diff
-- [Features & Example Output](#features--example-output)
+- [Features & Example Output](#features-example-output)
@@
-### Features & Example Output
+<a id="features-example-output"></a>
+### Features & Example Output
```
68-76: Document minimum required privileges for credentials

Please specify the minimal privileges needed (e.g., viewer/operator/admin) for the SSH account to access the message broker, and advise against using higher privileges than necessary.
I can propose precise wording once you confirm which roles can read the broker on AXIS OS 12+ devices.
project-time-in-area-analytics/README.md (3)
121-127: Call out POSIX shell requirement for .sh helper scripts

Per repo guidance, Axis devices do not have bash. Ensure helper scripts are POSIX sh compatible and documented as such.
Would you like me to scan test_files/sample_data_feeder.sh for bashisms and propose a POSIX-safe rewrite?
137-159: Clarify host vs. device behavior and minimum privileges for credentials

Please state minimum required privileges for any credentials used (viewer/operator/admin) and prefer least-privilege. Also clarify that the host examples run locally and device-side uses the Axis message broker.
Apply this diff to include minimum-privileges guidance:
```diff
 ### Host Testing Limitations
@@
-**Only works in the Axis Device:**
+**Only works on the Axis device:**
@@
-You can also test with real analytics scene description data recorded from an Axis device:
+You can also test with real analytics scene description data recorded from an Axis device (use least-privilege account, e.g., operator if sufficient; avoid admin unless required):
```
328-345: Viewer vs. Telegraf 60s-gap handling: confirm consistency claim

README claims both tools behave identically and that Telegraf forgets a track after 60s gaps. The viewer currently accumulates time since first seen and doesn’t reset after long gaps. Please either:
- document that the viewer approximates “total since first seen,” or
- update the viewer to reset after gaps > 60s to match Telegraf.
I’ve proposed a concrete code change in track_heatmap_viewer.py to align behavior; see that review for details.
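For the second option, the gap-reset behavior described for Telegraf can be sketched as follows (a minimal illustration with assumed names, not the viewer's actual code):

```python
# Reset a track's dwell accumulation when the gap since the last observation
# exceeds 60 s, matching the described Telegraf "forget after 60s" behavior.
GAP_RESET_SECONDS = 60.0

def update_dwell(state, track_id, ts):
    """state maps track_id -> (first_seen, last_seen); returns dwell seconds."""
    if track_id not in state:
        state[track_id] = (ts, ts)
    else:
        first, last = state[track_id]
        if ts - last > GAP_RESET_SECONDS:
            first = ts  # long gap: forget the old episode, start a new one
        state[track_id] = (first, ts)
    first, last = state[track_id]
    return last - first

state = {}
print(update_dwell(state, "track_1", 0.0))    # → 0.0
print(update_dwell(state, "track_1", 10.0))   # → 10.0
print(update_dwell(state, "track_1", 100.0))  # → 0.0 (gap > 60 s resets)
```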
project-time-in-area-analytics/test_scripts/record_real_data.py (2)
299-311: Use 'raise … from None' to satisfy Ruff B904 and clarify exception origins

Ruff flagged these raises in except blocks. Use exception chaining to avoid implying handler errors.
Apply this diff:
```diff
-    except KeyboardInterrupt:
-        click.echo("\n❌ Cancelled by user")
-        raise click.Abort()
+    except KeyboardInterrupt:
+        click.echo("\n❌ Cancelled by user")
+        raise click.Abort() from None
@@
-    except KeyboardInterrupt:
-        click.echo("\n❌ Cancelled by user")
-        raise click.Abort()
+    except KeyboardInterrupt:
+        click.echo("\n❌ Cancelled by user")
+        raise click.Abort() from None
```
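A standalone illustration of what `from None` changes (generic example, not the project's click-based code): it suppresses the implicit exception chaining that B904 warns about, so tracebacks don't imply the new exception arose while handling the old one.

```python
def convert(value):
    try:
        return int(value)
    except ValueError:
        # `from None` clears __context__ display and sets __cause__ to None.
        raise RuntimeError(f"bad value: {value!r}") from None

try:
    convert("abc")
except RuntimeError as exc:
    print(exc.__cause__, exc.__suppress_context__)  # → None True
```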
52-71: Auto-adding unknown host keys is convenient but weakens SSH trust

AutoAddPolicy() trusts any host key on first connect. For production or sensitive environments, prefer RejectPolicy + known_hosts management or make this user-configurable via a flag.
I can add a --insecure-auto-add-host-key flag defaulting to False and switch the policy accordingly. Want me to push that change?
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
- project-time-in-area-analytics/.images/track-heatmap-120s.png is excluded by !**/*.png
- project-time-in-area-analytics/.images/track-heatmap-simple.png is excluded by !**/*.png
📒 Files selected for processing (16)
- .github/actions/python-quality-check/action.yml (1 hunks)
- .github/workflows/project-time-in-area-analytics-python-quality.yml (1 hunks)
- .github/workflows/project-time-in-area-test-analytics.yml (1 hunks)
- .github/workflows/project-timelapse-s3-python-quality.yml (1 hunks)
- project-time-in-area-analytics/README.md (1 hunks)
- project-time-in-area-analytics/config_process_threshold_filter.conf (1 hunks)
- project-time-in-area-analytics/config_process_track_duration.conf (1 hunks)
- project-time-in-area-analytics/test_files/config_input_sample_data.conf (1 hunks)
- project-time-in-area-analytics/test_files/config_output_stdout.conf (1 hunks)
- project-time-in-area-analytics/test_files/sample_data_feeder.sh (1 hunks)
- project-time-in-area-analytics/test_files/simple_tracks.jsonl (1 hunks)
- project-time-in-area-analytics/test_scripts/README.md (1 hunks)
- project-time-in-area-analytics/test_scripts/record_real_data.py (1 hunks)
- project-time-in-area-analytics/test_scripts/requirements.txt (1 hunks)
- project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (1 hunks)
- project-time-in-area-analytics/track_duration_calculator.star (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (3)
- project-time-in-area-analytics/test_files/config_output_stdout.conf
- project-time-in-area-analytics/config_process_threshold_filter.conf
- project-time-in-area-analytics/track_duration_calculator.star
🧰 Additional context used
📓 Path-based instructions (3)
project-*/**
⚙️ CodeRabbit Configuration File
project-*/**: Each example project for the FixedIT Data Agent should be self-contained and designed as a standalone example / tutorial showing how to use the FixedIT Data Agent. The project should be well documented and include all the necessary files to run the example. A reader should be able to understand the background and the purpose of the project, and be able to understand the implementation details and how to use it. If the project includes a dashboard, it should be placed in a new folder in the repo-root/dashboard-deployments/ directory instead of being included in the project folder. In the README for the project, the first time the FixedIT Data Agent is mentioned, we should link to the product page: https://fixedit.ai/products-data-agent/ Emphasize that these solutions are accessible to system integrators and IT professionals without requiring complex C/C++ ACAP development. Highlight that custom edge intelligence can be implemented using familiar IT tools (configuration files, shell scripts, REST APIs) rather than traditional embedded programming approaches. Note that bash does not exist on the Axis devices, so .sh scripts should be written in portable shell syntax.
README Structure: All project README files should follow the standardized template at `.project_readme_template.md`. This template ensures consistent documentation that serves both audiences: (1) IT professionals who want plug-and-play solutions, and (2) developers who want to customize and extend the examples. The template enforces a logical structure: 1) Introduction and background (project purpose, how it works, why choose this approach) 2) Technical explanation and quick start guide (compatibility, setup, troubleshooting) 3) Developer-oriented notes (local testing, implementation details)
Key README Requirements: - Include compatibility sections for both AXIS OS and FixedIT Data Agent versions - Explain host testing and its limitations and why certain parts require actual Axis d...
Files:
- project-time-in-area-analytics/test_scripts/requirements.txt
- project-time-in-area-analytics/test_files/config_input_sample_data.conf
- project-time-in-area-analytics/test_scripts/record_real_data.py
- project-time-in-area-analytics/config_process_track_duration.conf
- project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
- project-time-in-area-analytics/test_files/simple_tracks.jsonl
- project-time-in-area-analytics/test_files/sample_data_feeder.sh
- project-time-in-area-analytics/README.md
- project-time-in-area-analytics/test_scripts/README.md
**/*
⚙️ CodeRabbit Configuration File
This repository serves FixedIT Data Agent users across a spectrum from plug-and-play dashboard deployment to advanced edge device customization. Consider whether changes maintain accessibility for both DevOps professionals (server-side focus) and developers (edge customization focus). If new features are added or existing ones changed significantly, ensure documentation clearly explains the intended audience and usage level. We use prettier for formatting of common file formats like markdown, yaml, json, etc. Example projects should be placed in the repo-root in a directory named project-*/. Whenever referencing Axis device model names like "D4100-VE mk II" or "M3045", prefix it with "AXIS" in capital letters (e.g. "AXIS D4100-VE mk II"). When using Axis as a company name, use "Axis Communications", note that all-caps is only used when "AXIS" is used in their product names, not their company name. When using the name of an Axis product, google on the name to verify that it is correctly identified. Avoid using 'cameras' or 'Axis cameras' unless the solution in related to visual analytics, otherwise prefer using 'Axis devices' to show that the FixedIT Data Agent also works with strobes, speakers, door stations, etc. Images used for the README should be placed in a directory called .images/ in the affected project folder. These images might also be referred to from other sources like the top-level README.md file.
Files:
- project-time-in-area-analytics/test_scripts/requirements.txt
- project-time-in-area-analytics/test_files/config_input_sample_data.conf
- project-time-in-area-analytics/test_scripts/record_real_data.py
- project-time-in-area-analytics/config_process_track_duration.conf
- project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
- project-time-in-area-analytics/test_files/simple_tracks.jsonl
- project-time-in-area-analytics/test_files/sample_data_feeder.sh
- project-time-in-area-analytics/README.md
- project-time-in-area-analytics/test_scripts/README.md
**/*.md
⚙️ CodeRabbit Configuration File
**/*.md: Documentation files should clearly communicate the dual audience: (1) server-side dashboard users who want to keep agent with bundled configs, and (2) edge device developers who want to customize agent behavior. Ensure examples and instructions are appropriate for the intended skill level and use case. Since this is a public repository, we should not include any sensitive information, the instructions should be easily understandable for a wide audience, and we should avoid using any jargon or technical terms that are not commonly used.
Anchor Links: For headings that contain Unicode characters (emojis, special characters) or backticks, use HTML anchor tags to ensure consistent linking behavior across different platforms (GitHub, VS Code, etc.). Add a separate line with `<a id="anchor-name"></a>` before the heading, and update the table of contents to reference this manual anchor.
Files:
- project-time-in-area-analytics/README.md
- project-time-in-area-analytics/test_scripts/README.md
🧬 Code Graph Analysis (2)
project-time-in-area-analytics/test_scripts/record_real_data.py (1)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (1)
main(712-773)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (1)
project-time-in-area-analytics/test_scripts/record_real_data.py (1)
main(242-351)
🪛 actionlint (1.7.7)
.github/workflows/project-time-in-area-test-analytics.yml
25-25: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
35-35: shellcheck reported issue in this script: SC2126:style:15:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
64-64: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
98-98: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
150-150: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
190-190: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
236-236: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
294-294: shellcheck reported issue in this script: SC2086:info:4:34: Double quote to prevent globbing and word splitting
(shellcheck)
294-294: shellcheck reported issue in this script: SC2086:info:5:44: Double quote to prevent globbing and word splitting
(shellcheck)
294-294: shellcheck reported issue in this script: SC2086:info:14:29: Double quote to prevent globbing and word splitting
(shellcheck)
294-294: shellcheck reported issue in this script: SC2086:info:19:30: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 YAMLlint (1.37.1)
.github/workflows/project-time-in-area-test-analytics.yml
[error] 366-366: trailing spaces
(trailing-spaces)
🪛 Ruff (0.12.2)
project-time-in-area-analytics/test_scripts/record_real_data.py
308-308: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
311-311: Within an except clause, raise exceptions with raise ... from err or raise ... from None to distinguish them from errors in exception handling
(B904)
🪛 GitHub Actions: Python Code Quality Check for the time-in-area-analytics project
project-time-in-area-analytics/test_scripts/record_real_data.py
[error] 17-17: Mypy error: Library stubs not installed for 'paramiko' (import-untyped). Install type hints: 'python3 -m pip install types-paramiko' or run 'mypy --install-types'.
🪛 LanguageTool
project-time-in-area-analytics/README.md
[grammar] ~52-~52: There might be a mistake here.
Context: ...Contents - Compatibility - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~53-~53: There might be a mistake here.
Context: ...ompatibility) - AXIS OS Compatibility - [FixedIT Data Agent Compatibility](#fixed...
(QB_NEW_EN)
[grammar] ~54-~54: There might be a mistake here.
Context: ...y) - FixedIT Data Agent Compatibility - Quick Setup - [Troubles...
(QB_NEW_EN)
[grammar] ~55-~55: There might be a mistake here.
Context: ...data-agent-compatibility) - Quick Setup - Troubleshooting - [Co...
(QB_NEW_EN)
[grammar] ~56-~56: There might be a mistake here.
Context: ...etup](#quick-setup) - Troubleshooting - [Configuration Files](#configuration-file...
(QB_NEW_EN)
[grammar] ~57-~57: There might be a mistake here.
Context: ...#troubleshooting) - Configuration Files - [config_process_track_duration.conf and t...
(QB_NEW_EN)
[grammar] ~58-~58: There might be a mistake here.
Context: ....conf and track_duration_calculator.star](#config_process_track_durationconf-and-track_duration_calculatorstar) - [config_process_threshold_filter.conf](#c...
(QB_NEW_EN)
[grammar] ~59-~59: There might be a mistake here.
Context: ... - config_process_threshold_filter.conf - [test_files/config_output_stdout.conf](#t...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ... - test_files/config_output_stdout.conf - [test_files/sample_data_feeder.sh](#test_...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...f) - test_files/sample_data_feeder.sh - [Future Enhancements](#future-enhancement...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...le_data_feedersh) - Future Enhancements - [Local Testing on Host](#local-testing-on...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...e-enhancements) - Local Testing on Host - Prerequisites - [Host...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...ocal-testing-on-host) - Prerequisites - [Host Testing Limitations](#host-testing-...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...equisites) - Host Testing Limitations - Test Commands - [Te...
(QB_NEW_EN)
[grammar] ~66-~66: There might be a mistake here.
Context: ...-testing-limitations) - Test Commands - [Test Time in Area Calculation Only](#tes...
(QB_NEW_EN)
[grammar] ~67-~67: There might be a mistake here.
Context: ... - Test Time in Area Calculation Only - [Test Complete Alert Pipeline](#test-comp...
(QB_NEW_EN)
[grammar] ~68-~68: There might be a mistake here.
Context: ...nly) - Test Complete Alert Pipeline - [Test with Real Device Data](#test-with-r...
(QB_NEW_EN)
[grammar] ~69-~69: There might be a mistake here.
Context: ...eline) - Test with Real Device Data - [Analytics Data Structure](#analytics-dat...
(QB_NEW_EN)
[grammar] ~70-~70: There might be a mistake here.
Context: ...device-data) - Analytics Data Structure - Data Format - [Data Beh...
(QB_NEW_EN)
[grammar] ~71-~71: There might be a mistake here.
Context: ...alytics-data-structure) - Data Format - Data Behavior - [Data...
(QB_NEW_EN)
[grammar] ~72-~72: There might be a mistake here.
Context: ...Format](#data-format) - Data Behavior - [Data Transformation for Telegraf](#data-...
(QB_NEW_EN)
[grammar] ~73-~73: There might be a mistake here.
Context: ...r) - Data Transformation for Telegraf - [Track Activity Visualization](#track-act...
(QB_NEW_EN)
[grammar] ~74-~74: There might be a mistake here.
Context: ...elegraf) - Track Activity Visualization - Automated Testing ...
(QB_NEW_EN)
[grammar] ~75-~75: There might be a mistake here.
Context: ...vity-visualization) - Automated Testing - GitHub Workflow - [...
(QB_NEW_EN)
[grammar] ~76-~76: There might be a mistake here.
Context: ...#automated-testing) - GitHub Workflow - Test Data - [PR Comments]...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ...rkflow](#github-workflow) - Test Data - PR Comments <!-- tocstop...
(QB_NEW_EN)
[grammar] ~86-~86: There might be a mistake here.
Context: ...ity - Minimum AXIS OS version: TODO - Required tools: TODO ### FixedIT Data...
(QB_NEW_EN)
[grammar] ~91-~91: There might be a mistake here.
Context: ...y - Minimum Data Agent version: 1.0 - Required features: TODO ## Quick Setu...
(QB_NEW_EN)
[grammar] ~98-~98: There might be a mistake here.
Context: ...in the Uploaded helper files section (refresh page to see updates). Note: Don't ...
(QB_NEW_EN)
[grammar] ~151-~151: There might be a mistake here.
Context: ...processor logic testing with sample data - MQTT output configuration validation (TO...
(QB_NEW_EN)
[grammar] ~152-~152: There might be a mistake here.
Context: ...T output configuration validation (TODO) - Alert threshold configuration testing *...
(QB_NEW_EN)
[grammar] ~177-~177: There might be a mistake here.
Context: ... --once ``` Expected Output: All detections with `time_in_area_second...
(QB_NEW_EN)
[grammar] ~198-~198: There might be a mistake here.
Context: ... --once ``` Expected Output: Only detections with time in area (`time...
(QB_NEW_EN)
[grammar] ~296-~296: There might be a mistake here.
Context: ...ed observations into individual messages - Preserves object bounding box coordina...
(QB_NEW_EN)
[grammar] ~297-~297: There might be a mistake here.
Context: ...serves** object bounding box coordinates - Simplifies object classification to ju...
(QB_NEW_EN)
[grammar] ~298-~298: There might be a mistake here.
Context: ...* object classification to just the type - Skips frames with no observations enti...
(QB_NEW_EN)
[grammar] ~349-~349: There might be a mistake here.
Context: ...ata with: - track_001: Appears twice with 8s gap (total time: 11.33s) - `track_00...
(QB_NEW_EN)
[grammar] ~349-~349: There might be a mistake here.
Context: ...s twice with 8s gap (total time: 11.33s) - track_002: Continuous presence for 2.22s - `track...
(QB_NEW_EN)
[grammar] ~350-~350: There might be a mistake here.
Context: ...rack_002: Continuous presence for 2.22s - track_003: Continuous presence for 2.22s - track...
(QB_NEW_EN)
[grammar] ~351-~351: There might be a mistake here.
Context: ...rack_003: Continuous presence for 2.22s - track_004`: Single appearance (0s) ![Example data...
(QB_NEW_EN)
[grammar] ~360-~360: There might be a mistake here.
Context: ...Success confirmation when all tests pass - ❌ Specific failure diagnostics and troub...
(QB_NEW_EN)
project-time-in-area-analytics/test_scripts/README.md
[grammar] ~9-~9: There might be a mistake here.
Context: ... Contents - Installation - [Recording Real Device Data](#recording-r...
(QB_NEW_EN)
[grammar] ~10-~10: There might be a mistake here.
Context: ...tallation) - Recording Real Device Data - Basic Usage - [Advanced...
(QB_NEW_EN)
[grammar] ~11-~11: There might be a mistake here.
Context: ...rding-real-device-data) - Basic Usage - Advanced Usage - [Pa...
(QB_NEW_EN)
[grammar] ~12-~12: There might be a mistake here.
Context: ...Usage](#basic-usage) - Advanced Usage - Parameters - [Authentica...
(QB_NEW_EN)
[grammar] ~13-~13: There might be a mistake here.
Context: ... Usage](#advanced-usage) - Parameters - Authentication - [AX...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...meters](#parameters) - Authentication - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~15-~15: There might be a mistake here.
Context: ...thentication) - AXIS OS Compatibility - Use Cases - [Track Heatmap ...
(QB_NEW_EN)
[grammar] ~16-~16: There might be a mistake here.
Context: ...](#axis-os-compatibility) - Use Cases - [Track Heatmap Visualization](#track-heat...
(QB_NEW_EN)
[grammar] ~17-~17: There might be a mistake here.
Context: ...se-cases) - Track Heatmap Visualization - Basic Usage - [Advanc...
(QB_NEW_EN)
[grammar] ~18-~18: There might be a mistake here.
Context: ...-heatmap-visualization) - Basic Usage - Advanced Usage - [...
(QB_NEW_EN)
[grammar] ~19-~19: There might be a mistake here.
Context: ...age](#basic-usage-1) - Advanced Usage - [Features & Example Output](#features--ex...
(QB_NEW_EN)
[grammar] ~20-~20: There might be a mistake here.
Context: ...-usage-1) - Features & Example Output - [Activity Percentage Calculation](#activi...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...) - Activity Percentage Calculation - Prerequisites <!-- toc...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ...dress or hostname (default: 192.168.1.2) - --username, -u: SSH username (default: acap-fixeditdat...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...sername (default: acap-fixeditdataagent) - --password, -p: SSH password (optional, will prompt if...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...ssword (optional, will prompt if needed) - --duration, -d: Recording duration in seconds (default...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...ording duration in seconds (default: 30) - --topic: Message broker topic to consume (defau...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...xis.analytics_scene_description.v0.beta) - --source: Message broker source (default: 1) - `...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...rce: Message broker source (default: 1) - --output-file, -o`: Output file path (default: test_files/...
(QB_NEW_EN)
[grammar] ~93-~93: There might be a mistake here.
Context: ...rded data with visualization tools like track_heatmap_viewer.py - Debugging: Capture problematic scenari...
(QB_NEW_EN)
[grammar] ~137-~137: There might be a mistake here.
Context: ... white box)**: Shows key dataset metrics - Tracks: 42 - Total unique track IDs de...
(QB_NEW_EN)
[grammar] ~138-~138: There might be a mistake here.
Context: ...que track IDs detected across all frames - Frames: 1180 - Number of frames contai...
(QB_NEW_EN)
[grammar] ~139-~139: There might be a mistake here.
Context: ...one detection (not total elapsed frames) - Activity: 99.8% - Percentage of frames...
(QB_NEW_EN)
[grammar] ~140-~140: There might be a mistake here.
Context: ...entage of frames with detections present - Alarms: 5 - Number of tracks exceeding...
(QB_NEW_EN)
[grammar] ~141-~141: There might be a mistake here.
Context: ...s exceeding the alarm threshold duration - X-axis (Time): Timestamps of frames wi...
(QB_NEW_EN)
[grammar] ~142-~142: There might be a mistake here.
Context: ...me gaps without detections are not shown - Y-axis (Track IDs): Individual object ...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ... 3effc, 58cef) sorted alphabetically - Color Legend (right side): Visual scal...
(QB_NEW_EN)
🔇 Additional comments (12)
project-time-in-area-analytics/test_files/simple_tracks.jsonl (1)
6-6: Heads-up: DeleteOperation frames are currently dropped by the feeder

Because the current feeder (test_files/sample_data_feeder.sh) emits only observations, this DeleteOperation (Line 6) won't reach the processing pipeline. If the algorithm needs delete events to end dwell windows, tests may undercount or never finalize. See suggested feeder update to emit operations as separate events.
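Why delete events matter can be shown with a minimal dwell-time tracker sketched in Python. This is an illustrative model only, not the actual track_duration_calculator.star logic; the class and field names are invented for the example.

```python
# Minimal dwell-time tracker sketch. Illustrative only: this is NOT the
# actual track_duration_calculator.star logic; names are invented.
class DwellTracker:
    def __init__(self):
        self.first_seen = {}  # track_id -> first observation time (s)
        self.last_seen = {}   # track_id -> latest observation time (s)
        self.finalized = {}   # track_id -> total dwell time (s)

    def observe(self, track_id, ts):
        # Open the dwell window on first sight, extend it on every frame.
        self.first_seen.setdefault(track_id, ts)
        self.last_seen[track_id] = ts

    def delete(self, track_id):
        # Only an explicit delete event finalizes a track. A feeder that
        # drops DeleteOperation frames leaves windows open forever.
        if track_id in self.first_seen:
            self.finalized[track_id] = self.last_seen[track_id] - self.first_seen[track_id]
            del self.first_seen[track_id], self.last_seen[track_id]

tracker = DwellTracker()
tracker.observe("track_001", 0.0)
tracker.observe("track_001", 11.33)
tracker.delete("track_001")
print(tracker.finalized)  # -> {'track_001': 11.33}
```

If the delete call never arrives, `finalized` stays empty, which is exactly the undercounting risk described above.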
project-time-in-area-analytics/test_files/config_input_sample_data.conf (2)
16-28: LGTM: Clear and focused execd input for local test feeder

The execd input is well-scoped for test data, sets a stable measurement name, and preserves key string fields. Works well with the rest of the pipeline.
26-28: Confirm `json_string_fields` matches your feeder's flattened JSON schema

The sample feeder emits raw nested JSON under `frame` (e.g. `{"frame": { "observations": [ { "timestamp":"…", "track_id":"…" } ], "timestamp":"…" }}`), so Telegraf's JSON parser will flatten those keys into names like `frame_timestamp` and `frame_observations_track_id`. The current `json_string_fields = ["timestamp", "track_id", "object_type"]` won't match the flattened field names. If you see type coercion (IDs or timestamps parsed as numbers/booleans), then:

- Update `json_string_fields` to the actual flattened keys, for example: `json_string_fields = [ "frame_timestamp", "frame_observations_track_id" ]`
- Or adjust your feeder to emit Telegraf metric-format JSON (with a top-level `fields` object) so the parser picks up your named fields directly.
- Optionally promote identifiers to tags instead: `tag_keys = ["track_id", "object_type"]`

To validate locally, run a dry-run of the file input plugin:

  telegraf \
    --config project-time-in-area-analytics/test_files/config_input_sample_data.conf \
    --test \
    --input-filter file

and confirm that `timestamp` and `track_id` remain strings in the parsed output.

project-time-in-area-analytics/config_process_track_duration.conf (1)
11-18: LGTM: Starlark processor wiring is concise and scoped

`namepass` targets the right measurement, and the script path is parameterized via HELPER_FILES_DIR for portability. Good separation of concerns with debug metric emission left to the script.
.github/workflows/project-time-in-area-analytics-python-quality.yml (1)
21-29: LGTM: Python quality workflow correctly targets test_scripts and uses local composite action

Workflow name and path filters match the project. The local action usage keeps checks standardized across projects.
project-time-in-area-analytics/README.md (3)
321-354: Image path and assets look good

Images are stored under .images/ in the project folder and referenced correctly. This follows repo guidelines.
48-80: Table of contents looks comprehensive; verify anchors resolve correctly on GitHub

The TOC uses auto-generated anchors for file-like headings. GitHub's anchor normalization strips slashes and dots. Please verify that links like [test_files/config_output_stdout.conf] resolve to the correct section.
Run this quick check locally by clicking through the anchors on your branch’s README page. If any fail, add manual anchors using HTML tags as per repo guidelines.
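GitHub's anchor generation can be approximated in a few lines to sanity-check such links. The rule sketched below (lowercase, drop punctuation including dots and slashes, replace spaces with hyphens) is an approximation of GitHub's behavior, not its exact algorithm:

```python
import re

def approx_github_anchor(heading: str) -> str:
    # Rough approximation of GitHub's heading-to-anchor rule:
    # lowercase, strip punctuation (including '.' and '/'),
    # then turn spaces into hyphens.
    s = heading.strip().lower()
    s = re.sub(r"[^\w\- ]", "", s)
    return s.replace(" ", "-")

print(approx_github_anchor("test_files/config_output_stdout.conf"))
# -> test_filesconfig_output_stdoutconf
print(approx_github_anchor("Quick Setup"))
# -> quick-setup
```

So a TOC link for a file-like heading must omit the slashes and dots, as the auto-generated entries above already do.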
1-46: Strong introduction and value proposition

Clear explanation of the problem, approach, and "no C/C++ development required" value with correct FixedIT Data Agent links. Well aligned with guidelines.
project-time-in-area-analytics/test_scripts/record_real_data.py (1)
220-242: CLI UX and defaults look good

Click-based CLI, helpful defaults, and clear help texts align with our Python CLI guidelines.
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (3)
613-690: Headless rendering control works as intended

The `no-ui` flag prevents GUI usage, which is important for CI. Ticks, labels, and colorbar are well set up.
261-300: Robust JSONL parsing with clear error messages

Good validation and exception chaining. This will help users identify malformed lines quickly.
693-778: CLI UX matches project conventions

Click-driven CLI with examples and error handling is consistent with repo standards.
- name: Set up Python 3.13
  uses: actions/setup-python@v4
  with:
    python-version: "3.13"
Update setup-python to v5 to avoid runner incompatibility
actionlint flags actions/setup-python@v4 as deprecated/incompatible on current runners. Use v5.
Apply this diff:
- - name: Set up Python 3.13
- uses: actions/setup-python@v4
+ - name: Set up Python 3.13
+ uses: actions/setup-python@v5
with:
    python-version: "3.13"

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
- - name: Set up Python 3.13
-   uses: actions/setup-python@v4
+ - name: Set up Python 3.13
+   uses: actions/setup-python@v5
    with:
      python-version: "3.13"
🧰 Tools
🪛 actionlint (1.7.7)
25-25: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🤖 Prompt for AI Agents
.github/workflows/project-time-in-area-test-analytics.yml around lines 24 to 27:
the workflow pins actions/setup-python@v4 which actionlint reports as
deprecated/incompatible; update the uses entry to actions/setup-python@v5 (and,
if present, ensure any input keys remain compatible with v5) so the workflow
uses the latest supported setup-python action.
## Quick Setup

### Troubleshooting

Enable the `Debug` option in the FixedIT Data Agent for detailed logs. Debug files will appear in the `Uploaded helper files` section (refresh page to see updates).

**Note**: Don't leave debug enabled long-term as it creates large log files.

## Configuration Files

This project uses several configuration files that work together to create a time-in-area analytics pipeline:
🛠️ Refactor suggestion
Quick Setup section is empty; add a minimal end-to-end guide
The template requires a concrete quick start. Place this before Troubleshooting so users can run the example without guessing.
Apply this diff to add a succinct quick start:
## Quick Setup
-
-### Troubleshooting
+1) Clone this repo and cd into project directory:
+ git clone https://github.com/fixedit-ai/fixedit-data-agent-examples.git
+ cd fixedit-data-agent-examples/project-time-in-area-analytics
+
+2) Host testing (no device required):
+ - Install Telegraf on your workstation
+ - Export env for sample data:
+ HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+ SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+ - Run time-in-area only:
+ telegraf --config test_files/config_input_sample_data.conf \
+ --config config_process_track_duration.conf \
+ --config test_files/config_output_stdout.conf \
+ --once
+
+3) On-device testing (Axis device required):
+ - Ensure FixedIT Data Agent is installed and running on the device
+ - Upload config files from this folder into the agent’s helper files
+ - Configure MQTT endpoint or stdout (for testing) in the output config
+ - Start the pipeline and monitor logs (enable Debug temporarily)
+
+4) Optional: record real device data to use as SAMPLE_FILE:
+ python test_scripts/record_real_data.py --host <device_ip> --username <dev_user> -o test_files/real_device_data.jsonl
+
+### Troubleshooting

📝 Committable suggestion
## Quick Setup
### Troubleshooting
Enable the `Debug` option in the FixedIT Data Agent for detailed logs. Debug files will appear in the `Uploaded helper files` section (refresh page to see updates).
**Note**: Don't leave debug enabled long-term as it creates large log files.
## Configuration Files
This project uses several configuration files that work together to create a time-in-area analytics pipeline:
## Quick Setup
1) Clone this repo and cd into the project directory:
🧰 Tools
🪛 LanguageTool
[grammar] ~98-~98: There might be a mistake here.
Context: ...in the Uploaded helper files section (refresh page to see updates). Note: Don't ...
(QB_NEW_EN)
🤖 Prompt for AI Agents
In project-time-in-area-analytics/README.md around lines 94 to 104, the "Quick
Setup" section is empty; add a concise end-to-end quick start before
Troubleshooting that lists prerequisites (e.g., Node/Python version, agent
binary), installation steps (clone repo, install deps), how to run the example
pipeline (start agent, load sample config, run the demo command), how to verify
output (where to view results/logs), and a note to disable Debug after testing;
keep it minimal (3–6 steps) and use imperative verbs so users can follow
immediately.
jq -c '
.frame as $frame |
if ($frame.observations | length) > 0 then
  $frame.observations[] |
  {
    "frame": $frame.timestamp,
    "timestamp": .timestamp,
    "track_id": .track_id,
    "object_type": (.class.type // "null"),
    "bounding_box": .bounding_box
  }
else
  empty
end
' "$HELPER_FILES_DIR/$SAMPLE_FILE"
Emit operation events (DeleteOperation) in addition to observations
Currently frames without observations are dropped, which discards DeleteOperation events. This risks never finalizing dwell windows. Emit both observation and operation messages. Also prefer JSON null (not the string "null") for missing object_type.
Apply this diff:
-# Process all frames at once with a single jq call for optimal performance
-# This approach scales much better with large datasets than per-line processing
-jq -c '
-.frame as $frame |
-if ($frame.observations | length) > 0 then
- $frame.observations[] |
- {
- "frame": $frame.timestamp,
- "timestamp": .timestamp,
- "track_id": .track_id,
- "object_type": (.class.type // "null"),
- "bounding_box": .bounding_box
- }
-else
- empty
-end
-' "$HELPER_FILES_DIR/$SAMPLE_FILE"
+# Process all frames at once with a single jq call for optimal performance
+# Emit one line per observation and per operation (e.g., DeleteOperation)
+jq -c '
+ .frame as $frame
+ |
+ # Observations (if any)
+ if ($frame.observations | length) > 0 then
+ ($frame.observations[] |
+ {
+ "event_type": "observation",
+ "frame": $frame.timestamp,
+ "timestamp": .timestamp,
+ "track_id": .track_id,
+ "object_type": (.class.type // null),
+ "bounding_box": .bounding_box
+ })
+ else
+ empty
+ end,
+ # Operations (if any)
+ if ($frame.operations | length) > 0 then
+ ($frame.operations[] |
+ {
+ "event_type": "operation",
+ "frame": $frame.timestamp,
+ "timestamp": $frame.timestamp,
+ "track_id": .id,
+ "operation_type": .type
+ })
+ else
+ empty
+ end
+' "$HELPER_FILES_DIR/$SAMPLE_FILE"Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In project-time-in-area-analytics/test_files/sample_data_feeder.sh around lines
31 to 45, the jq filter currently drops frames without observations (losing
DeleteOperation events) and emits the string "null" for missing object_type;
change the filter so it emits observation objects for each observation and also
emits an operation object (e.g., DeleteOperation) for frames with no
observations (or emit both observation and a corresponding operation as
required), and ensure object_type uses a true JSON null (use // null, not "//
\"null\""). Adjust the jq branches to produce the additional operation JSON and
replace the ("null") fallback with null.
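The transformation the review asks for can be sketched in Python. The field names mirror the jq filter above, but the operations handling and the real JSON null are the proposed fixes, not current feeder behavior, and the sample frame is made up:

```python
import json

def explode_frame(line: str):
    """Yield one message per observation and per operation in a frame.

    Sketch of the suggested feeder fix: also emit operation events
    (e.g. DeleteOperation) and use a real null (None) for a missing
    object type instead of the string "null".
    """
    frame = json.loads(line)["frame"]
    for obs in frame.get("observations", []):
        yield {
            "event_type": "observation",
            "frame": frame["timestamp"],
            "timestamp": obs["timestamp"],
            "track_id": obs["track_id"],
            "object_type": (obs.get("class") or {}).get("type"),  # None, not "null"
            "bounding_box": obs.get("bounding_box"),
        }
    for op in frame.get("operations", []):
        yield {
            "event_type": "operation",
            "frame": frame["timestamp"],
            "timestamp": frame["timestamp"],
            "track_id": op.get("id"),
            "operation_type": op.get("type"),
        }

# Invented sample frame carrying one observation and one delete operation.
line = json.dumps({"frame": {
    "timestamp": "t0",
    "observations": [{"timestamp": "t0", "track_id": "track_001",
                      "class": {"type": "Human"}, "bounding_box": {}}],
    "operations": [{"type": "DeleteOperation", "id": "track_000"}],
}})
for msg in explode_frame(line):
    print(json.dumps(msg))
```

With this shape, a frame that carries only a DeleteOperation still produces an event line, so downstream dwell windows can be finalized.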
import click
import paramiko
💡 Verification agent
🧩 Analysis chain
Mypy failure: missing Paramiko stubs
CI reports “Library stubs not installed for 'paramiko'”. Two viable fixes:
- Preferred: add types-paramiko to the project’s pinned requirements for tooling (e.g., project-time-in-area-analytics/test_scripts/requirements.txt).
- Alternative: configure mypy to ignore missing imports for paramiko.
Proposed diffs:
- Add types-paramiko to test_scripts requirements (pin to latest you use):

  +# Add type stubs for mypy
  +types-paramiko==<pin-latest-verified>

- Or annotate the import to mute mypy (quick workaround):

  -import paramiko
  +import paramiko  # type: ignore[import-untyped]

Let me know which route you prefer; I can update the workflow to install the stubs as well.
🏁 Script executed:
#!/bin/bash
# Locate all requirements files
echo "Requirements files found:"
fd -t f 'requirements*.txt'
# Check for types-paramiko in test_scripts requirements
echo -e "\nChecking for types-paramiko in project-time-in-area-analytics/test_scripts/requirements.txt:"
grep -n 'types-paramiko' project-time-in-area-analytics/test_scripts/requirements.txt || echo "→ types-paramiko not found"
# Search for mypy ignore_missing_imports settings across the repo
echo -e "\nSearching for 'ignore_missing_imports' in configuration files:"
grep -R "ignore_missing_imports" -n . || echo "→ No ignore_missing_imports setting found"
# Check for a [mypy] section in setup.cfg
echo -e "\nLooking for a [mypy] section in setup.cfg:"
grep -R "^\[mypy\]" -n setup.cfg || echo "→ No [mypy] section in setup.cfg"
# Check for mypy configuration in pyproject.toml
echo -e "\nLooking for tool.mypy in pyproject.toml:"
grep -R "tool.mypy" -n pyproject.toml || echo "→ No tool.mypy section in pyproject.toml"

Length of output: 1358
Add Paramiko Type Stubs to Fix MyPy Errors
We've confirmed that:

- `project-time-in-area-analytics/test_scripts/requirements.txt` does not include `types-paramiko`
- there is no MyPy config (no `ignore_missing_imports` or `[mypy]` section)
To resolve the CI failure, please choose one of the following:
• Preferred (install stubs):
In project-time-in-area-analytics/test_scripts/requirements.txt, add:
+# Add type stubs for MyPy
+types-paramiko==<latest-pinned-version>

• Alternative (quick mute):
In project-time-in-area-analytics/test_scripts/record_real_data.py, change:
-import paramiko
+import paramiko  # type: ignore[import-untyped]

Let me know which approach you'd like, and I can update the workflow to install the stubs accordingly.
📝 Committable suggestion
import click
import paramiko

++ b/project-time-in-area-analytics/test_scripts/record_real_data.py
@@ -16,7 +16,7 @@ import click
import paramiko  # type: ignore[import-untyped]
🧰 Tools
🪛 GitHub Actions: Python Code Quality Check for the time-in-area-analytics project
[error] 17-17: Mypy error: Library stubs not installed for 'paramiko' (import-untyped). Install type hints: 'python3 -m pip install types-paramiko' or run 'mypy --install-types'.
🎨 Code Formatting Required

Trailing Whitespace Issues

Some files have trailing whitespace (spaces/tabs at end of lines). Files with trailing whitespace:
To fix: Remove the trailing spaces/tabs at the end of lines in your favorite editor. After fixing:
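A quick way to locate such lines is a minimal Python sketch; the sample text below is invented:

```python
import re

def trailing_ws_lines(text: str):
    # Report 1-based line numbers that end in spaces or tabs,
    # i.e. the lines a trailing-whitespace linter would flag.
    return [i for i, line in enumerate(text.splitlines(), 1)
            if re.search(r"[ \t]+$", line)]

sample = "clean line\ndirty line   \nanother\t\n"
print(trailing_ws_lines(sample))  # -> [2, 3]
```

Running this over the flagged files makes it easy to confirm the cleanup worked before re-pushing.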
❌ Time-in-Area Analytics Tests Failed

Some tests did not pass:
❌ Track Heatmap Viewer: failure

What This Means

Track Heatmap Viewer Issues:
Actionable comments posted: 2
♻️ Duplicate comments (2)
.github/workflows/project-time-in-area-test-analytics.yml (2)
24-27: Update setup-python to v5 to avoid runner incompatibility

actionlint flags actions/setup-python@v4 as deprecated on current runners. Switch to v5.
Apply this diff:
- - name: Set up Python 3.13
-   uses: actions/setup-python@v4
+ - name: Set up Python 3.13
+   uses: actions/setup-python@v5
    with:
      python-version: "3.13"
329-342: Fix unclosed template string in success PR comment

The template literal isn't closed before calling createComment; this throws a syntax error.
Apply this diff:
- const comment = `## 🎉 Time-in-Area Analytics Tests Passed!
+ const comment = `## 🎉 Time-in-Area Analytics Tests Passed!
  All tests completed successfully:
  ✅ **Track Heatmap Viewer** - All alarm detection scenarios passed
  ✅ **Telegraf Pipeline** - All time-in-area calculations verified
-
- github.rest.issues.createComment({
+`;
+ github.rest.issues.createComment({
    issue_number: context.issue.number,
    owner: context.repo.owner,
    repo: context.repo.repo,
    body: comment
  });
🧹 Nitpick comments (4)
.github/workflows/project-time-in-area-test-analytics.yml (4)
24-27: Enable pip caching to speed up installs

Leverage setup-python's built-in cache to cut CI time and bandwidth.
Apply this diff:
  - name: Set up Python 3.13
-   uses: actions/setup-python@v4
+   uses: actions/setup-python@v5
    with:
      python-version: "3.13"
+     cache: 'pip'
+     cache-dependency-path: project-time-in-area-analytics/test_scripts/requirements.txt
139-152: Use the correct InfluxData repo for Ubuntu 24.04 (noble)

The runner is ubuntu-24.04. Point the apt source to noble instead of jammy to avoid potential dependency issues.
Apply this diff:
- # Add InfluxDB repository using jammy (22.04) for compatibility with ubuntu-24
+ # Add InfluxDB repository for Ubuntu 24.04 (noble)
  wget -qO- https://repos.influxdata.com/influxdata-archive_compat.key | sudo gpg --dearmor -o /usr/share/keyrings/influxdata-archive-keyring.gpg
- echo "deb [signed-by=/usr/share/keyrings/influxdata-archive-keyring.gpg] https://repos.influxdata.com/ubuntu jammy stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
+ echo "deb [signed-by=/usr/share/keyrings/influxdata-archive-keyring.gpg] https://repos.influxdata.com/ubuntu noble stable" | sudo tee /etc/apt/sources.list.d/influxdb.list
371-376: Remove trailing spaces to satisfy YAMLlint

There are trailing spaces at the end of a line in the failure comment block.
Apply this diff:
- - The time-in-area calculation or threshold filtering may not be working correctly + - The time-in-area calculation or threshold filtering may not be working correctly
3-13: Consider adding a manual trigger

Allow maintainers to run the workflow on-demand for debugging without pushing changes.
Apply this diff:
  on:
    push:
      branches: [main, feature/*]
  @@
    pull_request:
      branches: [main]
      paths:
        - "project-time-in-area-analytics/**"
        - ".github/workflows/project-time-in-area-test-analytics.yml"
+   workflow_dispatch:
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (2)
- .github/workflows/project-time-in-area-test-analytics.yml (1 hunks)
- project-time-in-area-analytics/test_scripts/requirements.txt (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- project-time-in-area-analytics/test_scripts/requirements.txt
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/project-time-in-area-test-analytics.yml
25-25: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🪛 YAMLlint (1.37.1)
.github/workflows/project-time-in-area-test-analytics.yml
[error] 373-373: trailing spaces
(trailing-spaces)
Force-pushed: b147bb7 to 0fa081d (Compare)
Actionable comments posted: 0
♻️ Duplicate comments (8)
.github/workflows/project-time-in-area-test-analytics.yml (8)
24-27: Update setup-python to v5 (v4 is deprecated on current runners)

actions/setup-python@v4 is flagged as incompatible; use v5.
Apply this diff:
- - name: Set up Python 3.13
-   uses: actions/setup-python@v4
+ - name: Set up Python 3.13
+   uses: actions/setup-python@v5
    with:
      python-version: "3.13"
50-51: Fix grep regex: \s isn't POSIX in grep -E

Use POSIX character classes to reliably match whitespace.
Apply this diff:
- ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
+ ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
81-83: Fix grep/sed regex: \s isn't POSIX in grep -E/sed

Switch to POSIX classes and enable -E for sed.
Apply this diff:
- ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
- ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+ ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+ ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
115-116: Fix grep/sed regex: \s isn't POSIX in grep -E/sed

Same issue here; use POSIX whitespace classes.
Apply this diff:
- ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
- ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+ ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+ ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
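The matching logic these workflow lines implement (count indented track_ lines under the alarms header) can be modeled in Python. Python's re engine does support \s, so the sketch only demonstrates the intent; portable grep -E should use [[:space:]]. The sample output below is invented:

```python
import re

# Invented sample of the heatmap viewer's "Tracks with alarms" output.
heatmap_output = """Tracks with alarms:
  track_001
  track_007
Other section
"""

# Equivalent of: grep -E '^[[:space:]]+track_' | wc -l
alarm_lines = [l for l in heatmap_output.splitlines()
               if re.match(r"^[ \t]+track_", l)]
print(len(alarm_lines))  # -> 2
```

The same pattern with \s would work in Python or GNU grep with PCRE, but [[:space:]] is the form every POSIX ERE implementation accepts.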
181-183: Harden jq filters to ignore nulls

If .fields.track_id is missing, jq prints "null" which inflates counts.
Apply this diff:
- ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | wc -l)
- ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+ ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | wc -l)
+ ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
224-227: Coalesce missing track_id to empty to avoid overcounting

Same jq issue here; avoid counting "null".
Apply this diff:
- ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
+ ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
  ACTUAL_UNIQUE_TRACKS=$(echo "$ACTUAL_TRACKS" | wc -w)
- ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+ ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
270-273: Coalesce missing track_id to empty to avoid overcounting

Repeat the jq hardening here as well.
Apply this diff:
```diff
-          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
           ACTUAL_UNIQUE_TRACKS=$(echo "$ACTUAL_TRACKS" | wc -w)
-          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
```
323-342: Close the template literal before calling createComment

The backtick string isn't closed; this will throw a syntax error at runtime.
Apply this diff:
```diff
-            const comment = `## 🎉 Time-in-Area Analytics Tests Passed!
+            const comment = `## 🎉 Time-in-Area Analytics Tests Passed!
 
             All tests completed successfully:
 
             ✅ **Track Heatmap Viewer** - All alarm detection scenarios passed
             ✅ **Telegraf Pipeline** - All time-in-area calculations verified
-
-            github.rest.issues.createComment({
+            `;
+
+            github.rest.issues.createComment({
               issue_number: context.issue.number,
               owner: context.repo.owner,
               repo: context.repo.repo,
               body: comment
             });
```
🧹 Nitpick comments (1)
project-time-in-area-analytics/test_scripts/requirements.txt (1)
1-6: Move type stubs to dev-only when possible

types-paramiko is a type stub and not required at runtime; consider moving it to a dev/test-only requirements file or a type-check job to keep installs lean.
Apply this diff if you prefer to split a dev-only set (and reference it from a separate type-check workflow):
```diff
-types-paramiko==4.0.0.20250809
```

Outside this file, add a dev requirements file:
```text
# project-time-in-area-analytics/test_scripts/requirements-dev.txt
types-paramiko==4.0.0.20250809
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (4)
- `.github/workflows/project-time-in-area-test-analytics.yml` (1 hunks)
- `project-time-in-area-analytics/test_scripts/record_real_data.py` (1 hunks)
- `project-time-in-area-analytics/test_scripts/requirements.txt` (1 hunks)
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- project-time-in-area-analytics/test_scripts/record_real_data.py
- project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
🧰 Additional context used
📓 Path-based instructions (2)
project-*/**
⚙️ CodeRabbit Configuration File
project-*/**: Each example project for the FixedIT Data Agent should be self-contained and designed as a standalone example / tutorial showing how to use the FixedIT Data Agent. The project should be well documented and include all the necessary files to run the example. A reader should be able to understand the background and the purpose of the project, and be able to understand the implementation details and how to use it. If the project includes a dashboard, it should be placed in a new folder in the repo-root/dashboard-deployments/ directory instead of being included in the project folder. In the README for the project, the first time the FixedIT Data Agent is mentioned, we should link to the product page: https://fixedit.ai/products-data-agent/ Emphasize that these solutions are accessible to system integrators and IT professionals without requiring complex C/C++ ACAP development. Highlight that custom edge intelligence can be implemented using familiar IT tools (configuration files, shell scripts, REST APIs) rather than traditional embedded programming approaches. Note that bash does not exist on the Axis devices, so .sh scripts should be written in portable shell syntax.
README Structure: All project README files should follow the standardized template at `.project_readme_template.md`. This template ensures consistent documentation that serves both audiences: (1) IT professionals who want plug-and-play solutions, and (2) developers who want to customize and extend the examples. The template enforces a logical structure: 1) Introduction and background (project purpose, how it works, why choose this approach) 2) Technical explanation and quick start guide (compatibility, setup, troubleshooting) 3) Developer-oriented notes (local testing, implementation details)
Key README Requirements: - Include compatibility sections for both AXIS OS and FixedIT Data Agent versions - Explain host testing and its limitations and why certain parts require actual Axis d...
Files:
project-time-in-area-analytics/test_scripts/requirements.txt
**/*
⚙️ CodeRabbit Configuration File
This repository serves FixedIT Data Agent users across a spectrum from plug-and-play dashboard deployment to advanced edge device customization. Consider whether changes maintain accessibility for both DevOps professionals (server-side focus) and developers (edge customization focus). If new features are added or existing ones changed significantly, ensure documentation clearly explains the intended audience and usage level. We use prettier for formatting of common file formats like markdown, yaml, json, etc. Example projects should be placed in the repo-root in a directory named project-*/. Whenever referencing Axis device model names like "D4100-VE mk II" or "M3045", prefix it with "AXIS" in capital letters (e.g. "AXIS D4100-VE mk II"). When using Axis as a company name, use "Axis Communications", note that all-caps is only used when "AXIS" is used in their product names, not their company name. When using the name of an Axis product, google on the name to verify that it is correctly identified. Avoid using 'cameras' or 'Axis cameras' unless the solution in related to visual analytics, otherwise prefer using 'Axis devices' to show that the FixedIT Data Agent also works with strobes, speakers, door stations, etc. Images used for the README should be placed in a directory called .images/ in the affected project folder. These images might also be referred to from other sources like the top-level README.md file.
Files:
project-time-in-area-analytics/test_scripts/requirements.txt
🪛 actionlint (1.7.7)
.github/workflows/project-time-in-area-test-analytics.yml
25-25: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🔇 Additional comments (1)
project-time-in-area-analytics/test_scripts/requirements.txt (1)
1-6: Deps look aligned with the test scripts' actual usage

click/matplotlib/numpy/paramiko map well to the referenced scripts. Pinning is good.
🎉 Time-in-Area Analytics Tests Passed!

All tests completed successfully:

✅ Track Heatmap Viewer - All alarm detection scenarios passed
✅ Telegraf Pipeline - All time-in-area calculations verified
```text
matplotlib==3.9.4
numpy==2.2.1
paramiko==3.5.0
types-paramiko==4.0.0.20250809
```
Should be in the test requirements in the cicd job
Actionable comments posted: 3
♻️ Duplicate comments (7)
project-time-in-area-analytics/test_files/sample_data_feeder.sh (1)
34-47: Emit operations and use JSON null for object_type

Two issues:
- Frames without observations are dropped, losing DeleteOperation events.
- object_type uses the string "null" instead of JSON null.
Refactor the jq filter to emit both observations and operations, and coalesce to true null.
Apply:
- echo "$line" | jq -c ' - .frame as $frame | - if ($frame.observations | length) > 0 then - $frame.observations[] | - { - "frame": $frame.timestamp, - "timestamp": .timestamp, - "track_id": .track_id, - "object_type": (.class.type // "null"), - "bounding_box": .bounding_box - } - else - empty - end' + echo "$line" | jq -c ' + .frame as $frame + | + # Observations (if any) + if ($frame.observations | length) > 0 then + ($frame.observations[] | + { + "event_type": "observation", + "frame": $frame.timestamp, + "timestamp": .timestamp, + "track_id": .track_id, + "object_type": (.class.type // null), + "bounding_box": .bounding_box + }) + else + empty + end, + # Operations (if any) + if ($frame.operations | length) > 0 then + ($frame.operations[] | + { + "event_type": "operation", + "frame": $frame.timestamp, + "timestamp": $frame.timestamp, + "track_id": .id, + "operation_type": .type + }) + else + empty + end + 'project-time-in-area-analytics/README.md (2)
84-93: Fill in compatibility details; TODOs block users from knowing what works

Per repo guidelines, the Compatibility section must be explicit (min versions and required features), not TODOs.
Apply:
```diff
 ### AXIS OS Compatibility
 
-- **Minimum AXIS OS version**: TODO
-- **Required tools**: TODO
+- Minimum AXIS OS version: <fill in, e.g., 11.6 or 12.x>
+- Required tools on device:
+  - message-broker-cli (preinstalled on recent AXIS OS)
+  - SSH access enabled (dev mode user or equivalent)
+
+Notes:
+
+- On AXIS OS 12+, SSH root is disabled; use the FixedIT Data Agent dev user.
+- .sh scripts must be POSIX sh (bash is not present on Axis devices).
 
 ### FixedIT Data Agent Compatibility
 
-- **Minimum Data Agent version**: 1.0
-- **Required features**: TODO
+- Minimum FixedIT Data Agent version: 1.0
+- Required features:
+  - Starlark processor enabled
+  - JSON input and JSON output plugins
+  - MQTT output (if using alerts to broker)
```
94-104: Quick Setup is empty; add a minimal end-to-end guide

Provide a concise quick start before Troubleshooting so users can run the example immediately.
Apply:
```diff
 ## Quick Setup
-
-### Troubleshooting
+
+1) Clone and enter project:
+
+   git clone https://github.com/fixedit-ai/fixedit-data-agent-examples.git
+   cd fixedit-data-agent-examples/project-time-in-area-analytics
+
+2) Host testing (no device required):
+
+   - Install Telegraf
+   - Export env:
+     HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+     SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+   - Run time-in-area only:
+     telegraf --config test_files/config_input_sample_data.conf \
+       --config config_process_track_duration.conf \
+       --config test_files/config_output_stdout.conf \
+       --once
+
+3) Optional: threshold filter:
+
+   export ALERT_THRESHOLD_SECONDS="2"
+   telegraf --config test_files/config_input_sample_data.conf \
+     --config config_process_track_duration.conf \
+     --config config_process_threshold_filter.conf \
+     --config test_files/config_output_stdout.conf \
+     --once
+
+4) Optional: visualize tracks:
+
+   python test_scripts/track_heatmap_viewer.py test_files/simple_tracks.jsonl --alarm-threshold 2 --no-ui
+
+### Troubleshooting
```

.github/workflows/project-time-in-area-test-analytics.yml (4)
24-28: Update setup-python to v5 for runner compatibility

actions/setup-python@v4 is flagged as incompatible; use v5.
Apply:
```diff
-      - name: Set up Python 3.13
-        uses: actions/setup-python@v4
+      - name: Set up Python 3.13
+        uses: actions/setup-python@v5
         with:
           python-version: "3.13"
```
50-51: Use POSIX whitespace class; \s is not portable in grep/sed

Replace PCRE-style \s with POSIX [[:space:]] to avoid zero matches on GNU grep/sed.
Apply:
```diff
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
@@
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
@@
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
@@
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
```

Also applies to: 81-83, 115-116, 149-151
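To verify the replacement pattern locally, the snippet below runs the POSIX class against sample text mimicking the heatmap viewer's output (support for the `\s` escape varies across grep implementations, which is why the portable class is preferred; using `grep -c` here also resolves the SC2126 style note about `grep | wc -l`).

```shell
# Illustrative sample mimicking the "Tracks with alarms" section.
SAMPLE='Tracks with alarms
  track_001
  track_002'

# [[:space:]] is the POSIX-portable equivalent of the PCRE \s escape;
# -c counts matching lines directly, avoiding the grep | wc -l pipeline.
POSIX_COUNT=$(printf '%s\n' "$SAMPLE" | grep -Ec '^[[:space:]]+track_')
echo "posix matches: $POSIX_COUNT"
```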
215-217: Harden jq filters to ignore nulls and avoid overcounting

If .fields.track_id is missing, jq -r prints "null", inflating counts. Coalesce to empty to drop nulls.
Apply:
```diff
-          ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | wc -l)
-          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+          ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | wc -l)
+          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
@@
-          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
-          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
@@
-          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
-          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
```

Also applies to: 258-261, 304-307, 350-353
409-421: Fix unclosed template string in success PR comment

The template literal is never closed before createComment; this will throw a syntax error.
Apply:
```diff
-            const comment = `## 🎉 Time-in-Area Analytics Tests Passed!
+            const comment = `## 🎉 Time-in-Area Analytics Tests Passed!
 
             All tests completed successfully:
 
             ✅ **Track Heatmap Viewer** - All alarm detection scenarios passed
             ✅ **Telegraf Pipeline** - All time-in-area calculations verified
-
-            github.rest.issues.createComment({
+            `;
+
+            github.rest.issues.createComment({
               issue_number: context.issue.number,
               owner: context.repo.owner,
               repo: context.repo.repo,
               body: comment
             });
```
🧹 Nitpick comments (4)
project-time-in-area-analytics/test_files/sample_data_feeder.sh (1)
17-21: Guard HELPER_FILES_DIR or default to current directory

If HELPER_FILES_DIR is unset, the file existence check resolves to "/$SAMPLE_FILE". Default to PWD to avoid surprising failures.
Apply:
```diff
 # Check if sample file exists
-if [ ! -f "$HELPER_FILES_DIR/$SAMPLE_FILE" ]; then
+if [ -z "$HELPER_FILES_DIR" ]; then
+  HELPER_FILES_DIR="$(pwd)"
+fi
+if [ ! -f "$HELPER_FILES_DIR/$SAMPLE_FILE" ]; then
   echo "ERROR: Sample file not found: $HELPER_FILES_DIR/$SAMPLE_FILE" >&2
   exit 1
 fi
```

project-time-in-area-analytics/README.md (2)
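As an aside, the same default-to-PWD behavior can be expressed more compactly with POSIX parameter expansion; a minimal sketch (the `unset` is only to simulate the variable being absent, as in the feeder script's failure mode):

```shell
# Simulate HELPER_FILES_DIR being unset, then fall back to the current
# directory in a single POSIX-portable expansion.
unset HELPER_FILES_DIR
HELPER_FILES_DIR="${HELPER_FILES_DIR:-$(pwd)}"
echo "using helper dir: $HELPER_FILES_DIR"
```

This works in plain /bin/sh, which matters since bash is not available on Axis devices.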
3-3: Clarify MQTT/warning status and avoid TODOs in intro

The intro mentions "warning (TODO)" and "MQTT (TODO)". Readers should immediately understand what the current version does. Suggest clarifying that alerts are currently demonstrated via stdout and that MQTT/warning tiers are planned.
Apply:
```diff
-This project demonstrates how to implement time-in-area analytics for Axis fisheye cameras using the [FixedIT Data Agent](https://fixedit.ai/products-data-agent/). While AXIS Object Analytics natively supports time-in-area detection for traditional cameras, fisheye cameras lack this capability. This solution bridges that gap by consuming real-time object detection metadata from fisheye cameras and implementing custom time-in-area logic using Telegraf's Starlark processor. The system uses object tracking IDs from [AXIS Scene Metadata](https://developer.axis.com/analytics/axis-scene-metadata/reference/concepts/) to track objects within a defined rectangular area, measures time in area, and triggers both warning (TODO) and alert notifications via MQTT (TODO) when objects remain in the monitored zone beyond configured thresholds.
+This project demonstrates how to implement time-in-area analytics for Axis fisheye cameras using the [FixedIT Data Agent](https://fixedit.ai/products-data-agent/). While AXIS Object Analytics natively supports time-in-area detection for traditional cameras, fisheye cameras lack this capability. This solution bridges that gap by consuming real-time object detection metadata from fisheye cameras and implementing custom time-in-area logic using Telegraf's Starlark processor. The system uses object tracking IDs from [AXIS Scene Metadata](https://developer.axis.com/analytics/axis-scene-metadata/reference/concepts/) to track objects within a defined rectangular area, measure time in area, and trigger alerts.
+
+Status:
+
+- Current: alerts demonstrated via stdout for validation
+- Planned: MQTT alerts and a separate "warning" threshold tier
```
311-319: Add validation instructions showing alarms per track

Per PR TODO, include how to print number of alarms per track. The heatmap viewer already prints a "Tracks with alarms" section; add an example and expected output to simplify validation.
Apply:
```diff
-For installation, usage details, and examples, see the [test_scripts README](test_scripts/README.md).
+For installation, usage details, and examples, see the [test_scripts README](test_scripts/README.md).
+
+Quick validation (no UI, list alarms):
+
+    python test_scripts/track_heatmap_viewer.py test_files/simple_tracks.jsonl --alarm-threshold 2 --no-ui
+
+Expected excerpt:
+
+    Tracks with alarms
+      track_001
+      track_002
+      track_003
+      track_005
```
368-401: Optional: quote outputs in summary step to prevent globbing/word-splitting

Minor robustness improvement when echoing result strings.
Apply:
```diff
-          VIS_RESULT="${{ needs.test-visualization-script.result }}"
-          TELEGRAF_RESULT="${{ needs.test-telegraf-pipeline.result }}"
+          VIS_RESULT="${{ needs.test-visualization-script.result }}"
+          TELEGRAF_RESULT="${{ needs.test-telegraf-pipeline.result }}"
@@
-          echo "vis_result=$VIS_RESULT" >> $GITHUB_OUTPUT
-          echo "telegraf_result=$TELEGRAF_RESULT" >> $GITHUB_OUTPUT
+          echo "vis_result=$VIS_RESULT" >> "$GITHUB_OUTPUT"
+          echo "telegraf_result=$TELEGRAF_RESULT" >> "$GITHUB_OUTPUT"
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (6)
- `.github/workflows/project-time-in-area-test-analytics.yml` (1 hunks)
- `project-time-in-area-analytics/README.md` (1 hunks)
- `project-time-in-area-analytics/config_process_threshold_filter.conf` (1 hunks)
- `project-time-in-area-analytics/test_files/sample_data_feeder.sh` (1 hunks)
- `project-time-in-area-analytics/test_files/simple_tracks.jsonl` (1 hunks)
- `project-time-in-area-analytics/track_duration_calculator.star` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- project-time-in-area-analytics/config_process_threshold_filter.conf
🧰 Additional context used
📓 Path-based instructions (3)
project-*/**
⚙️ CodeRabbit Configuration File
project-*/**: Each example project for the FixedIT Data Agent should be self-contained and designed as a standalone example / tutorial showing how to use the FixedIT Data Agent. The project should be well documented and include all the necessary files to run the example. A reader should be able to understand the background and the purpose of the project, and be able to understand the implementation details and how to use it. If the project includes a dashboard, it should be placed in a new folder in the repo-root/dashboard-deployments/ directory instead of being included in the project folder. In the README for the project, the first time the FixedIT Data Agent is mentioned, we should link to the product page: https://fixedit.ai/products-data-agent/ Emphasize that these solutions are accessible to system integrators and IT professionals without requiring complex C/C++ ACAP development. Highlight that custom edge intelligence can be implemented using familiar IT tools (configuration files, shell scripts, REST APIs) rather than traditional embedded programming approaches. Note that bash does not exist on the Axis devices, so .sh scripts should be written in portable shell syntax.
README Structure: All project README files should follow the standardized template at `.project_readme_template.md`. This template ensures consistent documentation that serves both audiences: (1) IT professionals who want plug-and-play solutions, and (2) developers who want to customize and extend the examples. The template enforces a logical structure: 1) Introduction and background (project purpose, how it works, why choose this approach) 2) Technical explanation and quick start guide (compatibility, setup, troubleshooting) 3) Developer-oriented notes (local testing, implementation details)
Key README Requirements: - Include compatibility sections for both AXIS OS and FixedIT Data Agent versions - Explain host testing and its limitations and why certain parts require actual Axis d...
Files:
- `project-time-in-area-analytics/test_files/simple_tracks.jsonl`
- `project-time-in-area-analytics/track_duration_calculator.star`
- `project-time-in-area-analytics/test_files/sample_data_feeder.sh`
- `project-time-in-area-analytics/README.md`
**/*
⚙️ CodeRabbit Configuration File
This repository serves FixedIT Data Agent users across a spectrum from plug-and-play dashboard deployment to advanced edge device customization. Consider whether changes maintain accessibility for both DevOps professionals (server-side focus) and developers (edge customization focus). If new features are added or existing ones changed significantly, ensure documentation clearly explains the intended audience and usage level. We use prettier for formatting of common file formats like markdown, yaml, json, etc. Example projects should be placed in the repo-root in a directory named project-*/. Whenever referencing Axis device model names like "D4100-VE mk II" or "M3045", prefix it with "AXIS" in capital letters (e.g. "AXIS D4100-VE mk II"). When using Axis as a company name, use "Axis Communications", note that all-caps is only used when "AXIS" is used in their product names, not their company name. When using the name of an Axis product, google on the name to verify that it is correctly identified. Avoid using 'cameras' or 'Axis cameras' unless the solution in related to visual analytics, otherwise prefer using 'Axis devices' to show that the FixedIT Data Agent also works with strobes, speakers, door stations, etc. Images used for the README should be placed in a directory called .images/ in the affected project folder. These images might also be referred to from other sources like the top-level README.md file.
Files:
- `project-time-in-area-analytics/test_files/simple_tracks.jsonl`
- `project-time-in-area-analytics/track_duration_calculator.star`
- `project-time-in-area-analytics/test_files/sample_data_feeder.sh`
- `project-time-in-area-analytics/README.md`
**/*.md
⚙️ CodeRabbit Configuration File
**/*.md: Documentation files should clearly communicate the dual audience: (1) server-side dashboard users who want to keep agent with bundled configs, and (2) edge device developers who want to customize agent behavior. Ensure examples and instructions are appropriate for the intended skill level and use case. Since this is a public repository, we should not include any sensitive information, the instructions should be easily understandable for a wide audience, and we should avoid using any jargon or technical terms that are not commonly used.
Anchor Links: For headings that contain Unicode characters (emojis, special characters) or backticks, use HTML anchor tags to ensure consistent linking behavior across different platforms (GitHub, VS Code, etc.). Add a separate line with<a id="anchor-name"></a>before the heading, and update the table of contents to reference this manual anchor.
Files:
project-time-in-area-analytics/README.md
🧠 Learnings (2)
📚 Learning: 2025-08-10T14:54:48.316Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:350-351
Timestamp: 2025-08-10T14:54:48.316Z
Learning: In the fixedit-data-agent-examples repository, shell portability requirements (such as Axis devices using POSIX /bin/sh instead of bash) should be documented in a general scripting guide rather than repeated in individual project README files. This approach was confirmed by daniel-falk for better documentation organization.
Applied to files:
project-time-in-area-analytics/README.md
📚 Learning: 2025-08-07T15:23:05.927Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:57-62
Timestamp: 2025-08-07T15:23:05.927Z
Learning: In the fixedit-ai/fixedit-data-agent-examples repository, when GitHub generates anchors for headings that contain backticks with spaces around them (like `### `filename` - Description`), it removes the backticks but preserves those spaces as dashes in the anchor, creating triple dashes (---) between the filename and description parts. TOC generators may not handle this correctly and need manual adjustment.
Applied to files:
project-time-in-area-analytics/README.md
🪛 actionlint (1.7.7)
.github/workflows/project-time-in-area-test-analytics.yml
25-25: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
35-35: shellcheck reported issue in this script: SC2126:style:15:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
64-64: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
98-98: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
132-132: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
191-191: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
231-231: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
277-277: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
323-323: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
381-381: shellcheck reported issue in this script: SC2086:info:4:34: Double quote to prevent globbing and word splitting
(shellcheck)
381-381: shellcheck reported issue in this script: SC2086:info:5:44: Double quote to prevent globbing and word splitting
(shellcheck)
381-381: shellcheck reported issue in this script: SC2086:info:14:29: Double quote to prevent globbing and word splitting
(shellcheck)
381-381: shellcheck reported issue in this script: SC2086:info:19:30: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 LanguageTool
project-time-in-area-analytics/README.md
[grammar] ~52-~52: There might be a mistake here.
Context: ...Contents - Compatibility - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~53-~53: There might be a mistake here.
Context: ...ompatibility) - AXIS OS Compatibility - [FixedIT Data Agent Compatibility](#fixed...
(QB_NEW_EN)
[grammar] ~54-~54: There might be a mistake here.
Context: ...y) - FixedIT Data Agent Compatibility - Quick Setup - [Troubles...
(QB_NEW_EN)
[grammar] ~55-~55: There might be a mistake here.
🪛 LanguageTool (QB_NEW_EN)

project-time-in-area-analytics/README.md: "There might be a mistake here" flagged at roughly lines 55-77 (table-of-contents link entries), 86 and 91 (compatibility TODO bullets), 98, 151-152, 177, 198, 296-298, 349-352, and 361; every hit falls on a Markdown list item or anchor link.
🔇 Additional comments (5)
project-time-in-area-analytics/test_files/simple_tracks.jsonl (2)
6-9: Good inclusion of DeleteOperation; verify downstream consumption

The dataset includes a DeleteOperation (Line 6) and a later reappearance of track_001 (Line 9), which is great for exercising reappearance/gap handling. Ensure your feeder emits operation events too; otherwise deletions won’t be visible to the pipeline and cleanup will only be time-based.
Would you like me to update the feeder here in the PR to emit DeleteOperation events and wire a corresponding branch in the Starlark to consume them?
1-18: Dataset looks consistent and fits the intended scenarios

Timestamps (with microseconds) and track timelines line up with the README’s expectations (e.g., track_001 spanning ~11.33s including the short gap). This should drive the threshold tests well.
project-time-in-area-analytics/README.md (1)
349-354: Numbers align with data; keep the term “time in area” (not “dwell”)

The durations listed match the sample data and the implemented semantics (bridging gaps < 60s). Good that the term “time in area” is used consistently (avoid “dwell time” per PR note).
project-time-in-area-analytics/track_duration_calculator.star (2)
3-35: Timestamp parsing is robust for fractional seconds

Manual fraction handling + fixed Zulu format provides microsecond precision and matches the dataset. Good.
36-62: Reduce side effects in get_time_in_area_seconds

The function both mutates state and computes time. Per PR TODO, make it pure: compute based on state, without updating it.
Apply:
```diff
 def get_time_in_area_seconds(track_id, current_seconds, track_state):
@@
-    if track_id not in track_state:
-        # First time seeing this track ID - initialize with current timestamp
-        track_state[track_id] = {
-            "first_seen_seconds": current_seconds,
-            "last_seen_seconds": current_seconds
-        }
-        return 0  # Time in area is 0 on first detection
-
-    # Update last seen time
-    track_state[track_id]["last_seen_seconds"] = current_seconds
-
-    # Calculate time in area - simple subtraction since both are in seconds
-    first_seen_seconds = track_state[track_id]["first_seen_seconds"]
-    time_in_area = current_seconds - first_seen_seconds
-    return time_in_area
+    # Pure computation: do not mutate track_state here
+    if track_id not in track_state:
+        return 0.0
+    first_seen_seconds = track_state[track_id]["first_seen_seconds"]
+    return current_seconds - first_seen_seconds
```

Likely an incorrect or invalid review comment.
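For reference, the suggested split can be sketched in plain Python (Starlark shares this syntax): a pure `get_time_in_area_seconds` plus a hypothetical caller-side helper, here named `observe_track`, that owns all state mutation. `observe_track` is illustrative only and not part of the PR.

```python
def get_time_in_area_seconds(track_id, current_seconds, track_state):
    """Return seconds since the track was first seen; 0.0 if unknown.

    Pure function: never mutates track_state.
    """
    if track_id not in track_state:
        return 0.0
    return current_seconds - track_state[track_id]["first_seen_seconds"]


def observe_track(track_id, current_seconds, track_state):
    """Hypothetical caller-side update: init on first sighting, bump last_seen."""
    entry = track_state.setdefault(
        track_id,
        {"first_seen_seconds": current_seconds, "last_seen_seconds": current_seconds},
    )
    # Compute first (pure), then record the observation as a separate step
    time_in_area = get_time_in_area_seconds(track_id, current_seconds, track_state)
    entry["last_seen_seconds"] = current_seconds
    return time_in_area
```

This keeps the computation testable in isolation while the processor's `apply()` remains the single place that touches shared state.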
```markdown
### config_process_track_duration.conf and track_duration_calculator.star

Calculates time in area for each detected object using the external Starlark script `track_duration_calculator.star`. This processor:

- Tracks first seen and last seen timestamps for each `track_id`
- Calculates `time_in_area_seconds` for each detection
- Automatically cleans up stale tracks (not seen for 60+ seconds)
- Outputs debug messages when tracks are removed
```
🛠️ Refactor suggestion
Gate debug removals by a debug flag to reduce noise
The text promises debug messages on stale-track removal. Implement and document that debug messages are only emitted when a debug flag is set.
Apply:
```diff
 - Automatically cleans up stale tracks (not seen for 60+ seconds)
-- Outputs debug messages when tracks are removed
+- Optionally outputs debug metrics when tracks are removed (enable by setting DEBUG_ENABLED=true in the Starlark script or via config)
```

I can wire this into the Starlark and config in this PR so it’s off by default but easy to enable.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
Before:

```markdown
### config_process_track_duration.conf and track_duration_calculator.star

Calculates time in area for each detected object using the external Starlark script `track_duration_calculator.star`. This processor:

- Tracks first seen and last seen timestamps for each `track_id`
- Calculates `time_in_area_seconds` for each detection
- Automatically cleans up stale tracks (not seen for 60+ seconds)
- Outputs debug messages when tracks are removed
```

After:

```markdown
### config_process_track_duration.conf and track_duration_calculator.star

Calculates time in area for each detected object using the external Starlark script `track_duration_calculator.star`. This processor:

- Tracks first seen and last seen timestamps for each `track_id`
- Calculates `time_in_area_seconds` for each detection
- Automatically cleans up stale tracks (not seen for 60+ seconds)
- Optionally outputs debug metrics when tracks are removed (enable by setting DEBUG_ENABLED=true in the Starlark script or via config)
```
🤖 Prompt for AI Agents
In project-time-in-area-analytics/README.md around lines 106 to 114, the docs
currently state that debug messages are emitted when stale tracks are removed;
update the Starlark and documentation so those debug logs are gated by a boolean
debug flag (off by default). Modify track_duration_calculator.star to accept a
debug parameter (default false) and only emit/remove debug logs when debug ==
true, add a corresponding debug = false setting to
config_process_track_duration.conf and example config snippet in the README, and
update the README text to state that debug removals are disabled by default and
can be enabled via the debug flag.
```python
def cleanup_stale_tracks(current_seconds, track_state, max_stale_seconds):
    """Remove tracks that haven't been seen for too long

    Args:
        current_seconds: Current timestamp in Unix seconds
        track_state: State dictionary for tracking objects
        max_stale_seconds: Maximum time since last seen before removing track
    """
    # Find tracks to remove (can't modify dict while iterating)
    tracks_to_remove = []

    for track_id, track_data in track_state.items():
        last_seen_seconds = track_data["last_seen_seconds"]
        time_since_seen = current_seconds - last_seen_seconds

        if time_since_seen > max_stale_seconds:
            tracks_to_remove.append(track_id)

    # Remove stale tracks
    for track_id in tracks_to_remove:
        track_state.pop(track_id)
```
🛠️ Refactor suggestion
Return removed track IDs from cleanup to enable optional debug metrics
Returning the removed IDs allows controlled debug metric emission (guarded by a flag).
Apply:

```diff
 def cleanup_stale_tracks(current_seconds, track_state, max_stale_seconds):
@@
     # Remove stale tracks
     for track_id in tracks_to_remove:
         track_state.pop(track_id)
+    # Return removed IDs for optional debug emission
+    return tracks_to_remove
```

📝 Committable suggestion
Before:

```python
def cleanup_stale_tracks(current_seconds, track_state, max_stale_seconds):
    """Remove tracks that haven't been seen for too long

    Args:
        current_seconds: Current timestamp in Unix seconds
        track_state: State dictionary for tracking objects
        max_stale_seconds: Maximum time since last seen before removing track
    """
    # Find tracks to remove (can't modify dict while iterating)
    tracks_to_remove = []
    for track_id, track_data in track_state.items():
        last_seen_seconds = track_data["last_seen_seconds"]
        time_since_seen = current_seconds - last_seen_seconds
        if time_since_seen > max_stale_seconds:
            tracks_to_remove.append(track_id)
    # Remove stale tracks
    for track_id in tracks_to_remove:
        track_state.pop(track_id)
```

After:

```python
def cleanup_stale_tracks(current_seconds, track_state, max_stale_seconds):
    """Remove tracks that haven't been seen for too long

    Args:
        current_seconds: Current timestamp in Unix seconds
        track_state: State dictionary for tracking objects
        max_stale_seconds: Maximum time since last seen before removing track
    """
    # Find tracks to remove (can't modify dict while iterating)
    tracks_to_remove = []
    for track_id, track_data in track_state.items():
        last_seen_seconds = track_data["last_seen_seconds"]
        time_since_seen = current_seconds - last_seen_seconds
        if time_since_seen > max_stale_seconds:
            tracks_to_remove.append(track_id)
    # Remove stale tracks
    for track_id in tracks_to_remove:
        track_state.pop(track_id)
    # Return removed IDs for optional debug emission
    return tracks_to_remove
```
🤖 Prompt for AI Agents
In project-time-in-area-analytics/track_duration_calculator.star around lines 63
to 84, the cleanup_stale_tracks function currently removes stale tracks but
doesn’t return which IDs were removed; change the function to return the list of
removed track IDs (tracks_to_remove) so callers can optionally emit debug
metrics. Update the function signature/docstring to indicate it returns a list
of removed IDs, keep the same removal logic (collect IDs then pop), and ensure
any callers are adjusted to accept/ignore the returned list as needed.
```python
def apply(metric):
    """Calculate the time in area for each metric.

    This function will be called for each metric in the pipeline,
    the function will keep a state of all the track IDs and their
    first and last seen times. The function will calculate the time
    in area for each track ID and add it to the metric.

    Returns:
        The input metric but with the time in area added.
    """
    # Get track_id and timestamp from the metric
    track_id = metric.fields.get("track_id", "")
    timestamp = metric.fields.get("timestamp", "")

    # Skip messages without track_id
    if track_id == "" or timestamp == "":
        return metric

    # Parse timestamp to float seconds since Unix epoch
    current_seconds = parse_timestamp_to_float_seconds(timestamp)

    # Initialize track state subdict if it doesn't exist
    if "track_state" not in state:
        state["track_state"] = {}

    # Clean up stale tracks (not seen for 60 seconds)
    cleanup_stale_tracks(current_seconds, state["track_state"], 60)

    # Get the time in area for this track ID
    time_in_area = get_time_in_area_seconds(track_id, current_seconds, state["track_state"])

    # Add the time in area to the metric (always add it, filtering happens in next processor)
    metric.fields["time_in_area_seconds"] = time_in_area

    # Return the same metric with the time in area added
    return metric
```
🛠️ Refactor suggestion
Gate debug removals behind a flag; clarify “always add it” note; initialize/update state in apply()
- Initialize track entries and update last_seen in apply(), keeping get_time_in_area_seconds side-effect-free.
- Emit removal debug metrics only if a DEBUG_ENABLED flag is set.
- Clarify that time_in_area is always added because the next processor (config_process_threshold_filter.conf) performs filtering.
Apply:
```diff
 def apply(metric):
@@
-    # Clean up stale tracks (not seen for 60 seconds)
-    cleanup_stale_tracks(current_seconds, state["track_state"], 60)
+    # Clean up stale tracks (not seen for 60 seconds) and optionally emit debug metrics
+    removed_tracks = cleanup_stale_tracks(current_seconds, state["track_state"], 60)
+
+    out = []
+    # Optional debug emission (off by default). See README for how to enable.
+    DEBUG_ENABLED = state.get("DEBUG_ENABLED", False)
+    if DEBUG_ENABLED:
+        for removed_id in removed_tracks:
+            debug_metric = metric.copy()
+            debug_metric.name = "detection_debug"
+            debug_metric.fields["event"] = "stale_track_removed"
+            debug_metric.fields["removed_track_id"] = removed_id
+            out.append(debug_metric)
@@
-    # Get the time in area for this track ID
-    time_in_area = get_time_in_area_seconds(track_id, current_seconds, state["track_state"])
+    # Initialize track record on first sighting
+    if track_id not in state["track_state"]:
+        state["track_state"][track_id] = {
+            "first_seen_seconds": current_seconds,
+            "last_seen_seconds": current_seconds,
+        }
+    # Compute time in area (pure) then update last_seen as a separate step
+    time_in_area = get_time_in_area_seconds(track_id, current_seconds, state["track_state"])
+    state["track_state"][track_id]["last_seen_seconds"] = current_seconds
@@
-    # Add the time in area to the metric (always add it, filtering happens in next processor)
+    # Always add time_in_area; threshold filtering happens in config_process_threshold_filter.conf
     metric.fields["time_in_area_seconds"] = time_in_area
@@
-    # Return the same metric with the time in area added
-    return metric
+    # Return original metric (with time_in_area) plus optional debug metrics
+    out.append(metric)
+    return out
```

If you prefer a config-driven flag instead of hardcoding state["DEBUG_ENABLED"], I can wire it via the Telegraf Starlark plugin variables.
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In project-time-in-area-analytics/track_duration_calculator.star around lines 85
to 121, the apply() function must initialize and update per-track entries
(first_seen and last_seen) in state["track_state"] so get_time_in_area_seconds
stays side‑effect‑free; add or update the track's entry and last_seen inside
apply() before calling get_time_in_area_seconds, ensuring new entries set
first_seen and last_seen to current_seconds. Gate any debug removal/emission
logic behind a DEBUG_ENABLED flag in state (or a config-driven variable) so
debug metrics are only emitted when enabled. Also keep the comment that
time_in_area_seconds is always added because filtering is performed by the
downstream config_process_threshold_filter.conf.
Actionable comments posted: 0
♻️ Duplicate comments (3)
.github/workflows/project-time-in-area-test-analytics.yml (3)
24-27: Update setup-python to v5 to satisfy runner compatibility (actionlint).

actions/setup-python@v4 is flagged as too old for current runners. Switch to v5.
Apply:
```diff
-      - name: Set up Python 3.13
-        uses: actions/setup-python@v4
+      - name: Set up Python 3.13
+        uses: actions/setup-python@v5
         with:
           python-version: "3.13"
```
50-51: Fix grep/sed regex: \s is not portable in POSIX ERE/BRE (breaks alarm parsing).

Use POSIX classes [[:space:]] and sed -E for portability on GNU grep/sed.
Apply:
```diff
-          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
+          ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
@@
-          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
```

(the same substitution applies at each later occurrence)

Also applies to: 81-83, 115-116, 149-151
215-217: Harden jq filters to ignore null track_id (prevents “null” inflating counts).

Coalesce missing/null .fields.track_id to empty so counts reflect real track IDs only.
Apply:
```diff
-          ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | wc -l)
+          ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | wc -l)
-          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+          ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
@@
-          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
+          ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
```

(the same `// empty` coalescing applies at each later occurrence)

Also applies to: 258-261, 304-307, 350-353
🧹 Nitpick comments (2)
.github/workflows/project-time-in-area-test-analytics.yml (2)
234-234: Fix message: “expecting 3 alarms” → “expecting 4 alarms”.

The expectations below assert 4 unique tracks; the message should match.
Apply:
```diff
-          echo "Testing Telegraf pipeline with threshold: 2s (expecting 3 alarms)"
+          echo "Testing Telegraf pipeline with threshold: 2s (expecting 4 alarms)"
```
35-37: Optional hardening: fail fast in shell blocks.

Add set -euo pipefail so scripts fail on any error and undefined var. Repeat in other run blocks as desired.
Apply:
```diff
-          cd project-time-in-area-analytics
+          set -euo pipefail
+          cd project-time-in-area-analytics
```

Alternatively, insert `set -euo pipefail` as the first line of each `run: |` block.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
📒 Files selected for processing (1)
`.github/workflows/project-time-in-area-test-analytics.yml` (1 hunks)
🧰 Additional context used
🪛 actionlint (1.7.7)
.github/workflows/project-time-in-area-test-analytics.yml
25-25: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
🔇 Additional comments (1)
.github/workflows/project-time-in-area-test-analytics.yml (1)
409-421: Success PR comment template string is now properly closed.

The backtick is closed before createComment; this avoids the previous syntax error. Looks good.
❌ Time-in-Area Analytics Tests Failed

Some tests did not pass:

- ❌ Track Heatmap Viewer: failure

What This Means

Track Heatmap Viewer Issues:

Telegraf Pipeline Issues:
Actionable comments posted: 1
♻️ Duplicate comments (6)
project-time-in-area-analytics/README.md (3)
85-94: Fill in compatibility details; TODOs block users from knowing what works

Placeholders for AXIS OS and FixedIT Data Agent compatibility should be concrete per repo guidelines. This also aligns with prior review feedback.
Apply:
```diff
 ### AXIS OS Compatibility
-- **Minimum AXIS OS version**: TODO
-- **Required tools**: TODO
+- Minimum AXIS OS version: <fill in, e.g., 11.6 or 12.x>
+- Required tools on device:
+  - message-broker-cli (preinstalled on recent AXIS OS)
+  - SSH access enabled (dev mode user or equivalent)
+
+Notes:
+- On AXIS OS 12+, SSH root is disabled; use the FixedIT Data Agent dev user.

 ### FixedIT Data Agent Compatibility
-- **Minimum Data Agent version**: 1.0
-- **Required features**: TODO
+- Minimum FixedIT Data Agent version: 1.0
+- Required features:
+  - Starlark processor enabled
+  - JSON input and JSON output plugins
+  - MQTT output (if using alerts to broker)
```

To verify consistency across docs, run a quick scan for remaining “TODO” placeholders in this project:
```bash
#!/bin/bash
rg -n -C1 -g 'project-time-in-area-analytics/**' '\bTODO\b'
```
95-102: Quick Setup section is empty; add a minimal end-to-end guide

A concrete quick start is required by the template. Reusing the prior suggestion.
```diff
 ## Quick Setup
-
-### Troubleshooting
+1) Clone this repo and cd into project directory:
+   git clone https://github.com/fixedit-ai/fixedit-data-agent-examples.git
+   cd fixedit-data-agent-examples/project-time-in-area-analytics
+
+2) Host testing (no device required):
+   - Install Telegraf on your workstation
+   - Export env for sample data:
+     HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+     SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+   - Run time-in-area only:
+     telegraf --config test_files/config_input_sample_data.conf \
+       --config config_process_track_duration.conf \
+       --config test_files/config_output_stdout.conf \
+       --once
+
+3) On-device testing (Axis device required):
+   - Ensure FixedIT Data Agent is installed and running on the device
+   - Upload config files from this folder into the agent's helper files
+   - Configure MQTT endpoint or stdout (for testing) in the output config
+   - Start the pipeline and monitor logs (enable Debug temporarily)
+
+4) Optional: record real device data to use as SAMPLE_FILE:
+   python test_scripts/record_real_data.py --host <device_ip> --username <dev_user> -o test_files/real_device_data.jsonl
+
+### Troubleshooting
```
111-115: Gate stale-track debug output behind a flag (off by default)

Per PR objective, avoid unconditional debug metrics. Update wording and wire a debug flag in the Starlark/config (off by default).
```diff
 - Automatically cleans up stale tracks (not seen for 60+ seconds)
-- Outputs debug messages when tracks are removed
+- Optionally outputs debug metrics when tracks are removed (enable by setting DEBUG_ENABLED=true in the Starlark script or via config)
```

I can update `track_duration_calculator.star` and `config_process_track_duration.conf` in this PR to add `DEBUG_ENABLED` and document it here. Want me to push that change?

project-time-in-area-analytics/test_scripts/record_real_data.py (2)
17-17: MyPy stubs missing for Paramiko; add types-paramiko or silence import

Align with CI hint to fix typing errors.
Two options:
- Add to
project-time-in-area-analytics/test_scripts/requirements.txt:
- types-paramiko==
- Or annotate import:
import paramiko # type: ignore[import-untyped]
121-126: Close the SSH channel to stop the remote command and avoid orphaned processesAfter the timeout, the remote
message-broker-clican continue running unless the channel is closed. Close it to prevent lingering processes. This was flagged previously.- finished.set() # Signal threads to stop - reader_thread.join(timeout=1) - error_thread.join(timeout=1) - - yield from lines + finished.set() # Signal threads to stop + reader_thread.join(timeout=1) + error_thread.join(timeout=1) + # Stop remote command/channel promptly to avoid lingering processes + try: + stdout.channel.close() + except Exception: + pass + + yield from linesproject-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (1)
569-619: Align gap-reset logic with README: reset time-in-area after gaps >60s

Current algorithm accumulates time since first_seen; it doesn’t reset after long gaps. README claims both tools reset after >60s gaps. Implement last_seen tracking and reset on gaps.
```diff
-    # Track first appearance times as seconds - this is our "state" for time-in-area calculation
-    track_first_seen: Dict[str, float] = {}
+    # State for time-in-area calculation
+    track_first_seen: Dict[str, float] = {}
+    track_last_seen: Dict[str, float] = {}
+    STALE_GAP_SECONDS = 60.0
@@
-            if track_id in frame_track_ids:
-                # Record when we first see this track (start of time-in-area measurement)
-                if track_id not in track_first_seen:
-                    track_first_seen[track_id] = current_time_seconds
-
-                # Calculate how long this track has been in the area
-                first_time_seconds = track_first_seen[track_id]
-                time_in_area = current_time_seconds - first_time_seconds
+            if track_id in frame_track_ids:
+                # First observation starts timing
+                if track_id not in track_first_seen:
+                    track_first_seen[track_id] = current_time_seconds
+                    track_last_seen[track_id] = current_time_seconds
+                else:
+                    # If the gap since last seen exceeds threshold, reset timing
+                    gap = current_time_seconds - track_last_seen[track_id]
+                    if gap > STALE_GAP_SECONDS:
+                        track_first_seen[track_id] = current_time_seconds
+                    track_last_seen[track_id] = current_time_seconds
+
+                # Calculate time in area since (possibly reset) first_seen
+                time_in_area = current_time_seconds - track_first_seen[track_id]
```
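The reset-on-gap semantics described above can be isolated into a small helper for unit testing. This is a hypothetical sketch, not code from the PR: `update_time_in_area` and the two plain dicts stand in for the viewer's state.

```python
STALE_GAP_SECONDS = 60.0  # gap above which the time-in-area clock restarts


def update_time_in_area(track_id, now, first_seen, last_seen,
                        gap=STALE_GAP_SECONDS):
    """Return time in area for track_id at time `now`, restarting the
    clock when the track reappears after more than `gap` seconds."""
    if track_id not in first_seen or now - last_seen[track_id] > gap:
        first_seen[track_id] = now  # start (or restart) the clock
    last_seen[track_id] = now
    return now - first_seen[track_id]
```

Keeping the gap threshold equal to the Starlark script's 60s stale-track cleanup is what makes the viewer and the pipeline agree on the numbers.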
🧹 Nitpick comments (15)
project-time-in-area-analytics/test_scripts/README.md (4)
36-44: Use “Axis devices” unless the feature is strictly camera-only

Throughout this section, you refer to “Axis cameras.” Per repo guidelines, prefer “Axis devices” unless you truly mean camera-only features. The test script records analytics scene description via the device message broker, which applies to devices in general.
Apply:
```diff
-The `record_real_data.py` script allows you to record live analytics scene description data from Axis cameras for testing and analysis purposes.
+The `record_real_data.py` script allows you to record live analytics scene description data from Axis devices for testing and analysis purposes.
```
`76-88`: **Clarify AXIS OS 12+ guidance and default user wording**

Good call-out about AXIS OS 12+. Minor clarity tweak and consistent product naming.
```diff
-- **AXIS OS 12+**: SSH as root is disabled. Regular SSH users cannot access the message broker. If you are a [Technology Integration Partner](https://www.axis.com/partner/technology-integration-partner-program), you can enable dev mode in the camera (instructions included in [this e-learning course](https://learning.fixedit.ai/spaces/11778313/content)) and use the FixedIT Data Agent user:
+- **AXIS OS 12+**: SSH as root is disabled. Regular SSH users cannot access the message broker. If you are a [Technology Integration Partner](https://www.axis.com/partner/technology-integration-partner-program), you can enable dev mode on the device (instructions included in [this e-learning course](https://learning.fixedit.ai/spaces/11778313/content)) and use the FixedIT Data Agent dev user:
```
`100-125`: **Document validation aid: per-track alarm counts are printed by the viewer**

The viewer prints the number of alarm frames per track. Calling this out here improves validation UX and addresses the "visualization/validation aids" TODO from the PR objectives.
```diff
 ### Advanced Usage
@@
 Show alarm visualization (tracks exceeding threshold appear in red):
@@
 Get help:
@@
 ### Features & Example Output
+
+Validation tip:
+
+- When running with `--alarm-threshold`, the viewer prints a list of tracks that exceeded the threshold and how many frames were in alarm for each track. This makes it easy to validate behavior against expected alarms per track.
```
`96-99`: **Troubleshooting note is helpful; tighten phrasing and scope**

Minor tightening for readability; keep the message while avoiding repetition.
```diff
-**Message Broker Connection Errors**: If you encounter errors like `❌ Command error: Failed to create data connection` when running the recording script, this typically indicates that the SSH user does not have sufficient privileges to subscribe to the message broker. This is a known limitation in **AXIS OS 12+** where SSH access as root is disabled and regular users may not have the necessary permissions to access the message broker service. See the [AXIS OS Compatibility](#axis-os-compatibility) section above for more details.
+**Message Broker Connection Errors**: Errors like `❌ Command error: Failed to create data connection` typically indicate the SSH user lacks privileges to subscribe to the message broker. On **AXIS OS 12+**, SSH root is disabled and regular users cannot access the broker by default. See [AXIS OS Compatibility](#axis-os-compatibility) for options.
```

project-time-in-area-analytics/README.md (2)
`116-119`: **Clarify processor responsibilities and the "always add it, filter later" pattern**

The docs should explicitly state the dataflow contract between processors so readers understand why events are passed downstream unfiltered and filtered later.
```diff
 ### config_process_threshold_filter.conf
-Filters detection frames based on the configured alert threshold. Only detections where time in area (`time_in_area_seconds`) exceeds `ALERT_THRESHOLD_SECONDS` are passed through to the output stage.
+Filters detection frames based on the configured alert threshold. Only detections where time in area (`time_in_area_seconds`) exceeds `ALERT_THRESHOLD_SECONDS` are passed through to the output stage.
+
+Design note:
+
+- The time-in-area calculator emits metrics for all detections (unfiltered) to keep that processor pure and reusable. Thresholding is intentionally a separate concern handled here. This "produce everything; filter in next processor" pattern simplifies testing and reconfiguration.
```
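To make the "filter in the next processor" pattern concrete, here is a minimal sketch in plain Python. Telegraf's Starlark processors use a similar apply-per-metric contract; the field name matches the README, but the function and parameter names here are illustrative, not the project's actual configuration:

```python
def apply(metric, threshold_seconds=30.0):
    """Starlark-style processor hook: return the metric to keep it, None to drop it.

    The default threshold is illustrative; in the real config it would come
    from ALERT_THRESHOLD_SECONDS.
    """
    time_in_area = metric.get("time_in_area_seconds", 0.0)
    return metric if time_in_area >= threshold_seconds else None


# The calculator stage emits everything; only this stage decides what alerts.
calculator_output = [
    {"track_id": "track_001", "time_in_area_seconds": 4.0},
    {"track_id": "track_002", "time_in_area_seconds": 31.5},
]
alerts = [m for m in calculator_output if apply(m) is not None]
```

Because the calculator never drops metrics, the threshold can be changed or removed without touching the calculator's state logic.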
`109-115`: **Reduce side effects in the time-in-area function (Starlark) for testability**

To meet the PR objective of reducing side effects, split state management from pure computation in `track_duration_calculator.star`. Keep a pure function like `compute_time_in_area(prev_state, detection) -> (new_state, time_in_area_seconds)` and perform Telegraf mutations (adding fields/metrics) outside the pure function.

I can draft the Starlark refactor and update the README to reflect the cleaner separation if you'd like.
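Starlark shares most of Python's expression syntax, so the proposed split can be sketched in Python. This is an illustrative design sketch, not the project's actual API; the 60-second gap reset mirrors the behavior documented in the README:

```python
def compute_time_in_area(prev_state, track_id, now_seconds, stale_gap_seconds=60.0):
    """Pure function: no Telegraf calls, no mutation of the input state.

    prev_state maps track_id -> (first_seen, last_seen).
    Returns (new_state, time_in_area_seconds).
    """
    state = dict(prev_state)  # copy so callers can diff old vs. new state in tests
    first_seen, last_seen = state.get(track_id, (now_seconds, now_seconds))
    if now_seconds - last_seen > stale_gap_seconds:
        first_seen = now_seconds  # long gap: restart the time-in-area measurement
    state[track_id] = (first_seen, now_seconds)
    return state, now_seconds - first_seen


# The impure wrapper (in the real script, the Starlark apply() hook) would then
# just attach the returned value as a metric field.
state = {}
state, t1 = compute_time_in_area(state, "track_001", 10.0)   # first sighting -> 0.0
state, t2 = compute_time_in_area(state, "track_001", 15.0)   # 5s later -> 5.0
state, t3 = compute_time_in_area(state, "track_001", 120.0)  # gap > 60s resets -> 0.0
```

Keeping the function pure means the gap-reset behavior can be unit-tested without a running Telegraf instance.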
project-time-in-area-analytics/test_scripts/record_real_data.py (5)
`54-64`: **Fix misleading comment about auth order in connect()**

The code uses the password if provided; otherwise it attempts key auth. The comment says the opposite.
```diff
-        # Try connecting with key first, then password if provided
+        # Authentication: use password if provided; otherwise attempt key-based auth
```
`84-87`: **Clarify "Use threading for timeout instead of signals" with rationale**

Expand the comment to explain why threads are used and how they are coordinated, addressing the PR TODO.
```diff
-        # Use threading for timeout instead of signals
+        # Use threading for timeout instead of signals:
+        # - signal.alarm is not portable (doesn't work on Windows) and cannot be used reliably in threads
+        # - Paramiko's file-like stdout can block on reads; a reader thread + Event allows cooperative shutdown
+        # - We join the threads after setting `finished` to ensure deterministic cleanup
```
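For reference, the portable reader-thread pattern the comment describes looks roughly like this (simplified sketch; a plain iterator stands in for Paramiko's blocking stdout):

```python
import queue
import threading
import time

def read_lines(stream, finished, out_queue):
    """Reader thread: drain a (potentially blocking) stream until told to stop."""
    for line in stream:
        if finished.is_set():
            break
        out_queue.put(line)

finished = threading.Event()
lines = queue.Queue()
fake_stdout = iter(["frame1\n", "frame2\n", "frame3\n"])  # stand-in for SSH stdout

reader = threading.Thread(target=read_lines, args=(fake_stdout, finished, lines))
reader.start()           # non-daemon: we control shutdown explicitly
time.sleep(0.1)          # the "recording duration" elapses
finished.set()           # cooperative stop; works on any OS, unlike signal.alarm
reader.join(timeout=1)   # deterministic cleanup before yielding results

collected = []
while not lines.empty():
    collected.append(lines.get())
```

The `Event` plus `join` combination is what makes the shutdown deterministic, which a `signal.alarm`-based timeout cannot guarantee in threaded code.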
`111-115`: **Intentionally non-daemon threads + explicit shutdown (justify choice)**

To address the "daemon thread" question: keeping these threads non-daemon ensures controlled shutdown and avoids abrupt termination that can leave SSH channels open. Let's document that decision inline.
```diff
-        reader_thread = threading.Thread(target=read_output)
-        error_thread = threading.Thread(target=read_errors)
+        # Non-daemon by default: ensures we can join and close channels cleanly
+        reader_thread = threading.Thread(target=read_output)
+        error_thread = threading.Thread(target=read_errors)
```
`338-345`: **You announce a data sample but don't print it; emit the first 3 lines**

Small UX bug. Print a short sample so users can immediately validate the output.
```diff
-        click.echo("Sample of recorded data (first 3 lines):")
+        click.echo("Sample of recorded data (first 3 lines):")
+        try:
+            with open(output_file, "r", encoding="utf-8") as sample_f:
+                for i, line in enumerate(sample_f):
+                    if i >= 3:
+                        break
+                    click.echo(f"  {line.rstrip()}")
+        except OSError:
+            pass
```
`51-53`: **Auto-accepting host keys is insecure; consider a safer policy for non-dev use**

AutoAddPolicy is acceptable for a test script, but we should at least warn users or use WarningPolicy. Optional improvement.
```diff
-        self.client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
+        # For production, prefer a known_hosts file or WarningPolicy.
+        self.client.set_missing_host_key_policy(paramiko.WarningPolicy())
```

project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (4)
`918-920`: **Rephrase "output from FixedIT Data Agent analytics" to avoid inaccuracy**

The viewer accepts JSONL from test files or from a Data Agent pipeline, not necessarily "analytics output" directly. Clarify wording per PR TODO.
```diff
-    such as the output from FixedIT Data Agent analytics or test data files.
+    such as JSONL test data files or output from a FixedIT Data Agent pipeline.
```
`824-827`: **Inline the short string truncation with a ternary (style SIM108)**

Small style cleanup per Ruff.
```diff
-    if len(class_type) > 4:
-        class_short = class_type[:4] + "."
-    else:
-        class_short = class_type
+    class_short = class_type[:4] + "." if len(class_type) > 4 else class_type
```
`320-370`: **Docstring return type is outdated: function returns Frame, not a tuple**

The docstring still states it returns a tuple; the function returns a Frame. Fix to avoid confusion.
```diff
-    Returns:
-        Tuple of (Frame object, set of track IDs found in this frame)
+    Returns:
+        Frame object parsed from the JSON line
```
`870-873`: **Consider including track count of alarms in verbose output too**

You already show the overlay in the plot; optionally echo the number of alarm tracks for headless runs even when `--no-ui` is set.

```diff
-    stats_text = (
+    stats_text = (
         f"Tracks: {heatmap_data.num_tracks} | Frames: {heatmap_data.num_frames} | "
         f"Activity: {heatmap_data.activity_percentage:.1f}%{alarm_stats}"
     )
```

And for headless mode, you could also `click.echo(stats_text)` in main when `no_ui` is True.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
- `project-time-in-area-analytics/.images/track-heatmap-120s.png` is excluded by `!**/*.png`
- `project-time-in-area-analytics/.images/track-heatmap-simple.png` is excluded by `!**/*.png`
📒 Files selected for processing (5)
- `project-time-in-area-analytics/README.md` (1 hunks)
- `project-time-in-area-analytics/test_files/simple_tracks.jsonl` (1 hunks)
- `project-time-in-area-analytics/test_scripts/README.md` (1 hunks)
- `project-time-in-area-analytics/test_scripts/record_real_data.py` (1 hunks)
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- project-time-in-area-analytics/test_files/simple_tracks.jsonl
🧰 Additional context used
📓 Path-based instructions (3)
project-*/**
⚙️ CodeRabbit Configuration File
project-*/**: Each example project for the FixedIT Data Agent should be self-contained and designed as a standalone example / tutorial showing how to use the FixedIT Data Agent. The project should be well documented and include all the necessary files to run the example. A reader should be able to understand the background and the purpose of the project, and be able to understand the implementation details and how to use it. If the project includes a dashboard, it should be placed in a new folder in the repo-root/dashboard-deployments/ directory instead of being included in the project folder. In the README for the project, the first time the FixedIT Data Agent is mentioned, we should link to the product page: https://fixedit.ai/products-data-agent/ Emphasize that these solutions are accessible to system integrators and IT professionals without requiring complex C/C++ ACAP development. Highlight that custom edge intelligence can be implemented using familiar IT tools (configuration files, shell scripts, REST APIs) rather than traditional embedded programming approaches. Note that bash does not exist on the Axis devices, so .sh scripts should be written in portable shell syntax.
README Structure: All project README files should follow the standardized template at `.project_readme_template.md`. This template ensures consistent documentation that serves both audiences: (1) IT professionals who want plug-and-play solutions, and (2) developers who want to customize and extend the examples. The template enforces a logical structure: 1) Introduction and background (project purpose, how it works, why choose this approach) 2) Technical explanation and quick start guide (compatibility, setup, troubleshooting) 3) Developer-oriented notes (local testing, implementation details)
Key README Requirements: - Include compatibility sections for both AXIS OS and FixedIT Data Agent versions - Explain host testing and its limitations and why certain parts require actual Axis d...
Files:
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py`
- `project-time-in-area-analytics/test_scripts/record_real_data.py`
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
**/*
⚙️ CodeRabbit Configuration File
This repository serves FixedIT Data Agent users across a spectrum from plug-and-play dashboard deployment to advanced edge device customization. Consider whether changes maintain accessibility for both DevOps professionals (server-side focus) and developers (edge customization focus). If new features are added or existing ones changed significantly, ensure documentation clearly explains the intended audience and usage level. We use prettier for formatting of common file formats like markdown, yaml, json, etc. Example projects should be placed in the repo-root in a directory named project-*/. Whenever referencing Axis device model names like "D4100-VE mk II" or "M3045", prefix it with "AXIS" in capital letters (e.g. "AXIS D4100-VE mk II"). When using Axis as a company name, use "Axis Communications", note that all-caps is only used when "AXIS" is used in their product names, not their company name. When using the name of an Axis product, google on the name to verify that it is correctly identified. Avoid using 'cameras' or 'Axis cameras' unless the solution in related to visual analytics, otherwise prefer using 'Axis devices' to show that the FixedIT Data Agent also works with strobes, speakers, door stations, etc. Images used for the README should be placed in a directory called .images/ in the affected project folder. These images might also be referred to from other sources like the top-level README.md file.
Files:
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py`
- `project-time-in-area-analytics/test_scripts/record_real_data.py`
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
**/*.md
⚙️ CodeRabbit Configuration File
**/*.md: Documentation files should clearly communicate the dual audience: (1) server-side dashboard users who want to keep agent with bundled configs, and (2) edge device developers who want to customize agent behavior. Ensure examples and instructions are appropriate for the intended skill level and use case. Since this is a public repository, we should not include any sensitive information, the instructions should be easily understandable for a wide audience, and we should avoid using any jargon or technical terms that are not commonly used.
Anchor Links: For headings that contain Unicode characters (emojis, special characters) or backticks, use HTML anchor tags to ensure consistent linking behavior across different platforms (GitHub, VS Code, etc.). Add a separate line with `<a id="anchor-name"></a>` before the heading, and update the table of contents to reference this manual anchor.
Files:
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
🧠 Learnings (2)
📚 Learning: 2025-08-10T14:54:48.316Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:350-351
Timestamp: 2025-08-10T14:54:48.316Z
Learning: In the fixedit-data-agent-examples repository, shell portability requirements (such as Axis devices using POSIX /bin/sh instead of bash) should be documented in a general scripting guide rather than repeated in individual project README files. This approach was confirmed by daniel-falk for better documentation organization.
Applied to files:
project-time-in-area-analytics/README.md
📚 Learning: 2025-08-07T15:23:05.927Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:57-62
Timestamp: 2025-08-07T15:23:05.927Z
Learning: In the fixedit-ai/fixedit-data-agent-examples repository, when GitHub generates anchors for headings that contain backticks with spaces around them (like `### `filename` - Description`), it removes the backticks but preserves those spaces as dashes in the anchor, creating triple dashes (---) between the filename and description parts. TOC generators may not handle this correctly and need manual adjustment.
Applied to files:
project-time-in-area-analytics/README.md
🪛 Ruff (0.12.2)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
824-827: Use ternary operator `class_short = class_type[:4] + "." if len(class_type) > 4 else class_type` instead of if-else-block

Replace if-else-block with `class_short = class_type[:4] + "." if len(class_type) > 4 else class_type`
(SIM108)
🪛 LanguageTool
project-time-in-area-analytics/README.md
[grammar] ~52-~362: Repeated QB_NEW_EN notices ("There might be a mistake here.") are raised on nearly every table-of-contents link, list item, and description in this file; they appear to be false positives triggered by markdown link and list syntax rather than real grammar issues.
(QB_NEW_EN)
project-time-in-area-analytics/test_scripts/README.md
[grammar] ~9-~147: The same repeated QB_NEW_EN notice ("There might be a mistake here.") is flagged across this file's table of contents, parameter list, authentication notes, and feature descriptions; as above, these appear to be false positives on markdown syntax.
(QB_NEW_EN)
🔇 Additional comments (3)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (3)
`765-775`: **Nice validation output: per-track alarm counts**

Printing alarm-bearing tracks and counts helps compare with Telegraf results. This fulfills the "add visualization/validation aids" objective.
`699-706`: **Title and legend setup is solid and readable**

The separation between 3-color and 4-color modes with proper normalization is well done.
`29-34`: **Legend colors include unclassified state, aligning with data sparsity**

Good call including an "Unclassified" state to reflect early frames before classification resolves.
Force-pushed from 0782b52 to 8a56fa5 (Compare)
❌ Time-in-Area Analytics Tests Failed

Some tests did not pass:

- ❌ Track Heatmap Viewer: failure

What This Means

Track Heatmap Viewer Issues:
Force-pushed from 8a56fa5 to d742de0 (Compare)
🎉 Time-in-Area Analytics Tests Passed!

All tests completed successfully:

- ✅ Track Heatmap Viewer - All alarm detection scenarios passed
Actionable comments posted: 3
♻️ Duplicate comments (3)
project-time-in-area-analytics/README.md (3)
`85-94`: **Fill in Compatibility details; TODOs block usability**

The Compatibility section still contains TODOs for AXIS OS and FixedIT Data Agent features. Per repo guidelines/template, these must be explicit and actionable.
Proposed structure to replace placeholders (fill with your actual minimums):
```diff
 ### AXIS OS Compatibility
-
-- **Minimum AXIS OS version**: TODO
-- **Required tools**: TODO
+
+- Minimum AXIS OS version: <fill in, e.g., 11.6 or 12.x>
+- Required tools on device:
+  - message-broker CLI (preinstalled on recent AXIS OS)
+  - SSH access enabled (dev mode user or equivalent)

 ### FixedIT Data Agent Compatibility
-
-- **Minimum Data Agent version**: 1.0
-- **Required features**: TODO
+
+- Minimum FixedIT Data Agent version: 1.0
+- Required features:
+  - Starlark processor enabled
+  - JSON input and JSON output plugins
+  - MQTT output (if using alerts to broker)
```
`95-99`: **Quick Setup section is empty; add a minimal end-to-end guide**

The template requires a concrete Quick Setup before Troubleshooting so users can run the example immediately.
Apply:
```diff
 ## Quick Setup
-
-### Troubleshooting
+
+1) Clone and enter the project:
+   git clone https://github.com/fixedit-ai/fixedit-data-agent-examples.git
+   cd fixedit-data-agent-examples/project-time-in-area-analytics
+
+2) Host testing (no device required):
+   - Install Telegraf
+   - export HELPER_FILES_DIR="$(pwd)"; export SAMPLE_FILE="test_files/simple_tracks.jsonl"
+   - Run time-in-area only:
+     telegraf --config test_files/config_input_sample_data.conf \
+       --config config_process_track_duration.conf \
+       --config test_files/config_output_stdout.conf \
+       --once
+
+3) On-device testing (AXIS device):
+   - Ensure FixedIT Data Agent is installed and running
+   - Upload helper/config files from this folder into the agent
+   - Configure output (MQTT or stdout) and start the pipeline
+   - Monitor logs; disable Debug after testing
+
+4) Optional: record real device data for SAMPLE_FILE:
+   python test_scripts/record_real_data.py --host <device_ip> --username <dev_user> -o test_files/real_device_data.jsonl
+
+### Troubleshooting
```
`111-115`: **Gate track-removal debug output behind a debug flag**

The docs currently claim unconditional debug output on stale-track removal. Align with PR objectives by gating this behind a debug flag so production runs aren't noisy.
```diff
-- Outputs debug messages when tracks are removed
+- Optionally outputs debug metrics when tracks are removed (enable by setting DEBUG_ENABLED=true in the Starlark script/config)
```

If you want, I can wire DEBUG_ENABLED into track_duration_calculator.star and the sample configs in this PR.
🧹 Nitpick comments (12)
project-time-in-area-analytics/test_scripts/README.md (4)
`132-139`: **Add a stable HTML anchor for the heading with a special character (&)**

Headings containing special characters can produce inconsistent anchors across platforms. Add a manual anchor and update the TOC to point to it.
```diff
+<a id="features-example-output"></a>
 ### Features & Example Output
```
`154-156`: **Specify language for fenced code block (MD040)**

The fenced code block lacks a language identifier. Use `text` to satisfy markdownlint and improve readability.
````diff
-```
+```text
 Activity = (Frames with ≥1 detection / Total frames) × 100
 ```
````

---

`36-36`: **Consistent product naming: use "AXIS" for devices**

Per repo style, refer to devices as AXIS (all caps) when naming products/devices.

```diff
-The `record_real_data.py` script allows you to record live analytics scene description data from Axis cameras for testing and analysis purposes.
+The `record_real_data.py` script allows you to record live analytics scene description data from AXIS cameras for testing and analysis purposes.
```
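The activity formula shown above is easy to verify in code. A hypothetical helper (the per-frame data structure is illustrative, not the viewer's actual representation) could be:

```python
def activity_percentage(frames):
    """Percentage of frames containing at least one detection.

    `frames` is a list where each element is that frame's list of detections.
    """
    if not frames:
        return 0.0
    active = sum(1 for detections in frames if len(detections) >= 1)
    return 100.0 * active / len(frames)

# Example: 3 of 4 frames have detections -> 75.0
sample_frames = [["track_001"], [], ["track_001", "track_002"], ["track_003"]]
```

Note that frames without any detections are still counted in the denominator, which is why sparse recordings can show low activity percentages.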
**76-88: Document minimum required privileges for authentication**

Guidelines require specifying minimum privileges and encouraging least privilege. Add a short note indicating the recommended user role/access (dev-mode user with broker access) instead of root, especially for AXIS OS 12+.
````diff
 ### AXIS OS Compatibility
@@
 - **AXIS OS < 12**: You can SSH as root without restrictions:
@@
 - **AXIS OS 12+**: SSH as root is disabled. Regular SSH users cannot access the message broker. If you are a [Technology Integration Partner](https://www.axis.com/partner/technology-integration-partner-program), you can enable dev mode in the camera (instructions included in [this e-learning course](https://learning.fixedit.ai/spaces/11778313/content)) and use the FixedIT Data Agent user:
   ```bash
   python record_real_data.py --host <device_ip> --username acap-fixeditdataagent
   ```
+
+Minimum privileges: use the dev-mode FixedIT Data Agent user (or equivalent) with message-broker subscription access. Avoid using root unless necessary on legacy devices.
````
project-time-in-area-analytics/README.md (4)

**3-3: Consistent branding: "AXIS" for devices**

Use "AXIS" when referring to devices/products per style guide.

```diff
-This project demonstrates how to implement time-in-area analytics for Axis fisheye cameras using the [FixedIT Data Agent](https://fixedit.ai/products-data-agent/).
+This project demonstrates how to implement time-in-area analytics for AXIS fisheye cameras using the [FixedIT Data Agent](https://fixedit.ai/products-data-agent/).
```
**7-7: Consistent branding: "AXIS" for devices**

Same terminology nit as above.
```diff
-The system consumes real-time object detection data from Axis fisheye cameras and implements custom time-in-area analytics logic to track object time in area and trigger appropriate responses.
+The system consumes real-time object detection data from AXIS fisheye cameras and implements custom time-in-area analytics logic to track object time in area and trigger appropriate responses.
```
**171-180: Adjust expected output wording to reflect debug gating**

If debug is gated, mention that cleanup messages appear only when enabled.
```diff
-# Test time in area calculation only (shows all detections + debug messages)
+# Test time in area calculation only (shows all detections; emits cleanup debug only if DEBUG is enabled)
@@
-**Expected Output:**
-All detections with `time_in_area_seconds` field plus debug messages when stale tracks are cleaned up.
+**Expected Output:**
+All detections with `time_in_area_seconds`. Cleanup debug lines appear only when the debug flag is enabled.
```
**921-923: Clarify wording about data source**

"the output from FixedIT Data Agent analytics" is a bit vague. Recommend clarifying to "detection messages produced by the FixedIT Data Agent pipeline" to match the Data Transformation section.
```diff
- such as the output from FixedIT Data Agent analytics or test data files.
+ such as the detection messages produced by the FixedIT Data Agent pipeline or the provided test data files.
```

project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (4)
**333-370: Fix incorrect docstring: function returns a Frame, not a Tuple**

The Returns section claims a tuple but the function returns a Frame. Update to prevent confusion.
```diff
-    Returns:
-        Tuple of (Frame object, set of track IDs found in this frame)
+    Returns:
+        Frame object
```
**824-831: Minor style: use a conditional expression (Ruff SIM108)**

Simplify the `class_short` assignment.
```diff
-    if len(class_type) > 4:
-        class_short = class_type[:4] + "."
-    else:
-        class_short = class_type
+    class_short = (class_type[:4] + ".") if len(class_type) > 4 else class_type
```
**891-936: Clarify "FixedIT Data Agent analytics" wording in CLI help**

Match the repo's phrasing around "pipeline-produced detection messages".
```diff
-    INPUT_FILE should be a JSONL file containing frame data with track IDs,
-    such as the output from FixedIT Data Agent analytics or test data files.
+    INPUT_FILE should be a JSONL file containing frame data with track IDs,
+    such as detection messages produced by the FixedIT Data Agent pipeline or the provided test data files.
```
**622-704: Extract colormap setup to reduce locals in `_setup_alarm_heatmap_plot` (optional)**

Pylint's R0914 (too-many-locals) may trigger in both `_setup_alarm_heatmap_plot` and `render_heatmap`. To streamline these functions, consider:

- Extracting the colormap and norm construction into a helper, e.g. `def _build_heatmap_cmap(show_alarm: bool) -> Tuple[ListedColormap, Optional[BoundaryNorm], int, str]`
- Moving any stats-overlay or annotation logic into a separate function

To verify whether R0914 is currently flagged, run:

```shell
pylint project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py \
  --disable=all --enable=R0914
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
- `project-time-in-area-analytics/.images/track-heatmap-120s.png` is excluded by `!**/*.png`
- `project-time-in-area-analytics/.images/track-heatmap-simple.png` is excluded by `!**/*.png`
📒 Files selected for processing (5)
- `project-time-in-area-analytics/README.md` (1 hunks)
- `project-time-in-area-analytics/test_files/simple_tracks.jsonl` (1 hunks)
- `project-time-in-area-analytics/test_scripts/README.md` (1 hunks)
- `project-time-in-area-analytics/test_scripts/record_real_data.py` (1 hunks)
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- project-time-in-area-analytics/test_scripts/record_real_data.py
- project-time-in-area-analytics/test_files/simple_tracks.jsonl
🧰 Additional context used
📓 Path-based instructions (3)
project-*/**
⚙️ CodeRabbit Configuration File
project-*/**: Each example project for the FixedIT Data Agent should be self-contained and designed as a standalone example / tutorial showing how to use the FixedIT Data Agent. The project should be well documented and include all the necessary files to run the example. A reader should be able to understand the background and the purpose of the project, and be able to understand the implementation details and how to use it. If the project includes a dashboard, it should be placed in a new folder in the repo-root/dashboard-deployments/ directory instead of being included in the project folder. In the README for the project, the first time the FixedIT Data Agent is mentioned, we should link to the product page: https://fixedit.ai/products-data-agent/ Emphasize that these solutions are accessible to system integrators and IT professionals without requiring complex C/C++ ACAP development. Highlight that custom edge intelligence can be implemented using familiar IT tools (configuration files, shell scripts, REST APIs) rather than traditional embedded programming approaches. Note that bash does not exist on the Axis devices, so .sh scripts should be written in portable shell syntax.
README Structure: All project README files should follow the standardized template at `.project_readme_template.md`. This template ensures consistent documentation that serves both audiences: (1) IT professionals who want plug-and-play solutions, and (2) developers who want to customize and extend the examples. The template enforces a logical structure: 1) Introduction and background (project purpose, how it works, why choose this approach) 2) Technical explanation and quick start guide (compatibility, setup, troubleshooting) 3) Developer-oriented notes (local testing, implementation details)
Key README Requirements: - Include compatibility sections for both AXIS OS and FixedIT Data Agent versions - Explain host testing and its limitations and why certain parts require actual Axis d...
Files:
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py`
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
**/*
⚙️ CodeRabbit Configuration File
This repository serves FixedIT Data Agent users across a spectrum from plug-and-play dashboard deployment to advanced edge device customization. Consider whether changes maintain accessibility for both DevOps professionals (server-side focus) and developers (edge customization focus). If new features are added or existing ones changed significantly, ensure documentation clearly explains the intended audience and usage level. We use prettier for formatting of common file formats like markdown, yaml, json, etc. Example projects should be placed in the repo-root in a directory named project-*/. Whenever referencing Axis device model names like "D4100-VE mk II" or "M3045", prefix it with "AXIS" in capital letters (e.g. "AXIS D4100-VE mk II"). When using Axis as a company name, use "Axis Communications", note that all-caps is only used when "AXIS" is used in their product names, not their company name. When using the name of an Axis product, google on the name to verify that it is correctly identified. Avoid using 'cameras' or 'Axis cameras' unless the solution in related to visual analytics, otherwise prefer using 'Axis devices' to show that the FixedIT Data Agent also works with strobes, speakers, door stations, etc. Images used for the README should be placed in a directory called .images/ in the affected project folder. These images might also be referred to from other sources like the top-level README.md file.
Files:
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py`
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
**/*.md
⚙️ CodeRabbit Configuration File
**/*.md: Documentation files should clearly communicate the dual audience: (1) server-side dashboard users who want to keep agent with bundled configs, and (2) edge device developers who want to customize agent behavior. Ensure examples and instructions are appropriate for the intended skill level and use case. Since this is a public repository, we should not include any sensitive information, the instructions should be easily understandable for a wide audience, and we should avoid using any jargon or technical terms that are not commonly used.
Anchor Links: For headings that contain Unicode characters (emojis, special characters) or backticks, use HTML anchor tags to ensure consistent linking behavior across different platforms (GitHub, VS Code, etc.). Add a separate line with `<a id="anchor-name"></a>` before the heading, and update the table of contents to reference this manual anchor.
Files:
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
🧠 Learnings (2)
📚 Learning: 2025-08-10T14:54:48.316Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:350-351
Timestamp: 2025-08-10T14:54:48.316Z
Learning: In the fixedit-data-agent-examples repository, shell portability requirements (such as Axis devices using POSIX /bin/sh instead of bash) should be documented in a general scripting guide rather than repeated in individual project README files. This approach was confirmed by daniel-falk for better documentation organization.
Applied to files:
project-time-in-area-analytics/README.md
📚 Learning: 2025-08-07T15:23:05.927Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:57-62
Timestamp: 2025-08-07T15:23:05.927Z
Learning: In the fixedit-ai/fixedit-data-agent-examples repository, when GitHub generates anchors for headings that contain backticks with spaces around them (like `### `filename` - Description`), it removes the backticks but preserves those spaces as dashes in the anchor, creating triple dashes (---) between the filename and description parts. TOC generators may not handle this correctly and need manual adjustment.
Applied to files:
project-time-in-area-analytics/README.md
🪛 Ruff (0.12.2)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
827-830: Use ternary operator class_short = class_type[:4] + "." if len(class_type) > 4 else class_type instead of if-else-block
Replace if-else-block with class_short = class_type[:4] + "." if len(class_type) > 4 else class_type
(SIM108)
🪛 GitHub Actions: Python Code Quality Check for the time-in-area-analytics project
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
[warning] 622-622: pylint: R0914: Too many local variables (16/15) (too-many-locals)
[warning] 799-799: pylint: R0914: Too many local variables (18/15) (too-many-locals)
🪛 LanguageTool
project-time-in-area-analytics/README.md
[grammar] ~52-~52: There might be a mistake here.
Context: ...Contents - Compatibility - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~53-~53: There might be a mistake here.
Context: ...ompatibility) - AXIS OS Compatibility - [FixedIT Data Agent Compatibility](#fixed...
(QB_NEW_EN)
[grammar] ~54-~54: There might be a mistake here.
Context: ...y) - FixedIT Data Agent Compatibility - Quick Setup - [Troubles...
(QB_NEW_EN)
[grammar] ~55-~55: There might be a mistake here.
Context: ...data-agent-compatibility) - Quick Setup - Troubleshooting - [Co...
(QB_NEW_EN)
[grammar] ~56-~56: There might be a mistake here.
Context: ...etup](#quick-setup) - Troubleshooting - [Configuration Files](#configuration-file...
(QB_NEW_EN)
[grammar] ~57-~57: There might be a mistake here.
Context: ...#troubleshooting) - Configuration Files - [config_process_track_duration.conf and t...
(QB_NEW_EN)
[grammar] ~58-~58: There might be a mistake here.
Context: ....conf and track_duration_calculator.star](#config_process_track_durationconf-and-track_duration_calculatorstar) - [config_process_threshold_filter.conf](#c...
(QB_NEW_EN)
[grammar] ~59-~59: There might be a mistake here.
Context: ... - config_process_threshold_filter.conf - [test_files/config_output_stdout.conf](#t...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ... - test_files/config_output_stdout.conf - [test_files/sample_data_feeder.sh](#test_...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...f) - test_files/sample_data_feeder.sh - [Future Enhancements](#future-enhancement...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...le_data_feedersh) - Future Enhancements - [Local Testing on Host](#local-testing-on...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...e-enhancements) - Local Testing on Host - Prerequisites - [Host...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...ocal-testing-on-host) - Prerequisites - [Host Testing Limitations](#host-testing-...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...equisites) - Host Testing Limitations - Test Commands - [Te...
(QB_NEW_EN)
[grammar] ~66-~66: There might be a mistake here.
Context: ...-testing-limitations) - Test Commands - [Test Time in Area Calculation Only](#tes...
(QB_NEW_EN)
[grammar] ~67-~67: There might be a mistake here.
Context: ... - Test Time in Area Calculation Only - [Test Complete Alert Pipeline](#test-comp...
(QB_NEW_EN)
[grammar] ~68-~68: There might be a mistake here.
Context: ...nly) - Test Complete Alert Pipeline - [Test with Real Device Data](#test-with-r...
(QB_NEW_EN)
[grammar] ~69-~69: There might be a mistake here.
Context: ...eline) - Test with Real Device Data - [Analytics Data Structure](#analytics-dat...
(QB_NEW_EN)
[grammar] ~70-~70: There might be a mistake here.
Context: ...device-data) - Analytics Data Structure - Data Format - [Data Beh...
(QB_NEW_EN)
[grammar] ~71-~71: There might be a mistake here.
Context: ...alytics-data-structure) - Data Format - Data Behavior - [Data...
(QB_NEW_EN)
[grammar] ~72-~72: There might be a mistake here.
Context: ...Format](#data-format) - Data Behavior - [Data Transformation for Telegraf](#data-...
(QB_NEW_EN)
[grammar] ~73-~73: There might be a mistake here.
Context: ...r) - Data Transformation for Telegraf - [Track Activity Visualization](#track-act...
(QB_NEW_EN)
[grammar] ~74-~74: There might be a mistake here.
Context: ...elegraf) - Track Activity Visualization - [Recording Real Device Data](#recording-r...
(QB_NEW_EN)
[grammar] ~75-~75: There might be a mistake here.
Context: ...alization) - Recording Real Device Data - Automated Testing ...
(QB_NEW_EN)
[grammar] ~76-~76: There might be a mistake here.
Context: ...g-real-device-data) - Automated Testing - GitHub Workflow - [...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ...#automated-testing) - GitHub Workflow - Test Data - [PR Comments]...
(QB_NEW_EN)
[grammar] ~78-~78: There might be a mistake here.
Context: ...rkflow](#github-workflow) - Test Data - PR Comments <!-- tocstop...
(QB_NEW_EN)
[grammar] ~87-~87: There might be a mistake here.
Context: ...ity - Minimum AXIS OS version: TODO - Required tools: TODO ### FixedIT Data...
(QB_NEW_EN)
[grammar] ~92-~92: There might be a mistake here.
Context: ...y - Minimum Data Agent version: 1.0 - Required features: TODO ## Quick Setu...
(QB_NEW_EN)
[grammar] ~99-~99: There might be a mistake here.
Context: ...in the Uploaded helper files section (refresh page to see updates). Note: Don't ...
(QB_NEW_EN)
[grammar] ~152-~152: There might be a mistake here.
Context: ...processor logic testing with sample data - MQTT output configuration validation (TO...
(QB_NEW_EN)
[grammar] ~153-~153: There might be a mistake here.
Context: ...T output configuration validation (TODO) - Alert threshold configuration testing *...
(QB_NEW_EN)
[grammar] ~178-~178: There might be a mistake here.
Context: ... --once ``` Expected Output: All detections with `time_in_area_second...
(QB_NEW_EN)
[grammar] ~199-~199: There might be a mistake here.
Context: ... --once ``` Expected Output: Only detections with time in area (`time...
(QB_NEW_EN)
[grammar] ~297-~297: There might be a mistake here.
Context: ...ed observations into individual messages - Preserves object bounding box coordina...
(QB_NEW_EN)
[grammar] ~298-~298: There might be a mistake here.
Context: ...serves** object bounding box coordinates - Simplifies object classification to ju...
(QB_NEW_EN)
[grammar] ~299-~299: There might be a mistake here.
Context: ...* object classification to just the type - Skips frames with no observations enti...
(QB_NEW_EN)
[grammar] ~350-~350: There might be a mistake here.
Context: ...ata with: - track_001: Appears twice with 8s gap (total time: 11.33s) - `track_00...
(QB_NEW_EN)
[grammar] ~350-~350: There might be a mistake here.
Context: ...s twice with 8s gap (total time: 11.33s) - track_002: Continuous presence for 2.22s - `track...
(QB_NEW_EN)
[grammar] ~351-~351: There might be a mistake here.
Context: ...rack_002: Continuous presence for 2.22s - track_003: Continuous presence for 2.22s - track...
(QB_NEW_EN)
[grammar] ~352-~352: There might be a mistake here.
Context: ...rack_003: Continuous presence for 2.22s - track_004: Single appearance (0s) - track_005`: ...
(QB_NEW_EN)
[grammar] ~353-~353: There might be a mistake here.
Context: ...2s - track_004: Single appearance (0s) - track_005: Long duration track for 2.5 minutes (1...
(QB_NEW_EN)
[grammar] ~362-~362: There might be a mistake here.
Context: ...Success confirmation when all tests pass - ❌ Specific failure diagnostics and troub...
(QB_NEW_EN)
project-time-in-area-analytics/test_scripts/README.md
[grammar] ~9-~9: There might be a mistake here.
Context: ... Contents - Installation - [Recording Real Device Data](#recording-r...
(QB_NEW_EN)
[grammar] ~10-~10: There might be a mistake here.
Context: ...tallation) - Recording Real Device Data - Basic Usage - [Advanced...
(QB_NEW_EN)
[grammar] ~11-~11: There might be a mistake here.
Context: ...rding-real-device-data) - Basic Usage - Advanced Usage - [Pa...
(QB_NEW_EN)
[grammar] ~12-~12: There might be a mistake here.
Context: ...Usage](#basic-usage) - Advanced Usage - Parameters - [Authentica...
(QB_NEW_EN)
[grammar] ~13-~13: There might be a mistake here.
Context: ... Usage](#advanced-usage) - Parameters - Authentication - [AX...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...meters](#parameters) - Authentication - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~15-~15: There might be a mistake here.
Context: ...thentication) - AXIS OS Compatibility - Use Cases - [Track Heatmap ...
(QB_NEW_EN)
[grammar] ~16-~16: There might be a mistake here.
Context: ...](#axis-os-compatibility) - Use Cases - [Track Heatmap Visualization](#track-heat...
(QB_NEW_EN)
[grammar] ~17-~17: There might be a mistake here.
Context: ...se-cases) - Track Heatmap Visualization - Basic Usage - [Advanc...
(QB_NEW_EN)
[grammar] ~18-~18: There might be a mistake here.
Context: ...-heatmap-visualization) - Basic Usage - Advanced Usage - [...
(QB_NEW_EN)
[grammar] ~19-~19: There might be a mistake here.
Context: ...age](#basic-usage-1) - Advanced Usage - [Features & Example Output](#features--ex...
(QB_NEW_EN)
[grammar] ~20-~20: There might be a mistake here.
Context: ...-usage-1) - Features & Example Output - [Activity Percentage Calculation](#activi...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...) - Activity Percentage Calculation - Prerequisites <!-- toc...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ...Device IP address or hostname (required) - --username, -u: SSH username (default: acap-fixeditdat...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...sername (default: acap-fixeditdataagent) - --password, -p: SSH password (optional, will prompt if...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...ssword (optional, will prompt if needed) - --duration, -d: Recording duration in seconds (default...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...ording duration in seconds (default: 30) - --topic: Message broker topic to consume (defau...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...xis.analytics_scene_description.v0.beta) - --source: Message broker source (default: 1) - `...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...rce: Message broker source (default: 1) - --output-file, -o`: Output file path (default: test_files/...
(QB_NEW_EN)
[grammar] ~74-~74: There might be a mistake here.
Context: ...sword Prompt**: Falls back to prompting for password if key auth fails ### AXIS OS...
(QB_NEW_EN)
[grammar] ~93-~93: There might be a mistake here.
Context: ...rded data with visualization tools like track_heatmap_viewer.py - Debugging: Capture problematic scenari...
(QB_NEW_EN)
[grammar] ~141-~141: There might be a mistake here.
Context: ... white box)**: Shows key dataset metrics - Tracks: 42 - Total unique track IDs de...
(QB_NEW_EN)
[grammar] ~142-~142: There might be a mistake here.
Context: ...que track IDs detected across all frames - Frames: 1180 - Number of frames contai...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ...one detection (not total elapsed frames) - Activity: 99.8% - Percentage of frames...
(QB_NEW_EN)
[grammar] ~144-~144: There might be a mistake here.
Context: ...entage of frames with detections present - Alarms: 5 - Number of tracks exceeding...
(QB_NEW_EN)
[grammar] ~145-~145: There might be a mistake here.
Context: ...s exceeding the alarm threshold duration - X-axis (Time): Timestamps of frames wi...
(QB_NEW_EN)
[grammar] ~146-~146: There might be a mistake here.
Context: ...me gaps without detections are not shown - Y-axis (Track IDs): Individual object ...
(QB_NEW_EN)
[grammar] ~147-~147: There might be a mistake here.
Context: ... 3effc, 58cef) sorted alphabetically - Color Legend (right side): Visual scal...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
project-time-in-area-analytics/test_scripts/README.md
22-22: Link fragments should be valid
(MD051, link-fragments)
154-154: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (1)
project-time-in-area-analytics/test_scripts/README.md (1)
**98-99: Good troubleshooting guidance**

Clear callout for typical broker access errors and the AXIS OS 12+ limitation. This will save users time.
> Both tools now behave identically, calculating total time-in-area including brief gaps under 60 seconds. If a gap is longer than 60 seconds (should not happen in data from the Axis cameras!?), then the Telegraf pipeline would forget about the track and the time-in-area would be reset to 0 once the track reappears.
💡 Verification agent
🧩 Analysis chain
Enforce stale-gap reset parity or adjust claim
README claims both tools reset time-in-area after gaps > 60s. The viewer’s implementation currently does not reset (it tracks only “first seen”). Either implement the reset logic in the viewer (preferred; see code review on track_heatmap_viewer.py) or adjust this claim.
After applying the code change suggested in track_heatmap_viewer.py, run these checks locally to confirm parity on the sample data:
🏁 Script executed:
#!/bin/bash
# Verify viewer resets after long gaps by counting alarm segments for a track that reappears
rg -n -C2 'def _create_alarm_matrix' project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
# Confirm README parity claim present
rg -n 'Both tools now behave identically' project-time-in-area-analytics/README.md

Length of output: 688
🏁 Script executed:
#!/bin/bash
# Locate all occurrences of track_heatmap_viewer.py in the repo
fd -t f track_heatmap_viewer.py
# Search for gap-handling logic within the implementation (excluding tests)
rg -n -C2 'gap' project-time-in-area-analytics/track_heatmap_viewer.py
rg -n -C2 'threshold' project-time-in-area-analytics/track_heatmap_viewer.py
rg -n -C2 'first_seen' project-time-in-area-analytics/track_heatmap_viewer.py

Length of output: 612
🏁 Script executed:
#!/bin/bash
# 1. Explore the project-time-in-area-analytics directory structure
ls -R project-time-in-area-analytics
# 2. Find any Python definitions of _create_alarm_matrix across the repo
rg -n 'def _create_alarm_matrix' -t py
# 3. List all Python files under project-time-in-area-analytics containing “viewer” in their name
fd -e py project-time-in-area-analytics | rg -i viewer

Length of output: 720
#!/bin/bash
# Verify whether the viewer resets time-in-area after gaps in the sample script
rg -n -C3 'first_seen' project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
rg -n -C3 'last_seen' project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
rg -n -C3 'gap' project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py

**Enforce stale-gap reset parity or adjust README**
The README states that both tools reset time-in-area after gaps > 60 s, but the viewer script in
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (in _create_alarm_matrix) does not implement this reset—it only records “first seen.”
Either:
- Implement gap-reset logic in `_create_alarm_matrix` (e.g. track `last_seen`, compare timestamps, and zero the accumulator when gap > 60 s), or
- Remove/modify the "reset to 0 once the track reappears" claim in `README.md` to reflect current behavior.
Affected locations:
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py:501+` (`def _create_alarm_matrix`)
- `project-time-in-area-analytics/README.md:344-345`
🤖 Prompt for AI Agents
In project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py around
def _create_alarm_matrix (lines ~501+) and README.md lines 344–345: the viewer
currently only records "first seen" and does not reset the time-in-area
accumulator when a track has a gap >60s as the README claims; to fix, modify
_create_alarm_matrix to track per-track last_seen timestamps and when processing
a new detection compute delta = current_ts - last_seen, and if delta > 60s reset
the track's accumulated time-in-area to zero before adding the new interval,
then update last_seen to current_ts; alternatively, if you prefer not to change
behavior, update README.md to remove or reword the assertion that the viewer
resets time-in-area after >60s gaps so documentation matches current
implementation.
- [Features & Example Output](#features--example-output)
- [Activity Percentage Calculation](#activity-percentage-calculation)
- [Prerequisites](#prerequisites)
🛠️ Refactor suggestion
Fix invalid TOC link and add the missing section
The TOC includes Prerequisites, but the document has no corresponding section, triggering MD051. Either remove the TOC entry or add a Prerequisites section.
Apply this diff to update the TOC link for “Features & Example Output” (see next comment) and keep Prerequisites; then add the section at the end (shown after the diff):
```diff
-  - [Features & Example Output](#features--example-output)
+  - [Features & Example Output](#features-example-output)
   - [Prerequisites](#prerequisites)
```
- [Prerequisites](#prerequisites)Add the section at the end of this file:
## Prerequisites
- Python 3.10+ with pip
- A workstation with matplotlib-capable environment (for interactive plots) or use --no-ui
- Network access to your AXIS device for real-data recording (optional)
- test_scripts/requirements.txt installed in your active environment:
  pip install -r requirements.txt

🧰 Tools
🪛 LanguageTool
[grammar] ~20-~20: There might be a mistake here.
Context: ...-usage-1) - Features & Example Output - [Activity Percentage Calculation](#activi...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...) - Activity Percentage Calculation - Prerequisites <!-- toc...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
22-22: Link fragments should be valid
(MD051, link-fragments)
🤖 Prompt for AI Agents
In project-time-in-area-analytics/test_scripts/README.md around lines 20 to 22,
the table of contents references a "Prerequisites" section that does not exist
causing MD051; update the TOC link for “Features & Example Output” as instructed
and retain the "Prerequisites" entry, then add a new "## Prerequisites" section
at the end of the file containing the four bullet points from the reviewer
(Python 3.10+ with pip; matplotlib-capable workstation or --no-ui; network
access to AXIS device optional; pip install -r requirements.txt for
test_scripts/requirements.txt). Ensure the formatting matches existing markdown
style and run a markdown linter to confirm MD051 is resolved.
+1
@fixedit-olatz Thank you for confirming! I'm glad the suggested fix will help resolve the TOC and prerequisites section issue. 🎯
/)/)
( ^.^)
(")_(")
Happy to help!
Force-pushed d742de0 to b8a9029 (Compare)
🎉 Time-in-Area Analytics Tests Passed!

All tests completed successfully:

✅ Track Heatmap Viewer - All alarm detection scenarios passed
Actionable comments posted: 1
♻️ Duplicate comments (7)
project-time-in-area-analytics/test_scripts/README.md (1)
**20-22: Fix invalid TOC anchor and add the missing "Prerequisites" section**

"Features & Example Output" anchor is invalid (MD051), and the TOC lists "Prerequisites" without a section. Update the anchor and add the section at the end.
Apply this diff to fix the anchor:
```diff
-- [Features & Example Output](#features--example-output)
+- [Features & Example Output](#features-example-output)
-- [Prerequisites](#prerequisites)
+- [Prerequisites](#prerequisites)
```

Add this section at the end of the file:
```markdown
## Prerequisites

- Python 3.10+ with pip
- A workstation with a matplotlib-capable environment (for interactive plots), or use --no-ui
- Network access to your AXIS device for real-data recording (optional)
- test_scripts/requirements.txt installed in your active environment: pip install -r requirements.txt
```

project-time-in-area-analytics/README.md (4)
85-94: Replace TODOs in compatibility with actionable details

Compatibility sections still contain TODOs. Per repo guidelines, these must be explicit and actionable.
Proposed structure (fill exact versions/features you support):
```diff
 ### AXIS OS Compatibility
-- **Minimum AXIS OS version**: TODO
-- **Required tools**: TODO
+- Minimum AXIS OS version: <fill exact>
+- Required tools on device:
+  - message-broker-cli (preinstalled on recent AXIS OS)
+  - SSH access enabled (dev mode user or equivalent)
+
+Notes:
+- On AXIS OS 12+, SSH root is disabled; use the FixedIT Data Agent dev user.
+- .sh scripts must be POSIX sh (bash is not present on Axis devices).

 ### FixedIT Data Agent Compatibility
 - **Minimum Data Agent version**: 1.0
-- **Required features**: TODO
+- Required features:
+  - Starlark processor enabled
+  - JSON input and JSON output plugins
+  - MQTT output (if using alerts to broker)
```

If you want, I can open a follow-up PR once you confirm the minimum AXIS OS version.
95-101: Quick Setup is empty; add a minimal end-to-end guide

Add a succinct, actionable quick start before Troubleshooting so users can run the example immediately.
```diff
 ## Quick Setup
-
-### Troubleshooting
+1) Clone this repo and cd into the project directory:
+   git clone https://github.com/fixedit-ai/fixedit-data-agent-examples.git
+   cd fixedit-data-agent-examples/project-time-in-area-analytics
+
+2) Host testing (no device required):
+   - Install Telegraf on your workstation
+   - Export env for sample data:
+     HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+     SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+   - Run time-in-area only:
+     telegraf --config test_files/config_input_sample_data.conf \
+       --config config_process_track_duration.conf \
+       --config test_files/config_output_stdout.conf \
+       --once
+
+3) On-device testing (Axis device required):
+   - Ensure FixedIT Data Agent is installed and running on the device
+   - Upload config files from this folder into the agent's helper files
+   - Configure MQTT endpoint or stdout (for testing) in the output config
+   - Start the pipeline and monitor logs (enable Debug temporarily)
+
+4) Optional: record real device data to use as SAMPLE_FILE:
+   python test_scripts/record_real_data.py --host <device_ip> --username <dev_user> -o test_files/real_device_data.jsonl
+
+### Troubleshooting
```
111-115: Gate debug-removal metrics behind a debug flag

Docs say “Outputs debug messages when tracks are removed” but this should be optional to reduce noise.
```diff
 - Automatically cleans up stale tracks (not seen for 60+ seconds)
-- Outputs debug messages when tracks are removed
+- Optionally outputs debug metrics when tracks are removed (enable by setting DEBUG_ENABLED=true in the Starlark script or via config)
```

I can wire this into track_duration_calculator.star and config in this PR so it's off by default.
344-345: Enforce 60s stale-gap reset parity or adjust claim

README claims both tools reset after >60s gaps, but the viewer currently doesn't implement this. Either implement the reset in the viewer or reword the claim.
If you accept implementing parity, apply the viewer change in the linked review (see comment on track_heatmap_viewer.py Lines 569–619). After updating, verify parity:
```shell
#!/bin/bash
# Expect: viewer contains track_last_seen and STALE_GAP_SECONDS logic
rg -n -C2 'track_last_seen|STALE_GAP_SECONDS' project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
```

project-time-in-area-analytics/test_scripts/record_real_data.py (1)
116-126: Close remote channel to avoid orphaned processes and ensure clean termination

After the timeout, the remote command may continue running until the channel is closed. Close the stdout channel and streams in a finally block.
```diff
-    finished.set()  # Signal threads to stop
-    reader_thread.join(timeout=1)
-    error_thread.join(timeout=1)
-
-    yield from lines
+    finished.set()  # Signal threads to stop
+    reader_thread.join(timeout=1)
+    error_thread.join(timeout=1)
+
+    try:
+        for line in lines:
+            yield line
+    finally:
+        # Stop remote command/channel promptly to avoid lingering processes
+        try:
+            stdout.channel.close()
+        except Exception:
+            pass
+        try:
+            stdout.close()
+        except Exception:
+            pass
+        try:
+            stderr.close()
+        except Exception:
+            pass
```
569-619: Implement 60s stale-gap reset to match pipeline and README

The alarm computation accumulates from first_seen only, so it never resets after long gaps. Track last_seen and reset when the gap exceeds 60s, aligning with the pipeline's stale-track cleanup.
```diff
-    # Track first appearance times as seconds - this is our "state" for time-in-area calculation
-    track_first_seen: Dict[str, float] = {}
+    # State for time-in-area calculation
+    track_first_seen: Dict[str, float] = {}
+    track_last_seen: Dict[str, float] = {}
+    STALE_GAP_SECONDS = 60.0
@@
         # Check each track to see if it's present and calculate time in area
         for track_idx, track_id in enumerate(sorted_track_ids):
             if track_id in frame_track_ids:
-                # Record when we first see this track (start of time-in-area measurement)
-                if track_id not in track_first_seen:
-                    track_first_seen[track_id] = current_time_seconds
-
-                # Calculate how long this track has been in the area
-                first_time_seconds = track_first_seen[track_id]
-                time_in_area = current_time_seconds - first_time_seconds
+                # Initialize on first observation
+                if track_id not in track_first_seen:
+                    track_first_seen[track_id] = current_time_seconds
+                    track_last_seen[track_id] = current_time_seconds
+                else:
+                    # Reset timing if the gap since last_seen exceeds the stale threshold
+                    gap = current_time_seconds - track_last_seen[track_id]
+                    if gap > STALE_GAP_SECONDS:
+                        track_first_seen[track_id] = current_time_seconds
+                    track_last_seen[track_id] = current_time_seconds
+
+                # Calculate time in area since (possibly reset) first_seen
+                time_in_area = current_time_seconds - track_first_seen[track_id]
```
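The reset behavior proposed in this diff can be sketched in isolation, independent of the viewer's plotting code. This is a minimal illustration, not the viewer's actual implementation; the constant name follows the suggested diff and the sample timestamps are invented:

```python
from typing import Dict

STALE_GAP_SECONDS = 60.0


def time_in_area(
    track_id: str,
    now: float,
    first_seen: Dict[str, float],
    last_seen: Dict[str, float],
) -> float:
    """Return seconds the track has been in the area, resetting after long gaps."""
    if track_id not in first_seen:
        first_seen[track_id] = now
    elif now - last_seen[track_id] > STALE_GAP_SECONDS:
        # Gap exceeded the stale threshold: restart the measurement window
        first_seen[track_id] = now
    last_seen[track_id] = now
    return now - first_seen[track_id]


first, last = {}, {}
assert time_in_area("t1", 0.0, first, last) == 0.0
assert time_in_area("t1", 10.0, first, last) == 10.0  # continuous presence
assert time_in_area("t1", 100.0, first, last) == 0.0  # 90s gap > 60s: reset
assert time_in_area("t1", 105.0, first, last) == 5.0  # accumulates again
```

Matching this behavior in both the Starlark processor and the viewer is what makes the alarm counts from the two tools comparable.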
🧹 Nitpick comments (11)
project-time-in-area-analytics/test_scripts/README.md (1)
154-156: Specify a language for the fenced code block (MD040)

Add a language identifier to the calculation block to satisfy MD040.
````diff
-```
+```text
 Activity = (Frames with ≥1 detection / Total frames) × 100
 ```
````

project-time-in-area-analytics/README.md (1)

312-321: Clarify validation flow and align wording with project outputs

Add a short “Validation” note linking how to compare alarm counts between the viewer and Telegraf pipeline on the same dataset. Also avoid implying the viewer strictly consumes “output from FixedIT Data Agent analytics”; it can consume this project's JSONL format.

```diff
-For installation, usage details, and examples, see the [test_scripts README](test_scripts/README.md).
+For installation, usage details, and examples, see the [test_scripts README](test_scripts/README.md).
+
+Validation:
+- Run the Telegraf pipeline on simple_tracks.jsonl with ALERT_THRESHOLD_SECONDS=2 and note the list of alarming track_ids.
+- Run the viewer with --alarm-threshold 2.0 on the same file; compare alarmed track_ids and counts printed to stdout.
+- Both tools should report identical alarmed tracks and counts when using the same threshold and stale-gap settings.
```
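The Activity formula referenced in the MD040 fix above is just a frame ratio. A minimal sketch (the frame counts here are invented, chosen only to land near the README's 99.8% example):

```python
def activity_percentage(frames_with_detections: int, total_frames: int) -> float:
    """Activity = (frames with >= 1 detection / total frames) * 100."""
    if total_frames == 0:
        return 0.0
    return frames_with_detections / total_frames * 100.0


assert activity_percentage(1, 2) == 50.0
assert round(activity_percentage(1178, 1180), 1) == 99.8
```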
81-86: Clarify “threading for timeout” rationale

Add a brief rationale for using threads (portable across platforms; signal/alarm isn't reliable on Windows and conflicts within libraries).
```diff
-    # Use threading for timeout instead of signals
+    # Use threading for timeout instead of signals:
+    # - Works cross-platform (signal/alarm is Unix-only and not thread-safe)
+    # - Avoids interfering with libraries that also use signal handlers
```
54-64: Connection method comment contradicts implementation

The comment says “Try key first, then password,” but connect() prefers the password if provided. Align the comment to avoid confusion.
```diff
-    # Try connecting with key first, then password if provided
+    # Use password auth when provided; otherwise attempt key-based auth
```
49-64: Optional: fail fast on unknown hosts or allow a user-supplied host key policy

Auto-adding host keys is convenient for tests but unsafe by default. Consider allowing a flag to control the host key policy, or document that this is test-only.
If you want, I can add a --insecure-auto-add-host-key flag defaulting to False, and use RejectPolicy unless the flag is set.
84-110: Stream output instead of buffering all lines in memory (optional)

Accumulating all lines can spike memory on long recordings. Using a thread-safe queue to yield incrementally keeps memory bounded.
I can provide a small refactor using queue.SimpleQueue to push lines from the reader thread and yield them in the run_command() loop until the timeout.
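The queue-based refactor offered here could look roughly like the sketch below. It is illustrative only: a plain callable stands in for the SSH channel reader, and the function name and timeout handling are assumptions, not the script's current API:

```python
import queue
import threading
import time


def stream_lines(read_line, timeout_seconds: float):
    """Yield lines from a blocking reader without buffering everything in memory."""
    q: "queue.SimpleQueue" = queue.SimpleQueue()

    def reader() -> None:
        while True:
            line = read_line()
            if line is None:  # source exhausted
                q.put(None)
                return
            q.put(line)

    threading.Thread(target=reader, daemon=True).start()
    deadline = time.monotonic() + timeout_seconds
    while True:
        remaining = deadline - time.monotonic()
        if remaining <= 0:
            return  # overall recording timeout reached
        try:
            item = q.get(timeout=remaining)
        except queue.Empty:
            return
        if item is None:
            return  # reader hit EOF
        yield item


# Fake source producing three lines, then EOF
lines = iter(["a", "b", "c"])
assert list(stream_lines(lambda: next(lines, None), timeout_seconds=1.0)) == ["a", "b", "c"]
```

Each line is yielded as soon as the reader thread produces it, so memory stays bounded regardless of recording length.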
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (5)
319-370: Fix _parse_jsonL_line docstring return description

The docstring says it returns a Tuple with track IDs, but it returns only a Frame. Correct the text.
```diff
-    Returns:
-        Tuple of (Frame object, set of track IDs found in this frame)
+    Returns:
+        Frame object parsed from the JSON line
```
662-683: Rename temporary ‘N’ to snake_case and satisfy Pylint C0103

Use a descriptive snake_case name to placate Pylint.
```diff
-        N = 4
-        tick_positions = [0.5, 1.5, 2.5, 3.5]
+        n_colors = 4
+        tick_positions = [0.5, 1.5, 2.5, 3.5]
@@
-        N = 3
-        tick_positions = [0.5, 1.5, 2.5]
+        n_colors = 3
+        tick_positions = [0.5, 1.5, 2.5]
@@
-    cmap = plt.matplotlib.colors.ListedColormap(colors, N=N)
+    cmap = plt.matplotlib.colors.ListedColormap(colors, N=n_colors)
```
1-17: Temporarily silence “too many lines” to get CI green; plan split later

The module exceeds Pylint's 1000-line limit. Either split it into modules or silence the check now and refactor later.
```diff
 #!/usr/bin/env python3
+# pylint: disable=too-many-lines
 """
 Track Heatmap Viewer.
```

If you want, I can propose a module split: models.py, parse.py, heatmap.py, cli.py.
866-869: One-liner for brevity (Ruff SIM108)

Use a ternary to simplify.
```diff
-    if len(class_type) > 4:
-        class_short = class_type[:4] + "."
-    else:
-        class_short = class_type
+    class_short = class_type[:4] + "." if len(class_type) > 4 else class_type
```
1009-1026: Reword “output from FixedIT Data Agent analytics” to be precise

The viewer can consume JSONL from this project's pipeline or recorded test data, not strictly “output from FixedIT Data Agent analytics.” Minor wording change to avoid confusion.
```diff
-    such as the output from FixedIT Data Agent analytics or test data files.
+    such as JSONL produced by this project's pipeline, the FixedIT Data Agent, or test data files.
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)

- `project-time-in-area-analytics/.images/track-heatmap-120s.png` is excluded by `!**/*.png`
- `project-time-in-area-analytics/.images/track-heatmap-simple.png` is excluded by `!**/*.png`
📒 Files selected for processing (5)
- `project-time-in-area-analytics/README.md` (1 hunks)
- `project-time-in-area-analytics/test_files/simple_tracks.jsonl` (1 hunks)
- `project-time-in-area-analytics/test_scripts/README.md` (1 hunks)
- `project-time-in-area-analytics/test_scripts/record_real_data.py` (1 hunks)
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py` (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- project-time-in-area-analytics/test_files/simple_tracks.jsonl
🧰 Additional context used
📓 Path-based instructions (3)
project-*/**
⚙️ CodeRabbit Configuration File
project-*/**: Each example project for the FixedIT Data Agent should be self-contained and designed as a standalone example / tutorial showing how to use the FixedIT Data Agent. The project should be well documented and include all the necessary files to run the example. A reader should be able to understand the background and the purpose of the project, and be able to understand the implementation details and how to use it. If the project includes a dashboard, it should be placed in a new folder in the repo-root/dashboard-deployments/ directory instead of being included in the project folder. In the README for the project, the first time the FixedIT Data Agent is mentioned, we should link to the product page: https://fixedit.ai/products-data-agent/ Emphasize that these solutions are accessible to system integrators and IT professionals without requiring complex C/C++ ACAP development. Highlight that custom edge intelligence can be implemented using familiar IT tools (configuration files, shell scripts, REST APIs) rather than traditional embedded programming approaches. Note that bash does not exist on the Axis devices, so .sh scripts should be written in portable shell syntax.
README Structure: All project README files should follow the standardized template at `.project_readme_template.md`. This template ensures consistent documentation that serves both audiences: (1) IT professionals who want plug-and-play solutions, and (2) developers who want to customize and extend the examples. The template enforces a logical structure: 1) Introduction and background (project purpose, how it works, why choose this approach) 2) Technical explanation and quick start guide (compatibility, setup, troubleshooting) 3) Developer-oriented notes (local testing, implementation details)
Key README Requirements: - Include compatibility sections for both AXIS OS and FixedIT Data Agent versions - Explain host testing and its limitations and why certain parts require actual Axis d...
Files:
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py`
- `project-time-in-area-analytics/test_scripts/record_real_data.py`
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
**/*
⚙️ CodeRabbit Configuration File
This repository serves FixedIT Data Agent users across a spectrum from plug-and-play dashboard deployment to advanced edge device customization. Consider whether changes maintain accessibility for both DevOps professionals (server-side focus) and developers (edge customization focus). If new features are added or existing ones changed significantly, ensure documentation clearly explains the intended audience and usage level. We use prettier for formatting of common file formats like markdown, yaml, json, etc. Example projects should be placed in the repo-root in a directory named project-*/. Whenever referencing Axis device model names like "D4100-VE mk II" or "M3045", prefix it with "AXIS" in capital letters (e.g. "AXIS D4100-VE mk II"). When using Axis as a company name, use "Axis Communications", note that all-caps is only used when "AXIS" is used in their product names, not their company name. When using the name of an Axis product, google on the name to verify that it is correctly identified. Avoid using 'cameras' or 'Axis cameras' unless the solution in related to visual analytics, otherwise prefer using 'Axis devices' to show that the FixedIT Data Agent also works with strobes, speakers, door stations, etc. Images used for the README should be placed in a directory called .images/ in the affected project folder. These images might also be referred to from other sources like the top-level README.md file.
Files:
- `project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py`
- `project-time-in-area-analytics/test_scripts/record_real_data.py`
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
**/*.md
⚙️ CodeRabbit Configuration File
**/*.md: Documentation files should clearly communicate the dual audience: (1) server-side dashboard users who want to keep agent with bundled configs, and (2) edge device developers who want to customize agent behavior. Ensure examples and instructions are appropriate for the intended skill level and use case. Since this is a public repository, we should not include any sensitive information, the instructions should be easily understandable for a wide audience, and we should avoid using any jargon or technical terms that are not commonly used.
Anchor Links: For headings that contain Unicode characters (emojis, special characters) or backticks, use HTML anchor tags to ensure consistent linking behavior across different platforms (GitHub, VS Code, etc.). Add a separate line with `<a id="anchor-name"></a>` before the heading, and update the table of contents to reference this manual anchor.
Files:
- `project-time-in-area-analytics/README.md`
- `project-time-in-area-analytics/test_scripts/README.md`
🧠 Learnings (2)
📚 Learning: 2025-08-10T14:54:48.316Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:350-351
Timestamp: 2025-08-10T14:54:48.316Z
Learning: In the fixedit-data-agent-examples repository, shell portability requirements (such as Axis devices using POSIX /bin/sh instead of bash) should be documented in a general scripting guide rather than repeated in individual project README files. This approach was confirmed by daniel-falk for better documentation organization.
Applied to files:
project-time-in-area-analytics/README.md
📚 Learning: 2025-08-07T15:23:05.927Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:57-62
Timestamp: 2025-08-07T15:23:05.927Z
Learning: In the fixedit-ai/fixedit-data-agent-examples repository, when GitHub generates anchors for headings that contain backticks with spaces around them (like `### `filename` - Description`), it removes the backticks but preserves those spaces as dashes in the anchor, creating triple dashes (---) between the filename and description parts. TOC generators may not handle this correctly and need manual adjustment.
Applied to files:
project-time-in-area-analytics/README.md
🧬 Code Graph Analysis (1)
project-time-in-area-analytics/test_scripts/record_real_data.py (1)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (1)
main(1001-1062)
🪛 Ruff (0.12.2)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
866-869: Use ternary operator class_short = class_type[:4] + "." if len(class_type) > 4 else class_type instead of if-else-block
Replace if-else-block with class_short = class_type[:4] + "." if len(class_type) > 4 else class_type
(SIM108)
🪛 GitHub Actions: Python Code Quality Check for the time-in-area-analytics project
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
[error] 1-1: Pylint: C0302 Too many lines in module (1066/1000) (too-many-lines). Step: 'pylint *.py'.
[error] 670-670: Pylint: C0103 - Variable name 'N' doesn't conform to snake_case naming style (invalid-name). Step: 'pylint *.py'.
[error] 681-681: Pylint: C0103 - Variable name 'N' doesn't conform to snake_case naming style (invalid-name). Step: 'pylint *.py'.
🪛 LanguageTool
project-time-in-area-analytics/README.md
[grammar] ~52-~52: There might be a mistake here.
Context: ...Contents - Compatibility - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~53-~53: There might be a mistake here.
Context: ...ompatibility) - AXIS OS Compatibility - [FixedIT Data Agent Compatibility](#fixed...
(QB_NEW_EN)
[grammar] ~54-~54: There might be a mistake here.
Context: ...y) - FixedIT Data Agent Compatibility - Quick Setup - [Troubles...
(QB_NEW_EN)
[grammar] ~55-~55: There might be a mistake here.
Context: ...data-agent-compatibility) - Quick Setup - Troubleshooting - [Co...
(QB_NEW_EN)
[grammar] ~56-~56: There might be a mistake here.
Context: ...etup](#quick-setup) - Troubleshooting - [Configuration Files](#configuration-file...
(QB_NEW_EN)
[grammar] ~57-~57: There might be a mistake here.
Context: ...#troubleshooting) - Configuration Files - [config_process_track_duration.conf and t...
(QB_NEW_EN)
[grammar] ~58-~58: There might be a mistake here.
Context: ....conf and track_duration_calculator.star](#config_process_track_durationconf-and-track_duration_calculatorstar) - [config_process_threshold_filter.conf](#c...
(QB_NEW_EN)
[grammar] ~59-~59: There might be a mistake here.
Context: ... - config_process_threshold_filter.conf - [test_files/config_output_stdout.conf](#t...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ... - test_files/config_output_stdout.conf - [test_files/sample_data_feeder.sh](#test_...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...f) - test_files/sample_data_feeder.sh - [Future Enhancements](#future-enhancement...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...le_data_feedersh) - Future Enhancements - [Local Testing on Host](#local-testing-on...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...e-enhancements) - Local Testing on Host - Prerequisites - [Host...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...ocal-testing-on-host) - Prerequisites - [Host Testing Limitations](#host-testing-...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...equisites) - Host Testing Limitations - Test Commands - [Te...
(QB_NEW_EN)
[grammar] ~66-~66: There might be a mistake here.
Context: ...-testing-limitations) - Test Commands - [Test Time in Area Calculation Only](#tes...
(QB_NEW_EN)
[grammar] ~67-~67: There might be a mistake here.
Context: ... - Test Time in Area Calculation Only - [Test Complete Alert Pipeline](#test-comp...
(QB_NEW_EN)
[grammar] ~68-~68: There might be a mistake here.
Context: ...nly) - Test Complete Alert Pipeline - [Test with Real Device Data](#test-with-r...
(QB_NEW_EN)
[grammar] ~69-~69: There might be a mistake here.
Context: ...eline) - Test with Real Device Data - [Analytics Data Structure](#analytics-dat...
(QB_NEW_EN)
[grammar] ~70-~70: There might be a mistake here.
Context: ...device-data) - Analytics Data Structure - Data Format - [Data Beh...
(QB_NEW_EN)
[grammar] ~71-~71: There might be a mistake here.
Context: ...alytics-data-structure) - Data Format - Data Behavior - [Data...
(QB_NEW_EN)
[grammar] ~72-~72: There might be a mistake here.
Context: ...Format](#data-format) - Data Behavior - [Data Transformation for Telegraf](#data-...
(QB_NEW_EN)
[grammar] ~73-~73: There might be a mistake here.
Context: ...r) - Data Transformation for Telegraf - [Track Activity Visualization](#track-act...
(QB_NEW_EN)
[grammar] ~74-~74: There might be a mistake here.
Context: ...elegraf) - Track Activity Visualization - [Recording Real Device Data](#recording-r...
(QB_NEW_EN)
[grammar] ~75-~75: There might be a mistake here.
Context: ...alization) - Recording Real Device Data - Automated Testing ...
(QB_NEW_EN)
[grammar] ~76-~76: There might be a mistake here.
Context: ...g-real-device-data) - Automated Testing - GitHub Workflow - [...
(QB_NEW_EN)
[grammar] ~77-~77: There might be a mistake here.
Context: ...#automated-testing) - GitHub Workflow - Test Data - [PR Comments]...
(QB_NEW_EN)
[grammar] ~78-~78: There might be a mistake here.
Context: ...rkflow](#github-workflow) - Test Data - PR Comments <!-- tocstop...
(QB_NEW_EN)
[grammar] ~87-~87: There might be a mistake here.
Context: ...ity - Minimum AXIS OS version: TODO - Required tools: TODO ### FixedIT Data...
(QB_NEW_EN)
[grammar] ~92-~92: There might be a mistake here.
Context: ...y - Minimum Data Agent version: 1.0 - Required features: TODO ## Quick Setu...
(QB_NEW_EN)
[grammar] ~99-~99: There might be a mistake here.
Context: ...in the Uploaded helper files section (refresh page to see updates). Note: Don't ...
(QB_NEW_EN)
[grammar] ~152-~152: There might be a mistake here.
Context: ...processor logic testing with sample data - MQTT output configuration validation (TO...
(QB_NEW_EN)
[grammar] ~153-~153: There might be a mistake here.
Context: ...T output configuration validation (TODO) - Alert threshold configuration testing *...
(QB_NEW_EN)
[grammar] ~178-~178: There might be a mistake here.
Context: ... --once ``` Expected Output: All detections with `time_in_area_second...
(QB_NEW_EN)
[grammar] ~199-~199: There might be a mistake here.
Context: ... --once ``` Expected Output: Only detections with time in area (`time...
(QB_NEW_EN)
[grammar] ~297-~297: There might be a mistake here.
Context: ...ed observations into individual messages - Preserves object bounding box coordina...
(QB_NEW_EN)
[grammar] ~298-~298: There might be a mistake here.
Context: ...serves** object bounding box coordinates - Simplifies object classification to ju...
(QB_NEW_EN)
[grammar] ~299-~299: There might be a mistake here.
Context: ...* object classification to just the type - Skips frames with no observations enti...
(QB_NEW_EN)
[grammar] ~350-~350: There might be a mistake here.
Context: ...ata with: - track_001: Appears twice with 8s gap (total time: 11.33s) - `track_00...
(QB_NEW_EN)
[grammar] ~350-~350: There might be a mistake here.
Context: ...s twice with 8s gap (total time: 11.33s) - track_002: Continuous presence for 2.22s - `track...
(QB_NEW_EN)
[grammar] ~351-~351: There might be a mistake here.
Context: ...rack_002: Continuous presence for 2.22s - track_003: Continuous presence for 2.22s - track...
(QB_NEW_EN)
[grammar] ~352-~352: There might be a mistake here.
Context: ...rack_003: Continuous presence for 2.22s - track_004: Single appearance (0s) - track_005`: ...
(QB_NEW_EN)
[grammar] ~353-~353: There might be a mistake here.
Context: ...2s - track_004: Single appearance (0s) - track_005: Long duration track for 2.5 minutes (1...
(QB_NEW_EN)
[grammar] ~362-~362: There might be a mistake here.
Context: ...Success confirmation when all tests pass - ❌ Specific failure diagnostics and troub...
(QB_NEW_EN)
project-time-in-area-analytics/test_scripts/README.md
[grammar] ~9-~9: There might be a mistake here.
Context: ... Contents - Installation - [Recording Real Device Data](#recording-r...
(QB_NEW_EN)
[grammar] ~10-~10: There might be a mistake here.
Context: ...tallation) - Recording Real Device Data - Basic Usage - [Advanced...
(QB_NEW_EN)
[grammar] ~11-~11: There might be a mistake here.
Context: ...rding-real-device-data) - Basic Usage - Advanced Usage - [Pa...
(QB_NEW_EN)
[grammar] ~12-~12: There might be a mistake here.
Context: ...Usage](#basic-usage) - Advanced Usage - Parameters - [Authentica...
(QB_NEW_EN)
[grammar] ~13-~13: There might be a mistake here.
Context: ... Usage](#advanced-usage) - Parameters - Authentication - [AX...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...meters](#parameters) - Authentication - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~15-~15: There might be a mistake here.
Context: ...thentication) - AXIS OS Compatibility - Use Cases - [Track Heatmap ...
(QB_NEW_EN)
[grammar] ~16-~16: There might be a mistake here.
Context: ...](#axis-os-compatibility) - Use Cases - [Track Heatmap Visualization](#track-heat...
(QB_NEW_EN)
[grammar] ~17-~17: There might be a mistake here.
Context: ...se-cases) - Track Heatmap Visualization - Basic Usage - [Advanc...
(QB_NEW_EN)
[grammar] ~18-~18: There might be a mistake here.
Context: ...-heatmap-visualization) - Basic Usage - Advanced Usage - [...
(QB_NEW_EN)
[grammar] ~19-~19: There might be a mistake here.
Context: ...age](#basic-usage-1) - Advanced Usage - [Features & Example Output](#features--ex...
(QB_NEW_EN)
[grammar] ~20-~20: There might be a mistake here.
Context: ...-usage-1) - Features & Example Output - [Activity Percentage Calculation](#activi...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...) - Activity Percentage Calculation - Prerequisites <!-- toc...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ...Device IP address or hostname (required) - --username, -u: SSH username (default: acap-fixeditdat...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...sername (default: acap-fixeditdataagent) - --password, -p: SSH password (optional, will prompt if...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...ssword (optional, will prompt if needed) - --duration, -d: Recording duration in seconds (default...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...ording duration in seconds (default: 30) - --topic: Message broker topic to consume (defau...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...xis.analytics_scene_description.v0.beta) - --source: Message broker source (default: 1) - `...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...rce: Message broker source (default: 1) - --output-file, -o`: Output file path (default: test_files/...
(QB_NEW_EN)
[grammar] ~74-~74: There might be a mistake here.
Context: ...sword Prompt**: Falls back to prompting for password if key auth fails ### AXIS OS...
(QB_NEW_EN)
[grammar] ~93-~93: There might be a mistake here.
Context: ...rded data with visualization tools like track_heatmap_viewer.py - Debugging: Capture problematic scenari...
(QB_NEW_EN)
[grammar] ~141-~141: There might be a mistake here.
Context: ... white box)**: Shows key dataset metrics - Tracks: 42 - Total unique track IDs de...
(QB_NEW_EN)
[grammar] ~142-~142: There might be a mistake here.
Context: ...que track IDs detected across all frames - Frames: 1180 - Number of frames contai...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ...one detection (not total elapsed frames) - Activity: 99.8% - Percentage of frames...
(QB_NEW_EN)
[grammar] ~144-~144: There might be a mistake here.
Context: ...entage of frames with detections present - Alarms: 5 - Number of tracks exceeding...
(QB_NEW_EN)
[grammar] ~145-~145: There might be a mistake here.
Context: ...s exceeding the alarm threshold duration - X-axis (Time): Timestamps of frames wi...
(QB_NEW_EN)
[grammar] ~146-~146: There might be a mistake here.
Context: ...me gaps without detections are not shown - Y-axis (Track IDs): Individual object ...
(QB_NEW_EN)
[grammar] ~147-~147: There might be a mistake here.
Context: ... 3effc, 58cef) sorted alphabetically - Color Legend (right side): Visual scal...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
project-time-in-area-analytics/test_scripts/README.md
22-22: Link fragments should be valid
(MD051, link-fragments)
154-154: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
```python
click.echo("Starting data recording...")

try:
    # Record data
    recorder = DataRecorder(ssh_runner)
    line_count = recorder.record_data(topic, source, output_path, duration)

    if line_count > 0:
        click.echo(
            f"✅ Successfully recorded {line_count} lines of real device " f"data"
        )
        click.echo(f"📁 Saved to: {output_file}")
        click.echo("")
        click.echo("Sample of recorded data (first 3 lines):")
    else:
        click.echo("ℹ️ No data was recorded during the timeout period.")
        click.echo("This is normal if:")
```
🛠️ Refactor suggestion
Actually print sample lines after recording
You announce a sample preview but never print the first lines. Read the file and print the first three.
```diff
-        click.echo("")
-        click.echo("Sample of recorded data (first 3 lines):")
+        click.echo("")
+        click.echo("Sample of recorded data (first 3 lines):")
+        try:
+            with output_path.open("r", encoding="utf-8") as f:
+                for i, line in enumerate(f):
+                    if i >= 3:
+                        break
+                    click.echo(f"  {line.rstrip()}")
+        except OSError:
+            click.echo("  (Unable to read file for preview)")
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```python
click.echo("Starting data recording...")

try:
    # Record data
    recorder = DataRecorder(ssh_runner)
    line_count = recorder.record_data(topic, source, output_path, duration)

    if line_count > 0:
        click.echo(
            f"✅ Successfully recorded {line_count} lines of real device " f"data"
        )
        click.echo(f"📁 Saved to: {output_file}")
        click.echo("")
        click.echo("Sample of recorded data (first 3 lines):")
        try:
            with output_path.open("r", encoding="utf-8") as f:
                for i, line in enumerate(f):
                    if i >= 3:
                        break
                    click.echo(f"  {line.rstrip()}")
        except OSError:
            click.echo("  (Unable to read file for preview)")
    else:
        click.echo("ℹ️ No data was recorded during the timeout period.")
        click.echo("This is normal if:")
```
🤖 Prompt for AI Agents
In project-time-in-area-analytics/test_scripts/record_real_data.py around lines
329 to 345, the script announces "Sample of recorded data (first 3 lines):" but
never actually prints the sample; after successfully recording (line_count > 0)
open the output_file, read the first three non-empty lines (or first three lines
if simpler), and print them using click.echo (preserve existing formatting and
handle fewer than three lines gracefully). Ensure the file is closed (use with
open(...)) and handle any IO errors by logging a brief warning instead of
crashing.
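As a standalone illustration of the suggested preview logic, the sketch below uses a hypothetical `preview_lines` helper (it is not part of record_real_data.py) to read at most three lines and degrade gracefully on I/O errors instead of crashing:

```python
from pathlib import Path


def preview_lines(path, limit=3):
    """Return up to `limit` lines of a text file, stripped of newlines.

    Returns an empty list if the file cannot be read, mirroring the
    review's advice to log/skip on I/O errors rather than crash.
    """
    try:
        with Path(path).open("r", encoding="utf-8") as f:
            return [line.rstrip("\n") for _, line in zip(range(limit), f)]
    except OSError:
        return []
```

In the CLI, each returned line would then be printed with `click.echo`, preserving the existing two-space indentation.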
Force-pushed from b8a9029 to d190c1c (Compare)
🎉 Time-in-Area Analytics Tests Passed!

All tests completed successfully:

✅ Track Heatmap Viewer - All alarm detection scenarios passed
Actionable comments posted: 1
♻️ Duplicate comments (8)
project-time-in-area-analytics/test_scripts/README.md (1)
20-22: Fix invalid TOC anchor and add the missing Prerequisites section

The TOC link for “Features & Example Output” points to a non-existent anchor (MD051). Also, “Prerequisites” appears in the TOC but is missing from the document. Align the TOC and add the section.
Apply this diff to fix the anchor and keep the TOC entry:
```diff
-- [Features & Example Output](#features--example-output)
+- [Features & Example Output](#features-example-output)
 - [Prerequisites](#prerequisites)
```

Append this section at the end of the file:
```diff
+## Prerequisites
+
+- Python 3.10+ with pip
+- A workstation with matplotlib-capable environment (for interactive plots) or use --no-ui
+- Network access to your AXIS device for real-data recording (optional)
+- test_scripts/requirements.txt installed in your active environment:
+  pip install -r requirements.txt
```

project-time-in-area-analytics/README.md (3)
425-426: Parity claim depends on the viewer implementing the 60s stale-gap reset; currently it does not

The README states both tools reset time-in-area after >60s gaps. The viewer’s _create_alarm_matrix currently doesn’t implement last-seen tracking or the reset. Either implement the reset (preferred; see the suggested diff in the track_heatmap_viewer.py review) or adjust the README claim.
If you keep the claim, please add the gap-reset in the viewer and run the CI workflow to confirm the parity scenarios still pass.
103-105: Fill in Quick Setup; the TODO blocks users from running the example

Per the repo template, Quick Setup must be concrete and minimal so users can run E2E without guesswork.
```diff
 ## Quick Setup
-
-### TODO
+1) Clone and enter project:
+   git clone https://github.com/fixedit-ai/fixedit-data-agent-examples.git
+   cd fixedit-data-agent-examples/project-time-in-area-analytics
+
+2) Host testing (no device required):
+   - Install Telegraf and jq on your workstation
+   - Export env for sample data:
+     HELPER_FILES_DIR="$(pwd)"; export HELPER_FILES_DIR
+     CONSUMER_SCRIPT="test_files/sample_data_feeder.sh"; export CONSUMER_SCRIPT
+     SAMPLE_FILE="test_files/simple_tracks.jsonl"; export SAMPLE_FILE
+   - Run time-in-area only:
+     telegraf --config config_input_scene_detections.conf \
+       --config config_process_track_duration.conf \
+       --config test_files/config_output_stdout.conf \
+       --once
+
+3) Full pipeline with threshold (e.g., 2s):
+   export ALERT_THRESHOLD_SECONDS="2"
+   telegraf --config config_input_scene_detections.conf \
+     --config config_process_track_duration.conf \
+     --config config_process_threshold_filter.conf \
+     --config test_files/config_output_stdout.conf \
+     --once
+
+4) Optional: record real device data to use as SAMPLE_FILE:
+   python test_scripts/record_real_data.py --host <device_ip> --username <dev_user> -o test_files/real_device_data.jsonl
+
+### Troubleshooting
```
165-166: Gate debug removals under a debug flag (do not always emit)

The doc promises debug output on removals; per the objectives, make them optional (off by default) to reduce noise.
```diff
-- Outputs debug messages when tracks are removed
+- Optionally outputs debug metrics when tracks are removed (enable by setting DEBUG_ENABLED=true in the Starlark script or via config)
```

.github/workflows/project-time-in-area-test-analytics.yml (3)
25-27: Update setup-python to v5 (actionlint error)

actions/setup-python@v4 is flagged as too old for current runners.
```diff
-      - name: Set up Python 3.13
-        uses: actions/setup-python@v4
+      - name: Set up Python 3.13
+        uses: actions/setup-python@v5
         with:
           python-version: "3.13"
```
50-51: Fix POSIX regex: \s is not portable; use [[:space:]]

GNU grep/sed don’t support \s. Your current match likely returns zero lines, breaking alarm counts.
```diff
-ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
+ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
@@
-ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
@@
-ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
@@
-ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | wc -l)
-ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E "^\s+track_" | sed 's/^\s*//' | sort | tr '\n' ' ' | sed 's/ $//')
+ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+ACTUAL_TRACKS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | sed -E 's/^[[:space:]]*//' | sort | tr '\n' ' ' | sed 's/ $//')
```

Also applies to: 81-83, 115-116, 149-151
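A quick standalone check of the portable character class (the file name and contents below are illustrative, mimicking two indented track lines in the heatmap output):

```shell
# Two indented track lines and one non-matching header line.
printf '  track_001\n  track_002\nheader\n' > /tmp/heatmap_sample.txt

# POSIX class [[:space:]] matches the leading spaces; \s would be taken
# literally by GNU grep in extended regex mode.
grep -cE '^[[:space:]]+track_' /tmp/heatmap_sample.txt   # prints 2

rm -f /tmp/heatmap_sample.txt
```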
323-325: Harden jq filters to ignore nulls and avoid overcounting

When .fields.track_id is missing, jq -r prints “null”, skewing counts. Coalesce to empty to drop them.
```diff
-ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | wc -l)
-ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+ACTUAL_UNIQUE_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | wc -l)
+ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
@@
-ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
+ACTUAL_TRACKS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | sort -u | tr '\n' ' ' | sed 's/ $//')
@@
-ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | .fields.track_id' 2>/dev/null | wc -l)
+ACTUAL_ALARM_DETECTIONS=$(echo "$TELEGRAF_OUTPUT" | jq -r 'select(.name == "detection_frame") | (.fields.track_id // empty)' 2>/dev/null | wc -l)
```

Also applies to: 367-370, 414-417, 461-464
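The effect of jq's `// empty` can be mirrored in Python for illustration; the record shape below is an assumption based on the pipeline's detection_frame metrics, not an exact reproduction of the workflow:

```python
import json


def count_track_ids(jsonl_lines):
    """Count total and unique track_ids, skipping records where the field
    is missing or null (the same records jq's `// empty` would drop)."""
    ids = []
    for line in jsonl_lines:
        record = json.loads(line)
        track_id = record.get("fields", {}).get("track_id")
        if track_id is not None:  # drop null/missing, like `// empty`
            ids.append(track_id)
    return len(ids), len(set(ids))
```

Without the null check, the missing-field and null records would inflate both counts, which is exactly the overcounting the review flags.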
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (1)
569-619: Implement 60s stale-gap reset to match the pipeline and README

The alarm logic doesn’t reset after long gaps; it only uses first_seen. Track last_seen and reset first_seen when the gap exceeds a stale threshold (e.g., 60s). This aligns with the Telegraf processor and the README parity claim.
```diff
@@
-        # Track first appearance times as seconds - this is our "state" for time-in-area calculation
-        track_first_seen: Dict[str, float] = {}
+        # State for time-in-area calculation
+        track_first_seen: Dict[str, float] = {}
+        track_last_seen: Dict[str, float] = {}
+        STALE_GAP_SECONDS = 60.0
@@
-        for track_idx, track_id in enumerate(sorted_track_ids):
+        for track_idx, track_id in enumerate(sorted_track_ids):
             if track_id in frame_track_ids:
-                # Record when we first see this track (start of time-in-area measurement)
-                if track_id not in track_first_seen:
-                    track_first_seen[track_id] = current_time_seconds
-
-                # Calculate how long this track has been in the area
-                first_time_seconds = track_first_seen[track_id]
-                time_in_area = current_time_seconds - first_time_seconds
+                # Initialize on first observation
+                if track_id not in track_first_seen:
+                    track_first_seen[track_id] = current_time_seconds
+                    track_last_seen[track_id] = current_time_seconds
+                else:
+                    # Reset timing if the gap since last_seen exceeds the stale threshold
+                    gap = current_time_seconds - track_last_seen[track_id]
+                    if gap > STALE_GAP_SECONDS:
+                        track_first_seen[track_id] = current_time_seconds
+                    track_last_seen[track_id] = current_time_seconds
+
+                # Calculate time in area since (possibly reset) first_seen
+                time_in_area = current_time_seconds - track_first_seen[track_id]
```

Optional: expose STALE_GAP_SECONDS as a CLI flag to mirror the pipeline config.
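The requested reset behavior can be sketched independently of the viewer; the function name and (track_id, timestamp) tuple format below are illustrative, not part of the project code:

```python
def time_in_area(observations, stale_gap_seconds=60.0):
    """Compute per-observation time-in-area with a stale-gap reset.

    `observations` is a time-ordered list of (track_id, timestamp_seconds)
    tuples. A gap longer than `stale_gap_seconds` between sightings of the
    same track restarts its timer, matching the behavior the review asks
    the viewer to adopt.
    """
    first_seen = {}
    last_seen = {}
    results = []
    for track_id, now in observations:
        if track_id not in first_seen:
            first_seen[track_id] = now
        elif now - last_seen[track_id] > stale_gap_seconds:
            first_seen[track_id] = now  # reset after a long absence
        last_seen[track_id] = now
        results.append((track_id, now - first_seen[track_id]))
    return results
```

With this rule, a track that disappears for 90s and returns starts a fresh time-in-area measurement instead of accumulating the gap.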
🧹 Nitpick comments (15)
project-time-in-area-analytics/test_scripts/README.md (2)
132-134: Add a manual anchor for a heading with a special character (ampersand) to ensure consistent linking

TOC generators and GitHub anchor generation can differ on headings with “&”. Add a manual anchor to stabilize links across tools.
+<a id="features-example-output"></a> ### Features & Example Output
154-156: Specify a language on the fenced code block (MD040)

Add a language hint to satisfy linters.
````diff
-```
+```text
 Activity = (Frames with ≥1 detection / Total frames) × 100
````

project-time-in-area-analytics/README.md (1)

136-138: Add language for fenced code block (MD040)

The log/error snippet should specify a language for markdown linters.

````diff
-```
+```text
 [2025-08-20 11:43:40] 2025-08-20T09:43:40Z E! [telegraf] Error running agent: could not initialize processor processors.starlark: :6:23: unexpected input character '$'
````

.github/workflows/project-time-in-area-test-analytics.yml (2)

492-501: Quote $GITHUB_OUTPUT to avoid word splitting (SC2086)

Protect the output file path in case of spaces or special chars.

```diff
-          echo "vis_result=$VIS_RESULT" >> $GITHUB_OUTPUT
-          echo "telegraf_result=$TELEGRAF_RESULT" >> $GITHUB_OUTPUT
+          echo "vis_result=$VIS_RESULT" >> "$GITHUB_OUTPUT"
+          echo "telegraf_result=$TELEGRAF_RESULT" >> "$GITHUB_OUTPUT"
```
35-35: Shellcheck: prefer grep -c over grep | wc -l

Style improvement; not required for correctness, but keeps the workflow cleaner.
Example:
```diff
-ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' | wc -l)
+ACTUAL_ALARMS=$(echo "$HEATMAP_OUTPUT" | grep -A 10 "Tracks with alarms" | grep -E '^[[:space:]]+track_' -c)
```

Also applies to: 64-64, 98-98, 132-132
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (3)
1-16: Silence module length for the CLI script to pass pylint (C0302)

This single-file CLI with rich docstrings exceeds 1000 lines. Either split it into modules or disable the rule at file level.
Quick unblock (add after module docstring):
""" Track Heatmap Viewer. @@ data. """ +# pylint: disable=too-many-linesLonger term: consider moving dataclasses and parsing helpers into a module.
866-869: Apply a simple ternary (ruff SIM108)

Minor style improvement.
```diff
-if len(class_type) > 4:
-    class_short = class_type[:4] + "."
-else:
-    class_short = class_type
+class_short = class_type[:4] + "." if len(class_type) > 4 else class_type
```
1011-1013: Clarify the “output from FixedIT Data Agent analytics” phrasing

Be specific about the JSONL source to avoid ambiguity (per PR objectives).
```diff
-    such as the output from FixedIT Data Agent analytics or test data files.
+    such as per-detection JSONL produced by axis_scene_detection_consumer.sh (parsed by Telegraf as detection_frame),
+    or the provided test data files in test_files/.
```

project-time-in-area-analytics/axis_scene_detection_consumer.sh (2)
27-29: Allow environment overrides for TOPIC and SOURCE

Parameterize to make testing/customization easier while keeping defaults.
```diff
-# Analytics scene description topic (camera-specific)
-TOPIC="com.axis.analytics_scene_description.v0.beta"
-SOURCE="1"
+# Analytics scene description topic (camera-specific)
+# Allow environment overrides with safe defaults
+TOPIC="${TOPIC:-com.axis.analytics_scene_description.v0.beta}"
+SOURCE="${SOURCE:-1}"
```
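The `${VAR:-default}` form used above is plain POSIX parameter expansion, so it works in the device's /bin/sh; a minimal demonstration:

```shell
# Unset: the default is used.
unset TOPIC
TOPIC="${TOPIC:-com.axis.analytics_scene_description.v0.beta}"
echo "$TOPIC"   # prints the default topic

# Pre-set (e.g., exported before launching the script): the override wins.
TOPIC="custom.topic"
TOPIC="${TOPIC:-com.axis.analytics_scene_description.v0.beta}"
echo "$TOPIC"   # prints custom.topic
```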
30-35: Add a presence check for message-broker-cli, not just jq

Fail fast with a clear error if the broker CLI is unavailable.
```diff
 if ! command -v jq >/dev/null 2>&1; then
     echo "ERROR: jq is required but not available on this camera" >&2
     exit 1
 fi
+
+if ! command -v message-broker-cli >/dev/null 2>&1; then
+    echo "ERROR: message-broker-cli is required but not available on this camera" >&2
+    exit 1
+fi
```

project-time-in-area-analytics/config_input_scene_detections.conf (5)
15-16: Fix the default script name and clarify relative path semantics

The comments say the default is axis_metadata_consumer.sh, but the actual default is axis_scene_detection_consumer.sh. Also clarify that CONSUMER_SCRIPT is relative to HELPER_FILES_DIR to avoid users providing absolute paths that will be incorrectly prefixed.
Apply this diff to align the comments:
```diff
-# - CONSUMER_SCRIPT: Path to consumer script (defaults to axis_metadata_consumer.sh)
+# - CONSUMER_SCRIPT: Consumer script filename relative to HELPER_FILES_DIR (defaults to axis_scene_detection_consumer.sh)
@@
-  # Default: axis_metadata_consumer.sh (live camera)
-  # Override: Set CONSUMER_SCRIPT to use different script (e.g., test_files/sample_data_feeder.sh)
+  # Default: axis_scene_detection_consumer.sh (live camera)
+  # Override: Set CONSUMER_SCRIPT to use a different script relative to HELPER_FILES_DIR (e.g., test_files/sample_data_feeder.sh)
```

Also applies to: 26-27
28-31: Make command resolution robust (absolute CONSUMER_SCRIPT, and fail fast if HELPER_FILES_DIR is unset)

As written, if HELPER_FILES_DIR is unset the constructed path becomes “/axis_scene_detection_consumer.sh” (leading slash), which will likely fail. Also, if users set CONSUMER_SCRIPT to an absolute path, it will be incorrectly prefixed with HELPER_FILES_DIR. Use a small POSIX sh wrapper to handle both cases and to error out cleanly when HELPER_FILES_DIR is missing.
Apply this diff:
```diff
-  command = [
-    "${HELPER_FILES_DIR}/${CONSUMER_SCRIPT:-axis_scene_detection_consumer.sh}"
-  ]
+  command = [
+    "sh",
+    "-c",
+    'CS="${CONSUMER_SCRIPT:-axis_scene_detection_consumer.sh}"; case "$CS" in /*) exec "$CS" ;; *) exec "${HELPER_FILES_DIR:?HELPER_FILES_DIR unset}/$CS" ;; esac'
+  ]
```
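The `case`-based resolution inside the suggested sh -c wrapper can be tried on its own (the directory and file names below are illustrative):

```shell
# Resolve a consumer script path the same way the wrapper does: absolute
# paths pass through, relative ones get the HELPER_FILES_DIR prefix, and
# an unset HELPER_FILES_DIR aborts with a clear message via ${VAR:?msg}.
resolve_script() {
    cs="${1:-axis_scene_detection_consumer.sh}"
    case "$cs" in
        /*) echo "$cs" ;;
        *)  echo "${HELPER_FILES_DIR:?HELPER_FILES_DIR unset}/$cs" ;;
    esac
}

HELPER_FILES_DIR=/opt/project
resolve_script test_files/sample_data_feeder.sh   # prints /opt/project/test_files/sample_data_feeder.sh
resolve_script /usr/local/bin/custom_feeder.sh    # prints /usr/local/bin/custom_feeder.sh
```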
32-35: Add restart backoff and document the “no signals” rationale (aligns with the PR TODO on threading vs signals)

Good choice to set signal = "none". Add a small restart backoff so Telegraf doesn’t thrash on rapid exits, and document why signals are avoided (timeouts/reloads should be handled internally by the consumer, e.g., via threads/timers, with graceful SIGTERM handling).
```diff
   # No signal handling - let the script run continuously
   # The script will keep running and outputting JSON until Telegraf stops
   signal = "none"
+  # Rationale: avoid signal-based timeouts/reloads; the consumer should manage its own timeouts
+  # (e.g., via threads/timers) and exit cleanly on SIGTERM from Telegraf during shutdown.
+  # If the consumer process exits unexpectedly, Telegraf will restart it after this delay:
+  restart_delay = "5s"
```
43-45: Promote object_type to a tag for easier filtering (bounded cardinality)

This keeps filtering by type efficient downstream. Avoid tagging track_id due to high cardinality.
```diff
   # String fields that should be preserved as strings during JSON parsing
   json_string_fields = ["timestamp", "track_id", "object_type", "frame"]
+  # Promote low-cardinality strings to tags for filtering:
+  tag_keys = ["object_type"]
```
36-39: Use event timestamps for accurate metrics

Your input data uses ISO-8601 timestamps with microsecond precision (e.g.
"2024-01-15T10:00:01.123456Z"), so you can have Telegraf stamp metrics with the actual event time instead of ingestion time. If you want Telegraf to use the"timestamp"field:• Add (or uncomment) in config_input_scene_detections.conf (around lines 36–39 and 43–45):
```toml
  # Use event time as the metric timestamp
  # json_time_key = "timestamp"
  # json_time_format = "2006-01-02T15:04:05.000000Z07:00"  # or "RFC3339Nano"
```

• Then remove
"timestamp"from your json_string_fields:- json_string_fields = ["timestamp", "track_id", "object_type", "frame"] + json_string_fields = ["track_id", "object_type", "frame"]If you’re intentionally keeping
"timestamp"as a string for later parsing in Starlark, you can ignore this.
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
💡 Knowledge Base configuration:
- MCP integration is disabled by default for public repositories
- Jira integration is disabled by default for public repositories
- Linear integration is disabled by default for public repositories
You can enable these sources in your CodeRabbit configuration.
⛔ Files ignored due to path filters (2)
- project-time-in-area-analytics/.images/track-heatmap-120s.png is excluded by !**/*.png
- project-time-in-area-analytics/.images/track-heatmap-simple.png is excluded by !**/*.png
📒 Files selected for processing (8)
- .github/workflows/project-time-in-area-test-analytics.yml (1 hunks)
- project-time-in-area-analytics/README.md (1 hunks)
- project-time-in-area-analytics/axis_scene_detection_consumer.sh (1 hunks)
- project-time-in-area-analytics/config_input_scene_detections.conf (1 hunks)
- project-time-in-area-analytics/test_files/simple_tracks.jsonl (1 hunks)
- project-time-in-area-analytics/test_scripts/README.md (1 hunks)
- project-time-in-area-analytics/test_scripts/record_real_data.py (1 hunks)
- project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py (1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- project-time-in-area-analytics/test_scripts/record_real_data.py
- project-time-in-area-analytics/test_files/simple_tracks.jsonl
🧰 Additional context used
📓 Path-based instructions (3)
project-*/**
⚙️ CodeRabbit Configuration File
project-*/**: Each example project for the FixedIT Data Agent should be self-contained and designed as a standalone example / tutorial showing how to use the FixedIT Data Agent. The project should be well documented and include all the necessary files to run the example. A reader should be able to understand the background and the purpose of the project, and be able to understand the implementation details and how to use it. If the project includes a dashboard, it should be placed in a new folder in the repo-root/dashboard-deployments/ directory instead of being included in the project folder. In the README for the project, the first time the FixedIT Data Agent is mentioned, we should link to the product page: https://fixedit.ai/products-data-agent/ Emphasize that these solutions are accessible to system integrators and IT professionals without requiring complex C/C++ ACAP development. Highlight that custom edge intelligence can be implemented using familiar IT tools (configuration files, shell scripts, REST APIs) rather than traditional embedded programming approaches. Note that bash does not exist on the Axis devices, so .sh scripts should be written in portable shell syntax.
README Structure: All project README files should follow the standardized template at .project_readme_template.md. This template ensures consistent documentation that serves both audiences: (1) IT professionals who want plug-and-play solutions, and (2) developers who want to customize and extend the examples. The template enforces a logical structure: 1) Introduction and background (project purpose, how it works, why choose this approach) 2) Technical explanation and quick start guide (compatibility, setup, troubleshooting) 3) Developer-oriented notes (local testing, implementation details)
Key README Requirements: - Include compatibility sections for both AXIS OS and FixedIT Data Agent versions - Explain host testing and its limitations and why certain parts require actual Axis d...
Files:
- project-time-in-area-analytics/config_input_scene_detections.conf
- project-time-in-area-analytics/axis_scene_detection_consumer.sh
- project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
- project-time-in-area-analytics/README.md
- project-time-in-area-analytics/test_scripts/README.md
**/*
⚙️ CodeRabbit Configuration File
This repository serves FixedIT Data Agent users across a spectrum from plug-and-play dashboard deployment to advanced edge device customization. Consider whether changes maintain accessibility for both DevOps professionals (server-side focus) and developers (edge customization focus). If new features are added or existing ones changed significantly, ensure documentation clearly explains the intended audience and usage level. We use prettier for formatting of common file formats like markdown, yaml, json, etc. Example projects should be placed in the repo-root in a directory named project-*/. Whenever referencing Axis device model names like "D4100-VE mk II" or "M3045", prefix it with "AXIS" in capital letters (e.g. "AXIS D4100-VE mk II"). When using Axis as a company name, use "Axis Communications", note that all-caps is only used when "AXIS" is used in their product names, not their company name. When using the name of an Axis product, google on the name to verify that it is correctly identified. Avoid using 'cameras' or 'Axis cameras' unless the solution in related to visual analytics, otherwise prefer using 'Axis devices' to show that the FixedIT Data Agent also works with strobes, speakers, door stations, etc. Images used for the README should be placed in a directory called .images/ in the affected project folder. These images might also be referred to from other sources like the top-level README.md file.
Files:
- project-time-in-area-analytics/config_input_scene_detections.conf
- project-time-in-area-analytics/axis_scene_detection_consumer.sh
- project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
- project-time-in-area-analytics/README.md
- project-time-in-area-analytics/test_scripts/README.md
**/*.md
⚙️ CodeRabbit Configuration File
**/*.md: Documentation files should clearly communicate the dual audience: (1) server-side dashboard users who want to keep agent with bundled configs, and (2) edge device developers who want to customize agent behavior. Ensure examples and instructions are appropriate for the intended skill level and use case. Since this is a public repository, we should not include any sensitive information, the instructions should be easily understandable for a wide audience, and we should avoid using any jargon or technical terms that are not commonly used.
Anchor Links: For headings that contain Unicode characters (emojis, special characters) or backticks, use HTML anchor tags to ensure consistent linking behavior across different platforms (GitHub, VS Code, etc.). Add a separate line with `<a id="anchor-name"></a>` before the heading, and update the table of contents to reference this manual anchor.
Files:
- project-time-in-area-analytics/README.md
- project-time-in-area-analytics/test_scripts/README.md
🧠 Learnings (2)
📚 Learning: 2025-08-10T14:54:48.316Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:350-351
Timestamp: 2025-08-10T14:54:48.316Z
Learning: In the fixedit-data-agent-examples repository, shell portability requirements (such as Axis devices using POSIX /bin/sh instead of bash) should be documented in a general scripting guide rather than repeated in individual project README files. This approach was confirmed by daniel-falk for better documentation organization.
Applied to files:
project-time-in-area-analytics/README.md
📚 Learning: 2025-08-07T15:23:05.927Z
Learnt from: daniel-falk
PR: fixedit-ai/fixedit-data-agent-examples#5
File: project-strobe-color-from-github-workflow/README.md:57-62
Timestamp: 2025-08-07T15:23:05.927Z
Learning: In the fixedit-ai/fixedit-data-agent-examples repository, when GitHub generates anchors for headings that contain backticks with spaces around them (like `### `filename` - Description`), it removes the backticks but preserves those spaces as dashes in the anchor, creating triple dashes (---) between the filename and description parts. TOC generators may not handle this correctly and need manual adjustment.
Applied to files:
project-time-in-area-analytics/README.md
🪛 actionlint (1.7.7)
.github/workflows/project-time-in-area-test-analytics.yml
25-25: the runner of "actions/setup-python@v4" action is too old to run on GitHub Actions. update the action's version to fix this issue
(action)
35-35: shellcheck reported issue in this script: SC2126:style:15:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
64-64: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
98-98: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
132-132: shellcheck reported issue in this script: SC2126:style:17:78: Consider using 'grep -c' instead of 'grep|wc -l'
(shellcheck)
191-191: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
298-298: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
339-339: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
386-386: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
433-433: shellcheck reported issue in this script: SC2155:warning:6:8: Declare and assign separately to avoid masking return values
(shellcheck)
492-492: shellcheck reported issue in this script: SC2086:info:4:34: Double quote to prevent globbing and word splitting
(shellcheck)
492-492: shellcheck reported issue in this script: SC2086:info:5:44: Double quote to prevent globbing and word splitting
(shellcheck)
492-492: shellcheck reported issue in this script: SC2086:info:14:29: Double quote to prevent globbing and word splitting
(shellcheck)
492-492: shellcheck reported issue in this script: SC2086:info:19:30: Double quote to prevent globbing and word splitting
(shellcheck)
🪛 Ruff (0.12.2)
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
866-869: Use ternary operator class_short = class_type[:4] + "." if len(class_type) > 4 else class_type instead of if-else-block
Replace if-else-block with class_short = class_type[:4] + "." if len(class_type) > 4 else class_type
(SIM108)
🪛 GitHub Actions: Python Code Quality Check for the time-in-area-analytics project
project-time-in-area-analytics/test_scripts/track_heatmap_viewer.py
[error] 1-1: Pylint: too many lines in module (1066/1000) (C0302). Command: pylint *.py
[error] 670-670: Pylint: invalid-name 'N' doesn't conform to snake_case naming style (C0103).
[error] 681-681: Pylint: invalid-name 'N' doesn't conform to snake_case naming style (C0103).
🪛 LanguageTool
project-time-in-area-analytics/README.md
[grammar] ~93-~93: There might be a mistake here.
Context: ...Minimum AXIS OS version: AXIS OS 12+ - Required tools: Uses `message-broker-c...
(QB_NEW_EN)
[grammar] ~98-~98: There might be a mistake here.
Context: ...y - Minimum Data Agent version: 1.1 - Required features: Uses the `inputs.ex...
(QB_NEW_EN)
[grammar] ~113-~113: There might be a mistake here.
Context: ...in the Uploaded helper files section (refresh page to see updates). Note: Don't ...
(QB_NEW_EN)
[style] ~149-~149: To form a complete sentence, be sure to include a subject.
Context: ...ta into individual detection messages. Can also be used for reproducible testing o...
(MISSING_IT_THERE)
[grammar] ~154-~154: There might be a mistake here.
Context: ...tory containing project files (required) - CONSUMER_SCRIPT: Path to consumer script (defaults to `...
(QB_NEW_EN)
[grammar] ~155-~155: There might be a mistake here.
Context: ...s to axis_scene_detection_consumer.sh) - SAMPLE_FILE: Path to sample data file (required whe...
(QB_NEW_EN)
[grammar] ~203-~203: There might be a mistake here.
Context: ...processor logic testing with sample data - MQTT output configuration validation (TO...
(QB_NEW_EN)
[grammar] ~204-~204: There might be a mistake here.
Context: ...T output configuration validation (TODO) - Alert threshold configuration testing *...
(QB_NEW_EN)
[grammar] ~232-~232: There might be a mistake here.
Context: ...ra infrastructure. Expected Output: All detections with `time_in_area_second...
(QB_NEW_EN)
[grammar] ~279-~279: There might be a mistake here.
Context: ...e camera hardware. Expected Output: Only detections with time in area (`time...
(QB_NEW_EN)
[grammar] ~378-~378: There might be a mistake here.
Context: ...ed observations into individual messages - Preserves object bounding box coordina...
(QB_NEW_EN)
[grammar] ~379-~379: There might be a mistake here.
Context: ...serves** object bounding box coordinates - Simplifies object classification to ju...
(QB_NEW_EN)
[grammar] ~380-~380: There might be a mistake here.
Context: ...* object classification to just the type - Skips frames with no observations enti...
(QB_NEW_EN)
[grammar] ~431-~431: There might be a mistake here.
Context: ...ata with: - track_001: Appears twice with 8s gap (total time: 11.33s) - `track_00...
(QB_NEW_EN)
[grammar] ~431-~431: There might be a mistake here.
Context: ...s twice with 8s gap (total time: 11.33s) - track_002: Continuous presence for 2.22s - `track...
(QB_NEW_EN)
[grammar] ~432-~432: There might be a mistake here.
Context: ...rack_002: Continuous presence for 2.22s - track_003: Continuous presence for 2.22s - track...
(QB_NEW_EN)
[grammar] ~433-~433: There might be a mistake here.
Context: ...rack_003: Continuous presence for 2.22s - track_004: Single appearance (0s) - track_005`: ...
(QB_NEW_EN)
[grammar] ~434-~434: There might be a mistake here.
Context: ...2s - track_004: Single appearance (0s) - track_005: Long duration track for 2.5 minutes (1...
(QB_NEW_EN)
[grammar] ~443-~443: There might be a mistake here.
Context: ...Success confirmation when all tests pass - ❌ Specific failure diagnostics and troub...
(QB_NEW_EN)
project-time-in-area-analytics/test_scripts/README.md
[grammar] ~9-~9: There might be a mistake here.
Context: ... Contents - Installation - [Recording Real Device Data](#recording-r...
(QB_NEW_EN)
[grammar] ~10-~10: There might be a mistake here.
Context: ...tallation) - Recording Real Device Data - Basic Usage - [Advanced...
(QB_NEW_EN)
[grammar] ~11-~11: There might be a mistake here.
Context: ...rding-real-device-data) - Basic Usage - Advanced Usage - [Pa...
(QB_NEW_EN)
[grammar] ~12-~12: There might be a mistake here.
Context: ...Usage](#basic-usage) - Advanced Usage - Parameters - [Authentica...
(QB_NEW_EN)
[grammar] ~13-~13: There might be a mistake here.
Context: ... Usage](#advanced-usage) - Parameters - Authentication - [AX...
(QB_NEW_EN)
[grammar] ~14-~14: There might be a mistake here.
Context: ...meters](#parameters) - Authentication - [AXIS OS Compatibility](#axis-os-compatib...
(QB_NEW_EN)
[grammar] ~15-~15: There might be a mistake here.
Context: ...thentication) - AXIS OS Compatibility - Use Cases - [Track Heatmap ...
(QB_NEW_EN)
[grammar] ~16-~16: There might be a mistake here.
Context: ...](#axis-os-compatibility) - Use Cases - [Track Heatmap Visualization](#track-heat...
(QB_NEW_EN)
[grammar] ~17-~17: There might be a mistake here.
Context: ...se-cases) - Track Heatmap Visualization - Basic Usage - [Advanc...
(QB_NEW_EN)
[grammar] ~18-~18: There might be a mistake here.
Context: ...-heatmap-visualization) - Basic Usage - Advanced Usage - [...
(QB_NEW_EN)
[grammar] ~19-~19: There might be a mistake here.
Context: ...age](#basic-usage-1) - Advanced Usage - [Features & Example Output](#features--ex...
(QB_NEW_EN)
[grammar] ~20-~20: There might be a mistake here.
Context: ...-usage-1) - Features & Example Output - [Activity Percentage Calculation](#activi...
(QB_NEW_EN)
[grammar] ~21-~21: There might be a mistake here.
Context: ...) - Activity Percentage Calculation - Prerequisites <!-- toc...
(QB_NEW_EN)
[grammar] ~60-~60: There might be a mistake here.
Context: ...Device IP address or hostname (required) - --username, -u: SSH username (default: acap-fixeditdat...
(QB_NEW_EN)
[grammar] ~61-~61: There might be a mistake here.
Context: ...sername (default: acap-fixeditdataagent) - --password, -p: SSH password (optional, will prompt if...
(QB_NEW_EN)
[grammar] ~62-~62: There might be a mistake here.
Context: ...ssword (optional, will prompt if needed) - --duration, -d: Recording duration in seconds (default...
(QB_NEW_EN)
[grammar] ~63-~63: There might be a mistake here.
Context: ...ording duration in seconds (default: 30) - --topic: Message broker topic to consume (defau...
(QB_NEW_EN)
[grammar] ~64-~64: There might be a mistake here.
Context: ...xis.analytics_scene_description.v0.beta) - --source: Message broker source (default: 1) - `...
(QB_NEW_EN)
[grammar] ~65-~65: There might be a mistake here.
Context: ...rce: Message broker source (default: 1) - --output-file, -o`: Output file path (default: test_files/...
(QB_NEW_EN)
[grammar] ~74-~74: There might be a mistake here.
Context: ...sword Prompt**: Falls back to prompting for password if key auth fails ### AXIS OS...
(QB_NEW_EN)
[grammar] ~93-~93: There might be a mistake here.
Context: ...rded data with visualization tools like track_heatmap_viewer.py - Debugging: Capture problematic scenari...
(QB_NEW_EN)
[grammar] ~141-~141: There might be a mistake here.
Context: ... white box)**: Shows key dataset metrics - Tracks: 42 - Total unique track IDs de...
(QB_NEW_EN)
[grammar] ~142-~142: There might be a mistake here.
Context: ...que track IDs detected across all frames - Frames: 1180 - Number of frames contai...
(QB_NEW_EN)
[grammar] ~143-~143: There might be a mistake here.
Context: ...one detection (not total elapsed frames) - Activity: 99.8% - Percentage of frames...
(QB_NEW_EN)
[grammar] ~144-~144: There might be a mistake here.
Context: ...entage of frames with detections present - Alarms: 5 - Number of tracks exceeding...
(QB_NEW_EN)
[grammar] ~145-~145: There might be a mistake here.
Context: ...s exceeding the alarm threshold duration - X-axis (Time): Timestamps of frames wi...
(QB_NEW_EN)
[grammar] ~146-~146: There might be a mistake here.
Context: ...me gaps without detections are not shown - Y-axis (Track IDs): Individual object ...
(QB_NEW_EN)
[grammar] ~147-~147: There might be a mistake here.
Context: ... 3effc, 58cef) sorted alphabetically - Color Legend (right side): Visual scal...
(QB_NEW_EN)
🪛 markdownlint-cli2 (0.17.2)
project-time-in-area-analytics/README.md
22-22: Link fragments should be valid
(MD051, link-fragments)
154-154: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
project-time-in-area-analytics/test_scripts/README.md
22-22: Link fragments should be valid
(MD051, link-fragments)
154-154: Fenced code blocks should have a language specified
(MD040, fenced-code-language)
🔇 Additional comments (3)
project-time-in-area-analytics/axis_scene_detection_consumer.sh (1)
43-58: LGTM: JSONL unpack and transformation match the documented schema. The sed filter and jq transformation correctly normalize frame events to per-detection JSON, compatible with the processing pipeline. Good use of -c for compact output.
project-time-in-area-analytics/config_input_scene_detections.conf (2)
24-31: LGTM on execd usage and consumer wiring. Continuous execd, array-style command, and environment-driven script selection are all appropriate for this pipeline's real-time ingestion.
40-42: Confirmed "detection_frame" is used by downstream processors. Both downstream configs already reference the overridden measurement name:

- project-time-in-area-analytics/config_process_track_duration.conf: `namepass = ["detection_frame"]`
- project-time-in-area-analytics/config_process_threshold_filter.conf: `namepass = ["detection_frame"]`

No further changes required.
Force-pushed from d190c1c to 9532b44 (Compare)
🎉 Time-in-Area Analytics Tests Passed!

All tests completed successfully:

✅ Track Heatmap Viewer - All alarm detection scenarios passed
Force-pushed from 4c42a46 to f1514e6 (Compare)
🎨 Code Formatting Required

Prettier Formatting Issues

Some files need to be formatted with Prettier (JSON, YAML, Markdown). Files that need prettier formatting:

To fix prettier issues: `npx prettier@3.6.0 --write "**/*.{json,yml,yaml,md}" ".github/**/*.{json,yml,yaml,md}" ".*.{yml,yaml,json,md}" --ignore-unknown`

After fixing:
Force-pushed from f1514e6 to 1703eca (Compare)
Force-pushed from 1703eca to 3de1f08 (Compare)
Force-pushed from 3de1f08 to 494903d (Compare)
🎨 Code Formatting Required

Trailing Whitespace Issues

Some files have trailing whitespace (spaces/tabs at end of lines). Files with trailing whitespace:

To fix: Remove the trailing spaces/tabs at the end of lines in your favorite editor.

After fixing:
❌ Time-in-Area Analytics Tests Failed

Some tests did not pass:

❌ Track Heatmap Viewer: failure

What This Means

Track Heatmap Viewer Issues:

Telegraf Pipeline Issues:
Force-pushed from 76bbc17 to 7eebdef (Compare)
Add files for event support to time-in-area project

This commit adds configuration and helper files for event support in the time-in-area example. These files will monitor AOA detections and send a stateful event when a detection has been in the monitored zone for too long (expressed as active=1), as well as when the detection leaves the monitored area (expressed as active=0).
```sh
center_x=$(echo "$json_input" | jq -r '.fields.center_x // empty')
center_y=$(echo "$json_input" | jq -r '.fields.center_y // empty')

debug_log_file "Extracted fields - track_id: $track_id, time_in_area_seconds: $object_type, object_type: $object_type"
```
```sh
# the object.
if [ -z "$track_id" ] || [ "$track_id" = "null" ] || \
   [ -z "$time_in_area" ] || [ "$time_in_area" = "null" ] || \
   [ -z "$object_type" ] || [ -z "$timestamp" ] || [ "$timestamp" = "null" ]; then
```
Bug: Logic Rejects Allowed Empty Values
The comment states "We allow object_type to be empty (null)" but the validation logic checks [ -z "$object_type" ] which rejects empty values. This contradicts the stated intention and will cause the script to exit with error 12 when object_type is null or empty, even though the comment says this should be allowed.
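A minimal sketch of the corrected check, assuming the fix is simply to drop object_type from the required-field list (the function name and argument order here are illustrative, not the PR's actual code):

```sh
#!/bin/sh
# Hypothetical helper: validates only the three required fields.
# object_type is deliberately NOT checked, so an unclassified
# detection (empty object_type) now passes validation.
validate_track_info() {
  track_id=$1
  time_in_area=$2
  timestamp=$3
  if [ -z "$track_id" ] || [ "$track_id" = "null" ] || \
     [ -z "$time_in_area" ] || [ "$time_in_area" = "null" ] || \
     [ -z "$timestamp" ] || [ "$timestamp" = "null" ]; then
    return 12  # same error code the script under review uses
  fi
  return 0
}

validate_track_info "track_001" "4.5" "2024-01-01T00:00:00Z" && echo "valid"
# → valid
```

The script would still extract object_type for logging; it just no longer treats its absence as a fatal error.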
```sh
printf "Missing required track info fields in JSON input. "\
"Required: track_id, time_in_area_seconds, timestamp. "\
"Received: track_id='%s', time_in_area_seconds='%s', timestamp='%s'" \
"$track_id" "$time_in_area" "$object_type" "$timestamp" >&2
```
Bug: Error Messages Display Wrong Information
The error message format string has 3 placeholders but the arguments are misaligned. The third placeholder is labeled "timestamp" but receives $object_type instead of $timestamp. This causes the error message to display the object type value where the timestamp should be, making debugging more difficult. The fourth argument $timestamp is unused.
```sh
# the object.
if [ -z "$track_id" ] || [ "$track_id" = "null" ] || \
   [ -z "$time_in_area" ] || [ "$time_in_area" = "null" ] || \
   [ -z "$object_type" ] || [ -z "$timestamp" ] || [ "$timestamp" = "null" ]; then
```
Bug: Validation rejects empty object_type despite comment allowing it
The comment on lines 307-309 explicitly states "We allow object_type to be empty (null) since this happens before the video object detection has been able to classify the object." However, line 312 includes [ -z "$object_type" ] in the validation condition, which causes the script to exit with error code 12 when object_type is empty. This means valid detections where the object classification hasn't completed yet will be incorrectly rejected, contradicting the stated intent.
```sh
printf "Missing required track info fields in JSON input. "\
"Required: track_id, time_in_area_seconds, timestamp. "\
"Received: track_id='%s', time_in_area_seconds='%s', timestamp='%s'" \
"$track_id" "$time_in_area" "$object_type" "$timestamp" >&2
```
Bug: Printf error message shows object_type as timestamp
The printf format string has 3 %s placeholders for displaying track_id, time_in_area_seconds, and timestamp, but 4 arguments are passed: $track_id, $time_in_area, $object_type, $timestamp. This causes the third placeholder labeled timestamp= to incorrectly display the $object_type value, while the actual $timestamp value is never shown. This produces misleading error messages that could confuse debugging efforts.



This is the beginning of a new project that triggers alarms on objects that have been in an area for too long.
TODO:
Summary by CodeRabbit
New Features
Documentation
Tests
Chores
Note
Introduces a complete time-in-area analytics project with Telegraf/Starlark pipeline, overlay/event outputs, visualization/test tooling, and end-to-end CI plus a reusable Python quality action.
- `project-time-in-area-analytics`: `config_*` for input, class/zone filtering, track duration, threshold, rate limit, overlay transform; scripts `zone_filter.star`, `track_duration_calculator.star`, `axis_scene_detection_consumer.sh`, `overlay_manager.sh`; event output `config_output_events.conf`.
- Docs: `README.md`, `README_INCLUDE_ZONE.md`.
- Test data: `test_files/*.jsonl`, stdout config, overlay test input.
- Test scripts: `test_scripts/track_heatmap_viewer.py`, `visualize_zone*.py`, `record_real_data.py`, `requirements.txt`.
- CI: `.github/workflows/project-time-in-area-test-analytics.yml` (heatmap viewer and Telegraf pipeline scenarios + PR comments); `.github/actions/python-quality-check` and workflows for analytics/timelapse projects.
- Chores: `.gitignore` entries; minor `.coderabbit.yaml` guidance additions.

Written by Cursor Bugbot for commit f837825. This will update automatically on new commits. Configure here.