20 changes: 12 additions & 8 deletions project-time-in-area-analytics/README.md
@@ -1,6 +1,6 @@
# Time-in-Area Analytics

This project demonstrates how to implement time-in-area analytics for Axis fisheye cameras using the [FixedIT Data Agent](https://fixedit.ai/products-data-agent/). While AXIS Object Analytics natively supports time-in-area detection for traditional cameras, fisheye cameras lack this capability. This solution bridges that gap by consuming real-time object detection metadata from fisheye cameras and implementing custom time-in-area logic using Telegraf's Starlark processor. The system uses object tracking IDs from [AXIS Scene Metadata](https://developer.axis.com/analytics/axis-scene-metadata/reference/concepts/) to track objects within a defined rectangular area, measures time in area, and triggers both warning (TODO) and alert notifications via MQTT (TODO) when objects remain in the monitored zone beyond configured thresholds.
This project demonstrates how to implement time-in-area analytics for Axis fisheye cameras using the [FixedIT Data Agent](https://fixedit.ai/products-data-agent/). While AXIS Object Analytics natively supports time-in-area detection for traditional cameras, fisheye cameras lack this capability. This solution bridges that gap by consuming real-time object detection metadata from fisheye cameras and implementing custom time-in-area logic using Telegraf's Starlark processor. The system uses object tracking IDs from [AXIS Scene Metadata](https://developer.axis.com/analytics/axis-scene-metadata/reference/concepts/) to track objects within a defined rectangular area, measures time in area, and triggers alert notifications via events when objects remain in the monitored zone beyond configured thresholds.
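
As a rough illustration of the idea described above, the whole time-in-area check could be expressed as a single Starlark processor that remembers when each track ID entered the zone and renames the metric once the threshold is exceeded. This is a sketch only: the real pipeline splits the work across several processors and helper scripts (see the flowchart below), and the field names, zone bounds, and threshold used here are assumptions rather than the project's actual values.

```starlark
# Illustrative sketch, not the project's code. Field names, zone bounds,
# and the threshold are assumed for the example.
load("logging.star", "log")

# Persistent state: first timestamp (ns) at which each track ID was seen
# inside the monitored zone.
state = {}

ALERT_THRESHOLD_NS = 10 * 1000 * 1000 * 1000  # hypothetical 10 s threshold

def in_zone(x, y):
    # Hypothetical rectangular zone in normalized image coordinates.
    return x >= -0.5 and x <= 0.5 and y >= -0.5 and y <= 0.5

def apply(metric):
    track_id = metric.tags.get("track_id", "")
    x = metric.fields.get("center_x", 0.0)
    y = metric.fields.get("center_y", 0.0)

    if not in_zone(x, y):
        # The object is outside the zone: forget its entry time.
        state.pop(track_id, None)
        return None

    first_seen = state.setdefault(track_id, metric.time)
    if metric.time - first_seen >= ALERT_THRESHOLD_NS:
        log.debug("track " + track_id + " exceeded the time-in-area threshold")
        metric.name = "alerting_frame"
        return metric

    # Inside the zone but below the threshold: drop the metric.
    return None
```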

## How It Works

@@ -29,8 +29,9 @@ flowchart TD
C5 -->|detection_frame_with_duration| D["config_process_threshold_filter.conf:<br/>Filter for<br/>time in area > ALERT_THRESHOLD_SECONDS"]
X2["Configuration variables: ALERT_THRESHOLD_SECONDS"] --> D

D -->|alerting_frame| E["🚨 MQTT Output<br/>Alert messages (TODO)"]
X3["Configuration variables: TODO"] --> E
D -->|alerting_frame_two| E0["config_process_alarming_state.conf:<br/>Check if any alerting detections have happened during the last second"]
E0 -->|alerting_state_change| E01["config_output_events.conf:<br/>Run the event handler binary with information about the detection status"]
E01 --> E["🚨 Event Output<br/>Alert messages"]

D -->|alerting_frame| E1["config_process_rate_limit.conf:<br/>Rate limit to 1 message per second<br/>using Starlark state"]
E1 -->|rate_limited_alert_frame| F["config_process_overlay_transform.conf:<br/>Recalculate coordinates for overlay visualization"]
@@ -49,6 +50,8 @@ flowchart TD
style C5 fill:#ffffff,stroke:#673ab7
style CX fill:#fff3e0,stroke:#fb8c00
style D fill:#f3e5f5,stroke:#8e24aa
style E0 fill:#f3e5f5,stroke:#8e24aa
style E01 fill:#f3e5f5,stroke:#8e24aa
style E fill:#ffebee,stroke:#e53935
style E1 fill:#f3e5f5,stroke:#8e24aa
style F fill:#f3e5f5,stroke:#8e24aa
@@ -58,7 +61,6 @@ flowchart TD
style X1a fill:#f5f5f5,stroke:#9e9e9e
style X1b fill:#f5f5f5,stroke:#9e9e9e
style X2 fill:#f5f5f5,stroke:#9e9e9e
style X3 fill:#f5f5f5,stroke:#9e9e9e
style X4 fill:#f5f5f5,stroke:#9e9e9e
```

@@ -135,8 +137,8 @@ Color scheme:

### FixedIT Data Agent Compatibility

- **Minimum Data Agent version**: 1.1
- **Required features**: Uses the `inputs.execd`, `processors.starlark` plugins and the `HELPER_FILES_DIR` environment variable set by the FixedIT Data Agent. It is recommended to use version 1.1 or higher since the load order of config files was not visible in the web user interface in version 1.0.
- **Minimum Data Agent version**: <TODO: TBD>
- **Required features**: Uses the `inputs.execd`, `processors.starlark` plugins and the `HELPER_FILES_DIR` environment variable set by the FixedIT Data Agent. Uses the `output_event` binary packaged with versions of the application <TODO: TBD> and above.

## Quick Setup

@@ -153,10 +155,12 @@ cat config_agent.conf \
config_process_threshold_filter.conf \
config_process_rate_limit.conf \
config_process_overlay_transform.conf \
config_output_overlay.conf > combined.conf
config_output_overlay.conf \
config_process_alarming_state.conf \
config_output_events.conf > combined.conf
```

Then upload `combined.conf` as a config file and `overlay_manager.sh`, `axis_scene_detection_consumer.sh`, `zone_filter.star`, and `track_duration_calculator.star` as helper files.
Then upload `combined.conf` as a config file and `overlay_manager.sh`, `axis_scene_detection_consumer.sh`, `zone_filter.star` and `track_duration_calculator.star` as helper files.

Set `Extra Env` to:

68 changes: 68 additions & 0 deletions project-time-in-area-analytics/config_output_events.conf
@@ -0,0 +1,68 @@
# Output to send all 'alerting_state_change' metrics to the event producer binary.
# The event is configured through the GKeyFile content specified when
# running the binary, using the following format:
# [topics]
# namespace = <NAMESPACE_NAME_STRING>
# nice_name = <NICE_NAME_STRING>
# topic_0 = <TOPIC_0_NAME_STRING>
# topic_1 = <TOPIC_1_NAME_STRING>
# topic_2 = <TOPIC_2_NAME_STRING>
#
# [settings]
# # true if event should be stateful
# # false if event should be stateless
# stateful = <true|false>
#
# [item.<ITEM_NAME>]
# kind = <data|source>
# data_type = <int|double|bool|string>
# value = <STARTING_VALUE>
#
# The [topics] and [settings] groups and all their key-value pairs
# are mandatory for successfully declaring the event.
# Items are optional, and multiple can be defined.
#
# The binary expects the passed json metrics to have the following
# format:
#
# {
# "fields": {<ITEM_NAME>: <NEW_VALUE> (...)},
# "name":"<NAME>",
# "tags":{<TAGS>},
# "timestamp":<TIMESTAMP>
# }
#
# Only the "fields" value matters, since it is the only part
# that is used by the binary. It will parse every key-value
# pair in "fields" and use those as the values to update
# in the event's items before sending it. If any item present
# during event declaration isn't specified in "fields",
# the event will simply be sent with that item's
# previous value.
[[outputs.execd]]
# Only consume the 'alerting_state_change' metrics
namepass = ["alerting_state_change"]

# The binary expects JSON formatted metrics
data_format = "json"

# Command to run the binary, event structure is
# provided through GKeyFile-formatted input.
command = [
"${EXECUTABLES_DIR}/output_event", "--config-inline",
"""[topics]
namespace = tnsaxis
nice_name = FixedIT Time-in-Area Event
topic_0 = CameraApplicationPlatform
topic_1 = FixedITDataAgent
topic_2 = TimeInArea

[settings]
stateful = true

[item.active]
kind = data
data_type = bool
value = false
"""
]
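
For reference, a state-change metric produced by the alarming-state processor (added in the next file) reaches this binary serialized roughly as the JSON below; the timestamp value and the empty tag set are illustrative and depend on the agent settings.

```json
{"fields": {"active": true}, "name": "alerting_state_change", "tags": {}, "timestamp": 1717171717}
```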
85 changes: 85 additions & 0 deletions project-time-in-area-analytics/config_process_alarming_state.conf
@@ -0,0 +1,85 @@
# This configuration file sets up a heartbeat metric
# and applies a Starlark processor for inactivity monitoring.
# The Starlark processor checks if there have been any alarming
# detections since the last heartbeat, and if not, sets
# the alarming state to "false". Note that it only monitors
# the alarming state in general; it does not keep track
# of which or how many objects triggered the alarm.

# ---- Heartbeat (1 metric/second) ----
[[inputs.exec]]
# This is a static heartbeat to trigger the inactivity monitor,
# so it does not matter what data it includes.
commands = ["sh -c 'echo heartbeat value=1i'"]
data_format = "influx"
interval = "1s"
name_override = "alarming_state_heartbeat"

# ---- Starlark processor ----
# This gets triggered by both the heartbeat and any alarming state metrics.
# This makes sure the code is run at least once per second even if there
# are no alarming state metrics.
[[processors.starlark]]
namepass = ["alarming_state_heartbeat", "alerting_frame_two"]
source = '''
"""
Monitor if an alerting_frame_two metric has not been sent
since the last alarming_state_heartbeat metric.
When we have an alerting object in the monitored zone,
we will get a metric every time we observe that object.
Once the object leaves the area, we stop receiving
alerting_frame_two metrics. This function makes sure that
we send a metric every time we go from no alerting objects
to at least one alerting object, and from at least one
alerting object to no alerting objects.
"""
load("logging.star", "log")

"""
We initialize the state to keep it as a persistent
state between calls. We can use it to store information
such as the "has_alarm_since_last_heartbeat" or
"previous_alarm_state" values.
"""
state = {
# This variable is used to track if an "alerting_frame_two"
# metric has been received since the last
# "alarming_state_heartbeat" metric.
"has_alarm_since_last_heartbeat": False,

# This variable is used to check the alarm state's
# previous value, to see if there has been a change
# from active to inactive or the other way around,
# since we only want to report on state changes.
"previous_alarm_state": None
}

def apply(metric):
# If we got an alerting frame, update the state to alerting state
# and return without producing any metric.
if metric.name == "alerting_frame_two":
state["has_alarm_since_last_heartbeat"] = True
return

# Validate that the metric is a heartbeat
if metric.name != "alarming_state_heartbeat":
log.debug("Error: received metric with unexpected name: " + metric.name)
return

has_alarm_since_last_heartbeat = state.get("has_alarm_since_last_heartbeat")
previous_alarm_state = state.get("previous_alarm_state")

# We want to track if the alarm has been triggered between
# heartbeats, so we always reset the state to False at
# each heartbeat so we can start monitoring again.
state["has_alarm_since_last_heartbeat"] = False
state["previous_alarm_state"] = has_alarm_since_last_heartbeat

# We only want to report state changes, so we check the previous state
if (has_alarm_since_last_heartbeat != previous_alarm_state):
alarming_state_metric = Metric("alerting_state_change")
alarming_state_metric.time = metric.time
alarming_state_metric.fields["active"] = has_alarm_since_last_heartbeat
return alarming_state_metric
return
'''
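
To illustrate the behavior of the processor above (timestamps are made up), an object that enters the zone, keeps triggering the threshold filter for a while, and then leaves results in exactly two emitted `alerting_state_change` metrics:

```
alerting_state_change active=true  1717171717000000000
alerting_state_change active=false 1717171725000000000
```

The first is emitted at the first heartbeat after an `alerting_frame_two` metric has been seen, the second at the first heartbeat for which no alerting frame arrived since the previous heartbeat.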
@@ -5,6 +5,7 @@
# field formatting.

[[processors.starlark]]
namepass = ["rate_limited_alert_frame"]
# Source code for the transformation logic
source = '''
load("logging.star", "log")
@@ -1,5 +1,6 @@
# Rate limit messages to 1 per second to protect the overlay API
[[processors.starlark]]
namepass = ["alerting_frame"]
source = '''
load("time.star", "time")
load("logging.star", "log")
@@ -30,7 +30,12 @@ def apply(metric):
# Create a new metric with the alerting name
alerting_metric = deepcopy(metric)
alerting_metric.name = "alerting_frame"
return alerting_metric

# Duplicate the metric, since it needs to get
# to two processors
alerting_metric_two = deepcopy(metric)
alerting_metric_two.name = "alerting_frame_two"
return [alerting_metric, alerting_metric_two]
Bug: Violates Single Metric Return Convention (Bugbot Rules)

The apply function returns a list of metrics [alerting_metric, alerting_metric_two], but the reviewer explicitly stated this approach is buggy and there's no need to return more than one metric. Telegraf's Starlark processor may not properly handle list returns, and this pattern isn't used anywhere else in the codebase where all processors return either a single metric or None.

# Track doesn't exceed threshold - don't output
log.debug("apply: track_id=" + track_id + " duration=" + str(time_in_area) + "s < threshold=" + str(threshold) + "s - FILTER OUT")