Updated documentation with new vectorized distances.
facundo-lezama committed Nov 2, 2022
1 parent 703bf5d commit 2549262
Showing 3 changed files with 5 additions and 5 deletions.
6 changes: 3 additions & 3 deletions docs/getting_started.md
@@ -44,7 +44,7 @@ from norfair import Detection, Tracker, Video, draw_tracked_objects

detector = MyDetector() # Set up a detector
video = Video(input_path="video.mp4")
- tracker = Tracker(distance_function="frobenius", distance_threshold=100)
+ tracker = Tracker(distance_function="euclidean", distance_threshold=100)

for frame in video:
detections = detector(frame)
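For orientation only (this is not part of the diff), the loop shown in this hunk typically continues by wrapping the raw detections in `Detection` objects, updating the tracker, and drawing the result. A minimal sketch, assuming a placeholder `MyDetector` that returns one array of points per detected object:

```python
from norfair import Detection, Tracker, Video, draw_tracked_objects

detector = MyDetector()  # placeholder: any model returning per-object point arrays
video = Video(input_path="video.mp4")
tracker = Tracker(distance_function="euclidean", distance_threshold=100)

for frame in video:
    detections = detector(frame)
    # Wrap each set of points in a norfair Detection
    norfair_detections = [Detection(points) for points in detections]
    # Match detections to existing tracked objects and update their state
    tracked_objects = tracker.update(detections=norfair_detections)
    # Draw the tracked objects and write the frame to the output video
    draw_tracked_objects(frame, tracked_objects)
    video.write(frame)
```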
@@ -76,11 +76,11 @@ After inspecting the detections you might find issues with the tracking, several
- Objects take **too long to start**. This can have multiple causes:
- `initialization_delay` is too big on the Tracker. It keeps the TrackedObject initializing for too long; `3` is usually a good value to start with.
- `distance_threshold` is too small on the Tracker. It prevents Detections from being matched with the correct TrackedObject. The best value depends on the distance used.
- - Incorrect `distance_function` on the Tracker. Some distances might not be valid in some cases, for instance, if using IoU but the objects in your video move so quickly that there is never an overlap between the detections of consecutive frames. Try different distances, `frobenius` or `create_normalized_mean_euclidean_distance` are good starting points.
+ - Incorrect `distance_function` on the Tracker. Some distances might not be valid in some cases, for instance, if using IoU but the objects in your video move so quickly that there is never an overlap between the detections of consecutive frames. Try different distances, `euclidean` or `create_normalized_mean_euclidean_distance` are good starting points.
- Objects take **too long to disappear**. Lower `hit_counter_max` on the Tracker.
- Points or bounding boxes **jitter too much**. Increase `R` (measurement error) or lower `Q` (estimate or process error) on the `OptimizedKalmanFilterFactory` or `FilterPyKalmanFilterFactory`. This makes the Kalman Filter put less weight on the measurements and trust the estimate more, stabilizing the result.
- **Camera motion** confuses the Tracker. If the camera moves, the apparent movement of objects can become too erratic for the Tracker. Use `MotionEstimator`.
- **Incorrect matches** between Detections and TrackedObjects. A couple of scenarios can cause this:
- `distance_threshold` is too big, so the Tracker matches Detections to TrackedObjects that are simply too far away. Lower the threshold until you fix the error; the correct value will depend on the distance function that you're using.
- - Mismatches when objects overlap. In this case, tracking becomes more challenging, usually, the quality of the detection degrades causing one of the objects to be missed or creating a single big detection that includes both objects. On top of the detection issues, the tracker needs to decide which detection should be matched to which TrackedObject which can be error-prone if only considering spatial information. The solution is not easy but incorporating the notion of the appearance similarity based on some kind of embedding to your distance_finction can help.
+ - Mismatches when objects overlap. In this case, tracking becomes more challenging, usually, the quality of the detection degrades causing one of the objects to be missed or creating a single big detection that includes both objects. On top of the detection issues, the tracker needs to decide which detection should be matched to which TrackedObject which can be error-prone if only considering spatial information. The solution is not easy but incorporating the notion of the appearance similarity based on some kind of embedding to your distance_function can help.
- Can't **recover** an object **after occlusions**. Use ReID distance; see [this demo](https://github.com/tryolabs/norfair/tree/master/demos/reid) for an example, but for real-world use you will need a good ReID model that can provide good embeddings.
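The bullets above map directly onto `Tracker` constructor arguments. A minimal configuration sketch, with illustrative values rather than recommendations from this commit (`R` and `Q` are the measurement and process noise of the Kalman filter factory):

```python
from norfair import Tracker
from norfair.filter import OptimizedKalmanFilterFactory

tracker = Tracker(
    distance_function="euclidean",  # try another distance if matches look wrong
    distance_threshold=100,         # raise it if objects take too long to start, lower it on far-away mismatches
    initialization_delay=3,         # frames a detection must persist before a TrackedObject appears
    hit_counter_max=15,             # lower it if objects take too long to disappear
    filter_factory=OptimizedKalmanFilterFactory(R=4.0, Q=0.1),  # raise R / lower Q to reduce jitter
)
```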
2 changes: 1 addition & 1 deletion norfair/__init__.py
@@ -6,7 +6,7 @@
>>> from norfair import Detection, Tracker, Video, draw_tracked_objects
>>> detector = MyDetector() # Set up a detector
>>> video = Video(input_path="video.mp4")
- >>> tracker = Tracker(distance_function="frobenious", distance_threshold=50)
+ >>> tracker = Tracker(distance_function="euclidean", distance_threshold=50)
>>> for frame in video:
>>> detections = detector(frame)
>>> norfair_detections = [Detection(points) for points in detections]
2 changes: 1 addition & 1 deletion norfair/drawing.py
Expand Up @@ -790,7 +790,7 @@ class FixedCamera:
Examples
--------
>>> # setup
- >>> tracker = Tracker("frobenious", 100)
+ >>> tracker = Tracker("euclidean", 100)
>>> motion_estimator = MotionEstimator()
>>> video = Video(input_path="video.mp4")
>>> fixed_camera = FixedCamera()
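The docstring example above is truncated by the diff; a typical camera-motion loop continues roughly as sketched below. This is an illustration based on the surrounding documentation, not part of this commit, and `my_detector` is a hypothetical function returning norfair `Detection` objects:

```python
from norfair import Tracker, Video, draw_tracked_objects
from norfair.camera_motion import MotionEstimator
from norfair.drawing import FixedCamera

tracker = Tracker("euclidean", 100)
motion_estimator = MotionEstimator()
video = Video(input_path="video.mp4")
fixed_camera = FixedCamera()

for frame in video:
    detections = my_detector(frame)  # hypothetical detector returning norfair Detections
    # Estimate how the camera moved since the last frame
    coord_transformations = motion_estimator.update(frame)
    # Track in absolute coordinates so camera motion does not confuse the matching
    tracked_objects = tracker.update(detections, coord_transformations=coord_transformations)
    draw_tracked_objects(frame, tracked_objects)
    # Render the frame on a larger, motion-compensated canvas
    bigger_frame = fixed_camera.adjust_frame(frame, coord_transformations)
    video.write(bigger_frame)
```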
