Merged

25 commits
f594d2e
Was double-adding pose esitmates
tbowers7 Feb 7, 2026
e2d8fad
Fix double-count of pose estimation; add kS for swerve turn
tbowers7 Feb 7, 2026
ecff3ea
Clean up the consumer syntax
tbowers7 Feb 7, 2026
e4b0e3e
Add extra logging for when the vision pose doesn't work
tbowers7 Feb 9, 2026
8726812
Documentation Updates (#126)
tbowers7 Feb 9, 2026
08233aa
Update vendor dependencies (#138)
tbowers7 Feb 9, 2026
35964c5
Add a "RBSI-standard" AdvantageScope layout (#131)
tbowers7 Feb 9, 2026
d1bf6c8
Pull in `develop` ahead of version 26.1 release (#143)
tbowers7 Feb 9, 2026
13962fd
Bump com.diffplug.spotless from 8.1.0 to 8.2.0 (#121)
dependabot[bot] Jan 26, 2026
e313e27
Bump com.diffplug.spotless from 8.2.0 to 8.2.1 (#129)
dependabot[bot] Jan 31, 2026
248fe9c
Pull in `admin` ahead of 26.1 release (#130)
tbowers7 Feb 9, 2026
2b8c50e
Call the pose estimator "TimedPose"
tbowers7 Feb 9, 2026
3bf64b5
Move the odometry queue draining into a VirtualSubsystem
tbowers7 Feb 11, 2026
570b093
Merge remote-tracking branch 'origin/main' into fix_pose_bug
tbowers7 Feb 11, 2026
5923145
Update AprilTag definitions
tbowers7 Feb 13, 2026
1c0875f
Clean up Subsystem defs
tbowers7 Feb 13, 2026
5569b2b
Clean up the IMU code a little
tbowers7 Feb 14, 2026
5fa0ae4
Clean up Drive and DriveOdometry code
tbowers7 Feb 14, 2026
01d0cc2
Commenting the Vision.java file
tbowers7 Feb 14, 2026
f6a938e
PoseBuffer: Correctly align timestamps and poses
tbowers7 Feb 16, 2026
03cbadf
😮‍💨 Got it
tbowers7 Feb 16, 2026
1b4006d
Clean up diff versus `main`
tbowers7 Feb 17, 2026
8b3dafa
Cleaning and making things line up.
tbowers7 Feb 17, 2026
621ae9b
Cleaning up the pose buffering more
tbowers7 Feb 18, 2026
fac0fba
Clean logging
tbowers7 Feb 20, 2026
541 changes: 541 additions & 0 deletions AdvantageScope RBSI Standard.json

Large diffs are not rendered by default.

2 changes: 1 addition & 1 deletion build.gradle
@@ -2,7 +2,7 @@ plugins {
id "java"
id "edu.wpi.first.GradleRIO" version "2026.2.1"
id "com.peterabeles.gversion" version "1.10.3"
id "com.diffplug.spotless" version "8.1.0"
id "com.diffplug.spotless" version "8.2.1"
id "io.freefair.lombok" version "9.2.0"
id "com.google.protobuf" version "0.9.6"
}
9 changes: 7 additions & 2 deletions doc/INSTALL.md
@@ -13,7 +13,7 @@ following components on your laptop and devices.
This includes all motors, CANivore, Pigeon 2.0, and all CANcoders!
* Rev Hardware Client `2.0`, with the PDH and all SparkMax's, and other devices
running firmware `26.1` or newer.
* Vivid Hosting Radio firmware `2.0` or newer is required for competition this
* Vivid Hosting Radio firmware `2.0.1` or newer is required for competition this
year.
* Photon Vision ([Orange Pi or other device](https://docs.photonvision.org/en/latest/docs/quick-start/quick-install.html))
**running `26.1` or newer** (make sure you are **not** accidentally running
@@ -22,6 +22,8 @@ following components on your laptop and devices.

It is highly recommended to update all your devices and label them with the CAN IDs or IP addresses and firmware versions they are running. This helps your team and the FRC field staff quickly identify issues.

If you are running a RoboRIO 1.0 (no SD card), you also need to disable the web server ([Instructions Here](https://docs.wpilib.org/en/stable/docs/software/wpilib-tools/roborio-team-number-setter/index.html)).

--------

### Getting Az-RBSI
@@ -118,7 +120,10 @@ steps you need to complete:
https://github.com/CrossTheRoadElec/Phoenix6-Examples/blob/1db713d75b08a4315c9273cebf5b5e6a130ed3f7/java/SwerveWithPathPlanner/src/main/java/frc/robot/generated/TunerConstants.java#L171-L175).
Before removing them, both lines will be marked as errors in VSCode.

5. In `TunerConstants.java`, change `kSteerInertia` to `0.004` and
5. In `TunerConstants.java`, change `kSlipCurrent` to `60` amps. This will
keep your robot from tearing holes in the carpet at competition!

6. In `TunerConstants.java`, change `kSteerInertia` to `0.004` and
`kDriveInertia` to `0.025` to allow the AdvantageKit simulation code to
operate as expected.

Binary file added doc/PV_Cameras.png
Binary file added doc/PV_Network.png
21 changes: 20 additions & 1 deletion doc/RBSI-GSG.md
@@ -41,6 +41,10 @@ modifications to extant RBSI code will be done to files within the

### Tuning constants for optimal performance

**The importance of tuning your drivetrain cannot be overemphasized: it is
essential for smooth and consistent performance, battery longevity, and not
tearing up the field.**

4. Over the course of your robot project, you will need to tune PID parameters
for both your drivebase and any mechanisms you build to play the game.
AdvantageKit includes detailed instructions for how to tune the various
@@ -128,7 +132,22 @@ section of [each release](https://github.com/AZ-First/Az-RBSI/releases).

See the [PhotonVision Wiring documentation
](https://docs.photonvision.org/en/latest/docs/quick-start/wiring.html) for
more details.
more details. DO NOT put the Orange Pi's (or any device that cannot lose power) on port 23 of the PDH. It is a mechanical switch, and if the robot is hit, it may briefly lose power.

Mounting the case to the robot requires 4x #10-32 nylock nuts (placed in the
hex-shaped mounts inside the case) and 4x #10-32 bolts.

Order of assembly of the Orange Pi Double Case matters given tight clearances:
1. Super-glue the nylock nuts into the hex mounting holes.
2. Install the fans and grates into the case side.
3. Assemble the Pi's onto the standoffs outside the box.
4. Solder / mount the voltage regulator solution of your choice.
5. Connect the USB-C power cables to the Pi's.
6. Connect the fan power to the 5V (red) and GND (black) pins on the Pi's.
7. Install the Pi/standoff assembly into the case using screws at the bottom,
being careful of the tight clearance between the USB sockets and the case opening.
8. Tie a knot in the incoming power line _to be placed inside the box
for strain relief_, and pass the incoming power line through the notch
in the lower case.
9. Install the cover on the box using screws.
10. Mount the case to your robot using the #10-32 screws.
150 changes: 150 additions & 0 deletions doc/RBSI-PoseBuffer.md
@@ -0,0 +1,150 @@
# Az-RBSI Pose Buffering

This page contains documentation for the Odometry and Vision Pose Buffering
system included in Az-RBSI. It is intended more as a reference than as a
step-by-step guide for teams.

--------

### Background for Need

In previous versions of the Az-RBSI (or the base [AdvantageKit templates](
https://docs.advantagekit.org/getting-started/template-projects)), there is a
temporal disconnect between the "now" state of the drivetrain odometry (wheel
encoders + gyro) and the "delayed" state of vision measurements (by exposure
time, pipeline processing, and network transport). Essentially, **the
different sensors on the robot do not all report data at the same time**. This
delay can be 30–120 ms -- which is huge when your robot can move a foot in that
time. Attempting to correct the "now" odometric pose with a "delayed" vision
estimate introduces systematic error that can cause jitter in the pose estimate
and incorrect downstream computed values.


### What is Pose Buffering

A pose buffer lets you store a time history of your robot’s estimated pose so
that when a delayed vision measurement arrives, you can rewind the state
estimator to the exact timestamp the image was captured, inject the correction,
and then replay forward to the present. In essence, pose buffers enable **time
alignment between subsystems**. Since Az-RBSI assumes input from multiple
cameras, combined with the IMU yaw queues and high-frequency module
odometry, everything must agree on a common timebase. Introducing a pose
buffer allows us to query, "What did odometry think the robot pose was at time
`t`?" and compute the transform between two timestamps. This is the key to
making latency-aware fusion mathematically valid.
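
As a rough sketch of that query, using WPILib's `TimeInterpolatableBuffer`
(the class and method names below are illustrative, not the exact Az-RBSI
types), the buffer lets you ask for the pose at any buffered time and build
the transform between two timestamps:

```java
import java.util.Optional;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;

public class PoseHistory {
  // Keep ~1.5 s of pose history, enough to cover worst-case vision latency
  private final TimeInterpolatableBuffer<Pose2d> poseBuffer =
      TimeInterpolatableBuffer.createBuffer(1.5);

  /** Record the replayed odometry pose at its FPGA timestamp. */
  public void record(double fpgaTimestamp, Pose2d pose) {
    poseBuffer.addSample(fpgaTimestamp, pose);
  }

  /** "What did odometry think the robot pose was at time t?" */
  public Optional<Pose2d> sampleAt(double t) {
    return poseBuffer.getSample(t);
  }

  /** Odometry motion between two timestamps, used to time-align delayed measurements. */
  public Optional<Transform2d> transformBetween(double t0, double t1) {
    Optional<Pose2d> p0 = poseBuffer.getSample(t0);
    Optional<Pose2d> p1 = poseBuffer.getSample(t1);
    if (p0.isEmpty() || p1.isEmpty()) {
      return Optional.empty();
    }
    return Optional.of(new Transform2d(p0.get(), p1.get()));
  }
}
```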

Pose buffers dramatically improve **stability and predictability** as well.
They can prevent feedback oscillations caused by injecting corrections at
inconsistent times and enable smoothing, gating, and replay-based estimators.
These are incredibly important for accurate autonomous paths, reliable
auto-aim, and multi-camera fusion.


### Implementation as Virtual Subsystems

The Az-RBSI pose buffer implementation is based on the principle that **Drive
owns the authoritative pose history** via a `poseBuffer` keyed by FPGA
timestamp, and we make sure that buffer is populated using the *same timebase*
as the estimator. Rather than updating the estimator only “once per loop,”
**all high-rate odometry samples** collected since the last loop are replayed
and inserted into the buffer.

We have a series of three Virtual Subsystems that work together to compute the
estimated position of the robot each loop (20 ms), polled in this order:
* Imu
* DriveOdometry
* Vision

The IMU is treated separately from the rest of the drive odometry because we
use its values in the Accelerometer virtual subsystem to compute the
accelerations the robot undergoes. Its `inputs` snapshot is refreshed before
odometry replay runs; during the replay, we prefer the IMU's
**odometry yaw queue** when it exists and is aligned to the drivetrain odometry
timestamps. If not, we fall back to interpolating yaw from `yawBuffer` (or the
"now" yaw if we have no queue). This allows for stable yaw-rate gating
(single-camera measurements discarded if the robot is spinning too quickly)
because `yawRateBuffer` is updated in the same timebase as the odometry replay.
When vision asks, "what is the max yaw rate over `[ts-lookback, ts]`," it is
querying a consistent history instead of a mix of current-time samples.
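
A minimal sketch of that gating query, assuming a `TreeMap`-backed
`yawRateBuffer` keyed by FPGA timestamp (the actual Az-RBSI structure and
names may differ):

```java
import java.util.TreeMap;

public class YawRateGate {
  // Timestamp (s) -> measured yaw rate (rad/s), populated during odometry replay
  private final TreeMap<Double, Double> yawRateBuffer = new TreeMap<>();

  /** Maximum absolute yaw rate observed over [ts - lookback, ts]. */
  public double maxYawRateOver(double ts, double lookback) {
    return yawRateBuffer.subMap(ts - lookback, true, ts, true).values().stream()
        .mapToDouble(Math::abs)
        .max()
        .orElse(0.0);
  }

  /** Example gate: discard single-tag observations while spinning quickly. */
  public boolean passesGate(double ts, double maxAllowedRadPerSec) {
    return maxYawRateOver(ts, 0.1) <= maxAllowedRadPerSec;
  }
}
```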

The `DriveOdometry` virtual subsystem drains the PhoenixOdometryThread queues
to get a canonical timestamp array, from which the `SwerveModulePosition`
snapshots are built for each sample index. The yaw from the IMU
is computed for that same sample time, and then the module positions and yaw are
added to the pose estimator using the `.updateWithTime()` function for each
timestamp in the queue. At the same time, we add the sample to the pose buffer
so that later consumers (vision alignment, gating, smoothing) can ask, "What
did the robot think at time `t`?" Practically, it runs early in the loop,
drains module odometry queues, performs the `updateWithTime(...)` replay, and
keeps the `poseBuffer`, `yawBuffer`, and `yawRateBuffer` coherent.
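
Conceptually, the replay loop looks something like the following sketch,
built on WPILib's `SwerveDrivePoseEstimator`; the field and method names are
assumptions for illustration, not the exact Az-RBSI code:

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;
import edu.wpi.first.math.kinematics.SwerveModulePosition;

public class OdometryReplay {
  private final SwerveDrivePoseEstimator poseEstimator;
  private final TimeInterpolatableBuffer<Pose2d> poseBuffer;

  public OdometryReplay(
      SwerveDrivePoseEstimator poseEstimator, TimeInterpolatableBuffer<Pose2d> poseBuffer) {
    this.poseEstimator = poseEstimator;
    this.poseBuffer = poseBuffer;
  }

  /**
   * Replay every high-rate sample drained from the odometry queues this loop.
   * Each sample carries its FPGA timestamp, the yaw at that time, and the
   * module positions captured at that time.
   */
  public void replay(
      double[] timestamps, Rotation2d[] yawSamples, SwerveModulePosition[][] modulePositions) {
    for (int i = 0; i < timestamps.length; i++) {
      // Advance the estimator to the actual sample time, not "now"
      Pose2d estimated =
          poseEstimator.updateWithTime(timestamps[i], yawSamples[i], modulePositions[i]);
      // Keep the pose history coherent with the same timebase
      poseBuffer.addSample(timestamps[i], estimated);
    }
  }
}
```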

Vision measurements get included *after* odometry has advanced and buffered
poses for the relevant timestamps. Vision reads all camera observations,
applies various gates, chooses one best observation per camera, then fuses them
by picking a fusion time `tF` (newest accepted timestamp), and **time-aligning
each camera estimate from its `ts` to `tF` using Drive’s pose buffer**. The
smoothed/fused result is then injected through the `addVisionMeasurement()`
consumer in `Drive` at the correct timestamp. The key is: we never try to
"correct the present" with delayed vision; we correct the past, and the
estimator/pose buffer machinery carries that correction forward coherently.
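
The time-alignment step can be sketched as follows (names are illustrative;
gating and measurement standard deviations are omitted for brevity):

```java
import java.util.Optional;
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Transform2d;
import edu.wpi.first.math.interpolation.TimeInterpolatableBuffer;

public class VisionFusion {
  private final SwerveDrivePoseEstimator poseEstimator;
  private final TimeInterpolatableBuffer<Pose2d> poseBuffer;

  public VisionFusion(
      SwerveDrivePoseEstimator poseEstimator, TimeInterpolatableBuffer<Pose2d> poseBuffer) {
    this.poseEstimator = poseEstimator;
    this.poseBuffer = poseBuffer;
  }

  /** Carry a camera estimate taken at {@code ts} forward to the fusion time {@code tF}. */
  public Optional<Pose2d> alignToFusionTime(Pose2d cameraEstimate, double ts, double tF) {
    Optional<Pose2d> odomAtTs = poseBuffer.getSample(ts);
    Optional<Pose2d> odomAtTf = poseBuffer.getSample(tF);
    if (odomAtTs.isEmpty() || odomAtTf.isEmpty()) {
      return Optional.empty();
    }
    // Odometry's own motion between the two timestamps...
    Transform2d odomMotion = new Transform2d(odomAtTs.get(), odomAtTf.get());
    // ...applied to the delayed camera estimate to express it at tF
    return Optional.of(cameraEstimate.plus(odomMotion));
  }

  /** Inject the fused result at the timestamp it actually describes. */
  public void inject(Pose2d fusedPose, double tF) {
    poseEstimator.addVisionMeasurement(fusedPose, tF);
  }
}
```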

To guarantee the right computation order, we implemented a simple
**priority/ordering mechanism for VirtualSubsystems** rather than relying on
construction order. Conceptually: `Imu` runs first (refresh sensor snapshot and
yaw queue), then `DriveOdometry` runs (drain odometry queues, replay estimator,
update buffers), then `Vision` runs (query pose history, fuse, inject
measurements), and finally anything downstream (targeting, coordinators, etc.).
With explicit ordering, vision always sees a pose buffer that is current
through the latest replayed timestamps, and its time alignment transform uses
consistent odometry states.
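
One minimal way to get that explicit ordering is sketched below; the actual
RBSI `VirtualSubsystem` base class may register and order its instances
differently:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public abstract class VirtualSubsystem {
  private static final List<VirtualSubsystem> instances = new ArrayList<>();

  private final int priority;

  /** Lower priority values run earlier each loop (e.g., Imu = 0, DriveOdometry = 1, Vision = 2). */
  protected VirtualSubsystem(int priority) {
    this.priority = priority;
    instances.add(this);
    instances.sort(Comparator.comparingInt((VirtualSubsystem vs) -> vs.priority));
  }

  /** Called once per loop, in priority order. */
  public abstract void periodic();

  /** Call once per loop (e.g., from Robot.robotPeriodic()) before anything downstream runs. */
  public static void periodicAll() {
    for (VirtualSubsystem vs : instances) {
      vs.periodic();
    }
  }
}
```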


### Relationship Between Pose Buffering and Hardware Control Subsystems

The `DriveOdometry` virtual subsystem exists to **isolate the heavy,
timing-sensitive replay logic** from the rest of `Drive` "control" behavior.
This separation allows `Drive`’s main subsystem to focus on setpoints/commands,
while `DriveOdometry` guarantees that by the time anything else runs, odometry
state and buffers are already up to date for the current cycle.

The pose buffering system sits cleanly between **raw hardware sampling** and
**high-level control**, acting as a time-synchronized "memory layer" for the
robot’s physical motion. At the bottom, hardware devices are sampled at high
frequency, with measurements timestamped in FPGA time and compensated for CAN
latency. Those timestamped samples are drained and replayed inside
`DriveOdometry`, which feeds the `SwerveDrivePoseEstimator` object. This means
the estimator is not tied to the 20 ms main loop -- it advances according to
the *actual sensor sample times*. The result is a pose estimate that reflects
real drivetrain physics at the rate the hardware can provide, not just the
scheduler tick rate.

On the control side, everything -- heading controllers, auto-alignment, vision
fusion, targeting -- consumes pose data that is both temporally accurate and
historically queryable. Controllers still run at the 20 ms loop cycle, but
they operate on a pose that was built from high-rate, latency-compensated
measurements. When vision injects corrections, it does so at the correct
historical timestamp, and the estimator propagates that correction forward
consistently. The net effect is tighter autonomous path tracking, more stable
aiming, and reduced oscillation under aggressive maneuvers -- because control
decisions are based on a pose model that more faithfully represents the real
robot’s motion and sensor timing rather than a simplified "latest value only"
approximation.


### Behavior when the Robot is Disabled

In normal operation (robot enabled), vision measurements are incorporated using standard Kalman fusion via `addVisionMeasurement()`. This is a probabilistic update: the estimator weighs the vision measurement against the predicted state based on covariance, producing a smooth, statistically optimal correction. Small errors are gently nudged toward vision; large discrepancies are handled proportionally according to the reported measurement uncertainty. This is the correct and intended behavior for a moving robot.

When the robot is disabled, however, the estimator is no longer operating in a dynamic system. Wheel odometry is effectively static, process noise collapses, and repeated vision corrections can cause pathological estimator behavior (particularly in translation). Instead of performing another Kalman update in this regime, the system switches to a controlled pose blending strategy. Each accepted vision pose is blended toward the current estimate using a fixed interpolation factor (e.g., 10–20%), and the estimator is explicitly reset to that blended pose. This produces a gradual convergence toward the vision solution without allowing covariance collapse or runaway corrections.
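
A sketch of that disabled-mode blending, using WPILib's `Pose2d.interpolate()`
and an explicit estimator reset (the blend factor, class, and method names
here are illustrative assumptions, not the exact Az-RBSI code):

```java
import edu.wpi.first.math.estimator.SwerveDrivePoseEstimator;
import edu.wpi.first.math.geometry.Pose2d;
import edu.wpi.first.math.geometry.Rotation2d;
import edu.wpi.first.math.kinematics.SwerveModulePosition;

public class DisabledVisionBlend {
  // Fraction of the gap to close per accepted vision pose (10-20% works well)
  private static final double kBlendFactor = 0.15;

  private final SwerveDrivePoseEstimator poseEstimator;

  public DisabledVisionBlend(SwerveDrivePoseEstimator poseEstimator) {
    this.poseEstimator = poseEstimator;
  }

  /** While disabled, walk the estimate toward vision instead of doing a Kalman update. */
  public void acceptWhileDisabled(
      Pose2d visionPose, Rotation2d currentYaw, SwerveModulePosition[] currentPositions) {
    Pose2d current = poseEstimator.getEstimatedPosition();
    // Pose2d.interpolate() blends translation and rotation by the given fraction
    Pose2d blended = current.interpolate(visionPose, kBlendFactor);
    // Explicit reset: no covariance update, just move the estimate
    poseEstimator.resetPosition(currentYaw, currentPositions, blended);
  }
}
```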

The result is a stable and intuitive pre-match behavior: while disabled, the robot will slowly “walk” its pose estimate toward the vision solution if needed, but it will not snap violently or diverge numerically. Once enabled, the system seamlessly returns to proper Kalman fusion, where vision and odometry interact in a fully dynamic and statistically grounded manner.

### Design Rationale – Split Enabled/Disabled Vision Handling

The core reason for separating enabled and disabled behavior is that the Kalman filter assumes a **dynamic system model**. When the robot is enabled, the drivetrain is actively moving and the estimator continuously predicts forward using wheel odometry and gyro inputs. Vision measurements then act as bounded corrections against that prediction. In this regime, Kalman fusion is mathematically appropriate: process noise and measurement noise are balanced, covariance evolves realistically, and corrections remain stable.

When the robot is disabled, however, the system is no longer dynamic. Wheel distances stop changing, gyro rate is near zero, and process noise effectively collapses. If vision continues injecting measurements into the estimator with timestamps that are slightly offset from the estimator’s internal state, the filter can enter a degenerate regime. Because translational covariance may shrink aggressively while no true motion exists, even small inconsistencies between time-aligned vision and odometry can produce disproportionately large corrections. This is why translation can numerically “explode” while rotation often remains stable—rotation is typically better constrained by the gyro and wraps naturally, while translation depends more directly on integrated wheel deltas and covariance scaling.

The disabled blending pattern avoids this pathological case by temporarily stepping outside strict Kalman fusion. Instead of applying repeated measurement updates against a near-zero process model, we treat vision as a slowly converging reference and explicitly blend the current estimate toward it. This maintains numerical stability, prevents covariance collapse artifacts, and still allows the pose to settle accurately before a match begins.

Once the robot transitions back to enabled, the estimator resumes normal probabilistic fusion with a healthy process model. The split-mode approach therefore preserves mathematical correctness while the robot is moving, and guarantees numerical stability while it is stationary.
80 changes: 80 additions & 0 deletions doc/RBSI-Vision.md
@@ -0,0 +1,80 @@
# Az-RBSI Vision Integration

This page includes detailed steps for integrating robot vision for your
2026 REBUILT robot.

--------

### PhotonVision

The preferred method for adding vision to your robot is with [PhotonVision](
https://photonvision.org/). This community-developed open-source package
combines coprocessor-based camera control and analysis with a Java library
for consuming the processed targeting information in the robot code.
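
For reference, a minimal sketch of consuming PhotonVision results in robot
code with recent PhotonLib versions; the camera name is a placeholder, and
real code should gate and filter these results:

```java
import org.photonvision.PhotonCamera;
import org.photonvision.targeting.PhotonPipelineResult;
import org.photonvision.targeting.PhotonTrackedTarget;

public class VisionExample {
  // The name must match the camera name configured in the PhotonVision UI
  private final PhotonCamera camera = new PhotonCamera("front_left");

  public void periodic() {
    // Each call returns every result produced since the last call
    for (PhotonPipelineResult result : camera.getAllUnreadResults()) {
      if (result.hasTargets()) {
        PhotonTrackedTarget best = result.getBestTarget();
        System.out.printf(
            "Saw tag %d at yaw %.1f deg%n", best.getFiducialId(), best.getYaw());
      }
    }
  }
}
```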

#### Recommended Setup with Az-RBSI

We recommend using Arducam [OV9281](https://www.amazon.com/dp/B096M5DKY6)
(black & white) and/or [OV9782](https://www.amazon.com/dp/B0CLXZ29F9) (color)
cameras for robot vision due to their Global Shutter, Low Distortion, and USB
connection. In addition to the lens delivered with the camera, supplementary
lenses may be purchased to vary the FOV available to the detector for various
robot applications, such as [Low-Distortion](
https://www.amazon.com/dp/B07NW8VR71) or [General Purpose](
https://www.amazon.com/dp/B096V2NP2T).

For the coprocessor that controls the cameras and analyzes the images for
AprilTag and gamepiece detection, we recommend using one or two Orange Pi 5
single-board computers -- although PhotonVision does support a number of
[different coprocessor options](
https://docs.photonvision.org/en/latest/docs/quick-start/quick-install.html).
As described in the [Getting Started Guide](RBSI-GSG.md), we include a 3D print
for a case that can hold one or two of these computers.

#### Setting up PhotonVision on the Coprocessor

Download the appropriate [disk image](
https://github.com/PhotonVision/photonvision/releases/tag/v2026.2.1) for your
coprocessor and burn it to an SD card using the [Raspberry Pi Imager](
https://www.raspberrypi.com/software). Connect the powered-on coprocessor
to the Vivid Hosting radio (port 2 or 3) via ethernet, or to a network switch
connected to the radio, and open the PhotonVision web interface
at the address ``http://photonvision.local:5800``.

Before you connect the coprocessor to your robot, be sure to set your team
number, set the IP address to "Static" and give it the number ``10.TE.AM.11``,
where "TE.AM" is the appropriate parsing of your team number into an IP address,
as used by your robot radio and RoboRIO. If desired, you can also give your
coprocessor a hostname.

![PhotonVision Network Settings](PV_Network.png)

We suggest you give your first coprocessor the static IP address
``10.TE.AM.11``, and your second coprocessor (if desired) ``10.TE.AM.12``.
The static address allows for more stable operation, and these particular
addresses do not conflict with other devices on your robot network.

Plug in cameras (two or three per coprocessor) and navigate to the Camera
Configs page (see below). Activate the cameras.

![PhotonVision Camera Configs](PV_Cameras.png)

#### Configuring and Calibrating your Cameras

This is the most important part!

Instructions are in the [PhotonVision Documentation](
https://docs.photonvision.org/en/latest/docs/calibration/calibration.html).

You should consider calibrating your cameras early and often, including daily
during a competition to ensure that the cameras are reporting as accurate a
pose as possible for your odometry. Also, double-check your calibration by
using a measuring tape: compare the vision-derived distance reported by each
camera to one or more AprilTags against the measured distance.


#### Using PhotonVision for Vision Simulation

This is an advanced topic, and is therefore in the Restricted Section. (More
information about vision simulation to come in a future release.)

![Restricted Section](restricted_section.jpg)
Binary file added doc/restricted_section.jpg