Gray Matter Workshop

Implementing Vision

Integrating Vision into Robot Code

Connecting vision systems to robot code involves reading NetworkTables data, integrating AprilTag measurements into odometry, and using vision feedback for control. This section demonstrates practical vision integration patterns.

Key Concept: Vision data dramatically improves autonomous accuracy and enables intelligent teleop assistance.

Vision Implementation Strategy

Implementing vision requires a systematic approach to ensure reliable pose estimation. Follow these steps to integrate Limelight vision data into your robot's odometry system while maintaining accuracy and trust.

🚀 Implementation Sequence

1

LimelightHelpers Library

First, import the Limelight helper library available on GitHub. It contains pre-built NetworkTables wrappers that provide clean access to vision data without manual NetworkTables subscriptions.

2

Limelight Subsystem

Next, create a new subsystem that pulls values through LimelightHelpers. This subsystem needs three things to add a measurement to the pose estimator: a pose, a timestamp, and standard deviations (how much we trust the reading). The pose and timestamp come directly from LimelightHelpers; the standard deviations come from a formula we write ourselves to decide how much to trust vision.

3

Utilizing CTRE Pose Estimator

Once we have the three values above, we can pass them to the CTRE pose estimator, which has built-in methods that accept exactly these values. Because the pose estimator lives in the drivetrain, the vision subsystem needs a reference to the drivetrain in order to add measurements to it.

4

RobotContainer Setup

At this point the pose estimator lives in the drivetrain, so in RobotContainer we construct a vision subsystem that takes the drivetrain and adds vision measurements to it.

Why This Approach?

  • Library First: LimelightHelpers abstracts away NetworkTables complexity.
  • Validation Layer: The Limelight subsystem filters out bad measurements before they reach your pose estimator.
  • Dynamic Trust: Standard deviations adjust based on measurement quality, preventing bad data from degrading odometry.

Standard Deviation & Filtering

Trusting vision data correctly is just as important as receiving it. We use a combination of dynamic standard deviations and filtering to ensure only high-quality data affects our odometry.

Formula for Workshop

We use a simple formula based on tag count and average tag distance: as the robot gets farther from the tags, the standard deviation increases (trust decreases), and as more tags become visible, the standard deviation decreases (trust increases).

Standard Deviation Formula (Java)
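The workshop's tuned constants live in the repository code; the sketch below only illustrates the kind of formula described above. The 0.1 and 0.3 scale factors and the class name VisionStdDevs are illustrative assumptions, not the workshop's values.

import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N3;

public final class VisionStdDevs {
    private VisionStdDevs() {}

    /**
     * Returns [x, y, theta] standard deviations for one vision measurement.
     * Larger values mean the pose estimator trusts the measurement less.
     */
    public static Matrix<N3, N1> fromTagData(int tagCount, double avgTagDistanceMeters) {
        // No tags in view: effectively reject the measurement.
        if (tagCount == 0) {
            return VecBuilder.fill(Double.MAX_VALUE, Double.MAX_VALUE, Double.MAX_VALUE);
        }
        // Trust decreases with the square of distance and increases with tag count.
        double xyStdDev = 0.1 * avgTagDistanceMeters * avgTagDistanceMeters / tagCount;
        double rotStdDev = 0.3 * avgTagDistanceMeters * avgTagDistanceMeters / tagCount;
        return VecBuilder.fill(xyStdDev, xyStdDev, rotStdDev);
    }
}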

Suggested Filtering Strategies

Beyond the formula, we apply several filters to reject bad data entirely:

  • Field Boundary Check: Reject poses that are outside the field perimeter.
  • Ambiguity Filter: For single-tag detections, reject if the ambiguity score is too high (indicating the tag might be flipped).
  • Z-Height Check: Reject poses where the robot is calculated to be flying or underground.
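As a hedged Java sketch of these checks (the field dimensions, Z tolerance, and ambiguity cutoff below are illustrative assumptions; tune them for your field map and camera):

import edu.wpi.first.math.geometry.Pose3d;

public final class VisionFilters {
    // Illustrative values; adjust for your field map and error tolerance.
    private static final double FIELD_LENGTH_METERS = 16.54;
    private static final double FIELD_WIDTH_METERS = 8.21;
    private static final double MAX_Z_ERROR_METERS = 0.25;
    private static final double MAX_SINGLE_TAG_AMBIGUITY = 0.3;

    private VisionFilters() {}

    /** Returns true only if the measurement passes every rejection filter. */
    public static boolean isAcceptable(Pose3d robotPose, int tagCount, double ambiguity) {
        // Field boundary check: reject poses outside the field perimeter.
        boolean onField = robotPose.getX() >= 0.0 && robotPose.getX() <= FIELD_LENGTH_METERS
                && robotPose.getY() >= 0.0 && robotPose.getY() <= FIELD_WIDTH_METERS;

        // Z-height check: reject poses where the robot appears to be flying or underground.
        boolean onGround = Math.abs(robotPose.getZ()) <= MAX_Z_ERROR_METERS;

        // Ambiguity filter: only applies to single-tag detections.
        boolean unambiguous = tagCount > 1 || ambiguity <= MAX_SINGLE_TAG_AMBIGUITY;

        return onField && onGround && unambiguous;
    }
}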

Camera Setup & Calibration

Accurate camera calibration ensures vision measurements integrate correctly with your odometry, providing reliable pose estimates.

Limelight Camera Configuration

Set up your Limelight camera with proper positioning, focus, and calibration.

1

Change Pipeline to AprilTag

Access the Limelight web interface and switch the active pipeline to AprilTag mode. This enables 3D pose estimation using AprilTags for accurate robot localization.

2

Adjust Exposure

In the camera settings, set the exposure as low as possible while still reliably detecting AprilTags. Lower exposure reduces motion blur and improves tag detection accuracy during fast robot movement.

3

Set Camera Offsets

Accurately measure and enter your camera's position and angle relative to the robot's center. This transform is critical for converting camera detections into accurate field-relative robot poses. Follow the Limelight documentation for detailed instructions; a code-side sketch for setting these offsets appears after these steps.

4

Camera Calibration

Use a Limelight calibration board to calibrate your camera. This corrects for lens distortion and improves pose accuracy, especially at the edges of the field of view. Follow the Limelight Calibration Guide for detailed instructions.
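Camera offsets are normally entered in the Limelight web interface, but they can also be pushed from robot code. The snippet below assumes a recent LimelightHelpers version that provides setCameraPose_RobotSpace (verify against the version you import); the measurements shown are placeholders for your own.

// Typically called once, e.g. in robotInit() or the vision subsystem constructor.
// Replace these placeholder measurements with your camera's actual mounting position.
LimelightHelpers.setCameraPose_RobotSpace(
        "limelight",
        0.30,  // forward of robot center, meters
        0.00,  // sideways offset, meters (see Limelight docs for sign convention)
        0.20,  // up from the robot origin, meters
        0.0,   // roll, degrees
        15.0,  // pitch, degrees
        0.0);  // yaw, degrees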

The custom field map includes AprilTag positions specifically arranged for workshop exercises. You'll need to upload this file to your Limelight to match the physical setup. After uploading the map, reboot your Limelight for the changes to take effect.

Download Custom Field Map

Reading Limelight Data

Limelight publishes vision data to NetworkTables. The LimelightHelpers library (published by Limelight on GitHub) provides a clean API for reading this data without direct NetworkTables access.

LimelightHelpers.java

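The helper class is several hundred lines and is imported unmodified from Limelight's GitHub, so it is not reproduced here. Below is a minimal usage sketch of the calls this page relies on, assuming the published LimelightHelpers API (verify names like getBotPoseEstimate_wpiBlue against the version you download).

import edu.wpi.first.math.geometry.Pose2d;

public class VisionReadExample {
    /** Reads the latest blue-alliance-origin pose estimate from a Limelight named "limelight". */
    public static void readOnce() {
        LimelightHelpers.PoseEstimate estimate =
                LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");

        if (estimate != null && estimate.tagCount > 0) {
            Pose2d visionPose = estimate.pose;                   // field-relative robot pose
            double timestampSeconds = estimate.timestampSeconds; // latency-compensated capture time
            double avgTagDistance = estimate.avgTagDist;         // average distance to the tags, meters
            // These values feed the standard-deviation formula and the pose estimator.
        }
    }
}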

Limelight.java

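The workshop's actual Limelight subsystem lives on the 3-Limelight branch of the Workshop-Code repository. The sketch below is an illustrative approximation of the structure described in the implementation sequence, not the file itself; it assumes the CTRE-generated CommandSwerveDrivetrain class and a recent Phoenix 6 release that provides Utils.fpgaToCurrentTime for timebase conversion.

import com.ctre.phoenix6.Utils;
import edu.wpi.first.math.Matrix;
import edu.wpi.first.math.VecBuilder;
import edu.wpi.first.math.numbers.N1;
import edu.wpi.first.math.numbers.N3;
import edu.wpi.first.wpilibj2.command.SubsystemBase;

public class Limelight extends SubsystemBase {
    private final CommandSwerveDrivetrain drivetrain;

    public Limelight(CommandSwerveDrivetrain drivetrain) {
        this.drivetrain = drivetrain;
    }

    @Override
    public void periodic() {
        var estimate = LimelightHelpers.getBotPoseEstimate_wpiBlue("limelight");
        if (estimate == null || estimate.tagCount == 0) {
            return; // nothing usable this loop
        }

        // Filters from the section above (field bounds, ambiguity, Z-height) would run here.

        // Trust falls off with distance and improves with more visible tags.
        double dist = estimate.avgTagDist;
        double xyStdDev = 0.1 * dist * dist / estimate.tagCount;
        Matrix<N3, N1> stdDevs = VecBuilder.fill(xyStdDev, xyStdDev, 3.0 * xyStdDev);

        // The CTRE pose estimator runs on its own timebase, so the FPGA capture
        // timestamp is converted before the measurement is added.
        drivetrain.addVisionMeasurement(
                estimate.pose,
                Utils.fpgaToCurrentTime(estimate.timestampSeconds),
                stdDevs);
    }
}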

RobotContainer.java

RobotContainer includes the setup for vision integration, showing how the Limelight subsystem connects with the swerve drivetrain and command bindings.

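Again, the real file is on the 3-Limelight branch; the hedged sketch below shows only the vision-related wiring. TunerConstants.createDrivetrain() is the factory method produced by the CTRE swerve project generator and is an assumption here; adjust to match your project.

import frc.robot.generated.TunerConstants;

public class RobotContainer {
    // The drivetrain owns the CTRE pose estimator, so it is created first.
    private final CommandSwerveDrivetrain drivetrain = TunerConstants.createDrivetrain();

    // The vision subsystem takes the drivetrain so its periodic loop can add
    // vision measurements to the pose estimator.
    private final Limelight limelight = new Limelight(drivetrain);

    public RobotContainer() {
        configureBindings();
    }

    private void configureBindings() {
        // Drive and operator command bindings go here, unchanged from the non-vision setup.
    }
}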

Workshop Code Implementation

The Workshop-Code repository includes a complete vision implementation on the 3-Limelight branch, demonstrating Limelight integration with swerve drive and odometry. The code on this page follows that branch; refer to the repository for the full, working implementations you can reference and adapt for your own robot.

Vision Best Practices

Do

  • Validate vision data before using it
  • Account for latency (LimelightHelpers' latency-compensated timestamps handle this automatically)
  • Use appropriate standard deviations
  • Test different exposures (lower is better)
  • Log vision data for debugging

Don't

  • Trust vision measurements blindly
  • Ignore latency compensation
  • Use vision as your only odometry source
  • Forget to tune camera settings
  • Skip testing in match conditions

Additional Resources

What's Next?

Up Next: Dynamic Flywheel

With vision integrated into your odometry, you're ready to implement dynamic flywheel control using vision-based distance measurements to shoot accurately from anywhere on the field.