PhotonVision

public class PhotonVision extends SubsystemBase

The PhotonVision class is a subsystem that interfaces with multiple PhotonVision cameras to provide vision tracking and pose estimation. It is a Singleton that manages several CameraModules and selects the best result based on pose ambiguity.

This subsystem provides methods to get the estimated global pose of the robot, the distance to targets, and the yaw of detected AprilTags. It also provides methods to check if a tag is visible and get the pivot position based on distance calculations.
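The Singleton pattern mentioned above can be sketched in plain Java. The class and field below are illustrative stand-ins, not the real subsystem: callers never construct the object themselves and instead go through the shared INSTANCE.

```java
// Sketch of the Singleton access pattern this subsystem uses.
// "Vision" is a hypothetical stand-in for the real PhotonVision class.
class Vision {
    // One shared instance, created eagerly when the class loads.
    public static final Vision INSTANCE = new Vision();

    // Private constructor: no outside code can create a second instance.
    private Vision() {}

    // Accessors such as getYaw()/getDist() would live here in the real class.
}
```

Robot code then reads vision data through `Vision.INSTANCE` rather than holding its own copy of the subsystem.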

Properties

private final PhotonTrackedTarget currentTarget

Gets the current tracked target.

private final Double dist
private final Double distanceToAprilTag
private final Transform3d estimatedGlobalPose
public final static PhotonVision INSTANCE

Gets the current target pose ambiguity.

private final Double y
private final Double yaw

Gets the current yaw angle to the target.

Functions

public final PhotonTrackedTarget getCurrentTarget()
public final Double getDist()

Calculates the straight-line distance to the currently tracked AprilTag.
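A straight-line distance like this is typically computed from camera geometry. The sketch below mirrors the standard PhotonUtils.calculateDistanceToTargetMeters trigonometry from PhotonLib; this page does not show getDist()'s internals, so treat the formula as an assumption about how the distance is derived.

```java
class TagDistance {
    /**
     * Estimates the ground-plane distance to a target from camera geometry:
     *   distance = (targetHeight - cameraHeight) / tan(cameraPitch + targetPitch)
     * Heights are in meters; angles are in radians. targetPitchRadians is the
     * vertical angle to the target as reported by the camera.
     */
    static double distanceToTarget(double cameraHeightMeters,
                                   double targetHeightMeters,
                                   double cameraPitchRadians,
                                   double targetPitchRadians) {
        return (targetHeightMeters - cameraHeightMeters)
                / Math.tan(cameraPitchRadians + targetPitchRadians);
    }
}
```

For example, a camera 0.5 m off the floor looking at a tag centered 1.5 m up, seen at a 45° vertical angle with a level camera, works out to 1.0 m.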

public final Transform3d getEstimatedGlobalPose()

Gets the estimated global pose of the robot as a Transform3d.

public final EstimatedRobotPose getEstimatedGlobalPose(Pose2d prevEstimatedRobotPose)

Gets the estimated global pose of the robot using the best available camera.

public final Double getY()
public final Double getYaw()
public final Boolean hasTag()

Checks if there is a visible AprilTag.
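A common consumer pattern pairs hasTag() with getYaw() to turn the robot toward a visible tag. The sketch below is self-contained and illustrative: the gain K_P, the output clamp, and the sign handling are assumptions, not part of this subsystem's API. It assumes PhotonVision's convention that positive yaw means the target is to the camera's right.

```java
class AimHelper {
    // Illustrative proportional gain and clamp; tune for a real drivetrain.
    private static final double K_P = 0.02;
    private static final double MAX_OUTPUT = 0.5;

    /**
     * Converts a target yaw (degrees, positive = target to the right) into a
     * clamped proportional turn command. Returns 0 when no tag is visible,
     * mirroring a hasTag() guard before using getYaw().
     */
    static double turnCommand(boolean hasTag, double yawDegrees) {
        if (!hasTag) {
            return 0.0; // nothing to aim at
        }
        double out = -K_P * yawDegrees; // negative: rotate toward the target
        return Math.max(-MAX_OUTPUT, Math.min(MAX_OUTPUT, out));
    }
}
```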

public Unit periodic()

This method is called periodically by the CommandScheduler. It updates the tracked targets, selects the best camera based on pose ambiguity, and updates dashboard information.
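The "best camera by pose ambiguity" selection can be sketched in plain Java. CameraResult is a hypothetical stand-in for a CameraModule's latest result, and using -1 as a "no estimate" sentinel is an assumption; lower ambiguity means a more trustworthy pose, so the least ambiguous camera wins.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Hypothetical stand-in for one camera's latest tracked-target result.
record CameraResult(String cameraName, double poseAmbiguity) {}

class BestCameraSelector {
    /**
     * Picks the camera whose best target has the lowest pose ambiguity,
     * skipping cameras with no usable estimate (ambiguity < 0).
     */
    static Optional<CameraResult> selectBest(List<CameraResult> results) {
        return results.stream()
                .filter(r -> r.poseAmbiguity() >= 0)
                .min(Comparator.comparingDouble(CameraResult::poseAmbiguity));
    }
}
```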