Hey team! Great job getting started on the AI Vision component for the robot! It's fantastic to see exploration into machine learning for vision, which can be a huge advantage on the field.
Summary
This pull request introduces a Python notebook for vision model training and a placeholder Java file for PhotonVision configuration. It's a solid first step in setting up our vision pipeline!
Positive Highlights
Initiative in Vision: It's awesome to see you diving into AI vision. This area has a lot of potential for our robot!
Structured Learning: The tensorflow_datasets.ipynb notebook provides a good, well-commented example of a TensorFlow Keras training pipeline, which is a great foundation for learning.
Configuration Separation: Creating photonconfig.java is a good practice for organizing robot-specific constants related to PhotonVision.
Suggestions
For src/main/java/frc/robot/subsystems/Vision/Training_Notebooks/tensorflow_datasets.ipynb
This notebook looks like a good general example for training a neural network!
Clarify Goal for FRC Object Detection: The PR description mentions "object detection" for our vision model, but this notebook specifically focuses on the MNIST dataset, which is typically used for image classification (identifying handwritten digits). For FRC, object detection usually means finding and localizing specific game pieces (like notes) within an image, typically using bounding boxes.
Recommendation: Consider adapting this notebook, or creating a new one, that demonstrates a workflow more aligned with FRC object detection. This would involve using a dataset of FRC game pieces (or a similar object detection dataset like COCO or Pascal VOC) and a model architecture suitable for object detection (e.g., YOLO, SSD, or EfficientDet). This will help bridge the gap between this learning example and our robot's specific needs.
Educational Context: MNIST is a fantastic "hello world" for neural networks, but object detection requires different model architectures and data preparation (like bounding box annotations) compared to simple classification. PhotonVision or similar tools often expect models trained specifically for object detection.
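To make the classification-vs-detection distinction concrete, here is a minimal, dependency-free sketch of intersection over union (IoU), the core overlap metric that detection models like YOLO or SSD are trained and evaluated with. The `(x_min, y_min, x_max, y_max)` box format is an assumption for this illustration; datasets and frameworks vary in how they encode boxes.

```python
def iou(box_a, box_b):
    """Intersection over union of two boxes in (x_min, y_min, x_max, y_max) form."""
    # Corners of the overlap rectangle (empty if the boxes don't intersect)
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

# Identical boxes overlap perfectly
print(iou((0, 0, 10, 10), (0, 0, 10, 10)))  # → 1.0
# Half-shifted boxes overlap only partially
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))
```

Nothing like this appears in a classification pipeline, which is a quick way to sanity-check whether a tutorial you're following is actually about detection.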
Location of Notebooks: While it's great to have these notebooks, typically src/main/java is reserved for our robot's Java code.
Recommendation: It might be helpful to store training notebooks in a separate, top-level directory (e.g., vision_training/ or ml_models/) outside of the src/main/java structure. This keeps the robot code clean and separates development tools from deployable robot software.
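To make the suggested layout concrete, here is one possible structure; the directory names below are suggestions, not existing paths in this repo:

```shell
# Hypothetical layout: keep ML training assets out of src/main/java.
# Directory names here are suggestions, not existing paths in the repo.
mkdir -p vision_training/notebooks vision_training/datasets vision_training/exported_models
# The notebook would then move with something like:
#   git mv src/main/java/frc/robot/subsystems/Vision/Training_Notebooks/tensorflow_datasets.ipynb \
#       vision_training/notebooks/
ls vision_training
```

Keeping exported model files in their own directory also makes it easier to .gitignore large training artifacts while still versioning the notebooks.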
For src/main/java/frc/robot/subsystems/Vision/photonconfig.java
This is a good start for a configuration file!
Type-Safe Units: When you start adding actual configuration values (like camera mounting heights, focal lengths, or target dimensions), remember to use WPILib's type-safe units. This helps prevent common conversion errors and makes the code much clearer.
Example:

```java
import edu.wpi.first.units.*;
import static edu.wpi.first.units.Units.*;

/** Configuration constants for the robot's PhotonVision system. */
public class PhotonConfig {
    // Good: type-safe units make the unit explicit and prevent conversion errors
    public static final Measure<Distance> CAMERA_HEIGHT = Meters.of(0.5);
    public static final Measure<Angle> CAMERA_PITCH = Degrees.of(25.0);

    // Bad (example of what to avoid for physical measurements):
    // public static final double CAMERA_MOUNT_X = 0.1; // What units?
}
```
Javadoc Documentation: As this class grows to hold important configuration, adding a Javadoc comment to the class and any public constants will be very helpful for future team members to understand its purpose and the meaning of each value.
Questions
Could you share a bit more about how you plan to transition from this MNIST example to training a model for FRC-specific object detection? What game pieces are you hoping to detect first?
Do you have a plan for how the trained model (once created) will be integrated into the robot's Java code using PhotonVision?
Keep up the excellent work! This is an exciting area to explore, and I'm looking forward to seeing our robot gain some vision capabilities!
This review was automatically generated by AI. Please use your judgment and feel free to discuss any suggestions!