This project develops a ROS solution that enables Baxter to autonomously play chess. Building on the provided Baxter packages and tutorials, Baxter sets up the chessboard with at least 5 pieces and then moves each piece through a sequence of valid chess moves.
The full solution utilizes ROS, Gazebo, RViz, and MoveIt. A demonstration video and the source code are included in the repository.
🎥 Click here to watch Baxter in action! 🎥
Our approach to the problem was threefold:
- Implementing the broadcaster and listener.
- Placing the chess pieces autonomously.
- Executing a sequence of valid chess moves.
For a detailed breakdown of our approach, the challenges faced, and our proposed improvements, refer to the Reflective Analysis section below.
- Coursework Description: Contains instructions and requirements for the coursework.
- Video Demonstration: A visual showcase of Baxter completing the tasks.
- ROS Package: Contains the ROS package with all the source code.
- Chess Models: All the `.sdf` models of the chessboard and pieces.
- ROS
- Gazebo
- RViz
- MoveIt
```shell
cd ~/rf_ws
git clone your-repo-link
catkin_make
```
Before launching, rename the package to `chess_baxter`.
- Startup:
  ```shell
  roslaunch baxter_gazebo baxter_world.launch
  ```
- Enable the robot and start the joint trajectory action server:
  ```shell
  rosrun baxter_tools enable_robot.py -e
  rosrun baxter_interface joint_trajectory_action_server.py
  ```
- Enable grippers:
  ```shell
  roslaunch baxter_moveit_config baxter_grippers.launch
  ```
- Spawn the chessboard:
  ```shell
  rosrun chess_baxter spawn_chessboard.py
  ```
- Broadcast piece positions:
  ```shell
  rosrun chess_baxter gazebo2tfframe.py
  ```
- Place the chess pieces:
  ```shell
  rosrun chess_baxter pick_and_place_moveit.py
  ```
- Play chess:
  ```shell
  rosrun chess_baxter play_chess
  ```
- Delete the board:
  ```shell
  rosrun chess_baxter delete_chess_game.py
  ```
Our system uses a broadcaster node to publish the position and orientation of each block. A listener within the pick-and-place node then parses these transforms to obtain the pose of each block relative to Baxter's base, which lets Baxter determine how to pick up each block.
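The frame arithmetic behind that lookup can be illustrated without ROS: the pose of a block relative to the base is the composition of the inverted world→base transform with the world→block transform. A minimal numpy sketch (hypothetical poses, rotations about z only; the real package uses tf rather than this helper):

```python
import numpy as np

def make_transform(translation, yaw):
    """Build a 4x4 homogeneous transform from a translation and a rotation about z."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = translation
    return T

def pose_relative_to_base(world_T_base, world_T_block):
    """Pose of the block in the base frame: (world_T_base)^-1 @ world_T_block."""
    return np.linalg.inv(world_T_base) @ world_T_block

# Example: base at (1, 0, 0), a block at (1.5, 0.3, 0.8), both in the world frame.
world_T_base = make_transform([1.0, 0.0, 0.0], 0.0)
world_T_block = make_transform([1.5, 0.3, 0.8], 0.0)
base_T_block = pose_relative_to_base(world_T_base, world_T_block)
print(base_T_block[:3, 3])  # block position expressed in the base frame
```

In the actual system, tf performs this composition for us when the listener looks up the block frame relative to the base frame.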
Assumptions made:
- Chess pieces are given ground truth positions, eliminating the need for environmental scanning.
- No obstacles or barriers obstruct the robot's movement.
- All chess pieces are represented as cubes, simplifying the gripping process.
One major challenge was the inconsistency of simulation results: the same commands sometimes produced different outcomes due to environmental and software limitations.
Solutions:
- Initialize conditions, ensuring the robot always starts from the same state.
- Run multiple simulations and average over the results for more consistent outcomes.
- Add checks that ensure the robot reaches a specific pose before issuing the next command.
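The last check can be sketched as a simple tolerance test that blocks the next command until the measured pose is close enough to the target. The helper names and the polling callback below are illustrative, not part of the actual package:

```python
import numpy as np

def pose_reached(current, target, pos_tol=0.01):
    """True when the end-effector position is within pos_tol metres of the target."""
    return np.linalg.norm(np.asarray(current) - np.asarray(target)) <= pos_tol

def wait_until_reached(get_pose, target, pos_tol=0.01, max_checks=100):
    """Poll a pose-reading callback until the target is reached or we give up."""
    for _ in range(max_checks):
        if pose_reached(get_pose(), target, pos_tol):
            return True
    return False

# Simulated pose readings converging on the target position.
readings = iter([[0.5, 0.2, 0.3], [0.51, 0.1, 0.25], [0.6, 0.1, 0.2]])
ok = wait_until_reached(lambda: next(readings), [0.6, 0.1, 0.2])
```

In practice `get_pose` would read the end-effector pose from the robot interface, and a `rospy.sleep` between checks would throttle the polling.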
For a deeper dive into the challenges faced and our strategies for overcoming them, refer to the full analysis.
To replace the Canny edge detector and image moments from Lab 5, a deep learning model could be trained on an image database of chess pieces. The robot vision system must be able to classify the type of each piece and estimate its location and orientation.
Steps include:
- Annotate images for orientation and position.
- Preprocess data, resize images, normalize pixel values, and split datasets.
- Design and train a convolutional neural network (CNN) for classification.
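The preprocessing step above can be sketched with plain numpy (normalising pixel values and splitting the dataset); the CNN itself would be built in a framework such as PyTorch, which is omitted here. All names and array shapes are illustrative:

```python
import numpy as np

def preprocess(images, labels, train_frac=0.8, seed=0):
    """Normalise pixel values to [0, 1] and split into train/test sets."""
    images = np.asarray(images, dtype=np.float32) / 255.0  # normalise pixels
    labels = np.asarray(labels)
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(images))                     # shuffle before splitting
    cut = int(train_frac * len(images))
    train, test = idx[:cut], idx[cut:]
    return (images[train], labels[train]), (images[test], labels[test])

# Toy dataset: ten 8x8 grayscale "piece" images with integer class labels.
images = np.random.randint(0, 256, size=(10, 8, 8))
labels = np.arange(10)
(train_x, train_y), (test_x, test_y) = preprocess(images, labels)
```

A real pipeline would also resize images to the network's input resolution and hold out a validation split for tuning.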
Special thanks to:
- Dr. Gerardo Aragon-Camarasa
- Lab Demonstrators: Florent Audonnet and Anith Ravindran