- 0) ROS2 + Arm Bringup (Simulation First) — Create a ROS2 workspace, bring up the arm in RViz/Gazebo with a clean launch pipeline.
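
  A minimal sketch of such a launch file, assuming a description package named `arm_description` with an `arm.urdf.xacro` (both placeholders for your own project):

  ```python
  # bringup.launch.py -- load the xacro, publish TF, open RViz.
  import os
  from ament_index_python.packages import get_package_share_directory
  from launch import LaunchDescription
  from launch.substitutions import Command
  from launch_ros.actions import Node
  from launch_ros.parameter_descriptions import ParameterValue

  def generate_launch_description():
      xacro_file = os.path.join(
          get_package_share_directory('arm_description'), 'urdf', 'arm.urdf.xacro')
      robot_description = ParameterValue(Command(['xacro ', xacro_file]),
                                         value_type=str)
      return LaunchDescription([
          # Publishes TF frames computed from the URDF and /joint_states.
          Node(package='robot_state_publisher', executable='robot_state_publisher',
               parameters=[{'robot_description': robot_description}]),
          # GUI sliders so joints can be moved before any controller exists.
          Node(package='joint_state_publisher_gui',
               executable='joint_state_publisher_gui'),
          Node(package='rviz2', executable='rviz2'),
      ])
  ```
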
- 1) URDF/Xacro Modeling of Your Arm — Build the arm model (links/joints, limits, inertias) and validate it in RViz with TF.
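
  Before loading the model anywhere, it helps to sanity-check it in code. A sketch using the `urdf_parser_py` module (from the `urdfdom_py` package), assuming the xacro has already been expanded to `arm.urdf`:

  ```python
  # check_urdf.py -- verify the model parses and every actuated joint has limits.
  from urdf_parser_py.urdf import URDF

  robot = URDF.from_xml_file('arm.urdf')
  print(f'{len(robot.links)} links, {len(robot.joints)} joints')
  for joint in robot.joints:
      if joint.type in ('revolute', 'prismatic'):
          # ros2_control and MoveIt both need <limit> on actuated joints.
          assert joint.limit is not None, f'{joint.name} is missing <limit>'
          print(f'{joint.name}: [{joint.limit.lower:.2f}, {joint.limit.upper:.2f}] '
                f'effort={joint.limit.effort} velocity={joint.limit.velocity}')
  ```
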
- 2) ros2_control Hardware Interface (Real Arm) — Implement the driver layer (joint states + joint commands) so ROS2 can read/command the physical motors.
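
  A production driver is a C++ `hardware_interface::SystemInterface` plugin; the read/command contract it implements can still be sketched as a plain Python node. The joint names and motor I/O helpers here are hypothetical stand-ins for your serial/CAN bus code:

  ```python
  # driver_bridge.py -- conceptual sketch of the driver read/write loop.
  import rclpy
  from rclpy.node import Node
  from sensor_msgs.msg import JointState
  from std_msgs.msg import Float64MultiArray

  class DriverBridge(Node):
      def __init__(self):
          super().__init__('driver_bridge')
          self.joint_names = ['joint1', 'joint2', 'joint3']   # placeholders
          self.state_pub = self.create_publisher(JointState, 'joint_states', 10)
          self.cmd_sub = self.create_subscription(
              Float64MultiArray, 'position_commands', self.on_command, 10)
          self.create_timer(0.02, self.read_and_publish)      # 50 Hz read loop

      def read_and_publish(self):
          msg = JointState()
          msg.header.stamp = self.get_clock().now().to_msg()
          msg.name = self.joint_names
          msg.position = self.read_encoders()
          self.state_pub.publish(msg)

      def on_command(self, msg):
          self.write_targets(list(msg.data))

      def read_encoders(self):
          return [0.0] * len(self.joint_names)  # replace with real bus reads

      def write_targets(self, targets):
          pass                                  # replace with real bus writes

  rclpy.init()
  rclpy.spin(DriverBridge())
  ```
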
- 3) Basic Joint Control (Position/Velocity Control) — Move single joints and then multi-joint trajectories with safety limits and smooth interpolation.
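
  Multi-joint moves usually go through a `joint_trajectory_controller`. A sketch of sending a two-point trajectory (the controller name `/arm_controller` and the joint names are assumptions):

  ```python
  # send_trajectory.py -- command a smooth two-point joint trajectory.
  import rclpy
  from rclpy.action import ActionClient
  from rclpy.node import Node
  from builtin_interfaces.msg import Duration
  from control_msgs.action import FollowJointTrajectory
  from trajectory_msgs.msg import JointTrajectoryPoint

  rclpy.init()
  node = Node('trajectory_sender')
  client = ActionClient(node, FollowJointTrajectory,
                        '/arm_controller/follow_joint_trajectory')
  client.wait_for_server()

  goal = FollowJointTrajectory.Goal()
  goal.trajectory.joint_names = ['joint1', 'joint2', 'joint3']
  for sec, positions in [(2, [0.2, 0.0, 0.0]), (4, [0.2, 0.5, -0.3])]:
      point = JointTrajectoryPoint()
      point.positions = positions
      point.time_from_start = Duration(sec=sec)  # controller interpolates between points
      goal.trajectory.points.append(point)

  future = client.send_goal_async(goal)
  rclpy.spin_until_future_complete(node, future)
  ```
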
- 4) Calibration + Zeroing + Repeatability Checks — Establish homing, joint offsets, and repeatability tests so the arm behaves predictably every run.
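
  A repeatability test is just statistics over repeated runs: home the arm, command the same pose N times, and record what the encoders report. A sketch with illustrative readings:

  ```python
  # repeatability.py -- spread of measured joint positions at one commanded pose.
  import numpy as np

  commanded = np.array([0.5, -0.25, 1.0])   # rad
  trials = np.array([                        # one row per run (illustrative)
      [0.5003, -0.2491, 1.0012],
      [0.4998, -0.2505, 1.0009],
      [0.5001, -0.2497, 0.9995],
  ])
  print('mean offset from command (rad):', trials.mean(axis=0) - commanded)
  print('repeatability, 1-sigma (rad):  ', trials.std(axis=0))
  ```
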
- 5) Forward Kinematics (FK) + TF Verification — Compute end-effector pose from joint angles and verify it matches RViz/TF numerically.
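
  For intuition, FK is just chained homogeneous transforms. A planar 2-link sketch (link lengths are placeholders) whose result you can check against the tool frame TF reports:

  ```python
  # fk_check.py -- planar 2-link forward kinematics with 2D homogeneous transforms.
  import numpy as np

  def rot_z(theta):
      c, s = np.cos(theta), np.sin(theta)
      return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

  def trans_x(length):
      return np.array([[1, 0, length], [0, 1, 0], [0, 0, 1]])

  L1, L2 = 0.30, 0.25      # link lengths in metres (placeholders)
  q1, q2 = 0.4, -0.7       # joint angles in radians
  T = rot_z(q1) @ trans_x(L1) @ rot_z(q2) @ trans_x(L2)
  print('end-effector xy:', T[0, 2], T[1, 2])
  ```
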
- 6) Inverse Kinematics (IK) for Target Poses — Solve “move end-effector to (x,y,z,orientation)” using a practical IK approach for your arm.
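
  For a 6-DOF arm you would reach for an analytic solver (IKFast) or a numerical one (KDL, TRAC-IK); the closed-form planar 2-link case below shows the underlying idea:

  ```python
  # ik_2link.py -- closed-form IK for a planar 2-link arm (one elbow branch).
  import numpy as np

  def ik_2link(x, y, L1=0.30, L2=0.25):
      c2 = (x**2 + y**2 - L1**2 - L2**2) / (2 * L1 * L2)   # law of cosines
      if abs(c2) > 1.0:
          raise ValueError('target out of reach')
      q2 = np.arccos(c2)                      # negate q2 for the other branch
      q1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(q2), L1 + L2 * np.cos(q2))
      return q1, q2

  q1, q2 = ik_2link(0.35, 0.20)
  print(f'q1={q1:.3f} rad, q2={q2:.3f} rad')  # feed back into FK to verify
  ```
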
- 7) Trajectory Generation (Time-Parameterized Motion) — Generate smooth joint trajectories (limits-aware) and execute them reliably on real hardware.
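
  A common starting point is a trapezoidal velocity profile clamped to velocity and acceleration limits. A single-joint sketch (the limits are placeholders):

  ```python
  # trapezoid.py -- time-parameterize one joint move with a trapezoidal profile.
  import numpy as np

  def trapezoid(q0, q1, v_max=1.0, a_max=2.0, dt=0.01):
      d = abs(q1 - q0)
      sign = 1.0 if q1 >= q0 else -1.0
      t_ramp = v_max / a_max
      if d < a_max * t_ramp**2:            # move too short to cruise: triangle
          t_ramp = np.sqrt(d / a_max)
          v_max = a_max * t_ramp
      t_total = 2 * t_ramp + (d - a_max * t_ramp**2) / v_max
      ts = np.arange(0.0, t_total + dt, dt)
      qs = []
      for t in ts:
          if t < t_ramp:                   # accelerate
              s = 0.5 * a_max * t**2
          elif t < t_total - t_ramp:       # cruise
              s = 0.5 * a_max * t_ramp**2 + v_max * (t - t_ramp)
          else:                            # decelerate
              tr = t_total - t
              s = d - 0.5 * a_max * tr**2
          qs.append(q0 + sign * min(s, d))
      return ts, np.array(qs)

  ts, qs = trapezoid(0.0, 1.2)             # sampled at dt for the controller
  ```
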
- 8) Dynamics Basics (Why Motion Feels Different in Reality) — Introduce torque/load, gravity effects, and tuning considerations that impact real motion quality.
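
  The single most useful back-of-envelope check: the static torque a joint must hold is largest with the link horizontal. A sketch with placeholder masses:

  ```python
  # gravity_torque.py -- static holding torque, worst case at horizontal.
  import numpy as np

  g = 9.81
  m = 0.8        # mass beyond the joint, kg (links + payload, placeholder)
  r = 0.25       # distance from joint axis to that mass's centroid, m
  theta = 0.0    # link angle from horizontal; 0 is the worst case

  tau = m * g * r * np.cos(theta)
  print(f'holding torque needed: {tau:.2f} N*m')   # ~1.96 N*m here
  ```
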
- 9) MoveIt 2 Integration (Planning Pipeline) — Use MoveIt 2 for planning + execution, including planning scene, collision objects, and constraints.
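
  A sketch using the `moveit_py` Python bindings; the planning group `arm`, link `tool0`, and the parameters MoveItPy loads at startup are all assumptions that must match your own MoveIt config package:

  ```python
  # plan_to_pose.py -- plan and execute to a Cartesian goal with moveit_py.
  import rclpy
  from geometry_msgs.msg import PoseStamped
  from moveit.planning import MoveItPy

  rclpy.init()
  moveit = MoveItPy(node_name='moveit_py')     # needs moveit_py params loaded
  arm = moveit.get_planning_component('arm')

  goal = PoseStamped()
  goal.header.frame_id = 'base_link'
  goal.pose.position.x, goal.pose.position.z = 0.3, 0.4
  goal.pose.orientation.w = 1.0

  arm.set_start_state_to_current_state()
  arm.set_goal_state(pose_stamped_msg=goal, pose_link='tool0')
  result = arm.plan()
  if result:
      moveit.execute(result.trajectory, controllers=[])
  ```
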
- 10) Motion Planning with Obstacles (Collision Avoidance) — Plan around simple obstacles in simulation, then mirror the same planning scene on the real arm.
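
  Obstacles enter planning as `CollisionObject` messages; publishing the same message in sim and on the real arm keeps both planning scenes identical. A sketch using move_group's default `/collision_object` topic:

  ```python
  # add_box.py -- add a box obstacle to the MoveIt planning scene.
  import rclpy
  from rclpy.node import Node
  from geometry_msgs.msg import Pose
  from moveit_msgs.msg import CollisionObject
  from shape_msgs.msg import SolidPrimitive

  rclpy.init()
  node = Node('scene_publisher')
  pub = node.create_publisher(CollisionObject, '/collision_object', 10)

  box = CollisionObject()
  box.header.frame_id = 'base_link'
  box.id = 'table_box'
  box.primitives.append(
      SolidPrimitive(type=SolidPrimitive.BOX, dimensions=[0.4, 0.6, 0.05]))
  pose = Pose()
  pose.position.x, pose.position.z = 0.35, 0.10
  pose.orientation.w = 1.0
  box.primitive_poses.append(pose)
  box.operation = CollisionObject.ADD

  rclpy.spin_once(node, timeout_sec=1.0)  # let move_group discover the publisher
  pub.publish(box)
  ```
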
- 11) Pick/Place “Fixed Targets” (First Manipulation) — Execute repeatable pick-and-place using pre-defined object poses and approach/retreat motions.
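
  Approach and retreat poses are usually derived from the taught grasp pose by backing off along the tool z-axis. A sketch (the offsets and taught pose are placeholders):

  ```python
  # pregrasp.py -- derive approach/retreat poses from a fixed grasp pose.
  import numpy as np

  def offset_along_z(T_grasp, distance):
      """Return a pose backed off `distance` metres along the grasp frame's z-axis."""
      T = T_grasp.copy()
      T[:3, 3] -= distance * T[:3, 2]   # z-axis is the third rotation column
      return T

  T_grasp = np.eye(4)
  T_grasp[:3, 3] = [0.35, 0.00, 0.12]   # taught object pose (placeholder)
  T_approach = offset_along_z(T_grasp, 0.08)
  T_retreat = offset_along_z(T_grasp, 0.10)
  ```
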
- 12) Perception Setup (RGB-D Camera in ROS2) — Integrate a depth camera, publish point clouds, and align camera-to-arm calibration (extrinsics).
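
  Once the extrinsics are known (hand-measured at first, later from a proper hand-eye calibration), they are published as a static transform so every perception output can be expressed in the arm's frame. The numbers below are placeholders:

  ```python
  # camera_tf.py -- broadcast camera-to-base extrinsics as a static TF.
  import rclpy
  from rclpy.node import Node
  from geometry_msgs.msg import TransformStamped
  from tf2_ros.static_transform_broadcaster import StaticTransformBroadcaster

  rclpy.init()
  node = Node('camera_extrinsics')
  broadcaster = StaticTransformBroadcaster(node)

  t = TransformStamped()
  t.header.stamp = node.get_clock().now().to_msg()
  t.header.frame_id = 'base_link'
  t.child_frame_id = 'camera_link'
  t.transform.translation.x = 0.50      # replace with calibrated values
  t.transform.translation.z = 0.60
  t.transform.rotation.w = 1.0          # identity rotation as a placeholder
  broadcaster.sendTransform(t)
  rclpy.spin(node)
  ```
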
- 13) Scene Mapping for Manipulation (Voxel/Octomap) — Build a 3D scene representation and feed it to planning for runtime collision awareness.
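
  MoveIt's occupancy map monitor builds the Octomap from depth data via a YAML sensor config; the core operation, discretizing points into occupied cells, is simple enough to sketch directly:

  ```python
  # voxelize.py -- bin a point cloud (in the base frame) into occupied voxels.
  import numpy as np

  points = np.random.rand(10000, 3)     # placeholder cloud, metres
  voxel_size = 0.02                     # 2 cm cells
  occupied = np.unique(np.floor(points / voxel_size).astype(int), axis=0)
  print(f'{len(occupied)} occupied voxels from {len(points)} points')
  ```
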
- 14) Runtime Object Picking (Pose from Depth) — Detect an object, estimate its 6D pose from depth, plan grasp, and execute pick-and-place online.
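
  With intrinsics and the camera-to-base transform in place, a detected pixel plus its depth reading becomes a 3D grasp target via pinhole back-projection (orientation estimation comes on top of this). All numbers below are placeholders:

  ```python
  # deproject.py -- pixel + depth -> 3D point in the arm's base frame.
  import numpy as np

  fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0   # camera intrinsics (placeholder)
  u, v, depth = 352, 261, 0.642                 # detection pixel, depth in metres

  # Pinhole back-projection into the camera frame (homogeneous point).
  p_cam = np.array([(u - cx) * depth / fx, (v - cy) * depth / fy, depth, 1.0])

  T_base_cam = np.eye(4)                        # from the extrinsic calibration
  T_base_cam[:3, 3] = [0.50, 0.00, 0.60]
  p_base = T_base_cam @ p_cam
  print('object position in base frame:', p_base[:3])
  ```
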
- 15) QR Code-Based Pick/Place (Structured Perception) — Use QR tags to get robust IDs + poses for teaching structured automation workflows.
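
  OpenCV's built-in detector returns both the tag payload (a robust ID) and its pixel corners, which combine with depth and extrinsics from the previous modules; for metric pose out of the box, ArUco/AprilTag markers are the usual alternative:

  ```python
  # qr_read.py -- decode a QR tag's ID and pixel corners with OpenCV.
  import cv2

  image = cv2.imread('workcell.png')       # placeholder frame
  detector = cv2.QRCodeDetector()
  data, corners, _ = detector.detectAndDecode(image)
  if corners is not None:
      print('tag payload:', data)          # e.g. 'bin_A' or an object ID
      print('pixel corners:', corners.reshape(-1, 2))
  ```
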
- 16) Failure Handling + Recovery Behaviors — Add “retry grasp,” “re-localize,” “safe retreat,” and “home” behaviors for real-world robustness.
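
  The behaviors compose naturally as a retry loop around the grasp skill; the hooks below are hypothetical stand-ins for your own skill implementations:

  ```python
  # recovery.py -- retry-with-fallback wrapper around a grasp attempt.
  import time

  def pick_with_recovery(attempt_grasp, safe_retreat, re_localize, go_home,
                         max_attempts=3):
      for attempt in range(1, max_attempts + 1):
          if attempt_grasp():
              return True
          safe_retreat()              # always clear the workspace first
          re_localize()               # refresh the object pose estimate
          time.sleep(0.5)
      go_home()                       # give up from a known-safe posture
      return False
  ```
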
- 17) Data Logging + Evaluation Harness — Record ROS bags, define success metrics (time, accuracy, retries), and build a repeatable test suite.
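
  Recording is one command (`ros2 bag record -a`); the evaluation half is ordinary statistics over parsed trial records, sketched here with illustrative numbers:

  ```python
  # metrics.py -- summarize a batch of trial records from logged runs.
  import statistics

  trials = [  # (success, duration_s, retries), illustrative values
      (True, 14.2, 0), (True, 15.8, 1), (False, 30.0, 3), (True, 13.9, 0),
  ]
  successes = [t for t in trials if t[0]]
  print(f'success rate:      {len(successes) / len(trials):.0%}')
  print(f'median cycle time: {statistics.median(t[1] for t in successes):.1f} s')
  print(f'mean retries:      {statistics.mean(t[2] for t in trials):.2f}')
  ```
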
- 18) RL for Motion Primitives (Sim-to-Real Concepts) — Train a small policy for a constrained skill (e.g., insertion/approach) in sim, then discuss safe transfer.
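
  Much of the design work is the reward, not the algorithm. A sketch of the kind of shaped reward you might hand to a gymnasium-style training loop for an approach/insertion skill (all weights and tolerances are assumptions to tune):

  ```python
  # reward.py -- shaped reward for a constrained approach/insertion skill.
  import numpy as np

  def approach_reward(tip_pos, goal_pos, contact_force, action):
      dist = np.linalg.norm(tip_pos - goal_pos)
      reward = -dist                              # get closer to the goal
      reward -= 0.01 * np.linalg.norm(action)     # prefer small, smooth actions
      if contact_force > 10.0:                    # discourage hard collisions
          reward -= 1.0
      if dist < 0.005:                            # 5 mm success tolerance
          reward += 10.0
      return reward
  ```
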
- 19) VLA / “Language-to-Action” Task Layer — Add a high-level instruction interface that selects skills (“pick red block,” “place in bin”) while ROS2 executes safely.
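
  The safety-critical split is that language only selects skills and arguments, while ROS2 executes them under the usual limits. A minimal dispatcher sketch, with keyword matching standing in for a real VLA model:

  ```python
  # task_layer.py -- map an instruction to (skill, arguments).
  def parse_instruction(text):
      text = text.lower()
      if text.startswith('pick'):
          return ('pick', {'object': text.removeprefix('pick').strip()})
      if text.startswith('place in'):
          return ('place', {'target': text.removeprefix('place in').strip()})
      raise ValueError(f'no skill matches: {text!r}')

  print(parse_instruction('pick red block'))  # ('pick', {'object': 'red block'})
  print(parse_instruction('place in bin'))    # ('place', {'target': 'bin'})
  ```
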
- 20) Capstone: “Perception-Driven Autonomous Workcell” — End-to-end system: camera → mapping → object pose → planning → pick/place → logging + recovery.
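
  At the top level the capstone is one loop over the skills built in the earlier modules; the function hooks here are hypothetical names for those pieces:

  ```python
  # workcell_loop.py -- top-level cycle wiring the earlier modules together.
  def run_cycle(perceive, update_scene, plan_pick_place, execute, log_result):
      detection = perceive()             # camera -> object pose (module 14)
      update_scene(detection)            # collision scene update (modules 10/13)
      plan = plan_pick_place(detection)  # MoveIt planning (module 9)
      ok = execute(plan)                 # trajectory execution (modules 3/7)
      log_result(detection, plan, ok)    # bags + metrics (module 17)
      return ok                          # on failure: recovery (module 16)
  ```
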