PyReach is an implementation of point-to-point arm reaching motions. The dynamics of the upper limb are simplified to a two-link arm model. The goal is to replicate classical motor control experiments using control schemes ranging from classical approaches to RL-based methods.
- A gym-compatible environment of the upper-limb.
- Three controllers:
- Impedance-based.
- Soft actor-critic (SAC).
- Deep Deterministic Policy Gradient (DDPG).
- Tools to replicate motor control experiments.
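Because the environment is gym-compatible, every controller talks to it through the same `reset`/`step` loop. The sketch below shows that loop with a stand-in environment; the stand-in class, its observation/action shapes, and its dynamics are illustrative placeholders, not the actual `Arm2DEnv` API.

```python
class StandInEnv:
    """Illustrative stand-in with a Gym-style interface (NOT the real Arm2DEnv)."""

    def __init__(self, horizon=10):
        self.horizon = horizon
        self.t = 0

    def reset(self):
        self.t = 0
        return [0.0, 0.0]  # placeholder observation, e.g. joint angles

    def step(self, action):
        self.t += 1
        obs = [self.t * a for a in action]   # placeholder dynamics
        reward = -sum(abs(o) for o in obs)   # placeholder distance-style penalty
        done = self.t >= self.horizon
        return obs, reward, done, {}


def rollout(env, policy):
    """Run one episode and return the cumulative reward."""
    obs, total, done = env.reset(), 0.0, False
    while not done:
        obs, reward, done, _ = env.step(policy(obs))
        total += reward
    return total


total = rollout(StandInEnv(), policy=lambda obs: [0.1, -0.1])
print(total)
```

Any of the three controllers (impedance, SAC, DDPG) can be dropped in as `policy` since they all consume observations and emit actions through this interface.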
Arm:
- arm_params.py: Mechanical properties of the upper-limb.
- Arm2DEnv.py: A gym-like environment of goal-directed arm movement.
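The two-link model reduces the arm to shoulder and elbow joints with two link lengths, so the hand position follows from planar forward kinematics. A minimal sketch, with made-up link lengths rather than the values in arm_params.py:

```python
import math


def forward_kinematics(q1, q2, l1=0.3, l2=0.33):
    """Hand (x, y) for shoulder angle q1 and elbow angle q2 (radians).

    Link lengths are illustrative defaults, not the arm_params.py values.
    """
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y


# Fully extended arm along the x-axis: hand at roughly (l1 + l2, 0).
print(forward_kinematics(0.0, 0.0))
```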
Controllers:
- imp_ctrl.py: Classical impedance control based on a minimum-jerk trajectory.
- sac.py: Train a soft actor-critic (SAC) controller.
- ddpg: Train a deep deterministic policy gradient (DDPG) controller.
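The impedance controller tracks a reference built from the classic minimum-jerk profile, which moves between two points smoothly with zero velocity and acceleration at both ends. A self-contained sketch of that profile; the actual imp_ctrl.py implementation may differ in details:

```python
def min_jerk(x0, xf, T, t):
    """Minimum-jerk position at time t for a move from x0 to xf over duration T."""
    tau = min(max(t / T, 0.0), 1.0)              # normalized time, clamped to [0, 1]
    s = 10 * tau**3 - 15 * tau**4 + 6 * tau**5   # smooth 0 -> 1 blend
    return x0 + (xf - x0) * s


# Endpoints are hit exactly; the midpoint is halfway by symmetry.
print(min_jerk(0.0, 1.0, 1.0, 0.0))  # 0.0
print(min_jerk(0.0, 1.0, 1.0, 1.0))  # 1.0
print(min_jerk(0.0, 1.0, 1.0, 0.5))  # 0.5
```

The polynomial blend 10τ³ − 15τ⁴ + 6τ⁵ is the unique fifth-order profile whose first and second derivatives vanish at τ = 0 and τ = 1, which is why it is a standard reference for human-like reaching.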
Tools:
- mainReach.ipynb: Hook a controller to the environment and visualize trajectories, rewards, etc.
- mainExperiment.ipynb: Hook a controller to the environment and visualize experimental results.
- utils.py: Helper functions used across the codebase.
To train an RL controller (SAC or DDPG), run the corresponding script. When training starts, a snapshot of all code is saved to the ./sandbox/ directory, tagged with a timestamp and the algorithm name; all training logs are written to the same directory. While training is in progress, you can load the logs and model checkpoints through these code snapshots and visualize intermediate results.
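The snapshot naming described above can be pictured roughly like this; the directory layout, naming format, and file selection are guesses for illustration, not the repo's exact logic:

```python
import os
import shutil
import tempfile
from datetime import datetime


def snapshot_code(src_dir, sandbox_dir, algo):
    """Copy all .py files from src_dir into a timestamped, algorithm-tagged folder.

    Hypothetical helper sketching the snapshot idea, not the repo's actual code.
    """
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = os.path.join(sandbox_dir, f"{stamp}_{algo}")
    os.makedirs(dest, exist_ok=True)
    for name in os.listdir(src_dir):
        if name.endswith(".py"):
            shutil.copy(os.path.join(src_dir, name), dest)
    return dest


# Demo in temporary directories so nothing touches a real project.
src = tempfile.mkdtemp()
open(os.path.join(src, "sac.py"), "w").close()
dest = snapshot_code(src, tempfile.mkdtemp(), "sac")
print(sorted(os.listdir(dest)))  # ['sac.py']
```

Freezing the code alongside the logs is what makes it safe to reload a checkpoint mid-training: the snapshot's modules match the weights even if the working tree has since changed.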
Alireza Rezazadeh | rezaz003@umn.edu | Fall 2020

