Shanghai Jiao Tong University, Shanghai, China
Stars
[CoRL 2024] Open-TeleVision: Teleoperation with Immersive Active Visual Feedback
[IROS 2024] LEEPS: Learning End-to-End Legged Perceptive Parkour Skills on Challenging Terrains
[ICRA 2025] Official Implementation of "Robust Robot Walker: Learning Agile Locomotion over Tiny Traps"
[RSS 2024] Agile But Safe: Learning Collision-Free High-Speed Legged Locomotion
Repository for our paper: Learning Agile Skills via Adversarial Imitation of Rough Partial Demonstrations, Proceedings of the 6th Conference on Robot Learning (CoRL)
[CoRL 2024] HumanPlus: Humanoid Shadowing and Imitation from Humans
UMI on Legs: Making Manipulation Policies Mobile with Manipulation-Centric Whole-body Controllers
AIWintermuteAI / IsaacGymEnvs
Forked from isaac-sim/IsaacGymEnvs: Isaac Gym Reinforcement Learning Environments
Project Page for Lifelike Agility and Play in Quadrupedal Robots using Reinforcement Learning and Generative Pre-trained Models
Phase-based observations, rewards, and coupling ablation
Fast and simple implementation of RL algorithms, designed to run fully on GPU (see the GPU-resident rollout sketch after this list)
Unified framework for robot learning built on NVIDIA Isaac Sim
Unitree Go2, Unitree G1 support for Nvidia Isaac Lab (Isaac Gym / Isaac Sim)
Unitree Go2 robot learns locomotion with the N-P3O algorithm and a HIM-like policy, trained in Isaac Gym
Bringing Characters to Life with Computer Brains in Unity
The official implementation of Flexible Motion In-betweening with Diffusion Models, SIGGRAPH 2024
ViPlanner: Visual Semantic Imperative Learning for Local Navigation
Learning Agile Quadrupedal Locomotion on Challenging Terrains
[ICRA 2024] Train your parkour robot in less than 20 hours.
An example of loading custom terrain and a husky robot into Gazebo
Simulating a Stewart platform in Gazebo using a plugin to allow control of a closed loop manipulator with ROS.
Safe RLHF: Constrained Value Alignment via Safe Reinforcement Learning from Human Feedback
Code for "Prompt a Robot to Walk with Large Language Models" (https://arxiv.org/abs/2309.09969)
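The last entry above (arXiv:2309.09969) turns a large language model into a low-level walking controller by serializing recent observations into a text prompt and parsing the reply back into joint targets. The sketch below only illustrates that prompt-and-parse loop; the prompt wording, the `query_llm` helper, and the parsing rule are hypothetical assumptions, not taken from the paper's code.

```python
import re

def build_prompt(obs_history):
    # Serialize recent observations into text. The wording is a hypothetical
    # illustration, not the prompt used in the paper.
    lines = ["You control a walking robot. Reply with the next joint position"
             " targets as a comma-separated list of numbers."]
    for step, obs in enumerate(obs_history):
        lines.append(f"obs[{step}]: " + ", ".join(f"{x:.3f}" for x in obs))
    lines.append("Next action:")
    return "\n".join(lines)

def parse_action(reply, act_dim):
    # Extract the first act_dim numbers from the model's free-form reply.
    nums = [float(x) for x in re.findall(r"-?\d+(?:\.\d+)?", reply)]
    if len(nums) < act_dim:
        raise ValueError("reply did not contain a full action vector")
    return nums[:act_dim]

def llm_control_loop(env, query_llm, act_dim, horizon=100, history_len=5):
    # query_llm(prompt) -> str and env are hypothetical placeholders.
    obs_history = [env.reset()]
    for _ in range(horizon):
        prompt = build_prompt(obs_history[-history_len:])
        action = parse_action(query_llm(prompt), act_dim)
        obs, reward, done, info = env.step(action)
        obs_history.append(obs)
        if done:
            break
```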
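One entry above describes an RL implementation "designed to run fully on GPU". The sketch below illustrates what that means in practice: rollout storage, reward bookkeeping, and advantage estimation all live on one CUDA device, so the training loop never copies data back to the host. The class and method names here are assumptions made for illustration, not the starred repository's actual API.

```python
import torch

class RolloutBuffer:
    """Minimal sketch of a rollout buffer that keeps all data on one device.

    Assumes a vectorized simulator that already returns torch tensors on the
    GPU (Isaac Gym-style), so no host-device copies occur during training.
    Illustrative only; not the starred repository's actual API.
    """

    def __init__(self, num_steps, num_envs, obs_dim, act_dim, device="cuda"):
        self.device = torch.device(device)
        self.obs = torch.zeros(num_steps, num_envs, obs_dim, device=self.device)
        self.actions = torch.zeros(num_steps, num_envs, act_dim, device=self.device)
        self.rewards = torch.zeros(num_steps, num_envs, device=self.device)
        self.dones = torch.zeros(num_steps, num_envs, device=self.device)
        self.values = torch.zeros(num_steps, num_envs, device=self.device)
        self.step = 0

    def add(self, obs, action, reward, done, value):
        # Inputs are expected to already live on the same device.
        self.obs[self.step] = obs
        self.actions[self.step] = action
        self.rewards[self.step] = reward
        self.dones[self.step] = done
        self.values[self.step] = value
        self.step += 1

    def compute_gae(self, last_value, gamma=0.99, lam=0.95):
        # Generalized Advantage Estimation, computed entirely on the device.
        advantages = torch.zeros(self.step, *last_value.shape, device=self.device)
        gae = torch.zeros_like(last_value)
        next_value = last_value
        for t in reversed(range(self.step)):
            not_done = 1.0 - self.dones[t]
            delta = self.rewards[t] + gamma * next_value * not_done - self.values[t]
            gae = delta + gamma * lam * not_done * gae
            advantages[t] = gae
            next_value = self.values[t]
        returns = advantages + self.values[:self.step]
        return advantages, returns
```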