From 1169aee99db6f188e12ec86c808b217fafa2ada0 Mon Sep 17 00:00:00 2001
From: Guptajakala
Date: Tue, 9 Jan 2024 23:45:07 -0800
Subject: [PATCH] add papers

---
 papers.md | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/papers.md b/papers.md
index 58178b8..c529513 100644
--- a/papers.md
+++ b/papers.md
@@ -2,6 +2,9 @@ Awesome Papers
 --------------
 Papers and implementations of papers that could have use in robotics. Implementations here may not be actively developed. While implementations may often be the author's original implementation, that isn't always the case.
+- [BundleSDF: Neural 6-DoF Tracking and 3D Reconstruction of Unknown Objects](https://bundlesdf.github.io/) - 2023 - 6D pose tracking and 3D reconstruction of unknown objects
+- [BundleTrack: 6D Pose Tracking for Novel Objects without Instance or Category-Level 3D Models](https://github.com/wenbowen123/BundleTrack) - 2021 - 6D object pose tracking without needing any CAD models
+- [se(3)-TrackNet: Data-driven 6D Pose Tracking by Calibrating Image Residuals in Synthetic Domains](https://github.com/wenbowen123/iros20-6d-pose-tracking) - 2020 - 6D object pose tracking trained solely on synthetic data
 - ["Good Robot!": Efficient Reinforcement Learning for Multi-Step Visual Tasks with Sim to Real Transfer](https://github.com/jhu-lcsr/good_robot) - 2020 - Real robot learns to complete multi-step tasks like table clearing, making stacks, and making rows in <20k simulated actions. [paper](https://arxiv.org/abs/1909.11730) (disclaimer: @ahundt is first author) [!["Good Robot!": Efficient Reinforcement Learning for Multi Step Visual Tasks via Reward Shaping](https://img.youtube.com/vi/MbCuEZadkIw/0.jpg)](https://youtu.be/MbCuEZadkIw)
 - [Transporter Networks: Rearranging the Visual World for Robotic Manipulation](https://transporternets.github.io/) - [Ravens Simulator code](https://github.com/google-research/google-research/tree/master/ravens) - 2020 - Ravens is a collection of simulated tasks in PyBullet for learning vision-based robotic manipulation, with emphasis on pick and place. It features a Gym-like API with 10 tabletop rearrangement tasks, each with (i) a scripted oracle that provides expert demonstrations (for imitation learning), and (ii) reward functions that provide partial credit (for reinforcement learning).