
ran2can

This repository contains all of the tools necessary to replicate the following results:

static/sim2real_paper_frontimg.png

The project is inspired by James et al., 2019 - Sim-to-Real via Sim-to-Sim: Data-efficient Robotic Grasping via Randomized-to-Canonical Adaptation Networks. However, instead of training with a GAN loss, it uses a Perceptual (Feature) Loss objective together with fast.ai's U-net model, which is usually used for image segmentation tasks. In this case, instead of classifying each pixel (i.e. predicting a segmentation mask), the model converts a domain-randomized image into its canonical (non-randomized) version.

I did not have a robotic arm, hence the model is trained only on a box with random objects.
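Since the training instructions are still on the TODO list below, here is a rough sketch of how such a setup could look with a fastai v1-style API. Everything in it (the data paths, the SimpleFeatureLoss class, the hyperparameters) is illustrative rather than the code actually used in this repo; the feature-loss idea follows fast.ai's image-to-image lessons.

```python
# Minimal sketch (not this repo's actual training code): a fastai v1 U-net
# trained with a perceptual (feature) loss to map randomized renders to
# their canonical counterparts. Paths and hyperparameters are placeholders.
from fastai.vision import *
from torch import nn
import torch.nn.functional as F
from torchvision.models import vgg16_bn

class SimpleFeatureLoss(nn.Module):
    """L1 pixel loss plus L1 distance between mid-level VGG16 activations."""
    def __init__(self):
        super().__init__()
        vgg = vgg16_bn(pretrained=True).features.eval()
        for p in vgg.parameters():
            p.requires_grad = False
        self.vgg = vgg[:23]  # keep activations up to a mid-level conv block

    def forward(self, pred, target):
        vgg = self.vgg.to(pred.device)
        return F.l1_loss(pred, target) + F.l1_loss(vgg(pred), vgg(target))

# Assumed layout: data/randomized/<name>.png paired with data/canonical/<name>.png
path = Path('data')
data = (ImageImageList.from_folder(path/'randomized')
        .split_by_rand_pct(0.1)
        .label_from_func(lambda x: path/'canonical'/x.name)
        .transform(get_transforms(), size=256, tfm_y=True)
        .databunch(bs=8)
        .normalize(imagenet_stats, do_y=True))

learn = unet_learner(data, models.resnet34, loss_func=SimpleFeatureLoss(), blur=True)
learn.fit_one_cycle(10, 1e-3)
learn.export('rand2canon.pkl')
```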

Results

Below are a few examples (input, output, and ground truth):

Sim-to-sim (randomized simulated image to canonical): static/simsim.png static/simsim2.png

Real-to-sim (real photo to canonical): static/real2sim.png Note: the model has never seen these objects in this scene, hence the noise.

For reference, here are the results from the original paper (they also generate a mask): static/paperres.png
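A hypothetical inference snippet in the same spirit (file names and the exported model name are placeholders carried over from the training sketch above), showing how a real photo would be pushed through the trained U-net to get a canonical-style rendering:

```python
# Hypothetical inference sketch: load the exported learner and convert a
# real photo into a canonical-style rendering. File names are placeholders.
from fastai.vision import *

learn = load_learner('data/randomized', 'rand2canon.pkl')
pred_img, _, _ = learn.predict(open_image('real_photo.jpg'))
pred_img.save('real_photo_canonical.png')
```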

TODO:

  • Upload requirements.txt
  • Instructions for image generation
  • Instructions for model training

Credits:
