Michał Nowicki edited this page Jan 24, 2016 · 16 revisions

TO DO!

For now:

  • 3D point -> remove camera distortion every time (especially before depth usage)
  • Prepare program to use 2D features
  • Do the evaluation instruction
  • Kinect/Xtion/Pgrey -> initPose from file (right now always identity)
  • LDB as 3rd party
  • Detector parameters
  • Core dumped when closing in matching
  • Detection on BGR image
  • DBScan
  • removing features with depth > 6m
  • test solution using mean error in RANSAC
  • depthless features in optimization
  • octomap from ROS
  • Use some kind of adjusting mechanism in detection step
  • Separate folder for Demo sources
  • Uncertainty model in RANSAC
  • RANSAC on reprojection error
  • Garbage collector in map
  • Take features only from poses that are close in Euclidean distance/angle
  • Usage of patches; incremental correction based on patches
  • What happens when VO is lost
  • Main PUTSLAM class
  • Remove PCL dependency
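Two items above (undistorting 3D points before depth usage, and dropping features with depth > 6 m) can be sketched in a few lines. This is a generic iterative inversion of the radial Brown-Conrady distortion model, a minimal sketch under assumed two-coefficient calibration; the function names are placeholders, not PUTSLAM API:

```python
MAX_FEATURE_DEPTH = 6.0  # metres; "removing features with depth > 6m"

def undistort_point(xd, yd, k1, k2, iters=10):
    """Iteratively invert the radial (Brown-Conrady) distortion model.

    xd, yd: normalized image coordinates of the distorted point.
    k1, k2: radial distortion coefficients.
    The forward model is xd = xu * (1 + k1*r^2 + k2*r^4); we invert it
    by fixed-point iteration, starting from the distorted coordinates.
    """
    xu, yu = xd, yd
    for _ in range(iters):
        r2 = xu * xu + yu * yu
        scale = 1.0 + k1 * r2 + k2 * r2 * r2
        xu, yu = xd / scale, yd / scale
    return xu, yu

def keep_feature(depth):
    # Reject features whose depth reading is missing or beyond the
    # reliable range of the RGB-D sensor.
    return 0.0 < depth <= MAX_FEATURE_DEPTH
```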

Sprint till May:

  • new ORB implementation based on ORBSLAM
  • Umeyama vs g2o
  • Motion model
  • Point of view threshold -> more than 17.2 deg
  • features from map (What do the two thresholds in ORBSLAM do?)
  • project features onto a slightly larger image
  • garbage collector
  • keyframe -> removing unnecessary measurements -> so-called policemen :)
  • Draw map.size() vs frame no. (M.Nowicki)
  • Delay VO to have optimized graph (M.Nowicki)
  • getVisibleFeatures - smarter!
  • Loop Closure (M.Nowicki)
  • backend cleaning (M.Nowicki)
  • MiT stata (M.Nowicki)
  • optimization on reprojection error
  • separate parameters for Tracking and LC
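The 17.2 deg point-of-view threshold from the sprint list boils down to an angle test between two viewing directions. A minimal sketch, assuming the threshold compares camera view vectors and that a feature/keyframe qualifies only when the angle exceeds the threshold (the names and the comparison direction are assumptions):

```python
import math

VIEW_ANGLE_THRESH_DEG = 17.2  # "Point of view threshold -> more than 17.2 deg"

def view_angle_deg(dir_a, dir_b):
    """Angle in degrees between two 3D viewing directions (need not be unit)."""
    dot = sum(a * b for a, b in zip(dir_a, dir_b))
    na = math.sqrt(sum(a * a for a in dir_a))
    nb = math.sqrt(sum(b * b for b in dir_b))
    # Clamp to guard against floating-point values just outside [-1, 1].
    c = max(-1.0, min(1.0, dot / (na * nb)))
    return math.degrees(math.acos(c))

def passes_view_angle(dir_a, dir_b):
    return view_angle_deg(dir_a, dir_b) > VIEW_ANGLE_THRESH_DEG
```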

Later:

  • OpenCV
  • ROS
  • Dataset MiT
  • Visualization OpenGL

KLT parameters used in SVO:

  • winSize = 30
  • maxIter = 30
  • eps = 0.001
  • maxLvl = 4
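Arranged as keyword arguments for OpenCV's `cv2.calcOpticalFlowPyrLK`, the parameters above look like the sketch below. The actual cv2 call is left in a comment so the snippet stays dependency-free; the termination-criteria tuple `(3, 30, 0.001)` assumes the usual OpenCV encoding `TERM_CRITERIA_COUNT | TERM_CRITERIA_EPS = 1 | 2 = 3`:

```python
# KLT (pyramidal Lucas-Kanade) parameters as used in SVO, mapped to the
# argument names cv2.calcOpticalFlowPyrLK expects.
KLT_PARAMS = {
    "winSize": (30, 30),         # winSize = 30 (search window, pixels)
    "maxLevel": 4,               # maxLvl = 4 (pyramid levels)
    # Stop after maxIter iterations OR when the per-iteration update < eps:
    "criteria": (3, 30, 0.001),  # (COUNT | EPS, maxIter = 30, eps = 0.001)
}

# Usage, assuming cv2 is available:
# pts_next, status, err = cv2.calcOpticalFlowPyrLK(
#     prev_gray, curr_gray, prev_pts, None, **KLT_PARAMS)
```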