
How to make own configure file and Dataset examples are running but getting stuck on first pair #1

farhanmubasher opened this issue Jan 3, 2019 · 7 comments



Hi,
JPP is running, but it gets stuck on the first pair. Can you please tell me what is wrong?
I also want to run it in real time, but how can I make a config file for real-time use?

Terminal output:

./bin/jpp -n 33 -d KITTI/ -c calibration/kitti_2011_09_26.yml -j cfg/kitti.cfg -o astar -v 1
Processing pair 1
good

sourishg commented Jan 6, 2019

Hi @farhanmubasher, you need to press any key to move on to the next pair. Sorry, I should've mentioned that in the README.

As for the real-time config file, @srabiee can you help?


srabiee commented Jan 7, 2019

Hi @farhanmubasher,
You can look here for an example of using JPP in real time. Regarding the config file, you can start with the same config file that you used for static images. The only parameters you should change are the image size and the robot dimensions. You can then fine-tune conf_pos_thresh and conf_neg_thresh to achieve the best results for your setup.
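As a sketch, the minimal edits described above might look like the fragment below. It follows the format of the KITTI config file shipped with JPP; all numeric values are placeholders for your own camera and robot, not recommended settings:

```
# Sketch: the only entries to change for a real-time setup (units in mm).
# All values below are placeholders for your own camera and robot.

calib_img_width = 752;   // your camera's raw image width
calib_img_height = 480;  // your camera's raw image height

rect_img_width = 376;    // scaled-down rectified (final) width
rect_img_height = 240;   // scaled-down rectified (final) height

bot_length = 500;        // your robot's footprint and height
bot_width = 400;
bot_height = 300;

# Then fine-tune these two thresholds for your setup:
conf_pos_thresh = 0.75;
conf_neg_thresh = 1.1;
```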


farhanmubasher commented Jan 7, 2019

Thank you very much :)
I have successfully run it on the KITTI and AMRL datasets, but I don't know how to use it in real time. How can I make my own config file for real-time obstacle avoidance?
This is the config file for the KITTI dataset. I don't know what these parameters mean or how to change them for my stereo camera. I have not made my own config file yet.

# JPP configurations for KITTI dataset
# Units in mm

calib_img_width = 1392; // calibration img width
calib_img_height = 512; // calibration img height

rect_img_width = 489; // rectified (final) img width
rect_img_height = 180; // rectified (final) img height

daisy_params = {
  R = 5.0; // Descriptor radius
  Q = 2;  // Radius quantization
  T = 1;  // Angular quantization
  H = 1;  // Histogram quantization
};

bot_length = 2000;
bot_width = 300;
bot_height = 1000;

grid_size = 500; // planning grid cell size

conf_pos_thresh = 0.75;
conf_neg_thresh = 1.1;

SAD_window_size = 5;

spatial_filter_window = 200;
spatial_filter_inc = 100;
spatial_filter_ratio = 0.95;

conf_neg_inc = 100;
conf_neg_filter_ratio = 0.6;

start_x = 2500;
max_x = 8000;
max_y = 3000;

convex_world = false;

Can you please help me understand what these parameters are, and how I can change them for my stereo camera?
I don't know what "daisy_params, grid_size, SAD_window_size, start_x, max_x/y, convex_world" mean. Please give some description of these parameters so I can set them according to my stereo camera.

@farhanmubasher farhanmubasher changed the title Running but getting stuck on first pair How to make own configure file and Dataset examples are running but getting stuck on first pair Jan 7, 2019

srabiee commented Jan 8, 2019

grid_size is the distance in millimeters between adjacent points that are checked for obstacles on the 2D grid. start_x, max_x, and max_y define the borders of the 2D grid, also in millimeters. These values are in the robot's base reference frame; the two opposing corners of the grid are (start_x, -max_y) and (max_x, max_y).
If convex_world is set to true, it is assumed that if any point P(x, y, z) is unoccupied, then the set of points {P(x, y, z') : z' > z} is also unoccupied. convex_world should be kept false for the robot to avoid hanging obstacles such as tabletops. SAD_window_size is the size of the pixel window used for computing the similarity of two image points across the stereo pair. For more detailed information, you can refer to the paper. daisy_params are the parameters of the DAISY image descriptor, which is used for disparity checks as defined here.
I suggest you start by keeping these parameters as they are and only change calib_img_width and calib_img_height to match the size of your input images. rect_img_width and rect_img_height are the sizes of the rectified (final) images that are processed for obstacle detection; use a scaled-down version of calib_img_width and calib_img_height for these.
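The grid parameters described above can be sketched as follows. This is an illustrative reconstruction, not JPP's actual code (grid_points is a hypothetical helper), showing how the KITTI values define the set of candidate points checked for obstacles:

```python
# Sketch (assumption, not JPP's implementation): enumerate the 2D grid of
# candidate points defined by start_x, max_x, max_y, and grid_size.
# All units in mm, in the robot's base reference frame.

def grid_points(start_x, max_x, max_y, grid_size):
    """Return (x, y) points spaced grid_size apart between the opposing
    grid corners (start_x, -max_y) and (max_x, max_y)."""
    points = []
    for x in range(start_x, max_x + 1, grid_size):
        for y in range(-max_y, max_y + 1, grid_size):
            points.append((x, y))
    return points

# With the KITTI values above: x in {2500, 3000, ..., 8000} (12 values),
# y in {-3000, -2500, ..., 3000} (13 values) -> 156 candidate points.
pts = grid_points(2500, 8000, 3000, 500)
print(len(pts))  # 156
```

Shrinking grid_size checks more points (finer obstacle detection) at the cost of more per-frame computation, which matters for real-time use.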

@farhanmubasher
Author

I have changed the calib_img_width/height parameters and adapted the config file to my stereo camera. It runs in real time, but keeps printing the same terminal output:

rosrun jpp navigation -l /stereo/left/image_raw -r tereo/right/image_raw -c calibration/tara_cal.yml -j cfg/tara.cfg -o astar -d 1 -v 1

goal.x: 8000, goal.y: 0
y_intercept: 0
[ INFO] [1546959788.586667160]: stop
[ INFO] [1546959788.586813625]: real_path.poses[real_path.size()]: 0.000000
goal.x: 8000, goal.y: 0
y_intercept: 0
[ INFO] [1546959789.136889986]: stop
[ INFO] [1546959789.136984610]: real_path.poses[real_path.size()]: 0.000000
goal.x: 8000, goal.y: 0
y_intercept: 0
[ INFO] [1546959789.259882979]: stop
[ INFO] [1546959789.259969603]: real_path.poses[real_path.size()]: 0.000000

Here are the two video stream windows (PATH, CONFIDENCE) that appeared after running JPP:
[screenshot: path_screenshot_08 01 2019]


srabiee commented Jan 13, 2019

It seems like your camera is set up such that the ground plane is not visible, which is probably why JPP cannot find an obstacle-free path. Also, make sure you do extrinsic calibration for your stereo camera, as instructed in the README, to compute the transformation from your camera reference frame to the robot's base frame (the XR and XT matrices in the calibration file).
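The XR/XT transformation mentioned above can be sketched as follows. This is a hypothetical illustration (camera_to_base is not a JPP function) of how a 3x3 rotation XR and a translation XT would map a camera-frame point into the robot's base frame; the identity rotation and 300 mm offset are placeholder values:

```python
# Sketch (assumption): apply the extrinsic calibration
#   p_base = XR @ p_cam + XT
# with plain lists, matching the 3x3 rotation and 3x1 translation
# stored as XR and XT in the calibration file (units: mm).

def camera_to_base(p_cam, XR, XT):
    """Rotate a camera-frame point by XR, then translate by XT."""
    return [
        sum(XR[i][j] * p_cam[j] for j in range(3)) + XT[i]
        for i in range(3)
    ]

# Placeholder extrinsics: identity rotation, camera mounted
# 300 mm above the robot base origin.
XR = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
XT = [0, 0, 300]
print(camera_to_base([1000, 0, 0], XR, XT))  # [1000, 0, 300]
```

If XR/XT are wrong, points on the ground plane land at nonzero heights in the base frame, so JPP cannot recognize the ground as traversable.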

@YangSiri

Hi @srabiee,

Could you explain more about the parameters in the calibration file?
