
Option to connect with gazebo for physics reasoning #11

Closed
sfchik opened this issue Jun 12, 2016 · 5 comments

@sfchik

sfchik commented Jun 12, 2016

Hello,

I am currently doing research on path planning in human environments, and I am interested in using the Gazebo connection feature to simulate a real-world scenario at my university. Is there any tutorial available for connecting pedsim_ros to Gazebo? Thank you.

Best Regards,
Sheng Fei

@tlind
Member

tlind commented Jun 13, 2016

In the SPENCER project, we have developed some code for interfacing pedsim with Gazebo. This code is not yet online because we still have to clean it up. Essentially, there is a Python script which subscribes to the person poses published by pedsim_ros, pauses the Gazebo simulation, manually updates the positions of the Gazebo agents, and then unpauses again. At startup, the agents are spawned in the simulation using a human mesh model.

The main drawback of this approach is that, to move the agents using the services provided by gazebo_ros, Gazebo needs to be paused, which inherently slows things down. With around 10-15 agents, we are able to run on a high-end computer with physics disabled (we use Gazebo just to generate sensor data via the GPU-based 2D laser raycasting) at around 0.5x to 0.8x real-time speed.
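
For illustration, a rough C++ sketch of that pause -> update -> unpause cycle could look like the following. This is not the actual SPENCER script (which is a Python node), and the pedsim topic and message names used here are assumptions.

// Sketch only: moves Gazebo models to the poses published by pedsim_ros.
// Assumed: pedsim_msgs::AgentStates on /pedsim_simulator/simulated_agents,
// with per-agent fields "id" and "pose".
#include <ros/ros.h>
#include <std_srvs/Empty.h>
#include <gazebo_msgs/SetModelState.h>
#include <pedsim_msgs/AgentStates.h>

ros::ServiceClient pause_client, unpause_client, set_state_client;

void agentsCallback(const pedsim_msgs::AgentStates::ConstPtr& msg)
{
  std_srvs::Empty empty;
  pause_client.call(empty);                      // pause Gazebo while agents are moved

  for (const auto& agent : msg->agent_states)
  {
    gazebo_msgs::SetModelState srv;
    srv.request.model_state.model_name = "person_" + std::to_string(agent.id);
    srv.request.model_state.pose = agent.pose;   // teleport the mesh to the pedsim pose
    srv.request.model_state.reference_frame = "world";
    set_state_client.call(srv);
  }

  unpause_client.call(empty);                    // resume the simulation
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "pedsim_gazebo_sync");
  ros::NodeHandle n;
  pause_client     = n.serviceClient<std_srvs::Empty>("/gazebo/pause_physics");
  unpause_client   = n.serviceClient<std_srvs::Empty>("/gazebo/unpause_physics");
  set_state_client = n.serviceClient<gazebo_msgs::SetModelState>("/gazebo/set_model_state");
  ros::Subscriber sub = n.subscribe("/pedsim_simulator/simulated_agents", 1, agentsCallback);
  ros::spin();
  return 0;
}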

If we enable physics, things slow down further. There is also an undesired effect where the human meshes bump into each other and start falling over (like a domino effect), which is why we currently don't enable it. I suspect that a better human mesh, with fewer polygons and a better collision model, could improve things.

@sfchik
Author

sfchik commented Jun 13, 2016

Hi tlind,

Thanks for the information. I am actually trying to generate a local costmap of the humans from the laser scan data coming out of Gazebo, so that I can perform local path planning. That means it is possible, right? It's just that physics has to be disabled and the simulation runs at a slower speed on a high-end computer. I also saw an interesting picture on the main page, "costmap.png"; I am wondering whether that shows an alternative way of generating a costmap using rviz, or whether it is also a result from Gazebo. Thank you. And yes, the SPENCER project is impressive.

@makokal
Member

makokal commented Jun 13, 2016

You can generate costmaps using the point clouds from the sensor module of pedsim_ros.
If you use move_base this is rather straightforward. Have a look at https://github.com/makokal/socially_normative_navigation/blob/master/snn_launchers/config/move_base_parameters/lobby/local.yaml for an example of how to do this.
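
For reference, a minimal sketch of the relevant local costmap parameters might look roughly like this (modeled loosely on the linked file; the topic, frame, and source names are placeholders, and the exact structure depends on your costmap_2d version):

local_costmap:
  global_frame: odom
  robot_base_frame: base_footprint
  rolling_window: true
  width: 6.0
  height: 6.0
  resolution: 0.05
  observation_sources: pedsim_cloud
  pedsim_cloud:
    data_type: PointCloud2
    topic: /pedsim_simulator/point_cloud   # placeholder topic name
    marking: true
    clearing: true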

If it must be a laser scan, then the Gazebo solution is the only option right now. Alternatively, one could simulate laser scans directly, since we know where all the agents and obstacles are (it is simply a matter of computing some ray intersections), but I am too tied up to work on this at the moment.
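
To sketch the idea: if each agent is approximated as a circle, a simulated scan only needs a ray-circle intersection per beam. Something like the following (all names here are purely illustrative, nothing of this exists in pedsim_ros):

// Rough sketch of simulating one laser beam against circular agents.
#include <cmath>
#include <vector>

struct Agent { double x, y; };

// Distance along a ray (origin ox,oy; direction angle theta) to the first
// agent circle it hits, or max_range if it hits nothing.
double castRay(double ox, double oy, double theta,
               const std::vector<Agent>& agents,
               double radius, double max_range)
{
  double dx = std::cos(theta), dy = std::sin(theta);
  double best = max_range;
  for (const auto& a : agents)
  {
    // Project the agent centre onto the ray and check the perpendicular distance.
    double cx = a.x - ox, cy = a.y - oy;
    double t = cx * dx + cy * dy;              // distance along the ray to the closest point
    if (t < 0.0) continue;                     // agent is behind the sensor
    double d2 = cx * cx + cy * cy - t * t;     // squared perpendicular distance
    if (d2 > radius * radius) continue;        // ray misses the circle
    double hit = t - std::sqrt(radius * radius - d2);
    if (hit >= 0.0 && hit < best) best = hit;
  }
  return best;
}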

@sfchik
Author

sfchik commented Jun 13, 2016

Hi makokal,

Thanks. Alright, I will explore the method using pointclouds and move_base. Everything is very helpful XD. Thanks again.

@silgon
Contributor

silgon commented Sep 4, 2016

Hello, sorry, I just saw this post. I don't know how the stop-spawn-start implementation is working at the moment. Some months ago I was dealing with a similar problem for a dataset. In our lab we used the ModelState publisher from the gazebo_msgs package.

In the following link (http://chronos.isir.upmc.fr/~islas/vids/iit_as_cokes.ogv) you can see the IIT dataset where people were projected as coke cans (we didn't continue the work because we were doing other research).
In short, we were using spawning, deleting, and moving tasks with the following base commands:

// assumes a ros::NodeHandle n and the gazebo_msgs message/service headers
// publisher to move people
ros::Publisher publisher_to_gazebo = n.advertise<gazebo_msgs::ModelState>("/gazebo/set_model_state", 1000);
// service to spawn people
ros::ServiceClient gazebo_spawn_service_client = n.serviceClient<gazebo_msgs::SpawnModel>("/gazebo/spawn_urdf_model");
// service to delete people
ros::ServiceClient gazebo_destroy_service_client = n.serviceClient<gazebo_msgs::DeleteModel>("/gazebo/delete_model");

So, for spawning, we include in a for loop some code like the following:

gazebo_msgs::SpawnModel srv_spawn_model_msg;

// 1. Set the model name
std::string model_name = "person_" + std::to_string(tracked_person_id);
srv_spawn_model_msg.request.model_name = model_name;

// 2. Set the model URDF (coke_model_xml holds the model's URDF as a string, loaded elsewhere)
srv_spawn_model_msg.request.model_xml = coke_model_xml;

// 3. Set the model's coordinates
srv_spawn_model_msg.request.initial_pose.position.x = current_message.tracks[tracked_person_counter].pose.pose.position.x;
srv_spawn_model_msg.request.initial_pose.position.y = current_message.tracks[tracked_person_counter].pose.pose.position.y;

// 4. Set the reference frame
srv_spawn_model_msg.request.reference_frame = "world";

// 5. Call the ROS service
gazebo_spawn_service_client.call(srv_spawn_model_msg);

After that, gazebo_msgs::ModelState is used to move each human.
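
For example, reusing the publisher and the naming convention from the spawning snippet above, the moving step could look roughly like this:

// move one person by publishing its new pose to /gazebo/set_model_state
gazebo_msgs::ModelState model_state_msg;
model_state_msg.model_name = "person_" + std::to_string(tracked_person_id);
model_state_msg.pose = current_message.tracks[tracked_person_counter].pose.pose;
model_state_msg.reference_frame = "world";
publisher_to_gazebo.publish(model_state_msg);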

I don't know whether gazebo_msgs::ModelState stops the simulation; @tlind probably has more info about that. Anyway, it could be an easy way to do it.

At the moment I don't have time to implement it. But maybe it can be useful for you =)

@makokal makokal closed this as completed Jun 18, 2020