
Avoid deleting XmlRpcClients while they are still in use on another thread #1013

Merged
dirk-thomas merged 2 commits into ros:lunar-devel on Apr 25, 2017

Conversation

afakihcpr
Contributor

@dirk-thomas @mikepurvis @efernandez @jasonimercer

Bug description:

Deleting XmlRpcClients in XMLRPCManager::shutdown() can lead to crashes, or to the process getting blocked, if a thread tries to communicate with the master while the clients are being deleted. The issue is in the following code at line 158 of xmlrpc_manager.cpp:

// kill the last few clients that were started in the shutdown process
  for (V_CachedXmlRpcClient::iterator i = clients_.begin();
       i != clients_.end(); ++i)
  {
    for (int wait_count = 0; i->in_use_ && wait_count < 10; wait_count++)
    {
      ROSCPP_LOG_DEBUG("waiting for xmlrpc connection to finish...");
      ros::WallDuration(0.01).sleep();
    }

    i->client_->close();
    delete i->client_;
  }

There are three potential crashes that could result from this:

  1. The clients_ vector is not protected while the clients are being deleted. Another thread can request a client while the loop is running and receive one that has already been deleted, causing a crash somewhere in XmlRpcClient::execute or leaving the node blocked in select.
  2. If clients are requested repeatedly, and keep being served by the same client, while shutdown() is sleeping and waiting for that client to become unused, the wait can time out with the client still in use, and the client is then deleted while in use.
  3. getXMLRPCClient can push_back a new client into clients_, triggering a reallocation that invalidates the iterators used by the delete loop and crashes the loop itself (see the minimal illustration after this list).
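
The iterator invalidation in point 3 is not specific to roscpp. The following self-contained C++ snippet (an illustration written for this description, not code from ros_comm) shows how a push_back during iteration can leave the loop holding a dangling iterator; in the real code the push_back would come from another thread calling getXMLRPCClient:

#include <iostream>
#include <vector>

int main()
{
  std::vector<int> clients;
  for (int i = 0; i < 4; ++i)
    clients.push_back(i);

  for (std::vector<int>::iterator it = clients.begin(); it != clients.end(); ++it)
  {
    if (*it == 2)
    {
      // Stand-in for another thread calling getXMLRPCClient() mid-loop:
      // if this push_back reallocates the vector, 'it' is now dangling.
      clients.push_back(99);
    }
    // Dereferencing a dangling iterator is undefined behavior; it may read
    // freed memory, crash, or appear to work.
    std::cout << *it << std::endl;
  }
  return 0;
}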

Code to reproduce:

#include "ros/ros.h"
#include <boost/thread.hpp>
#include <boost/date_time/posix_time/posix_time.hpp>
#include <boost/thread/thread.hpp>

void run()
{
  ros::NodeHandle nh("~");
  std::string s = "something";
  const int iterations = 100;
  while (true)
  {
    for (int i = 0; i < iterations; ++i)
    {
      nh.param("param", s, s);
    } 
    boost::this_thread::interruption_point();
  }
}

int main(int argc, char **argv)
{
  ros::init(argc, argv, "test_node");

  // Repeatedly query the parameter server (XML-RPC calls to the master) from a second thread.
  boost::thread t(run);
  boost::this_thread::sleep(boost::posix_time::milliseconds(1000));

  // Shut down while the worker thread is still issuing XML-RPC calls.
  ros::shutdown();

  t.interrupt();
  t.join();

  return 0;
}

Resolution

  • Acquire clients_mutex_ before deleting the clients
  • Remove the timed wait for clients to become unused
  • Only delete and erase clients that are not in use
  • Clients that are still in use delete themselves on release (see the sketch below)
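
Roughly, this might look like the sketch below. It is not the merged patch verbatim: clients_, clients_mutex_, in_use_, client_ and V_CachedXmlRpcClient come from the bug description above, while the releaseXMLRPCClient name and the shutting_down_ flag are assumptions about the surrounding XMLRPCManager code.

void XMLRPCManager::shutdown()
{
  // ... existing shutdown steps, including setting shutting_down_ ...

  // Delete and erase only the clients that are not in use, holding
  // clients_mutex_ so no other thread can grab a client or trigger a
  // reallocation of clients_ while this loop runs.
  boost::mutex::scoped_lock lock(clients_mutex_);
  for (V_CachedXmlRpcClient::iterator i = clients_.begin(); i != clients_.end();)
  {
    if (!i->in_use_)
    {
      i->client_->close();
      delete i->client_;
      i = clients_.erase(i);
    }
    else
    {
      ++i;  // still in use: the releasing thread cleans it up (below)
    }
  }
}

// Release path: once shutdown has started, a returned client deletes itself
// instead of being put back into the cache.
void XMLRPCManager::releaseXMLRPCClient(XmlRpc::XmlRpcClient* c)
{
  boost::mutex::scoped_lock lock(clients_mutex_);
  for (V_CachedXmlRpcClient::iterator i = clients_.begin(); i != clients_.end(); ++i)
  {
    if (c != i->client_)
    {
      continue;
    }
    if (shutting_down_)
    {
      i->client_->close();
      delete i->client_;
      clients_.erase(i);
    }
    else
    {
      i->in_use_ = false;
    }
    break;
  }
}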

Avoid deleting XmlRpcClient's while they are being still in use on another thread

   * Acquire clients_mutex_ before deleting the clients
   * Remove the timed wait for the clients to become not in use
   * Only delete and erase clients that are not in use
   * Clients that would still be in use would delete themselves on release
@efernandez
Contributor

👍

LGTM, and since this has also had some testing, I wouldn't make bigger changes.

I wonder how we could adapt the code that reproduces the problem into a unit test, and where it should live. ❓

@dirk-thomas
Member

With the current patch the shutdown method no longer sleeps (and therefore yields) if clients are still in use. Isn't that something that would make sense to retain?

@afakihcpr
Contributor Author

@dirk-thomas If it is desirable for shutdown to yield while clients are still in use, we could add the following after removing the clients that are not in use:

for (int wait_count = 0; !clients_.empty() && wait_count < 10; wait_count++)
{	
  ROSCPP_LOG_DEBUG("waiting for xmlrpc connection to finish...");
  ros::WallDuration(0.01).sleep();
}

This gives the clients that are still in use time to finish and remove themselves from clients_. Does that sound like an acceptable solution?
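
Combined with the mutex-protected cleanup sketched above, the end of shutdown() would then look roughly like this (a sketch under the same assumptions; the unlocked empty() check mirrors the loop proposed above and is only a best-effort wait):

{
  boost::mutex::scoped_lock lock(clients_mutex_);
  // ... delete and erase the clients that are not in use, as sketched earlier ...
}  // release the lock so that in-use clients can erase themselves on release

// Bounded wait for the remaining in-use clients to finish and remove
// themselves from clients_ (retains the original sleep/yield behavior).
for (int wait_count = 0; !clients_.empty() && wait_count < 10; wait_count++)
{
  ROSCPP_LOG_DEBUG("waiting for xmlrpc connection to finish...");
  ros::WallDuration(0.01).sleep();
}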

@dirk-thomas
Member

Yes, I think that would be good in order to match the original behavior more closely.

mikepurvis added a commit that referenced this pull request Mar 28, 2017
@mikepurvis
Member

We've been running this for a month; I think it's a safe merge.

@dirk-thomas
Member

@afakihcpr Thank you for the patch and for iterating on it. @mikepurvis Thanks for testing it.

dirk-thomas merged commit 64920f2 into ros:lunar-devel on Apr 25, 2017
sputnick1124 pushed a commit to sputnick1124/ros_comm that referenced this pull request Jul 30, 2017
…other thread (ros#1013)

* Avoid deleting XmlRpcClient's while they are being still in use on another thread

* Wait for clients that are in use to finish in XmlRpcManager::shutdown
dirk-thomas pushed a commit that referenced this pull request Oct 25, 2017
…other thread (#1013)

* Avoid deleting XmlRpcClient's while they are being still in use on another thread

* Wait for clients that are in use to finish in XmlRpcManager::shutdown
dantwinkler added a commit to 6RiverSystems/ros_comm that referenced this pull request Dec 28, 2017
* fix open mode on Windows

* Fix BZip2 inclusion

* Respect if/unless for roslaunch-check.

* fix rosmsg show from bag

* fix rosbag::View::iterator copy assignment operator (ros#1017)

* refactor test_rosbag_storage

* fix rosbag::View::iterator copy assignment operator

the compiler-generated copy assignment operator led to segfaults and memory leaks.

* Add subscriber to connection log messages. (ros#1023)

* ensure cwd exists

* Sleep in rospy wait_for_service even if exceptions raised

* Avoid deleting XmlRpcClient's while they are being still in use on another thread (ros#1013)

* Avoid deleting XmlRpcClient's while they are being still in use on another thread

* Wait for clients that are in use to finish in XmlRpcManager::shutdown

* Abort topic lookup on connection refused

In a multimaster environment where a topic has multiple publishers,
when a node drops out abruptly (its host is shut down), a single subscriber update on
that topic will cause multiple threads to be created (one for each host) in order to
resolve the topic location. This causes a thread leak: hosts that are turned off
will not respond, and when they come back online the XML-RPC URI has changed, causing a
connection-refused error at the socket layer.

This fix catches the connection-refused error and terminates the thread, on the understanding
that if the connection is refused, the rosnode cannot be reached now or ever. This effectively
prevents the thread leak.

Note: if the remote host where the rosnode is thought to be never comes back up,
then the thread will still be leaked, as the exception received is of a host-unreachable type.
This is intentional, to avoid abruptly terminating the thread in case of a temporary DNS failure.

* Fix bug in transport_tcp (ros#1050)

* Fix bug in transport_tcp

It assumes that the `connect` method of a non-blocking socket should return -1 and that `last_socket_error()` should return `ROS_SOCKETS_ASYNCHRONOUS_CONNECT_RETURN` (= `EINPROGRESS`).
But a non-blocking `connect` can return 0 when the TCP connection is to 127.0.0.1 (localhost).
[http://stackoverflow.com/questions/14027326/can-connect-return-0-with-non-blocing-socket](http://stackoverflow.com/questions/14027326/can-connect-return-0-with-non-blocing-socket)

* Modify code format

Modify code format

* Fix race condition that lead to miss first message (lunar) (ros#1058)

* Fix race condition that lead to miss first message

The callback queue waits for a callback from the "callOne" method.
When the internal queue is not empty, this method succeeds even if the id
info mapping does not contain the related callback's id.
In this case, the first callback (for a given id) is never called, since the
"addCallback" method first pushes the callback into the internal queue and *then*
sets the info mapping.

So the id info mapping has to be set before the callback is pushed into the internal
queue; otherwise the first message can be ignored.

* Added test for addCallback race condition

* ensure pid file is removed on exit

* Changed the check command output to be a little bit more specific.

* [roslaunch] Fix pid file removing condition

* [rospy] Add option to reset timer when time moved backwards (ros#1083)

* Add option to reset timer when time moved backwards

* refactor logic

* add missing mutex lock for publisher links

* [rospy] Improve rospy.logXXX_throttle performance

* Added logging output when `roslogging` cannot change permissions (ros#1068)

* Added logging output when `roslogging` cannot change permissions

Added better-differentiated logging output to `roslogging` so that
permission problems are made clear to the user. An accompanying test
has also been added.

* Removed testing, updated warning message and fixed formatting

Removed the testing, since the test folder should not be stored together with
the tests. Since testing group permissions requires intervention outside the
test harness, the test itself is also removed.

Updated the warning message to include `WARNING` and switched to `%`
formatting.

* [rostest] Check /clock publication neatly in publishtest (ros#973)

* Check /clock publication neatly in publishtest

- Use time.sleep because rospy.sleep(0.1) hangs if /clock is not published
- Add timeout for clock publication

* Add comment explaining use of time.sleep.

* Use logwarn_throttle so as not to flood the console

* A fix to a critical stack buffer overflow vulnerability which leads to direct control flow hijacking (ros#1092)

* A fix to a critical stack buffer overflow vulnerability which leads to control flow hijacking.

* Much simpler fix for the stack overflow bug

* only launch core nodes if master was launched by roslaunch

* Made copying rosbag::Bag a compiler error to prevent crashes and added a swap function instead (ros#1000)

* Made copying rosbag::Bag a compiler error to prevent crashes

* Added Bag::swap(Bag&) and rosbag::swap(Bag&, Bag&)

* Fixed bugs in Bag::swap

* Added tests for Bag::swap

* [roscpp] add missing header for writev().

After an update of gcc and glibc roscpp started to fail builds with the error:

    /home/rojkov/work/ros/build/tmp-glibc/work/i586-oe-linux/roscpp/1.11.21-r0/ros_comm-1.11.21/clients/roscpp/src/libros/transport/transport_udp.cpp:579:25: error: 'writev' was not declared in this scope
         ssize_t num_bytes = writev(sock_, iov, 2);
                             ^~~~~~

According to POSIX.1-2001 the function writev() is declared in sys/uio.h.

The patch includes the missing header for POSIX compliant systems.

* Add SteadyTimer (ros#1014)

* add SteadyTimer

based on SteadyTime (which uses the CLOCK_MONOTONIC).
This timer is not influenced by time jumps of the system time,
so ideal for things like periodic checks of timeout/heartbeat, etc...

* fix timer_manager to really use a steady clock when needed

This is a bit of a hack, since up to boost version 1.61 the time of the steady clock is always converted to system clock,
which is then used for the wait... which obviously doesn't help if we explicitly want the steady clock.

So as a workaround, include the backported versions of the boost condition variable if boost version is not recent enough.

* add tests for SteadyTimer

* [test] add -pthread to make some tests compile

not sure if this is only needed in my case on ROS Indigo...

* use SteadyTime for some timeouts

* add some checks to make sure the backported boost headers are included if needed

* specialize TimerManager threadFunc for SteadyTimer

to avoid the typeid check and make really sure the correct boost condition wait_until implementation is used

* Revert "[test] add -pthread to make some tests compile"

This reverts commit f62a3f2.

* set minimum version for rostime

* mostly spaces

* Close CLOSE_WAIT sockets by default (ros#1104)

* Add close_half_closed_sockets function

* Call close_half_closed_sockets in xmlrpcapi by default

* fix handling connections without indices

* fix rostopic printing long integers

* update tests to match stringify changes

* ignore headers with zero stamp in statistics

* Improves the stability of SteadyTimerHelper.

Due to scheduling / resource contention, `sleep`s and `wait_until`s may be delayed. The `SteadyTimerHelper` test class was not robust to these delays, which was likely the cause of a failing test (`multipleSteadyTimeCallbacks` in `timer_callbacks.cpp`:220).

* Improve steady timer tests (ros#1132)

* improve SteadyTimer tests

instead of checking when the callback was actually called,
check for when it was added to the callback queue.

This *should* make the test more reliable.

* more tolerant and unified timer tests

* xmlrpc_manager: use SteadyTime for timeout

* Replaced deprecated lz4 function call

* Removed deprecated dynamic exception specifications

* only use CLOCK_MONOTONIC if not apple

* Improved whitespace to fix g++ 7 warning

.../ros_comm/tools/rosbag_storage/src/view.cpp:249:5: warning: this ‘if’ clause does not guard... [-Wmisleading-indentation]
     if ((bag.getMode() & bagmode::Read) != bagmode::Read)
     ^~
.../ros_comm/tools/rosbag_storage/src/view.cpp:252:2: note: ...this statement, but the latter is misleadingly indented as if it were guarded by the ‘if’
  boost::function<bool(ConnectionInfo const*)> query = TrueQuery();
  ^~~~~

* Using poll() in favor of select() in the XmlRPCDispatcher (ros#833)

* Using poll() in favor of select() in the XmlRPCDispatcher

* ros#832: initialize pollfd event with 0. Added check for POLLHUP and POLLNVAL

* Fix syntax

* poll flags emulate select, verify requests, sync/init sources/fds

This commit makes sure that the poll flags emulate select (which it replaces).
It also double-checks event types to make sure they were requested (e.g. POLLERR might trigger both).
It keeps track of fd/src relationship through two parallel arrays, instead of an iterator / array hybrid.

* Fix rostopic hz and bw in Python 3 (ros#1126)

* string.atoi is not present in Python3, just use int(x)

* rostopic bw: set default value of window_size arg to -1 instead of None

* Check for window_size < 0 when constructing ROSTopicBandwidth object

* Revert "Check for window_size < 0 when constructing ROSTopicBandwidth object"

This reverts commit 86a2a29.

* Revert "rostopic bw: set default value of window_size arg to -1 instead of None"

This reverts commit 4c74df9.

* Check for argument != None before calling int(options.window_size)

* Properly check for options.window_size != None before converting to int

* Don't direct users to build rosout with rosmake. (ros#1140)

* Don't direct users to build rosout with rosmake.

* Eliminate special case for rosout being missing.

* use not deprecated console_bridge macros and undefine the deprecated ones

* Fix rosbag API for Python 3

Closes ros#1047

* Sort the output of rosnode info.

* Minor fixes for compatibility with both Python 2 & 3

* [bug] fixes ros#1158 (ros#1159)

* [bug] fixes ros#1158

XmlLoader and LoaderContext no longer share the param list with child 'node' contexts.
This was causing the creation of unintended private parameters when the tilde notation was used.

* added test cases for tilde param in launch

* added test cases for tilde param in python

* fixed tilde param issue for group tags

Issue ros#1158 manifested for group tags that appear before (but do not contain) node tags.

* added one more test case for issue ros#1158
used param tag to make sure we test the proposed fix

* Added negative tests for extra parameters

Some parameters should not be present at all.

* rosconsole: replaced 'while(0)' by 'while(false)'

* Change rospy.Rate hz type from int to float (ros#1177)

* Don't try to set unknown socket options (ros#1172)

* Don't try to set unknown socket options

These are not available on FreeBSD, for example

* individualize ifdefs

* fix whitespace

* rosnode: Return exit code 1 if there is an error. (ros#1178)

* Fixed an out of bounds read in void rosbag::View::iterator::increment() (ros#1191)

- Only triggered if reduce_overlap_ = true
- When iters_.size() == 1 and iters_.pop_back() gets called in the loop,
  the next loop condition check would read from iters_.back(), but
  iters_ would be empty by then.

* Test bzfile_ and lz4s_ before reading/writing/closing (ros#1183)

* Test bzfile_ before reading/writing/closing

* Test lz4stream before reading/writing

* More agile demux. (ros#1196)

* More agile demux.

Publishers in demux are no longer destroyed and recreated when switching, which results in much faster switching behavior. The previous version could take as much as 10 seconds to start publishing on the newly selected topic (all on localhost).

Please comment if you feel the default behavior should stay as it was and the new behavior should be triggered by a parameter.

* update style

* catch exception with `socket.TCP_INFO` on WSL (ros#1212)

* catch exception with `socket.TCP_INFO` on WSL

fixes issue ros#1207
this only catches socket error 92

* Update util.py

* Update util.py

* avoid unnecessary changes, change import order

* fix path check

* fix error message to mention what was actually tested

* fix roswtf test when rosdep is not initialized

* update changelogs

* 1.12.8

* backward compatibility with libconsole-bridge in Jessie

* update changelogs

* 1.12.9

* backward compatibility with libconsole-bridge in Jessie (take 2)

* update changelogs

* 1.12.10

* Revert "Replaced deprecated lz4 function call"

This reverts commit a31ddd0.

* update changelogs

* 1.12.11

* backward compatibility with libconsole-bridge in Jessie (take 3) (ros#1235)

* backward compatibility with libconsole-bridge in Jessie (take 3)

* copy-n-paste error

* remove fragile undef

* don't rely on the existence of the short versions of the macros; define the long ones with API calls instead

* remove empty line

* update changelogs

* 1.12.12

* place console_bridge macros definition in header and use it everywhere console_bridge is included (ros#1238)