
access violation on disconnect #293

Closed
otaras opened this issue Sep 25, 2013 · 10 comments
otaras commented Sep 25, 2013

When a TLS connection is used, the connection can already be destroyed while a read operation is still pending.

The root cause appears to be in websocketpp\transport\asio\connection.hpp, where the async_read_at_least method binds handle_async_read using a plain this pointer. There are many such binds that look potentially dangerous.
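For illustration, here is a minimal sketch (not websocketpp's actual code) of the difference between binding an Asio completion handler to a raw this pointer and binding it to a shared_ptr obtained from shared_from_this(), which keeps the object alive until the handler has run:

```cpp
// Minimal sketch, not websocketpp's actual implementation.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/enable_shared_from_this.hpp>

class connection : public boost::enable_shared_from_this<connection> {
public:
    explicit connection(boost::asio::io_service & ios) : m_socket(ios) {}

    void async_read_some_unsafe() {
        // Dangerous: if the connection is destroyed before the read
        // completes, handle_read runs on a dangling `this`.
        m_socket.async_read_some(
            boost::asio::buffer(m_buf),
            boost::bind(&connection::handle_read, this,
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

    void async_read_some_safe() {
        // Safer: the bound shared_ptr keeps the connection alive until
        // the handler has been invoked (or abandoned by the io_service).
        m_socket.async_read_some(
            boost::asio::buffer(m_buf),
            boost::bind(&connection::handle_read, shared_from_this(),
                        boost::asio::placeholders::error,
                        boost::asio::placeholders::bytes_transferred));
    }

private:
    void handle_read(boost::system::error_code const & ec, std::size_t bytes) {
        // process data or handle the error ...
    }

    boost::asio::ip::tcp::socket m_socket;
    char m_buf[512];
};
```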


zaphoyd commented Sep 26, 2013

I've pushed an update to master that switches all uses of raw pointers sent into asio async calls to shared pointers. Can you confirm that this fixes any issues you are seeing?


otaras commented Sep 27, 2013

Hello Peter,
I spent some time testing, since it was not an easy issue to catch, and so far there have been no more problems.

Thanks a lot,
Taras



otaras commented Oct 2, 2013

Hello Peter,

Unfortunately, we experienced another crash.

The application crashes if the client forcibly closes the connection while reading and writing are both active (system error 10054, "An existing connection was forcibly closed by the remote host").

The reason is that the async read and async write operations fail at the same time and both try to shut down the connection simultaneously, while the void connection::terminate(const lib::error_code & ec) method is not thread safe.
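
For illustration only (this is not websocketpp's actual terminate() implementation), one way to make such a shared teardown path safe against two failing handlers racing into it is to guard it with a once-flag:

```cpp
// Illustrative sketch only, not websocketpp's code: serializing a shared
// teardown path so that only the first failing handler performs the shutdown.
#include <boost/system/error_code.hpp>
#include <boost/thread/mutex.hpp>

class connection {
public:
    connection() : m_terminating(false) {}

    // Called from both the read-failure and the write-failure paths,
    // possibly from different io_service threads at the same time.
    void terminate(boost::system::error_code const & ec) {
        {
            boost::mutex::scoped_lock lock(m_terminate_mutex);
            if (m_terminating) {
                return; // teardown already in progress; do nothing
            }
            m_terminating = true;
        }
        shutdown_transport(ec);
    }

private:
    void shutdown_transport(boost::system::error_code const & ec) {
        // initiate the (TLS) async shutdown exactly once ...
    }

    boost::mutex m_terminate_mutex;
    bool m_terminating;
};
```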

OS: Windows Server 2008 R2

Logs:
...
10/01/2013 10:38:56.814433 ThreadID=5520 websocketpp [info]: asio async_read_at_least error: system:10054 (An existing connection was forcibly closed by the remote host)
...
10/01/2013 10:38:56.814982 ThreadID=7416 websocketpp [info]: asio async_write error: system:10054 (An existing connection was forcibly closed by the remote host)
...

Exception: 0xC0000005: Access violation writing location 0x0000000000000024

Call stack:

msvcr110.dll!memcpy() Line 358 Unknown
libeay32.dll!bio_write(bio_st * bio=0x0000000000000045, const char * buf=0x0000000000000000, int num_=114439632) Line 413 C
libeay32.dll!BIO_write(bio_st * b=0x00000000015df370, const void * in=0x000000000577b930, int inl=91732272) Line 249 C
ssleay32.dll!ssl3_write_pending(ssl_st * s=0x00000000015df370, int type=0, const unsigned char * buf=0x00000000015df370, unsigned int len=91732272) Line 885 C
ssleay32.dll!ssl3_dispatch_alert(ssl_st * s=0x00000000015df370) Line 1495 C
ssleay32.dll!ssl3_shutdown(ssl_st * s=0x0000000000000000) Line 4112 C
webapisrv.exe!boost::asio::ssl::detail::engine::do_shutdown(void * __formal=0x0000000000000028, unsigned __int64 __formal=109941576) Line 293 C++
webapisrv.exe!boost::asio::ssl::detail::engine::perform(int (void *, unsigned __int64) * op=0x0000000001fde4e0, void * data=0x0000000001fde530, unsigned __int64 length=22744256, boost::system::error_code & ec={...}, unsigned __int64 * bytes_transferred=0x0000000000000000) Line 231 C++
webapisrv.exe!boost::asio::ssl::detail::io_op<boost::asio::basic_stream_socket<boost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp> >,boost::asio::ssl::detail::shutdown_op,boost::function<void __cdecl(boost::system::error_code const & __ptr64)> >::operator()(boost::system::error_code ec, unsigned __int64 bytes_transferred=22744256, int start=0) Line 136 C++
webapisrv.exe!boost::asio::ssl::detail::async_io<boost::asio::basic_stream_socket<boost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp> >,boost::asio::ssl::detail::shutdown_op,boost::function<void __cdecl(boost::system::error_code const & __ptr64)> >(boost::asio::basic_stream_socket<boost::asio::ip::tcp,boost::asio::stream_socket_service<boost::asio::ip::tcp> > & next_layer={...}, boost::asio::ssl::detail::stream_core & core={...}, const boost::asio::ssl::detail::shutdown_op & op={...}, boost::function<void __cdecl(boost::system::error_code const &)> handler={...}) Line 322 C++
webapisrv.exe!websocketpp::transport::asio::connection<webapi::WebSocketConfig::transport_config>::async_shutdown(boost::function<void __cdecl(boost::system::error_code const &)> callback={...}) Line 819 C++
webapisrv.exe!websocketpp::connection<webapi::WebSocketConfig>::terminate(const boost::system::error_code & ec={...}) Line 1475 C++
webapisrv.exe!websocketpp::connection<webapi::WebSocketConfig>::handle_read_frame(const boost::system::error_code & ec={...}, unsigned __int64 bytes_transferred=33420720) Line 854 C++
webapisrv.exe!websocketpp::transport::asio::connection<webapi::WebSocketConfig::transport_config>::handle_async_read(boost::function<void __cdecl(boost::system::error_code const &,unsigned __int64)> handler={...}, const boost::system::error_code & ec={...}, unsigned __int64 bytes_transferred=33420896) Line 711 C++
...

Thanks,
Taras



zaphoyd commented Oct 3, 2013

Can you describe your thread setup in a little more detail? There should be no way for the async read and async write handlers to be running concurrently.


otaras commented Oct 3, 2013

An external io_service is used to share a common application thread pool.
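
Roughly, the setup looks like the following minimal sketch (the non-TLS config and all names here are illustrative, not our production code; the TLS variant additionally needs a tls_init handler):

```cpp
// Minimal sketch of sharing one externally owned io_service across a
// thread pool; the config/endpoint names are illustrative.
#include <boost/asio.hpp>
#include <boost/bind.hpp>
#include <boost/thread.hpp>
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio> server;

// Helper to avoid the ambiguity of &io_service::run with boost::bind.
void run_io_service(boost::asio::io_service * ios) { ios->run(); }

int main() {
    boost::asio::io_service ios;   // shared, application-owned io_service

    server endpoint;
    endpoint.init_asio(&ios);      // use the external io_service
    endpoint.listen(9002);
    endpoint.start_accept();

    // Run the same io_service from several threads (the thread pool).
    boost::thread_group pool;
    for (std::size_t i = 0; i < boost::thread::hardware_concurrency(); ++i) {
        pool.create_thread(boost::bind(&run_io_service, &ios));
    }
    pool.join_all();
    return 0;
}
```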

Could you please describe what should prevent both handlers from waiting in the io_service simultaneously? I did not find any usage of strands.

Taras


zaphoyd commented Oct 3, 2013

The present asio transport isn't set up for use with a thread-pool-based io_service. At present I recommend multiple endpoints, each with a single-threaded io_service. I do intend to ship an asio transport update or variant that works with thread pools, but I need to do some more testing before I feel comfortable publishing it. More information on the current state of thread safety can be found at http://www.zaphoyd.com/websocketpp/manual/reference/thread-safety


otaras commented Oct 3, 2013

Thanks for the information.

Unfortunately, this seriously limits the server's ability to process many connections at once when it listens on a single port and serves hundreds of connections.

One more thing I do differently from the samples is that I set callbacks directly on connections rather than on an endpoint, to bypass the need for a single container of connection handlers as a potential bottleneck; but using a single thread per endpoint eliminates this benefit as well.

Please consider using a strand object (http://www.boost.org/doc/libs/1_53_0/doc/html/boost_asio/reference/io_service__strand.html) per connection to avoid concurrency between operations on the same connection (and the need for extra mutex locks) while still being able to process different connections in parallel; a minimal sketch follows.
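
For illustration (not websocketpp's code), a sketch of wrapping a connection's read and write completion handlers in a per-connection strand so they never run concurrently, even when io_service::run() is called from several threads; lifetime management (e.g. shared_from_this) is omitted for brevity:

```cpp
// Illustrative sketch: a per-connection strand serializes this connection's
// handlers while other connections still proceed in parallel.
#include <boost/asio.hpp>
#include <boost/bind.hpp>

class connection {
public:
    explicit connection(boost::asio::io_service & ios)
        : m_strand(ios), m_socket(ios) {}

    void start_read() {
        m_socket.async_read_some(
            boost::asio::buffer(m_read_buf),
            m_strand.wrap(boost::bind(&connection::handle_read, this,
                                      boost::asio::placeholders::error,
                                      boost::asio::placeholders::bytes_transferred)));
    }

    void start_write(std::size_t n) {
        boost::asio::async_write(
            m_socket, boost::asio::buffer(m_write_buf, n),
            m_strand.wrap(boost::bind(&connection::handle_write, this,
                                      boost::asio::placeholders::error,
                                      boost::asio::placeholders::bytes_transferred)));
    }

private:
    // Both handlers go through m_strand, so at most one of them runs at a
    // time for this connection, even on a multithreaded io_service.
    void handle_read(boost::system::error_code const & ec, std::size_t bytes) {}
    void handle_write(boost::system::error_code const & ec, std::size_t bytes) {}

    boost::asio::io_service::strand m_strand;
    boost::asio::ip::tcp::socket m_socket;
    char m_read_buf[512];
    char m_write_buf[512];
};
```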

Thanks,
Taras



zaphoyd commented Oct 6, 2013

I've pushed an update that should eliminate some of the issues related to using io_service thread pools. Let me know if it helps with the crashing with multithreaded io_services.

Please note a few things:

While this means that it becomes safe to use multiple run methods on an io_service, it does not mean that connections themselves are thread safe. It is still absolutely not safe to work with connection_ptr objects or call methods directly on connections outside of that connection's handlers. To perform an action on a connection from another thread (for example, send a message from a worker thread, or send a message to connection B from a handler for connection A) you must use the wrapper methods that endpoints provide, such as endpoint::send. The overhead from the endpoint wrapper methods is very small. These are constant time lookups with very minimal locking.
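
For example, a minimal sketch (types and names are illustrative) of sending to a connection from a worker thread via the endpoint wrapper rather than touching the connection directly:

```cpp
// Sketch: sending to a connection from a worker thread via the endpoint
// wrapper (server type and variable names are illustrative).
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>
#include <string>

typedef websocketpp::server<websocketpp::config::asio> server;

void worker_thread_send(server & endpoint,
                        websocketpp::connection_hdl hdl,
                        std::string const & payload) {
    websocketpp::lib::error_code ec;

    // Safe from any thread: the endpoint looks up the connection by its
    // handle and queues the send for it.
    endpoint.send(hdl, payload, websocketpp::frame::opcode::text, ec);
    if (ec) {
        // the connection may already be gone; handle/log the error
    }

    // Unsafe from another thread: obtaining a connection_ptr and calling
    // methods on it directly outside that connection's own handlers.
    // server::connection_ptr con = endpoint.get_con_from_hdl(hdl);
    // con->send(payload);   // do not do this from a worker thread
}
```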

Using an io_service thread pool is not a substitute for pushing non-network related work into its own processing thread. If you are doing substantial work in handlers, especially work that is not algorithmically constant in time, io_service thread pools will not prevent unfair scheduling and blocked servers.

Regarding setting callbacks directly on connections... this is already the default behavior of the library. When an endpoint creates a connection it initializes the connection settings with copies by value (including all handlers). Once a connection is created it has no further direct interactions with its endpoint. There are no shared data structures within the endpoint that connections compete for access to.
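
As an illustration of that default behavior, a hedged sketch of setting a per-connection handler from within an open handler (which runs in that connection's own handler context); on_open itself would be registered once on the endpoint via set_open_handler, and all names here are illustrative:

```cpp
// Sketch: a per-connection message handler set from the open handler.
#include <websocketpp/config/asio_no_tls.hpp>
#include <websocketpp/server.hpp>

typedef websocketpp::server<websocketpp::config::asio> server;

void on_open(server * endpoint, websocketpp::connection_hdl hdl) {
    server::connection_ptr con = endpoint->get_con_from_hdl(hdl);

    // The handler is stored on the connection itself; after this point the
    // connection does not consult the endpoint's handler set.
    con->set_message_handler(
        [](websocketpp::connection_hdl, server::message_ptr msg) {
            // process msg->get_payload() ...
        });
}
```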


otaras commented Oct 10, 2013

Thank you Peter, this was really, really helpful.

The reason I use the connection directly for callbacks is less about timing and more about removing bottlenecks when several connections are active simultaneously (using the endpoint means creating a single application dispatch queue with a global lock, while attaching handlers directly to connections avoids that dispatching). I'm aware that setting callbacks on a connection is not thread safe, but there are no concurrent attempts to do so. At the same time, the connection looks safe from the message-sending perspective: there is a mutex that protects the send queue, and the endpoint does not hold locks or do look-ups, it only converts the connection handle (weak ptr) to a connection, so sending via the endpoint adds essentially no overhead.

Using a single io_service thread pool for the main processing routine avoids unnecessary trips into the OS scheduler (if the number of threads equals the number of cores) while still being able to utilize the whole CPU. Having two thread pools, each sized to half the CPU cores, cannot fully utilize the CPU if only one pool is busy at the moment; having two pools each sized to the number of CPU cores creates contention if both pools are busy. Of course, the threads must not block on any synchronous IO operations, or a core is simply lost for that time. I completely agree that doing a lot of work directly in handlers is not a good approach, but this is handled by scheduling async operations (either to the same pool or to another one).

Thanks again,
Taras



zaphoyd commented Feb 22, 2016

Releases between this point and 0.7 have greatly improved the safety of Asio's thread pool mode. Going to close this now. If there are new specific problems with Asio thread pool mode, please open them as new issues.

zaphoyd closed this as completed Feb 22, 2016