
Uncaught exception in connection::handle_read_frame #110

Closed

megabyte1024 opened this issue Apr 24, 2012 · 10 comments

@megabyte1024

The handle_read_frame method has the following code:

try {
  m_processor->consume(s);
  ....
} catch (const processor::exception& e) {
  ....
}

It catches only processor::exception, but the consume method is able to throw a websocketpp::exception class instance in the connection::get_data_message method, which is called by the hybi::process_data_header method.
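
For illustration, here is a small self-contained toy (the namespaces and class names only mimic the real ones; this is not WebSocket++ code) showing why the snippet above misses the second exception type: a catch clause matches only its own type and types derived from it, so a sibling exception escapes the handler unless a broader catch is added.

#include <stdexcept>
#include <string>
#include <iostream>

namespace processor {
    struct exception : std::runtime_error {
        explicit exception(const std::string& msg) : std::runtime_error(msg) {}
    };
}
namespace websocketpp {
    struct exception : std::runtime_error {
        explicit exception(const std::string& msg) : std::runtime_error(msg) {}
    };
}

// Stand-in for m_processor->consume(s): may throw either exception type.
void consume(bool protocol_error) {
    if (protocol_error) {
        throw processor::exception("bad frame");
    }
    throw websocketpp::exception("endpoint was destroyed");
}

int main() {
    try {
        try {
            consume(false); // throws websocketpp::exception
        } catch (const processor::exception& e) {
            // never reached for websocketpp::exception
            std::cout << "processor handler: " << e.what() << std::endl;
        }
    } catch (const websocketpp::exception& e) {
        // only reached because of this extra, broader handler
        std::cout << "websocketpp handler: " << e.what() << std::endl;
    }
    return 0;
}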

I don't know whether this is a bug or a feature. If it is a feature, how is it possible to catch this exception inside a multi-threaded client? Multi-threaded servers probably have a similar problem.

@zaphoyd
Owner

zaphoyd commented Apr 24, 2012

I haven't decided if this is a bug or a feature. =) I haven't set processors up to catch and rethrow websocketpp::exceptions because I am hoping to eliminate all code that throws websocketpp::exceptions from processors entirely. Right now this would simply end up passing the exception back to handle_read_frame which would just rethrow it as a websocketpp::exception again.

As far as the behavior for multi-threaded endpoints where WebSocket++ is managing the thread pool... Ideally I'd like to use C++11 thread exception propagation on compilers that support it (Clang 2.9+, g++ 4.4+, VCPP2010+), and I haven't quite decided how much I want to do for other setups.
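
As a minimal sketch of what that C++11 propagation mechanism looks like (standalone code, not tied to WebSocket++): an exception thrown on a worker thread is captured with std::exception_ptr and rethrown on the thread that wants to handle it.

#include <exception>
#include <stdexcept>
#include <thread>
#include <iostream>

int main() {
    std::exception_ptr eptr;

    std::thread worker([&eptr]() {
        try {
            throw std::runtime_error("error from worker thread");
        } catch (...) {
            eptr = std::current_exception(); // capture for the other thread
        }
    });
    worker.join();

    try {
        if (eptr) {
            std::rethrow_exception(eptr); // rethrow on the calling thread
        }
    } catch (const std::exception& e) {
        std::cout << "caught: " << e.what() << std::endl;
    }
    return 0;
}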

@megabyte1024
Author

Understood. For now I have worked around this bug-or-feature ;) with the following code:

void process_data_header() {
    ...
    try {
        m_data_message = m_connection.get_data_message();
    } catch (...) {
        m_data_message.reset();
    }
    ...
}

So I think this issue can be closed. It would be nice if this or a similar temporary fix were included in master.

@zaphoyd
Owner

zaphoyd commented Apr 24, 2012

Do you know which specific exception you are getting? Is it "endpoint was destroyed", "invalid state", or "no outgoing messages available"?

@megabyte1024
Author

Yes, for sure. It's "endpoint was destroyed". My code has the ability to reconnect to a server with a timeout. If the client can't connect to the server within the timeout, it calls the connection::close method, which posts a call to the terminate method to the queue, because the connection is in the CONNECTING state. Inside terminate, the m_detached flag is set to true. In parallel the connection is established (the state is OPENED) and the client receives a portion of data from the server. As soon as the connection is detached, the exception is thrown.
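
For context, a rough standalone sketch of the connect-timeout side of that pattern (plain boost::asio with a hypothetical handler name, not the WebSocket++ API): a deadline timer fires if the connection is not established in time, and its handler is where connection::close would be called in my code.

#include <boost/asio.hpp>
#include <iostream>

// Hypothetical timeout handler; the timer is cancelled if the connection opens in time.
void on_connect_timeout(const boost::system::error_code& ec) {
    if (ec == boost::asio::error::operation_aborted) {
        return; // timer was cancelled: the connection was established in time
    }
    // Still CONNECTING after the timeout: this is where connection::close()
    // would be called, which in 0.2 posts terminate() and detaches the
    // connection, so a later get_data_message() throws "endpoint was destroyed".
    std::cout << "connect timeout expired" << std::endl;
}

int main() {
    boost::asio::io_service io;
    boost::asio::deadline_timer timer(io);
    timer.expires_from_now(boost::posix_time::seconds(5));
    timer.async_wait(&on_connect_timeout);
    io.run();
    return 0;
}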

A short question not related to the topic: are there plans to implement the ability to send data in multiple frames? It would be a really handy feature, because in that case it is really easy to implement timeouts that depend on frame size and channel bandwidth.

@zaphoyd
Owner

zaphoyd commented Apr 25, 2012

That is what I suspected. The connection detachment pattern turned out to interact very poorly with the rest of the system. I am scrapping it in 0.3 in favor of making connection 100% independent of endpoint. This should eliminate all situations where the "endpoint was destroyed" exception is thrown.

Regarding multi-frame. I am going to split that out into another issue, but first I want one clarification as there are several types of multi-frame. Are you interested in sending a fixed size fully buffered message in multiple frames or sending an unknown size partially buffered message out in frames as data is received?
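
To make the two variants concrete, here is an illustrative, standalone sketch of the first one (assumed struct and function names, not the WebSocket++ interface): a fully buffered, fixed-size payload is split into several frames, where only the last frame carries the FIN bit and the continuation frames use opcode 0x0.

#include <cstddef>
#include <string>
#include <vector>
#include <iostream>

struct frame {
    bool fin;               // set only on the final frame of the message
    unsigned char opcode;   // 0x1 = text on the first frame, 0x0 = continuation
    std::string payload;
};

std::vector<frame> fragment(const std::string& message, std::size_t frame_size) {
    std::vector<frame> frames;
    for (std::size_t off = 0; off < message.size(); off += frame_size) {
        frame f;
        f.payload = message.substr(off, frame_size);
        f.opcode = frames.empty() ? 0x1 : 0x0;
        f.fin = (off + frame_size >= message.size());
        frames.push_back(f);
    }
    return frames;
}

int main() {
    std::vector<frame> frames = fragment(std::string(10, 'x'), 4); // 3 frames
    for (std::size_t i = 0; i < frames.size(); ++i) {
        std::cout << "frame " << i << ": fin=" << frames[i].fin
                  << " opcode=" << int(frames[i].opcode)
                  << " len=" << frames[i].payload.size() << std::endl;
    }
    return 0;
}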

@megabyte1024
Author

When can version 0.3 be expected?

Let's make a separate thread for the multi-frame issue. I have fixed-size, fully buffered data, i.e. the first case.

@zaphoyd
Owner

zaphoyd commented Apr 25, 2012

I've made a separate issue #111 for fixed size fully buffered outgoing fragmentation. #35 already exists for unknown size partially buffered.

0.3 is still in the experimental stages right now. The major changes are a redesign of the core policy and locking systems. The original design of 0.2 did not lend itself well to the addition of locking. It required complicated recursive locks that are hard to understand and hard to debug. 0.3 has been re-architected to use a much simpler non-recursive locking system that should fail less spectacularly and be easier to maintain. I also have access to a VCPP2010 compiler now and am re-working the policy templates to be warning-free on VCPP2010.

The 0.3 core changes will have a few other nice side-effects. In particular it will now be possible to use the C++11 STL in place of boost for most utility functionality (shared pointers, bind, date/time, random, regex, threads, mutexes, etc). Basically everything but ASIO (where there is no STL option). I have also been playing with an extra-experimental ASIO-free branch of 0.3 that can decode and process websocket data from an arbitrary istream.
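
As a small illustration of the kind of substitution described above (standalone example, not WebSocket++ code): on a C++11 compiler the std:: utilities can stand in for their boost:: counterparts.

#include <memory>
#include <functional>
#include <mutex>
#include <thread>
#include <string>
#include <iostream>

void greet(const std::string& who) {
    std::cout << "hello, " << who << std::endl;
}

int main() {
    std::shared_ptr<int> counter = std::make_shared<int>(0);        // was boost::shared_ptr
    std::function<void()> task = std::bind(&greet, "websocket++");  // was boost::bind

    std::mutex m;                                                   // was boost::mutex
    std::thread t([&]() {                                           // was boost::thread
        std::lock_guard<std::mutex> lock(m);
        ++*counter;
        task();
    });
    t.join();

    std::cout << "counter = " << *counter << std::endl;
    return 0;
}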

Hopefully it will take no more than a month or two to complete and merge; it may be faster depending on unrelated work and class schedules. In the meantime I will keep fixing bugs and refining functionality in the parts of 0.2 that aren't affected by the 0.3 changes or are easily portable. The handler and connection interface for 0.3 shouldn't change at all; the endpoint interface may change slightly to accommodate some of the issues brought up regarding the monolithic listen functions and the desire to use external io_service objects.

I wish GitHub had a blog or news feed or something... Would a WS++ development blog apart from random comments-in-issues be useful?

@megabyte1024
Author

Thank you very much for your effort!

> I've made a separate issue #111 for fixed size fully buffered outgoing fragmentation. #35 already exists for unknown size partially buffered.

I have already posted an answer.

> Would a WS++ development blog apart from random comments-in-issues be useful?

I think it would be useful if it had an email subscription for news.

Will version 0.3 also support multi-threaded clients and servers?

@zaphoyd
Owner

zaphoyd commented Apr 25, 2012

Version 0.3 will support all of the threading modes that 0.2 does (single threaded async client/server, single threaded async client/server with user generated worker threads, and thread pool server). 0.2 technically supports a thread-pool client but has no end user accessible interface for it. If you are feeling brave you can play with it by changing num_threads=1 to another value in client.hpp client::run method. It is nearly identical to server thread-pool.
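
As a rough, assumption-laden sketch of what that thread-pool mode boils down to (plain boost::asio, not the client.hpp code itself): several threads all call run() on one shared io_service, which is approximately what raising num_threads enables, and it is why handlers then need to be thread-safe.

#include <boost/asio.hpp>
#include <boost/thread.hpp>
#include <cstddef>
#include <iostream>

int main() {
    boost::asio::io_service io;

    // Queue one handler up front; in a real endpoint the pending async socket
    // operations are what keep run() busy.
    io.post([]() { std::cout << "handler ran on a pool thread" << std::endl; });

    std::size_t num_threads = 4; // analogous to num_threads in client::run
    boost::thread_group pool;
    for (std::size_t i = 0; i < num_threads; ++i) {
        // Each thread services the same io_service, forming the pool.
        pool.create_thread([&io]() { io.run(); });
    }
    pool.join_all(); // each run() returns once the queued work is done
    return 0;
}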

Version 0.3 will have an interface for client thread-pool. I also plan to offer some additional concurrency policies that will allow end users to customize this a bit further. In particular a "no locking" policy that makes the locks no-ops for reduced overhead in situations where you can guarantee single threaded operation (single threaded servers and multi-process fork servers). I am experimenting with a thread per connection policy as well.
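
A hedged sketch of that "concurrency policy" idea (my own illustration, not the actual 0.3 interface): the same class can be instantiated with a real lock or with a no-op lock, so single-threaded builds pay no locking overhead.

#include <mutex>
#include <iostream>

// Policy with real locking.
struct basic_concurrency {
    typedef std::mutex mutex_type;
    typedef std::lock_guard<std::mutex> scoped_lock;
};

// "No locking" policy: the lock types exist but do nothing.
struct none_concurrency {
    struct mutex_type {};
    struct scoped_lock {
        explicit scoped_lock(mutex_type&) {}
    };
};

template <typename concurrency>
class counter {
public:
    counter() : m_value(0) {}
    void increment() {
        typename concurrency::scoped_lock lock(m_mutex);
        ++m_value;
    }
    long value() const { return m_value; }
private:
    typename concurrency::mutex_type m_mutex;
    long m_value;
};

int main() {
    counter<basic_concurrency> locked;   // thread-safe increments
    counter<none_concurrency>  unlocked; // zero locking overhead
    locked.increment();
    unlocked.increment();
    std::cout << locked.value() << " " << unlocked.value() << std::endl;
    return 0;
}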

@megabyte1024
Author

> If you are feeling brave you can play with it by changing num_threads=1 to another value in client.hpp client::run method. It is nearly identical to server thread-pool.

Already did. :) I declared num_threads as an input parameter of the run method. So far my code works fine, but it is still in the debug phase.
