
Non-blocking Future #9

Closed
guersam opened this issue Aug 31, 2013 · 8 comments

Comments

@guersam
Contributor

guersam commented Aug 31, 2013

Hi, I'm looking for an ES client and your DSL-based approach looks very promising. Thanks in advance :)

One thing that worries me is the current execute implementation.

https://github.com/sksamuel/elastic4s/blob/master/src/main/scala/com/sksamuel/elastic4s/Client.scala#L39-L41

It takes a per-client ExecutionContext and returns a Future by wrapping the blocking actionGet() call in a future {} block. That doesn't block the calling thread, but it does block the dedicated one, so it might reduce overall throughput and become a bottleneck.
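A minimal, self-contained sketch of the problem, using only the standard library (the 100 ms Thread.sleep is a stand-in for the blocking actionGet() call, and the single-thread pool stands in for a small client-wide ExecutionContext):

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, ExecutionContext, Future}
import scala.concurrent.duration._

object BlockingDemo extends App {
  // A small "client-wide" pool: one thread, standing in for the
  // per-client ExecutionContext supplied to the elastic4s client.
  private val service = Executors.newFixedThreadPool(1)
  implicit val ec: ExecutionContext =
    ExecutionContext.fromExecutorService(service)

  // Hypothetical workload standing in for the blocking actionGet() call.
  def blockingRequest(): Int = { Thread.sleep(100); 42 }

  val start = System.nanoTime()
  // Neither Future blocks the caller, but each one occupies a pool
  // thread for the full duration of the blocking call.
  val both = Future.sequence(
    Seq(Future(blockingRequest()), Future(blockingRequest())))
  Await.result(both, 5.seconds)
  val elapsedMs = (System.nanoTime() - start) / 1000000
  // With one thread, the two 100 ms calls run back to back, so the
  // total time is at least 200 ms even though the caller never blocked.
  println(s"elapsed: $elapsedMs ms")
  service.shutdown()
}
```

The caller stays responsive, but throughput is capped by the pool size rather than by how many requests ES could actually have in flight.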

My suggestion is to attach an ActionListener[T] and return a Future from our own Promise, for example (not tested, just a concept):

import scala.concurrent.{Future, Promise}

def execute[A <: ActionRequest, B <: ActionResponse, C <: ActionRequestBuilder[A, B, C]](action: Action[A, B, C], req: A): Future[B] = {
  val p = Promise[B]()
  // Register a callback on the ES client's own thread pool instead of
  // parking one of our threads on a blocking actionGet() call.
  client.execute(action, req, new ActionListener[B] {
    def onResponse(response: B) = p.success(response)
    def onFailure(e: Throwable) = p.failure(e)
  })
  p.future
}

It would make the API require an ExecutionContext for every request. If you accept this change, I'll open a PR when time allows.
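The callback-to-Promise bridge generalizes to any listener-style API. A runnable sketch using only the standard library, where the hypothetical FakeClient and Listener stand in for the ES TransportClient and ActionListener:

```scala
import java.util.concurrent.Executors
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.duration._

object PromiseBridge extends App {
  // Hypothetical callback-style client, standing in for the ES client.
  trait Listener[B] {
    def onResponse(response: B): Unit
    def onFailure(e: Throwable): Unit
  }
  object FakeClient {
    private val pool = Executors.newFixedThreadPool(2) // the client's own pool
    def execute(req: String, listener: Listener[String]): Unit =
      pool.execute(new Runnable {
        def run(): Unit = listener.onResponse(s"response to $req")
      })
    def shutdown(): Unit = pool.shutdown()
  }

  // The bridge: no thread of ours waits; the client's pool completes
  // the Promise, and the caller just gets a Future.
  def execute(req: String): Future[String] = {
    val p = Promise[String]()
    FakeClient.execute(req, new Listener[String] {
      def onResponse(response: String) = p.success(response)
      def onFailure(e: Throwable) = p.failure(e)
    })
    p.future
  }

  val result = Await.result(execute("search"), 5.seconds)
  println(result)
  FakeClient.shutdown()
}
```

Note that execute itself needs no ExecutionContext; callers only need one when they transform the returned Future.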

@sksamuel
Collaborator

sksamuel commented Sep 1, 2013

I guess the idea is that you can pass in your own execution context if you think it might become a bottleneck.

If we move to using a Promise that we complete using an action listener, then the bottleneck moves to the underlying elasticsearch client executor, right? If so, what's the advantage?

I'm not against the idea of doing it this way, I just need to better understand the advantage.

@guersam
Contributor Author

guersam commented Sep 2, 2013

The ES Java client manages its own thread pool. If we make blocking calls on a single client-wide ExecutionContext, the intermediate thread is blocked and the ES thread pool won't be fully utilized.

@sksamuel
Collaborator

sksamuel commented Sep 2, 2013

I see that we are thinking about slightly different cases.
I'm considering the case where the client is not a data node (using a TransportClient, for example); there the client has its own thread pool, which is no different from a thread pool you can supply to the e4s client. I think you're considering the case where the client is also a data node, in which case using ES's own thread pool might be better.
Perhaps we can solve this by adding the callback methods you suggest, while keeping the "internal thread pool" ones as well? What do you think?

@guersam
Contributor Author

guersam commented Sep 2, 2013

Usually, when you send an async request via a transport client, it registers a callback on its internal thread pool, which notifies you when the request is done. Additional requests are handled in the same manner.

However, by blocking on each request inside a shared execution context, additional requests won't be sent until the previous one completes, so the thread pool holds only one request at a time, like this:

[diagram: drawing1]
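A runnable sketch of the non-blocking alternative, where a simulated two-thread pool stands in for the ES client's transport threads: two 100 ms "requests" are in flight simultaneously, so total latency is roughly one round trip, not two.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.concurrent.{Await, Future, Promise}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

object OverlapDemo extends App {
  // Simulated ES-style client: its own pool completes requests
  // asynchronously via callbacks; no caller thread waits on them.
  private val pool = Executors.newScheduledThreadPool(2)

  def asyncRequest(): Future[Unit] = {
    val p = Promise[Unit]()
    // The "network round trip" takes 100 ms.
    pool.schedule(new Runnable { def run(): Unit = p.success(()) },
                  100, TimeUnit.MILLISECONDS)
    p.future
  }

  val start = System.nanoTime()
  val both = Future.sequence(Seq(asyncRequest(), asyncRequest()))
  Await.result(both, 5.seconds)
  val elapsedMs = (System.nanoTime() - start) / 1000000
  // Both requests overlapped, so total time is ~100 ms rather than ~200 ms.
  println(s"elapsed: $elapsedMs ms")
  pool.shutdown()
}
```

Compare this with the single-thread blocking version earlier in the thread, where the same two requests take twice as long.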

@guersam
Contributor Author

guersam commented Sep 2, 2013

There should be an Elastic4s client between 'User' and 'ExecutionContext' in that diagram.

@sksamuel
Collaborator

sksamuel commented Sep 2, 2013

Yes, I see what you mean. I'll follow up tonight with some conversion work, or you can start on a branch if you want.

@sksamuel
Collaborator

sksamuel commented Sep 2, 2013

Take a look over what I've done; feedback very much welcome.

@guersam
Contributor Author

guersam commented Sep 2, 2013

Thanks :) I'll send a follow-up PR in a few days.

@guersam guersam closed this as completed Sep 2, 2013