
Implement a sticky resource pool #25

Closed · pchiusano opened this issue Jul 28, 2015 · 2 comments

@pchiusano (Member)

Like #24, this is a fun, important, fairly self-contained project that we aren't blocked on right now, and it requires minimal background. Wanna help out with Unison development? This could be a good project!

Also like #24, this project will be an important component of the distributed systems API and reading or at least skimming that post is probably good background.

In the distributed systems API, all communication takes place over very short-lived logical connections. You open a connection to another node, send a computation to that node for evaluation, then close the connection immediately and register a callback to be invoked when a response comes back. So, at the 'logical' level, we are opening and closing a connection for each request. But at the runtime level, we'd like these connections to be sticky and hang around, even if just for a couple of seconds, since the response will often come back right away, or two nodes will be talking to each other quite frequently.

This is actually a very general idea, and it can be implemented with a really generic interface:

```haskell
module Unison.Runtime.ResourcePool where

-- acquire returns the resource, and the cleanup action ("finalizer") for that resource
data Pool p r = Pool { acquire :: p -> IO (r, IO ()) }

pool :: Ord p => Int -> (p -> IO r) -> (r -> IO ()) -> IO (Pool p r)
pool maxPoolSize acquire release = _todo
```
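To make the intended contract concrete, here is a hypothetical caller; the stand-in open/close actions and the `"node-1"` parameter are invented for illustration, and `pool` itself is still the stub above:

```haskell
demo :: IO ()
demo = do
  -- a pool of at most 100 cached "connections", keyed by host name
  p <- pool 100 (\host -> putStrLn ("opening " ++ host) >> pure host)
                (\conn -> putStrLn ("closing " ++ (conn :: String)))
  (conn, done) <- acquire p "node-1"
  putStrLn ("using " ++ conn)
  done  -- hands conn back to the pool rather than closing it immediately
```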

So, internally, `pool` will keep a `Map p r` (`p` for 'parameters'). When acquiring from the pool, if there is already a resource for that `p` in the Map, it returns that. The finalizer it hands back just adds the resource to the Map and schedules a task that, after a few seconds, deletes the entry from the Map and actually runs the underlying finalizer. (The delay period could be another parameter to `pool`.) If a resource with the same parameter `p` gets acquired before that happens, great! We just return the cached, already-open resource from our Map. (A sketch of this is given after the notes below.)

A couple of notes:

  • When a resource is acquired, it is temporarily removed from the Map. This is important, since in general we shouldn't assume that multiple threads can safely access an `r`.
  • The returned `IO ()` finalizer should check whether the pool has already reached its maximum size; if it has, the underlying finalizer can run immediately, so we can be sure the pool never grows too large.
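A minimal sketch of the above, using only `base` and `containers`. The epoch counter (so a stale eviction task can't release a resource that was re-acquired and re-cached in the meantime) and the hard-coded 3-second delay are additions for illustration, not part of the proposed interface:

```haskell
{-# LANGUAGE ScopedTypeVariables #-}

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar
import Control.Monad (void)
import qualified Data.Map as Map

data Pool p r = Pool { acquire :: p -> IO (r, IO ()) }

pool :: forall p r. Ord p => Int -> (p -> IO r) -> (r -> IO ()) -> IO (Pool p r)
pool maxPoolSize acquireRes release = do
  cache   <- newMVar (Map.empty :: Map.Map p (Integer, r))
  counter <- newMVar (0 :: Integer)
  let delayMicros = 3 * 1000000  -- keep-alive period; could be a parameter to pool
      acq p = do
        -- temporarily remove any cached resource for p, so no two threads share an r
        cached <- modifyMVar cache $ \m ->
          pure (Map.delete p m, snd <$> Map.lookup p m)
        r <- maybe (acquireRes p) pure cached
        pure (r, finalize p r)
      finalize p r = do
        -- best-effort size check; good enough for a sketch
        full <- (>= maxPoolSize) . Map.size <$> readMVar cache
        if full
          then release r  -- pool is at its max size: release immediately
          else do
            epoch <- modifyMVar counter (\c -> pure (c + 1, c))
            -- cache at most one idle resource per key; release duplicates
            clash <- modifyMVar cache $ \m -> case Map.lookup p m of
              Just _  -> pure (m, True)
              Nothing -> pure (Map.insert p (epoch, r) m, False)
            if clash
              then release r
              else void . forkIO $ do
                -- after the delay, evict and release, unless the entry was
                -- re-acquired in the meantime (the epoch check catches that)
                threadDelay delayMicros
                stale <- modifyMVar cache $ \m -> case Map.lookup p m of
                  Just (e, r') | e == epoch -> pure (Map.delete p m, Just r')
                  _                         -> pure (m, Nothing)
                maybe (pure ()) release stale
  pure (Pool acq)
```

Note the `clash` branch: if two threads acquire the same `p` concurrently, each gets its own `r` (the first acquire removed the cached entry), and only one of them is cached on the way back; the duplicate is simply released rather than silently overwriting (and thus leaking) the cached one.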

This library is nicely generic, but it will be used by the node server to massively speed up the inter-node protocol! And it becomes especially important when the inter-node protocol requires some handshaking to establish an encrypted, forward-secret connection (like TLS, or even something more lightweight like Noise pipes).

If you are interested in this project and have questions (or suggestions), please post them here, or come discuss in the chat room.

@steveshogren (Contributor)

I put some effort into this today. master...steveshogren:51c2d5e0e3234c5c1841c7ee5bff8da12768cc37
It appears to cache resources correctly and clean them up when the `cleanCache` function is called.
Does this look like an acceptable path so far? Any changes or suggestions?

@steveshogren (Contributor)

At this point, the only thing I am stuck on is what should happen if two threads request the same resource: (T1) P1 -> R1, then (T2) P1 -> ??? before T1 attempts to finalize. Should the map take the thread id into account as part of the key?
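(Editor's note, following the earlier point that an acquired resource is removed from the Map: T2 would simply miss the cache and open a fresh resource, so keying by thread id shouldn't be necessary. The subtlety is in the finalizer, which should release a duplicate rather than overwrite the cached entry for P1, as the `clash` check in the sketch above does.)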
