Like #24, this is a fun, important, fairly self-contained project that we aren't blocked on right now and that requires minimal background. Wanna help out with Unison development? This could be a good project!
Also like #24, this project will be an important component of the distributed systems API and reading or at least skimming that post is probably good background.
In the distributed systems API, all communication takes place over very short-lived logical connections. You open a connection to another node, send a computation to another node for evaluation, then close the connection immediately and register a callback to be invoked when a response comes back. So, at the 'logical' level, we are opening and closing a connection for each response. But at the runtime level, we'd like these connections to be sticky, and hang around even for just a couple seconds, since many times the response will come back right away, or two nodes will be talking to each other quite frequently.
This is actually a very general idea, and it can be implemented with a really generic interface:
```haskell
module Unison.Runtime.ResourcePool where

-- acquire returns the resource, and the cleanup action ("finalizer")
-- for that resource
data Pool p r = Pool { acquire :: p -> IO (r, IO ()) }

pool :: Ord p => Int -> (p -> IO r) -> (r -> IO ()) -> IO (Pool p r)
pool maxPoolSize acquire release = _todo
```
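To make the intended call pattern concrete, here is a hedged usage sketch. The `pool` below is a trivial, non-caching stand-in that exists only so the example runs on its own; a real implementation would add the caching behavior this issue describes.

```haskell
module Main where

data Pool p r = Pool { acquire :: p -> IO (r, IO ()) }

-- trivial stand-in: no caching at all; every acquire creates a fresh
-- resource and the returned finalizer releases it immediately
pool :: Ord p => Int -> (p -> IO r) -> (r -> IO ()) -> IO (Pool p r)
pool _maxPoolSize make release =
  pure (Pool (\p -> do r <- make p; pure (r, release r)))

main :: IO ()
main = do
  -- hypothetical "connection" resources keyed by host name
  p <- pool 10 (\host -> putStrLn ("open " ++ host) >> pure host)
               (\host -> putStrLn ("close " ++ host))
  (conn, done) <- acquire p "node-a"
  putStrLn ("using " ++ conn)
  done  -- logically closes; a caching pool could keep it warm instead
```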
So, internally, `pool` will keep a `Map p r` (`p` for 'parameters'). When acquiring from the pool, if it already has a resource for that key in the `Map`, it returns that. The finalizer it returns just adds the resource back to that `Map` and schedules a task to delete it from that `Map` and actually run the underlying finalizer after a few seconds. (The delay period could be another parameter to `pool`.) If another resource with the same parameter `p` gets acquired before that happens, great! We just return the cached, already-open resource from our `Map`.
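That caching behavior might be sketched like this. It's a minimal, hedged version with no delayed expiry and no size bound; `cachingPool` is a hypothetical name, not the proposed API.

```haskell
module Main where

import Control.Concurrent.MVar
import qualified Data.Map as Map
import Data.IORef

data Pool p r = Pool { acquire :: p -> IO (r, IO ()) }

-- hypothetical sketch: cache resources by key, reuse on re-acquire
cachingPool :: Ord p => (p -> IO r) -> IO (Pool p r)
cachingPool make = do
  cache <- newMVar Map.empty
  let -- the finalizer returns the resource to the cache instead of
      -- releasing it; a real version would also schedule the delayed
      -- real release described above
      finalize p r = modifyMVar_ cache (pure . Map.insert p r)
      acq p = do
        m <- takeMVar cache
        case Map.lookup p m of
          Just r  -> do
            -- remove from the cache while in use: a single r is not
            -- assumed to be thread-safe
            putMVar cache (Map.delete p m)
            pure (r, finalize p r)
          Nothing -> do
            putMVar cache m
            r <- make p
            pure (r, finalize p r)
  pure (Pool acq)

main :: IO ()
main = do
  opens <- newIORef (0 :: Int)
  p <- cachingPool (\_host -> modifyIORef opens (+ 1) >> readIORef opens)
  (r1, done1) <- acquire p "node-a"
  done1                          -- returns the resource to the cache
  (r2, _) <- acquire p "node-a"  -- served from the cache, no new open
  n <- readIORef opens
  print (r1, r2, n)              -- prints (1,1,1): opened only once
```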
A couple notes:
- When a resource is acquired, it is temporarily removed from the `Map`. This is important, since we shouldn't in general assume that multiple threads can safely access an `r`.
- The returned `IO ()` finalizer action should check that the pool size does not exceed the max bound; if the pool is already full, the underlying finalizer can be run immediately, so we can be sure the pool doesn't grow too large.
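Both notes might combine into a finalizer along these lines. This is a hedged sketch: `mkFinalizer` and its parameters are hypothetical, and a real version would need something like a generation tag per entry so the delayed eviction can't release a resource that was re-acquired and re-inserted in the meantime.

```haskell
module Main where

import Control.Concurrent (forkIO, threadDelay)
import Control.Concurrent.MVar
import qualified Data.Map as Map
import Data.IORef

-- Hypothetical helper: return a resource to the cache only if the pool
-- has room, otherwise release it right away. If cached, schedule an
-- eventual real release after `delayMicros`.
mkFinalizer
  :: Ord p
  => Int                -- max cached resources
  -> Int                -- delay before real release, in microseconds
  -> (r -> IO ())       -- the real release action
  -> MVar (Map.Map p r) -- the cache
  -> p -> r -> IO ()
mkFinalizer maxSize delayMicros release cache p r =
  modifyMVar_ cache $ \m ->
    if Map.size m >= maxSize
      then release r >> pure m  -- pool full: release immediately
      else do
        _ <- forkIO $ do
          threadDelay delayMicros
          modifyMVar_ cache $ \m' -> case Map.lookup p m' of
            Just r' -> release r' >> pure (Map.delete p m')
            Nothing -> pure m'  -- re-acquired in the meantime; do nothing
        pure (Map.insert p r m)

main :: IO ()
main = do
  released <- newIORef (0 :: Int)
  cache <- newMVar (Map.empty :: Map.Map String ())
  let release _ = modifyIORef released (+ 1)
  mkFinalizer 0 1000000 release cache "a" ()  -- max size 0: pool "full"
  n1 <- readIORef released
  mkFinalizer 1 1000000 release cache "b" ()  -- room: cached for later
  n2 <- readIORef released
  cached <- Map.member "b" <$> readMVar cache
  print (n1, n2, cached)  -- prints (1,1,True)
```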
This library is nicely generic but it will be used by the node server to massively speed up the inter-node protocol! And it becomes especially important when the inter-node protocol requires some handshaking to establish an encrypted, forward-secret connection (like TLS or even something more lightweight like Noise pipes).
If you are interested in this project and have questions (or suggestions), please post them here, or come discuss in the chat room.
I put some effort into this today. master...steveshogren:51c2d5e0e3234c5c1841c7ee5bff8da12768cc37
It appears to correctly cache resources and clean them up on calling the "cleanCache" function.
Does this look like an acceptable path so far? Any changes or suggestions?
At this point, the only thing I am stuck on is what should happen if two threads request the same resource: (T1) P1 -> R1, then (T2) P1 -> ??? before T1 runs its finalizer. Should the map take the thread id into account as part of the key?